\section{Introduction}
Modern buildings are of crucial importance to daily human activities and can significantly affect productivity as well as physiological and psychological well-being~\cite{enescu2017review}. One critical function, maintaining occupant thermal comfort in buildings, especially large-scale commercial buildings, depends heavily on the efficacy and efficiency of control and maintenance strategies. Designing feasible control strategies typically requires sufficiently adaptive environment models that capture the dynamics of thermal zones in buildings and adjust for unknown disturbances, whether internal or external. Recently, data-driven methodologies have received considerable attention and have been shown to be effective in capturing thermal dynamics; examples include artificial neural networks~\cite{reynolds2018zone}, support vector machines~\cite{molina2017data}, autoregressive models~\cite{jiang2018data,chinde2015comparative,lee2015optimal}, and deep neural networks~\cite{mocanu2016deep}.
While a data-driven model can easily be adopted to describe the thermal dynamics of a specific building over a time interval, transferring established models to other buildings remains difficult. Generally speaking, for different buildings, or different components within the same building, one may repeatedly derive data-driven models from different data while ignoring models that already exist. However, deriving a model from scratch every time can be time-consuming and even infeasible without enough historical data, especially when a building is brand-new and not yet commissioned. Transferability is therefore critical in building thermal dynamics modeling, yet it has not been sufficiently explored. \textit{Transfer learning} has received considerable attention recently and has been successfully used in various areas, e.g., indoor localization~\cite{pan2008transfer}, image processing~\cite{motiian2017unified}, natural language processing~\cite{conneau2017supervised}, and biological applications~\cite{muandet2013domain}. If a model trained in one domain (any variable of interest with a large amount of data) can be adapted to another domain (any variable of interest with a limited amount of data), then one can avoid training a model from scratch, and valuable prior knowledge can be transferred to improve learning performance. While autoregressive models can represent the thermal dynamics of building energy systems well, selecting a proper model order can be difficult. Although the other machine learning methods mentioned above have been shown to be useful for approximating thermal dynamics, the stochastic nature of real buildings produces complex data patterns that call for more advanced models. Hence, the model adopted in this paper is a category of deep neural networks (DNN), which have been shown to be efficient for time-series prediction~\cite{ding2015deep,zhao2017lstm}.
In this context, we propose a deep supervised domain adaptation (DSDA) method for modeling the thermal dynamics of building indoor temperature evolution and energy consumption using a deep learning model, the Long Short Term Memory Network based Sequence to Sequence (LSTM S2S)~\cite{sutskever2014sequence} scheme. We pre-train the LSTM S2S model using a large amount of data from one building (referred to as the source building). Then, we use the parameters of the pre-trained model to initialize the parameters of a model defined for another building (referred to as the target building). We use a limited amount of data from the target building to fine-tune the model parameters.
To the best of our knowledge, this is the first attempt to address knowledge transfer in building thermal dynamics using deep transfer learning techniques. We show that the proposed approach outperforms learning from scratch given only a limited amount of data. Our experimental results also imply that useful knowledge can be transferred between different tasks to improve performance.
One key concern in this paper is that the resulting adaptation is between two close domains. Essentially, the source and target buildings may be required to have similar measured variables and parameters. This concern is practically justified because, for most buildings, the key variables and parameters are quite similar, although sample distributions can differ significantly due to geographic locations or operating conditions. Nonetheless, our experimental results suggest that the sampling frequency of the two domains need not be the same for DSDA, which is practically useful for data collection in real buildings.
\textbf{Related Work.} Grubinger et al.~\cite{grubinger2017generalized} developed generalized online transfer learning for climate control in residential buildings and showed promising convergence and experimental results. They also used Transfer Component Analysis~\cite{pan2011domain} to let several sources, instead of a single source, benefit the target task. Although such a framework enables transferability between different houses, it relies on a physics-based modeling approach, which can be quite complicated and computationally intractable. In building design, Singaravel et al.~\cite{singaravel2018deep} proposed component-based machine learning (CBML), which incorporates transfer learning to predict cooling and heating energy with high accuracy. In another work~\cite{geyer2018component}, the same authors applied a similar method to learn parameterized components of the design. Mocanu et al.~\cite{mocanu2016unsupervised} also developed a cross-building transfer learning framework for unsupervised energy prediction in a smart grid context.
\section{Preliminaries}
In this context, we consider two datasets corresponding to the source ($S$) and target ($T$) buildings, denoted by $\mathcal{D}_S$ and $\mathcal{D}_T$, respectively. Then we have $\mathcal{D}_S=\{(\mathbf{x}^S_i, \mathbf{y}^S_i)\}^m_{i=1}$ and $\mathcal{D}_T=\{(\mathbf{x}^T_j, \mathbf{y}^T_j)\}^n_{j=1}$,
where $\mathbf{x}$, a realization of a random variable $X$, represents either a source or target domain input in $\mathcal{X}$, and $\mathbf{y}$, a realization of a random variable $Y$, represents either a source or target domain output in $\mathcal{Y}$. Since we focus on time series prediction, for the source building, $\mathbf{x}^S = x_1^Sx_2^S...x_K^S$ is a time series of length $K$, where $x_k^S\in\mathbb{R}^d$, $k\in\{1,2,...,K\}$, is a $d$-dimensional vector of variables at time instant $k$. Similarly, we have $\mathbf{y}^S=y_1^Sy_2^S...y_L^S$, where $y_l^S\in\mathbb{R}^p$, $l\in\{1,2,...,L\}$, is a $p$-dimensional vector of variables at time instant $l$. Following the same definitions, for the target domain, $\mathbf{x}^T = x_1^Tx_2^T...x_O^T$, where $x^T_o\in\mathbb{R}^d$, $o\in\{1,2,...,O\}$, and $\mathbf{y}^T = y_1^Ty_2^T...y_U^T$, where $y^T_u\in\mathbb{R}^p$, $u\in\{1,2,...,U\}$. One motivation for using transfer learning is that $n\ll m$: the number of samples in the target dataset is much smaller than in the source dataset.
A quite practical issue for thermal dynamics modeling using data-driven techniques is that for brand-new buildings that are not yet commissioned, modeling cannot be conducted with only a limited amount of data, especially for deep learning models. Domain adaptation can therefore leverage transferability from one building to another.
We consider \textit{supervised domain adaptation}, which typically refers to different domains producing a covariate shift~\cite{bickel2009discriminative} between $X^S$ and $X^T$; in our problem, the model is pre-trained on $\mathcal{D}_S$ in a supervised manner.
Under this setting, the goal is to learn a prediction function mapping from $\mathcal{X}$ to $\mathcal{Y}$ that is able to perform well on $\mathcal{D}_T$.
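To make the windowed pairs concrete, the following is a minimal sketch (in Python/NumPy; the function name and array shapes are our own illustration rather than part of the original formulation) of shaping a multivariate series into $(\mathbf{x}, \mathbf{y})$ pairs with $K$ input steps of $d$ features and $L$ output steps of $p$ targets:
\begin{verbatim}
import numpy as np

def make_windows(inputs, targets, K, L):
    """inputs: (T, d) array; targets: (T, p) array.
    Returns xs with shape (N, K, d) and ys with shape (N, L, p)."""
    xs, ys = [], []
    for s in range(len(inputs) - K - L + 1):
        xs.append(inputs[s:s + K])           # K past steps as the input window
        ys.append(targets[s + K:s + K + L])  # next L steps as the output window
    return np.stack(xs), np.stack(ys)
\end{verbatim}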
\section{DSDA: from source building to target building}
This section presents how DSDA can be applied to thermal dynamics modeling with building time series by means of a deep model, the Long Short Term Memory Network based Sequence to Sequence (LSTM S2S) model. Treating zone temperature evolution (or energy consumption) as a time series prediction task, we train the LSTM S2S as a deep regressor using a large dataset from the source building and then adapt it to the target building, which involves different but related tasks. Specifically, we adapt the pre-trained model to an unseen related target task as shown in Fig.~\ref{DSDA}. We first collect data from the source building for off-line learning by pre-training the LSTM S2S model; we then use the pre-trained model to initialize the parameters of a model for the target building with some unseen tasks, and fine-tune this model using a limited amount of target data, i.e., prior-knowledge-aided training. Although similar ideas have been proposed for image classification~\cite{motiian2017unified}, few results have been reported for time series prediction, in particular for building thermal dynamics modeling.
To obtain a limited amount of data for the target building, one can conduct simple system identification experiments. However, due to varying outside environment conditions, some variables of interest can differ significantly at different times. In that case, the task adaptation technique can still be applied, because periodically retraining the model maintains a certain level of accuracy by incorporating more information about the outside environment conditions.
\begin{figure*}[h!]
\includegraphics[width=0.7\textwidth]{Figures/Proposed_Approaches.PNG}
\centering
\caption{Proposed DSDA: LSTM S2S model pre-trained and adapted between source and target buildings}
\label{DSDA}
\end{figure*}
\begin{figure*}[h!]
\includegraphics[width=0.5\textwidth]{Figures/LSTM_S2S.PNG}
\centering
\caption{Schematic of LSTM S2S Model}
\label{LSTM S2S}
\end{figure*}
\subsection{Pre-training LSTM S2S}
Due to space limits, we skip the updates for each LSTM cell and refer readers to~\cite{park2018sequence} for details. For the pre-trained model, the LSTM S2S architecture maps input $\mathbf{x}_i^S\in\mathcal{D}_S$ to output $\mathbf{\hat{y}}_i^S$, as shown in Fig.~\ref{LSTM S2S}. Specifically, each input $\mathbf{x}_i^S$ is encoded as a vector corresponding to the final state of the encoder, denoted by $\mathbf{h}^{K}_e\in\mathbb{R}^c$, where $c$ is the number of LSTM units in the hidden layer of the encoder. Then, $\mathbf{h}^{K}_e$ is used as the initial state to activate the decoder with the current measurement $y_{0}$, on top of which a dense layer with linear activation recursively predicts each time step of $\mathbf{y}_i^S$. In every update, the decoder feeds the predicted output $\hat{y}_l$ from the previous update as the input for the current update.
It should be noted that one can train the LSTM S2S architecture either with or without teacher forcing~\cite{lamb2016professor}. To prevent the learning from becoming ``lazy'', i.e., producing predictions that are merely small modifications of the corresponding ground truth, non-teacher forcing is adopted.
The parameters of the network are obtained by minimizing the mean square error loss $\mathcal{J}_1$, i.e., $\mathcal{J}_1(y_i^l,\hat{y}_i^l) = \frac{1}{m\times L}\sum_{i=1}^m\sum_{l=1}^L|y_i^l-\hat{y}_i^l|^2$, via an adaptive gradient descent method, where $\hat{y}_i^l$ is the predicted output at time instant $l$ (for notational convenience, $l$ is moved to the superscript in $\mathcal{J}_1$). Note that during training the decoder could instead use the ground truth $y^l_i$ as its input rather than $\hat{y}^l_i$, which can speed up training.
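As an illustration of the encoder--decoder recursion described above, the following PyTorch sketch implements an LSTM S2S regressor with non-teacher-forced decoding and a mean square error objective corresponding to $\mathcal{J}_1$; the layer width and learning rate are illustrative assumptions, not the configuration reported here:
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMS2S(nn.Module):
    def __init__(self, d, p, c):
        super().__init__()
        self.encoder = nn.LSTM(d, c, batch_first=True)
        self.decoder = nn.LSTM(p, c, batch_first=True)
        self.head = nn.Linear(c, p)      # dense layer with linear activation

    def forward(self, x, y0, L):
        _, state = self.encoder(x)       # final encoder state h_e^K
        step = y0.unsqueeze(1)           # current measurement y_0
        outs = []
        for _ in range(L):
            out, state = self.decoder(step, state)
            step = self.head(out)        # feed prediction back as next input
            outs.append(step)
        return torch.cat(outs, dim=1)    # (batch, L, p)

model = LSTMS2S(d=15, p=1, c=64)
loss_fn = nn.MSELoss()                   # the J_1 objective
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
\end{verbatim}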
\subsection{Fine-tuning LSTM S2S}
After pre-training the LSTM S2S model on a large amount of data from the source building, we adapt the model to the target building by initializing the target-task-specific LSTM parameters. Unlike classification, which may require freezing the parameters of the encoder and decoder and fine-tuning only the dense layer, here we re-train the whole model using a limited amount of data from the target building. Compared to typical convolutional neural networks, which use multiple layers to extract features, LSTM S2S can be treated as a nonlinear state-space model whose recurrence models the temporal dynamics, and a single layer for each of the encoder and decoder is practically feasible.
Hence, the loss $\mathcal{J}_2$ is immediately obtained:
$\mathcal{J}_2(y_j^u,\tilde{y}_j^u) = \frac{1}{n\times U}\sum_{j=1}^n\sum_{u=1}^U|y_j^u-\tilde{y}_j^u|^2$. By minimizing $\mathcal{J}_2$ with a limited amount of data, the adapted model for the target building is obtained and can then be used for inference.
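A hedged sketch of the adaptation step follows, reusing the LSTMS2S class and loss\_fn above: the target model is initialized with the pre-trained source parameters, and the whole network is re-trained on the limited target data (the names source\_model and target\_loader, the epoch count, and the learning rate are our own assumptions):
\begin{verbatim}
target_model = LSTMS2S(d=15, p=1, c=64)
target_model.load_state_dict(source_model.state_dict())  # transfer weights

opt = torch.optim.Adam(target_model.parameters(), lr=1e-4)
for epoch in range(50):
    for x, y0, y in target_loader:       # limited target-building data
        opt.zero_grad()
        pred = target_model(x, y0, L=y.shape[1])
        loss = loss_fn(pred, y)          # the J_2 objective
        loss.backward()
        opt.step()
\end{verbatim}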
\begin{table}[h]
\caption{Description of Datasets}
\begin{center}
\begin{threeparttable}
\begin{tabular}{c c c c c}
\toprule
\textbf{Dataset} & \textbf{SF\tnote{1}} & \textbf{Size} &\textbf{\# of features}& \textbf{Domain} \\ \midrule
SML & 15 min & 1373 & 15 &Target \\
AHU & 1 min & 35098 &15& Source \\\midrule
Building 1& 15 min & 2000 &4& Target\\
Building 2& 15 min& 34940 &4& Source\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1] Sampling frequency
\end{tablenotes}
\end{threeparttable}
\end{center}
\label{table:dataset}
\end{table}
\begin{table}[h]
\caption{Comparison of Metrics between DSDA and learning from scratch for testing (temperature evolution)}
\begin{center}
\begin{threeparttable}
\begin{tabular}{c c c c c}
\toprule
\textbf{Dataset} & \textbf{CVRMSE} & \textbf{NMBE} & \textbf{MAPE} &\textbf{RMSE} \\ \midrule
SML(15 min)&5.983\% &-5.099\% &5.253\% &1.274\\
AHU$\to$SML(15 min) & \textbf{1.671\%} & \textbf{-0.877\%} &\textbf{1.396\%} &\textbf{0.355}\\ \midrule
SML(2 h)&8.421\%&-7.442\%&7.612\%&1.788\\
AHU$\to$SML(2 h)&\textbf{3.674\%}&\textbf{-2.721\%}&\textbf{2.945\%}&\textbf{0.780}\\\midrule
SML(4 h)&10.613\%&-9.534\%&9.747\%&2.243\\
AHU$\to$SML(4 h)&\textbf{7.513\%}&\textbf{-5.381\%}&\textbf{6.055\%}&\textbf{1.588}\\\midrule
SML(6 h)&12.405\%&-11.099\%&11.233\%&2.607\\
AHU$\to$SML(6 h)&\textbf{11.143\%}&\textbf{-7.198\%}&\textbf{8.618\%}&\textbf{2.342}\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\label{table:metrics}
\end{table}
\begin{table}[h]
\caption{Comparison of Metrics between DSDA and learning from scratch for testing (energy consumption)}
\begin{center}
\begin{threeparttable}
\begin{tabular}{c c c c c}
\toprule
\textbf{Dataset} & \textbf{CVRMSE} & \textbf{NMBE} & \textbf{MAPE} &\textbf{RMSE} \\ \midrule
Building 1(15 min)&5.715\% &\textbf{-0.930\%} &4.308\% &13.198\\
Building2$\to$1(15 min) & \textbf{4.842\%} & -1.647\% &\textbf{3.402\%} &\textbf{11.182}\\ \midrule
Building 1(2 h)&7.016\% & -2.713\% &4.699\% & 16.220\\
Building2$\to$1(2 h) &\textbf{5.497\%} & \textbf{-2.334\%}& \textbf{3.672\%} & \textbf{12.709}\\\midrule
Building 1(4 h)&8.033\% & \textbf{-1.835\%} &4.871\% & 18.709\\
Building2$\to$1(4 h) &\textbf{6.466\%} & -2.717\%& \textbf{4.241\%} & \textbf{15.058}\\\midrule
Building 1(6 h)&11.344\% & \textbf{-0.913\%} &6.093\% & 26.600\\
Building2$\to$1(6 h) &\textbf{7.099\%} & -3.208\%& \textbf{4.763\%} & \textbf{16.646}\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\label{table:metrics_1}
\end{table}
\begin{table}[h]
\caption{Comparison of RMSE between cross-task and learning from scratch for testing}
\begin{center}
\begin{threeparttable}
\begin{tabular}{c c c c c}
\toprule
\textbf{Dataset} & \textbf{15 min} & \textbf{2 h} & \textbf{4 h} &\textbf{6 h} \\ \midrule
SML&1.274 &1.788 &2.243 &2.607\\
Building2$\to$SML & \textbf{0.403} & \textbf{0.735} &\textbf{1.016} &\textbf{1.454}\\ \midrule
Building 1&13.198 & 16.220 &18.709 & 26.600\\
AHU$\to$Building 1 &\textbf{13.160} & \textbf{15.717}& \textbf{16.492} & \textbf{19.457}\\
\bottomrule
\end{tabular}
\end{threeparttable}
\end{center}
\label{table:RMSE}
\end{table}
\section{Experiments}
This paper considers four publicly available datasets: SML~\cite{zamora2014line} and AHU~\cite{OpenEI_1} for building indoor temperature evolution, and two long-term datasets from two different commercial buildings for energy consumption~\cite{OpenEI}. See Table~\ref{table:dataset} for details. SML and AHU each have 15 feature inputs and share indoor temperature as the output of interest, while Buildings 1 and 2 have 4 identical feature inputs (total power consumption, outdoor temperature, day of week, time of day) and share whole-building energy consumption as the output of interest. For completeness, the appendix includes feature details of the AHU and SML datasets. For implementation, we use the whole source dataset to pre-train the model, and then split the target dataset chronologically into training and testing sets with a training ratio of 0.67. The ratio is fixed in this work; determining the minimum amount of target data needed for DSDA is left for future work. We perform 15 min, 2 hour, 4 hour, and 6 hour ahead predictions; note that the prediction horizon depends on the target tasks.
The LSTM S2S architecture has one LSTM layer for each of the encoder and decoder. To pre-train the LSTM S2S using data from the source building and to fine-tune the parameters using data from the target building, we adopt mini-batch stochastic optimization based on the Adam optimizer~\cite{kingma2014adam}. The hyperparameters are set based on the best performance obtained. Tables~\ref{table:metrics} and~\ref{table:metrics_1} report the metrics of DSDA used in this study and the comparison with learning from scratch: the coefficient of variation of the root mean square error (CVRMSE), normalized mean bias error (NMBE), mean absolute percentage error (MAPE), and root mean square error (RMSE).
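For reference, the four metrics can be computed as in the following NumPy sketch; we assume the usual definitions, noting that the sign and normalization conventions for NMBE and CVRMSE vary slightly across papers:
\begin{verbatim}
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def cvrmse(y, yhat):                       # in percent
    return 100.0 * rmse(y, yhat) / np.mean(y)

def nmbe(y, yhat):                         # in percent
    return 100.0 * np.sum(y - yhat) / (len(y) * np.mean(y))

def mape(y, yhat):                         # in percent
    return 100.0 * np.mean(np.abs((y - yhat) / y))
\end{verbatim}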
\textbf{Case Study 1: temperature evolution.} As Table~\ref{table:dataset} shows, we use the SML and AHU datasets to validate domain adaptation for temperature evolution. The prediction tasks in the source and target domains are closely related, as both concern indoor temperature evolution, even with different features. Table~\ref{table:metrics}, which shows predictions at different horizons, indicates that with the model pre-trained on the AHU data, the predictive performance on the SML data improves in every metric. To study transferability between different tasks, Table~\ref{table:RMSE} compares cross-task adaptation with learning from scratch. We pre-train the model on the Building 2 data to extract temporal dependencies for energy consumption and then adapt it to learn temperature evolution as the target task. Note that the number of input features of SML is adjusted so that the prediction task can be conducted accordingly. In Table~\ref{table:RMSE}, the Building 2$\to$SML results show that even if the pre-trained model comes from a source with a completely different task, temporal dependencies of the source data can still be extracted to improve predictive performance on the target task, compared to learning from scratch.
\textbf{Case Study 2: energy consumption.} In this case study, we use total energy consumption data from two different commercial buildings to validate the proposed scheme. The prediction tasks for the source and target domains are the same, with identical feature inputs, as mentioned above. As shown in Table~\ref{table:metrics_1}, while DSDA and learning from scratch perform closely, the former still outperforms thanks to the pre-trained model. The improvement is only slight because, for total energy consumption, a single dominant pattern can be extracted quickly even from a small amount of data. Note that NMBE does not indicate improvement in this case; by the definition of NMBE, error cancellation may occur in the data. Table~\ref{table:RMSE} also shows the prediction of energy consumption based on the model trained with temperature features, i.e., AHU$\to$Building 1. Similarly, transferability across different tasks improves performance, consistent with the conclusion of Case Study 1.
\section{Conclusion and Future Works}
This study proposed a deep supervised domain adaptation method for thermal dynamics modeling in smart buildings, aimed at generalizing an established model from one building to another. We adapt a pre-trained LSTM S2S model from the source building to the target building via model fine-tuning. Extensive numerical results show that the proposed scheme outperforms learning from scratch. The approach is critically important when buildings do not have enough data for learning a model, and it allows facility managers to quickly establish maintenance and control strategies for building energy systems.
Future directions include learning from multiple sources for a single target, as well as unsupervised domain adaptation for missing variables.
\bibliographystyle{IEEEtran}
\section{Introduction}
Multi-state models are the most common models used to describe longitudinal survival data. A multi-state model is a model for a stochastic process characterized by a set of states and the possible transitions among them \cite{Andersen1993}. The states represent different situations of the individual (healthy, diseased, etc.) along a follow-up. Multi-state models that have been widely used in biomedical applications include the three-state progressive model, the illness-death model, and the bivariate model \cite{Hougaard2000}. An important member of this family is the reversible illness-death model, which is the model for which we derive some formulas in a Markovian context in this paper.\\
In many biomedical studies, the event of interest can occur more than once for a patient over the follow-up time. Such events are called recurrent events \cite{book2012}. Recurrent events are repeated events of the same type, such as acute exacerbations in asthmatic children, seizures in epileptics, cancer recurrences, myocardial infarctions, migraine pain, and ear infections. Several statistical models have been proposed in the literature to analyze recurrent events, including \cite{Andersen1982}, \cite{Lin2000}, \cite{Kelly2000}, and \cite{Cook2010}.
\cite{Araujo2014} describes the R package TPmsm, which implements nonparametric and semiparametric estimators for the transition probabilities in progressive illness-death models. Another R package, TP.idm, introduced in \cite{Balboa2018}, implements a novel non-Markovian estimator for the transition probability matrix in the progressive illness-death model under right censoring. \cite{Meira2009} reviews several modeling approaches following the methodology of multi-state models, focusing on the estimation of quantities such as transition probabilities and survival probabilities.\\
In this paper, we use a finite-state continuous-time Markov process with a single absorbing state. The state space of the Markov process is partitioned into subspaces, each representing a state of an individual's health. For example, in the cancer case, each subspace is interpreted as either a stage of cancer or recovery. Each subspace includes states chosen so that the semi-Markov property can be reflected in the model. See \cite{Asghari2019} for more details.\\
Since in our model the time to death follows a phase-type (\emph{PH}) distribution, we can take advantage of the properties of \emph{PH} distributions. Recall that the set of \emph{PH} distributions is dense in the class of all distributions on the non-negative real numbers; see \cite{Asmussen1996}. There are closed-form expressions for the distribution and density functions as well as for the Laplace--Stieltjes transform, and the expected value and all non-central moments of a \emph{PH} random variable can be obtained by successive differentiation of the Laplace transform. \\
Many authors have recently used \emph{PH} distributions in their research. \cite{Hassanzadeh2013} uses \emph{PH} distributions for modeling disability insurance; the model represents the aging process as the passage through a number of phases of decreasing vitality, and disabled individuals additionally pass through several stages that represent the duration of disability. \cite{Asghari2019} uses \emph{PH} distributions for modeling skin cancer patients in the United States and estimates parameters related to the aging process that can be useful for comparing the physiological aging of cancer patients and healthy people. \cite{Lin2007} used phase-type distributions for mortality analysis and obtained conditional survival probabilities of the time of death and the actuarial present values of whole life insurance and annuities. \cite{Faddy2000} used phase-type distributions for analyzing data on lengths of stay of hospital patients. \cite{Odd1995} uses \emph{PH} distributions to model the interval between first and second births. A special case of \emph{PH} is the Coxian distribution, which can represent survival times in terms of phases through which an individual progresses until leaving a system, such as a hospital stay or time to death; \cite{Marshall2007} used the Coxian distribution to model patients' hospital stays. The contributions of this paper are presented in Theorem 3.1 and in the formulas developed, in the \emph{PH} context, in the examples. They show how some formulas in the recurrent-events context can be calculated tractably in a Markov chain by using the Markov property.\\
This paper is organized as follows. Section 2 provides an introduction to \emph{PH} distributions as well as the notational conventions used in describing our model. Section 3 describes the structure of our model and presents the main theorem, which is the main contribution of this paper; two examples are also presented there. Section 4 closes the paper with some concluding remarks and a brief discussion of possible future studies.
\section{Phase-type Distributions}
Consider a continuous-time Markov process $\{J_t,t\ge0\}$ with state space $\Gamma=\{0,1,\ldots,m\}$, $m \in \mathbb{N}$, where state $0$ is absorbing and the rest are transient. Denote the intensity matrix and the initial probability vector associated with $J$ by $\textbf{Q}$ and $\boldsymbol{\beta}$, respectively. Clearly, we have
\begin{eqnarray} \label{matrix-Q}
\textbf{Q}=
\begin{bmatrix}
0 & \textbf{0}^\prime \\
\textbf{t}_0 & \textbf{T}
\end{bmatrix}
\end{eqnarray}
where the matrix $\textbf{T}$ is an $m\times m$ sub-intensity matrix (a matrix $\mathbf{B}=(b_{ij})_{i,j=1,\dots,m}$ is called a sub-intensity matrix if $b_{ii}\leq 0$, $b_{ij} \geq 0$ for $i\neq j$, and $\sum_{k=1}^{m} b_{ik} \leq 0$, with strict inequality for at least one $i$). The vector $\textbf{t}_0$ contains the transition rates from the transient states to the absorbing state, 0. Therefore we have $\textbf{T}\textbf{1}+\textbf{t}_0=\textbf{0}$, where $\textbf{0}$ and $\textbf{1}$ are column vectors of zeros and ones of proper dimensions. It can easily be proved that
\begin{eqnarray}
e^{\textbf{Q}y}=
\begin{bmatrix}
1 & \textbf{0}^\prime \\
\textbf{1}-e^{y\textbf{T}}\textbf{1} & e^{y\textbf{T}}
\end{bmatrix},
\end{eqnarray}
(the matrix exponential of a square matrix $\mathbf{B}$ is defined as $e^{\mathbf{B}}=\sum_{l=0}^{\infty} \frac{\mathbf{B}^l}{l!}$).
\newline
We also assume that $P(J_0 \in \{0\}) = \beta_0=0$. As a result, $\boldsymbol{\beta}= ( \beta_{0}, \boldsymbol{\alpha})$ where we have that $\boldsymbol{\alpha}\textbf{1}=1$.\\
If we define $Y=\inf\{t;\, J_t = 0\}$, then $Y$ is said to be a \emph{PH} random variable with representation $(\boldsymbol{\alpha},\textbf{T})$.
For a continuous random variable $Y$, the event $Y>y$ means that the process, having started in a transient state, has not reached the absorbing state by time $y$; thus the distribution function of $Y$ is\\
\begin{align*}
F(y)=1-\boldsymbol{\alpha}e^{y\textbf{T}}\textbf{1},~~~ y\ge 0.
\end{align*}
Thus the survival function of $Y$ is
\begin{equation} \label{PHSUR}
S(y)=\boldsymbol{\alpha}e^{y\textbf{T}}\textbf{1},~~~y\ge 0.
\end{equation}
By differentiating $F(y)$, one obtains the probability density function of $Y$,
\begin{equation*}
f_Y(y)=\boldsymbol{\alpha}e^{y\textbf{T}}\textbf{t}_0,~~~y\ge 0.
\end{equation*}
The Laplace transform and the $k$th moments are given by\\
$$ \Phi(s)=\boldsymbol{\alpha}(s\textbf{I}-\textbf{T})^{-1}\textbf{t}_0,$$
$$E(Y^k)=k!\boldsymbol{\alpha}(-\textbf{T}^{-1})^k\textbf{1},~~~~k=0,1,2,\cdots.$$
For a complete review of \emph{PH} distributions refer to \cite{bookneuts}.
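As a quick numerical illustration of these formulas, the following Python sketch evaluates $S(y)$, $f_Y(y)$, and the first two moments for a small, arbitrarily chosen representation $(\boldsymbol{\alpha},\textbf{T})$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

alpha = np.array([0.7, 0.3])              # initial probabilities
T = np.array([[-2.0, 1.0],
              [0.0, -1.5]])               # sub-intensity matrix
t0 = -T @ np.ones(2)                      # exit rates to the absorbing state

y = 1.2
S = alpha @ expm(T * y) @ np.ones(2)      # survival S(y)
f = alpha @ expm(T * y) @ t0              # density f_Y(y)
Tinv = np.linalg.inv(T)
EY = alpha @ (-Tinv) @ np.ones(2)                                # E[Y]
EY2 = 2 * alpha @ np.linalg.matrix_power(-Tinv, 2) @ np.ones(2)  # E[Y^2]
print(S, f, EY, EY2)
\end{verbatim}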
\section{ Transition probabilities}
Based on the structure of its representation, several classes of phase-type distributions can be distinguished. The structure of a PH representation often has an impact on its application, as some structures allow more efficient solutions. The most important distinction is between acyclic and general phase-type distributions: every acyclic phase-type distribution has at least one Markovian representation without cycles in the sub-generator, while general phase-type distributions allow cycles.
One application of general PH distributions is the reversible illness-death model, which we consider in this paper in a Markovian environment.
\noindent The analysis in recurrent-event studies is often performed using multi-state models. These models are very useful for describing event history data, offering a better understanding of the illness process and leading to better knowledge of the evolution of the disease over time. The complexity of a multi-state model depends greatly on the number of states defined and on the transitions allowed between these states. Our approach is to calculate the probability of the number of transitions.\\
We use a finite-state continuous-time Markov process with multiple states: one absorbing state, death, while the other states form recovery and disease phases. From now on, we use the word ``stage'' rather than ``state'' or ``phase'' in our multi-state model. In fact, we assume $k$ stages (from 1 to $k$) for disease and recovery (one stage for recovery and $k-1$ for disease) and one absorbing state for death. Each stage has $n$ transient states. In this model, transition from every stage to any other is possible. In some cases the states inside stages can be interpreted as physiological ages, see \cite{Lin2007} and \cite{Asghari2019}, where aging is the transition from one physiological age to the next, and the process ends when a transition occurs from any other state to the absorbing state, death. For each state $i$, $i=1,2,...,n$, in every stage, several parameters are used for modeling mortality. Quantities of interest, i.e., the number of recurrences or transitions until time $t$, the number of transitions from one stage to another, and the expected time spent in every stage, will be calculated.\\
After the diagnosis of a particular disease, its stage is also known, so the patient is in one of the disease stages at time $t = 0$ (the arrival time) at age $x$. At any time thereafter, he may move to the death state, recover, or visit other disease stages. A recovered patient may become sick or die. Our proposed multi-state model is reversible, in the sense that past stages can be revisited. For this model, let $E$ denote the state space of the underlying Markov chain; then $E=\bigcup_{j=1}^k E_j \cup D$, where the sets $E_1,E_2,\dots,E_k$ represent the stages, including recovery, and the state $D=\{0\}$ represents death. Every stage, including recovery, has a finite set of $n$ states labeled $1, ..., n$, with instantaneous transitions possible between selected pairs of states; see Figure \ref{graph1}.
In general, there are $k\times n + 1$ states: $k \times n$ for the recovery and disease stages and one for the death state.\\
For each $t \geq 0$, the continuous Markovian random process $J_t$ is in one of the stages $ 1, ..., k$ or in $D$. We interpret the event $J_t \in E_i$ to mean that the individual is in stage $i$ at age $x + t$, $i =1,\dots,k$.
We assume that the rates of transition from the disease stages depend on stage, age, and duration of illness. Therefore, we design our Markovian model so that it reflects the semi-Markovian property.\\
As mentioned earlier, we propose a reversible illness-death model involving $n$ states in each alive stage. By doing so, the semi-Markov property can be captured in the model; this method has been used in \cite{Hassanzadeh2013}. Patients may move to the next stage of the disease, recover, or die.
The recovery rate will normally be lower in the later stages of the disease. Of course, patients may die while in any state of any stage, as indicated by the arrows to death in Figure \ref{graph1}.\\
\begin{figure}
\centering
\includegraphics[scale=0.35]{illnessmodel2}
\caption{Reversible illness-death model: the $k+1$ stages (boxes) and the possible transitions among
them (arrows).} \label{graph1}
\end{figure}
Since the Markov process has only a single absorbing state, the time of death (the time to absorption) follows a \textit{PH} distribution. Note also that the time spent in the recovery or disease statuses can be obtained by using the Markov property.\\
Furthermore, the sub-intensity matrix, $\textbf{T}$, of the Markov process in \eqref{matrix-Q} is given by
\begin{equation}\label{T}
\textbf{T}=
\begin{bmatrix}
\textbf{T}_1 & \textbf{T}_{1,2} & \textbf{T}_{1,3} & \cdots & \textbf{T}_{1,k} \\
\textbf{T}_{2,1} & \textbf{T}_2 & \textbf{T}_{2,3} & \cdots & \textbf{T}_{2,k} \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\textbf{T}_{k,1} & \textbf{T}_{k,2} & \textbf{T}_{k,3} & \cdots & \textbf{T}_{k}
\end{bmatrix},
\end{equation}
where $\textbf{T}_{i,j}$, $i \neq j$, $i,j=1,2,...,k$, are $n \times n$ matrices containing the transition rates from stage $i$ to stage $j$. The matrices $\textbf{T}_i$, $i =1,2,...,k$, are the sub-intensity matrices (with non-zero main and upper diagonal elements) of the Markov chain describing a sojourn in stage $i$. The mortality-rate vector $\mathbf{t}_0=-\textbf{T}\boldsymbol{1}$ is an $nk$-dimensional column vector containing the rate of death from each of the $n\times k$ alive states.\\
In this paper, we assume that the person at diagnosis is aged $x$, and we omit $x$ from the notation.\\
From now on, it is also assumed that
$\boldsymbol{\alpha}=(\boldsymbol{\alpha_1},\boldsymbol{\alpha_2},\dots,\boldsymbol{\alpha_{k}})$, where the $\boldsymbol{\alpha_i}$'s are $1\times n$ vectors of initial probabilities for starting in stage $i$ ($\boldsymbol{\alpha_1}$ corresponds to recovery and can be assumed to be $\boldsymbol{0}$). The normalized and transposed versions of a vector $\boldsymbol{\beta}$ are denoted by $\hat{\boldsymbol{\beta}}$ and $\boldsymbol{\beta}'$, respectively. \\
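The following NumPy sketch, with illustrative sizes $k=2$ and $n=2$ and made-up rates, shows how the blocks of \eqref{T} and the exit vector $\mathbf{t}_0=-\textbf{T}\boldsymbol{1}$ fit together:
\begin{verbatim}
import numpy as np

n, k = 2, 2
T1 = np.array([[-1.0, 0.4], [0.0, -0.9]])   # sojourn in stage 1
T2 = np.array([[-0.8, 0.3], [0.0, -0.7]])   # sojourn in stage 2
T12 = 0.2 * np.eye(n)                       # stage 1 -> stage 2 rates
T21 = 0.1 * np.eye(n)                       # stage 2 -> stage 1 (reversible)

T = np.block([[T1, T12],
              [T21, T2]])
t0 = -T @ np.ones(n * k)     # death rates from each of the n*k alive states
assert (t0 >= 0).all()       # sanity check: T is a valid sub-intensity matrix
\end{verbatim}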
With the notation and assumptions above, we can use the Markov property to calculate the probability of various numbers of transitions until time $t$.\\
\begin{theorem}\label{THM1} Let $\{J_t,t\geq 0\}$ be a continuous-time Markov chain with state space $E=\bigcup_{j=1}^{k}E_j\cup D$ and with the sub-intensity matrix \eqref{T}, where $D$ is the only absorbing state and the $E_j$'s are mutually disjoint subsets of $E$. Assume $\{N(t),t\geq 0\}$ is the number of transitions between the $E_j$'s in $[0,t]$ or from any $E_j$ to $D$, $j=1,...,k$, and define $P^{(i)}_{t}(l)=P[N(t)=l|J_0\in E_i]$. Then the following hold:\\
\begin{equation}\label{PN0}
P^{(i)}_t(0)=\hat{\boldsymbol{\alpha}}_i\,e^{t\textbf{T}_i}\textbf{1}
\end{equation}
\begin{equation}\label{PN1}
P^{(i)}_t(1)=\sum_{\substack{i_1 =1 \\ i_1\neq i }}^{k}\hat{\boldsymbol{\alpha}}_i\,\mathbf{x}^{(i)}_{i_1}(t)\textbf{1},
\end{equation}
\begin{equation}\label{PN2}
P^{(i)}_t(2)=\sum_{\substack{i_1,i_2\\ i_1\neq i\\i_2\neq i_1 }}^{k}\hat{\boldsymbol{\alpha}}_i \mathbf{x}^{(i)}_{i_1,i_2}(t)\textbf{1},
\end{equation}
$$\vdots$$
\begin{equation}\label{PNk}
P^{(i)}_t(l)=\sum_{\substack{i_1,i_2,...,i_l\\ i_1\neq i\\i_2\neq i_1\\ \vdots\\i_l\neq i_{l-1}}}^{k}\hat{\boldsymbol{\alpha}}_i \mathbf{x}^{(i)}_{i_1,i_2,\cdots, i_l}(t)\textbf{1}
\end{equation}
where $\mathbf{x}^{(i)}_{i_1}(.),\mathbf{x}^{(i)}_{i_1i_2}(.),\cdots, \mathbf{x}^{(i)}_{i_1i_2\cdots i_k}(.)$ satisfy the following differential equations:
$$\frac{d}{dt}\mathbf{x}^{(i)}_{i_1}(t)=e^{\textbf{T}_i\,t}\textbf{T}_{i,i_1}+\mathbf{x}^{(i)}_{i_1}(t)\textbf{T}_{i_1},$$
$$\frac{d}{dt}\mathbf{x}^{(i)}_{i_1,i_2}(t)=\mathbf{x}^{(i)}_{i_1}(t)\textbf{T}_{i_1,i_2}+\mathbf{x}^{(i)}_{i_1,i_2}(t)\textbf{T}_{i_2},$$ and
$$\frac{d}{dt}\mathbf{x}^{(i)}_{i_1,i_2,\cdots,i_l}(t)=\mathbf{x}^{(i)}_{i_1,i_2\cdots i_{l-1}}(t)\textbf{T}_{i_{l-1},i_l}+\mathbf{x}^{(i)}_{i_1,i_2,\cdots,i_l}(t)\textbf{T}_{i_l},$$
and the matrices $\textbf{x}$ satisfy the recursive equations
$$\textbf{x}_{i_1,i_2,\dots,i_{m-1},i_m}^{(i)}(t)=\int_{0}^t \textbf{x}_{i_1,i_2,\dots,i_{m-1}}^{(i)}(z)\textbf{T}_{i_{m-1}i_m}e^{\textbf{T}_{i_m}(t-z)}dz,$$
for $m=1,2,\dots,$ where $$\textbf{x}_{\{\}}^{(i)}(t)=e^{\textbf{T}_i\,t},$$
and all the $\textbf{x}$'s are zero matrices of proper dimension at $t=0$.\\
\end{theorem}
\noindent \textbf{Proof}: See Appendix 1.\\\\
To solve the differential equations above, we use the ode45 function in MATLAB (2018). In the following, we consider two examples of applications.\\
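As a sketch of how these recursions can be solved numerically (here with SciPy's solve\_ivp in place of MATLAB's ode45, and with made-up two-stage matrices in the spirit of Example 1 below), one can compute $P^{(1)}_t(0)=\hat{\boldsymbol{\alpha}}_1 e^{t\textbf{T}_1}\textbf{1}$ and integrate $\frac{d}{dt}\mathbf{x}^{(1)}_{2}(t)=e^{\textbf{T}_1 t}\,\textbf{T}_{1,2}+\mathbf{x}^{(1)}_{2}(t)\,\textbf{T}_2$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

lam0, lam1 = 0.5, 0.0115                       # illustrative rates
q1, q2 = np.array([0.01, 0.02]), np.array([0.005, 0.01])
T1 = np.array([[-(q1[0] + lam0 + lam1), lam0],
               [0.0, -(q1[1] + lam1)]])        # illness stage
T12 = lam1 * np.eye(2)                         # illness -> transplant rates
T2 = np.array([[-(q2[0] + lam0), lam0],
               [0.0, -q2[1]]])                 # transplant stage
alpha1 = np.array([1.0, 0.0])                  # start in the first state

def rhs(t, xvec):                              # dx/dt = e^{T1 t} T12 + x T2
    X = xvec.reshape(2, 2)
    return (expm(T1 * t) @ T12 + X @ T2).ravel()

t_end = 180.0                                  # days
sol = solve_ivp(rhs, (0.0, t_end), np.zeros(4), rtol=1e-8, atol=1e-10)
X = sol.y[:, -1].reshape(2, 2)
P0 = alpha1 @ expm(T1 * t_end) @ np.ones(2)    # P[N(t) = 0]
P1_alive = alpha1 @ X @ np.ones(2)             # one transition, still alive
print(P0, P1_alive)
\end{verbatim}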
\noindent \textbf{Example~1}: The Stanford Heart Transplantation Study\\
The Stanford heart transplant study began in October $1967$. The data set can be found in \cite{Kalb1980} (pp. 230-232) or in \cite{Crowley1977}; the available data cover the period until April 1, $1974$. Some patients died before an appropriate heart was found. Out of the $103$ patients, $69$ received a heart transplant, of whom 45 (65\%) died. Of the 34 patients without a transplant, the number of deaths was 30 (88\%). The remaining $28$ patients were alive at the end of the study and contributed censored survival times. For each individual, an indicator of the final vital status (censored or not), the survival times (time to transplant, time to death) from the entry of the patient into the study (in days), and a vector of covariates including age at acceptance (Age), year of acceptance (Year), previous surgery (Surgery: coded as $1$ = yes; $0$ = no), and transplant (Transplant: coded as $1$ = yes; $0$ = no) were recorded. The Transplant covariate is the only time-dependent covariate.\\
In this data structure, an individual's survival data are expressed by three variables: start, stop, and event. For the Stanford study, the time-dependent covariate ``transplant'' represents a treatment intervention. Individuals without a change in the time-dependent covariate are represented by only one line of data, whereas patients with a change must be represented by two lines: the first line represents the time period until the transplant, and the second represents the period from the transplant to the end of the follow-up or death. The remaining (time-fixed) covariates are the same in both lines. For each row, the variables start and stop mark the time interval (start, stop) of the data, while event is an indicator variable taking the value $1$ if there was a death at time stop, and 0 otherwise. As an example, consider the information available for four patients (from the Stanford study) with identification numbers $25$, $26$, $27$, and $28$ in Table \ref{tab4}. For the first two patients, the time from enrollment to censoring is $1800$ and $1401$ days, respectively, and the first patient had a heart transplant $25$ days after enrollment. The times from enrollment to death for the third and fourth patients are $263$ and $72$ days, respectively, and the last patient received a new heart on day $71$.\\
\begin{table}
\caption{Stanford heart transplantation data in Example 1.}
\label{tab4}
\begin{tabular}{lccccccc}
\hline
id &start &stop&event&transplant&age&year&surgery \\
\hline
25 &0 &25 &0 &0 &33.2238 &1.57426 &0\\
25 &25 &1800 &0 &1 &33.2238 &1.57426 &0\\
26 &0 &1401 &0 &0 &30.5353 &1.58248 &0\\
27 &0 &263 &1 &0 &8.7858 &1.59069 &0\\
28 &0 &71 &0 &0 &54.0233 &1.68378 &0\\
28 &71 &72 &1 &1 &54.0233 &1.68378 &0\\
\hline
\end{tabular}
\end{table}
The descriptive statistics of the data by Age (5-year intervals), Year (from 1967 to 1974, 1-year intervals), and Surgery (yes=1 or no=0) are presented in Tables \ref{tab4.1}, \ref{tab4.2}, and \ref{tab4.3}, respectively.
\begin{table}
\caption{The descriptive statistics of the data by age in Example 1.}
\label{tab4.1}
\begin{tabular}{l|cccccc}
\hline
Age & N & Trans. & Death &$\textrm{Death}_{\textrm{trans.}}$&\% $\textrm{Death}_{\textrm{without trans.}}$&\% $\textrm{Death}_{\textrm{after trans.}}$ \\
\hline
$<$25 &5 &2 &4 &1&100&50\\
25-30 &5 &4 &1 &1&0&25\\
30-35 &4 &2 &1 &0&50&0\\
35-40 &7 &4 &4 &1&100&25\\
40-45 &19&12&15&9&100&75\\
45-50 &29&22&19&13&86&59\\
50-55 &27&17&23&14&90&82\\
$>$55 &8 &6 &8 &6&100&100\\
\hline
sum &103 &69 &75 &45\\
\end{tabular}
\end{table}
In Table \ref{tab4.1}, the average age of the heart patients is 45.2 years. The proportion of transplanted patients is highest for the 45-50 age group, at 78\%. The death rate of transplanted patients ($\frac{\#\,\text{deaths of transplanted patients}}{\#\,\text{transplanted patients}}$) increases with age.\\
\begin{table}
\caption{The descriptive statistics of the data by year in Example 1.}
\label{tab4.2}
\begin{tabular}{l|cccccc}
\hline
Year & N & Trans. & Death &$\textrm{Death}_{trans.}$&\% $\textrm{Death}_{\textrm{without trans.}}$& \% $\textrm{Death}_{\textrm{after trans.}}$\\
\hline
1 &16 &7 &16 &7&100&100\\
2 &17 &11 &15 &10&83&91\\
3 &10 &8 &6 &4&100&50\\
4 &17 &11 &14 &8&100&73\\
5 &17 &12 &12 &8&100&67\\
6 &19 &16 &11 &8&100&50\\
$>$6 &7 &4 &1 &0&33&0\\
\hline
sum &103 &69 &75 &45\\
\end{tabular}
\end{table}
In Table \ref{tab4.2}, the number of heart patient enrollments is about the same for every year, but the proportion of transplanted patients is highest in the fifth year of the study, at 84\%. The mortality rate of transplanted patients has decreased over the years, so the influence of year of acceptance on the hazard is expected to be negative.\\
\begin{table}
\caption{The descriptive statistics of the data by surgery in Example 1.}
\label{tab4.3}
\begin{tabular}{l|cccccc}
\hline
surgery & N & Trans. & Death &$\textrm{Death}_{\textrm{trans.}}$&\% $\textrm{Death}_{\textrm{without trans.}}$&\% $\textrm{Death}_{\textrm{after trans.}}$\\
\hline
0 &87 &56 &66 &39&87&70\\
1 &16 &13 &9 &6&100&46\\
\hline
sum &103 &69 &75 &45\\
\end{tabular}
\end{table}
\noindent As can be seen in Table~\ref{tab4.3}, most patients have no history of surgery, and patients without surgery are more likely to die, at about 76\%. Heart patients with previous surgery are more likely to receive a heart transplant than those without: 81\% of patients with surgery versus 64\% of patients without surgery received a transplant. Therefore, the influence of previous surgery on the hazard is expected to be negative.\\
In this example, we use the illness-death model, which can be used to study the effect of binary time-dependent covariates, as shown in Figure \ref{graph4}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{heart}
\caption{Illness-death model with disease and transplant states.} \label{graph4}
\end{figure}
Heart patients are in the illness status when the disease is diagnosed. At any time thereafter, they may receive a transplant or die. As explained in Section 1, the rates of transition from the illness state depend on the duration of the disease; therefore the model must be a semi-Markov model.\\
We propose a model involving $2$ stages (Illness and Transplant), as shown in Figure \ref{graph4}. Within each stage, patients move from left to right in the figure as they age. After a heart transplant, patients move from a state in the illness stage to a state in the transplant stage. Of course, patients may die while in any state, as indicated by the arrows to the death state in Figure \ref{graph4}.\\
To describe how the probabilities are calculated, we define some notation for our model. For $t \geq 0$, let $J_t$ represent the status of a patient at time $t$. We use a finite-state continuous-time Markov process with $2n$ alive states: $n$ transient states for the disease stage, $n$ transient states for the transplant stage, and one absorbing state, death. Suppose that the state space is partitioned into the set $E_1$ of $n$ illness states, the set $E_2$ of $n$ transplant states, and $D$ for death.\\
Hence the sub-intensity matrix \eqref{T} will be reduced to \begin{equation}\label{matrix_T}
\mathbf{T}=
\begin{bmatrix}
\textbf{T}_1 & \textbf{T}_{12} \\
\textbf{0} & \textbf{T}_2
\end{bmatrix},
\end{equation}
where $\textbf{T}_1$ and $\textbf{T}_2$ are the sub-intensity matrices (with non-zero main and upper diagonal elements) of the Markov chains describing a sojourn in $E_1$ and $E_2$, respectively. The matrix $\mathbf{T}_{12}$ contains the transition rates from the illness stage to the transplant stage. Let $\boldsymbol{\alpha}$ be a $2n$-dimensional vector of initial state probabilities. \\
The elements of the proposed sub-intensity matrix $\textbf{T}$ are:
\begin{equation}
\textbf{T}=
\begin{bmatrix}
-(q^1_1+\lambda_0+\lambda_1)& \lambda_0 &\cdots& 0 & \lambda_1 & 0 & 0&\cdots \\
0 & -(q^1_2+\lambda_0+\lambda_1) & \lambda_0 &\cdots& 0 & \lambda_1 & 0&\cdots \\
\vdots & \vdots & \ddots &\vdots& \vdots & \vdots& \ddots & \vdots \\
0 & 0 &\cdots& -(q^1_n+\lambda_1)& 0 &0 &\cdots & \lambda_1\\
0 & 0 &\cdots& 0 & -(q^2_1+\lambda_0) & \lambda_0 & 0&\cdots\\
0 & 0 &\cdots& 0 & 0 & -(q^2_2+\lambda_0) & \lambda_0&\cdots\\
\vdots & \vdots & \ddots &\vdots& \vdots & \vdots& \ddots & \vdots \\
0 & 0 &\cdots& 0 & 0 & 0 &\cdots& -q^2_n\\
\end{bmatrix},
\end{equation}
where $\lambda_0$ is the rate of transition from one state to the next and $\lambda_1$ is the rate of transplant from the disease stage.
The mortality rates for the disease and transplant stages are denoted by $\mathbf{q}^1_i$ and $\mathbf{q}^2_i$, respectively:
\begin{eqnarray} \label{qi}
\mathbf{q}^1_i&=&a+b+q\times i^{(p+\gamma_1age+\gamma_2year+\gamma_3surgery)} \nonumber \\
\mathbf{q}^2_i&=&a +q\times i^{(p+\gamma_1age+\gamma_2year+\gamma_3surgery)}
\end{eqnarray}
for $i=1,2,\dots,n$, where the constant $a$ is interpreted as a background rate giving a general reflection of the living environment, $q$ is a scale parameter, and $p$ is a measure of the relative impact of aging. The descriptive statistics of the heart data show that the death rate of heart patients is higher before the transplant than after it, so we add the parameter $b$ to the elements of $\mathbf{q}^1_i$. The three regression coefficients $\gamma_1, \gamma_2$, and $\gamma_3$ are included in the model to represent the Age, Year, and Surgery effects.\\
It is also assumed that patients at acceptance are in the first phase of the model, that is, $\boldsymbol{\alpha}=(1,0,..,0)^\prime$.\\
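A small sketch of evaluating the rates in \eqref{qi} for given covariates follows (the helper name is ours; the values plugged in below are the fitted estimates of Table \ref{tab5} for an illustrative patient):
\begin{verbatim}
import numpy as np

def q_rates(n, a, b, q, p, g1, g2, g3, age, year, surgery, illness=True):
    """Stage-specific death rates q^1_i (illness) or q^2_i (transplant)."""
    i = np.arange(1, n + 1)
    expo = p + g1 * age + g2 * year + g3 * surgery
    rates = a + q * i ** expo
    return rates + b if illness else rates

# e.g. n = 3, age 50, year 3, no previous surgery
print(q_rates(3, 4.9e-4, 0.0034, 6.4e-8, 9.3, 0.098, -0.02, -0.92, 50, 3, 0))
\end{verbatim}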
We estimate the parameters by the maximum likelihood method. In the heart transplant data, there are four possible situations for a patient: he stays in the disease stage (stage 1) until the end of the study period, dies before the transplant, dies after the transplant, or is alive after the transplant (stage 2) until the end of the follow-up. Since the Markov process has only a single absorbing state, the time of death (the time to absorption) follows a PH distribution. To estimate the parameters of the model, using the phase-type representation ($\boldsymbol{\alpha},\mathbf{T}$) and the associated survival and density functions as in \eqref{PHSUR}, we can construct the contributions to the likelihood function as follows.\\
1. The probability of staying in disease state for a period of time $t$:
\begin{equation}\label{prob_1}
f_1=Pr[J_{u}\in E_1~~\forall u\in [0,t]|J_0 \in E_1]=\boldsymbol{\hat{\alpha}}_1^\prime e^{t\textbf{T}_1}\textbf{1}.
\end{equation}
2. The probability of staying in the disease state for $s$ time units and then staying in the transplant state until time $t$:
\begin{equation}\label{prob_2}
f_{12}= Pr[\underset{\forall u\in [0,s]}{J_{u}\in E_1},\underset{\forall v\in [0,t-s]}{J_{v+s}\in E_2}|J_0 \in E_1]= \boldsymbol{\hat{\alpha}}_1^\prime \, e^{\textbf{T}_1\,s}\textbf{T}_{12} e^{\textbf{T}_2 (t-s)}\textbf{1}.
\end{equation}
3. The probability of death in disease state after $t$ time units is:
\begin{equation}\label{prob_3}
f_{10}=Pr[\underset{\forall u\in [0,t)}{J_{u}\in E_1},J_{t}\in D|J_0 \in E_1 ]=\boldsymbol{\hat{\alpha}}_1^\prime e^{\textbf{T}_1\,t}\textbf{t}_{10}.
\end{equation}
where $\textbf{t}_{10}=-\textbf{T}_1 \textbf{1}$.\\
4. The probability of death at time $t$ in the transplant state, after staying in the disease state for $s$ time units:
\begin{equation}\label{prob_4}
\begin{split}
f_{20}&=Pr[ \underset{\forall u\in [0,s]}{J_{u}\in E_1}\,\underset{\forall \nu \in [0,t-s]}{J_{\nu+s}\in E_2},J_{t}\in D|J_0 \in E_1]\\&=\boldsymbol{\hat{\alpha}}_1^\prime\,e^{s\textbf{T}_1}\textbf{T}_{12}e^{\textbf{T}_2(t-s)}\textbf{t}_{20}
\end{split}
\end{equation}
where $\mathbf{t}_{20}=-\textbf{T}_{2} \textbf{1}$.
We estimate the parameters in \eqref{qi} by maximizing the likelihood function $$L(\theta)=f_{1}^{n_1}f_{12}^{n_2}f_{10}^{n_3}f_{20}^{n_{4}},$$
where $n_k$, $k=1,2,3,4$, represents the number of patients in each of the scenarios described in \eqref{prob_1}-\eqref{prob_4}, and $\theta=(a,~b,~q,~p,~\lambda_0,~\lambda_1,~\gamma_1,~\gamma_2,~\gamma_3)$.\\
The fmincon function in MATLAB has been used to find the maximum likelihood estimates.
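The likelihood contributions \eqref{prob_1}--\eqref{prob_4} translate directly into code; the sketch below follows the expressions as stated in the text (with SciPy's optimizer as a stand-in for fmincon, an assumption on our part), and the resulting pieces can be summed into $-\log L(\theta)$ for numerical minimization:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize   # stand-in for MATLAB's fmincon

def likelihood_pieces(T1, T12, T2, a1):
    ones = np.ones(T1.shape[0])
    t10 = -T1 @ ones                  # exit vector of T_1, as in the text
    t20 = -T2 @ ones                  # exit vector of T_2
    f1 = lambda t: a1 @ expm(T1 * t) @ ones           # ill through [0, t]
    f12 = lambda s, t: (a1 @ expm(T1 * s) @ T12
                        @ expm(T2 * (t - s)) @ ones)  # transplant at s, alive at t
    f10 = lambda t: a1 @ expm(T1 * t) @ t10           # dies while ill at t
    f20 = lambda s, t: (a1 @ expm(T1 * s) @ T12
                        @ expm(T2 * (t - s)) @ t20)   # dies at t after transplant
    return f1, f12, f10, f20

# The negative log-likelihood sums -log f over the observed records and can
# be passed to scipy.optimize.minimize with bounds on the parameters.
\end{verbatim}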
We estimated the parameters for different values of $n$. Based on the results shown in Table \ref{tab4.5}, $n=3$ was chosen. Note that our criterion is maximizing $l(\theta)=\log L(\theta)$. \\
\begin{table}
\caption{Parameters values for different $n$ values.}
\label{tab4.5}
\begin{tabular}{lcccccccccc}
\hline
$n$& $a$ & $q$ & $p$&$\lambda_0$&$\lambda_1$& $b$&$\gamma_1$&$\gamma_2$&$\gamma_3$&$l(\theta)$\\
\hline
1 & \small{1.7e-03}&\small{5.0e-06} & \small{7.5}& \small{1.50}& \small{0.0116}& \small{0.0033}& \small{1.0} &\small{-0.25} &\small{-0.54 }& \small{-896.48}\\
2 & \small{1.7e-03}&\small{6.7e-08}& \small{3.0}& \small{0.50}& \small{0.0116}& \small{0.0034}& \small{0.37} &\small{-0.17} &\small{-0.59} &\small{ -896.45}\\
3 & \small{4.9e-04}& \small{6.4e-08} & \small{9.3}&\small{0.50}& \small{0.0115}& \small{0.0034}& \small{0.098} &\small{-0.02} &\small{-0.92} & \small{-885.17}\\
4 & \small{7.7e-04}& \small{2.7e-08} & \small{7.7}& \small{0.49}& \small{0.0113}& \small{0.0034}& \small{0.104} &\small{-0.003} &\small{-0.98} &\small{ -885.88}\\
5 & \small{7.7e-04}& \small{1.2e-08} & \small{7.1} &\small{0.49}& \small{0.0115}& \small{0.0036}& \small{0.094} &\small{-1.2e-07} &\small{-0.98} & \small{-886.25}\\
6 & \small{8.4e-04}&\small{8.7e-09} & \small{6.5} & \small{0.49}& \small{0.0116}& \small{0.0037}& \small{0.089} &\small{-0.015} &\small{-0.87} & \small{-886.78}\\
8 & \small{1.0e-03}& \small{5.9e-10} & \small{6.7}& \small{0.30}& \small{0.0116}& \small{0.0037}& \small{0.094 }&\small{-2.1e-08} &\small{-0.86 }&\small{ -888.37}\\
10& \small{8.9e-04}& \small{5.1e-09 }& \small{5.2} & \small{0.48}&\small{0.0119}& \small{0.0038}& \small{0.077 }&\small{-1.3e-08} &\small{-0.75} & \small{-887.63}\\
20& \small{1.7e-03}& \small{7.5e-09 }& \small{2.2} &\small{0.008}&\small{0.0116}& \small{0.0034}& \small{0.49 }&\small{-0.47} &\small{-0.60} & \small{-895.97}\\
\hline
\end{tabular}
\end{table}
The parameter values, the standard deviations, and the confidence intervals at the 95\% level are shown in Table \ref{tab5}.\\
In the fitted model, the influence of Age at acceptance on the hazard is positive, while the effects of Year and Surgery are both negative, as expected.\\
The standard deviation of each parameter is obtained by the bootstrap technique. The main benefit of the bootstrap is that it allows one to construct confidence intervals for the parameters without making unreasonable assumptions. It creates multiple resamples (with replacement) from a single set of observations and computes the quantity of interest on each resample; the bootstrap replicates can then be used to determine the $95\%$ confidence interval. See \cite{EfTib1993} for more on bootstrap techniques.\\
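A generic sketch of the resampling loop follows (the fit\_mle callable is a hypothetical stand-in for the maximum likelihood fit described above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(records, fit_mle, B=500):
    n = len(records)
    estimates = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        estimates.append(fit_mle([records[i] for i in idx]))
    est = np.asarray(estimates)
    se = est.std(axis=0, ddof=1)                  # bootstrap standard errors
    ci = np.percentile(est, [2.5, 97.5], axis=0)  # 95% percentile intervals
    return se, ci
\end{verbatim}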
\begin{table}
\small
\caption{Parameter values with standard deviations and $95\%$ confidence intervals.}
\label{tab5}
\begin{tabular}{lcccccccccc}
\hline
Param.& a & q & p&$\lambda_0$&$\lambda_1$& b&$\gamma_1$&$\gamma_2$&$\gamma_3$&$sup_\theta l(\theta)$\\
\hline
Estimate&\small{4.9e-04}& \small{6.4e-08} & \small{9.3}&\small{0.50}& \small{0.0115}& \small{0.0034}& \small{0.098} &\small{-0.02} &\small{-0.92} & \small{-885.17}\\
Std. & \small{5.3e-04}&\small{2.9e-06} &\small{2.36}&\small{0.34} &\small{ 0.0036}&\small{0.0018}&\small{0.49}&\small{0.37} &\small{0.22}& \\
\small{(Lower,~~)}&\small{1.9e-04}&\small{9.2e-14}&\small{1.5e-08}&\small{4.8e-04}&\small{0.007}&\small{4.9e-04}&\small{0.055}&\small{-0.50}&\small{-0.98}&\\
\small{(~~,Upper)} &\small{0.0025} &\small{1.0e-05}&\small{9.7} &\small{1.25} &\small{0.021} &\small{0.0077} &\small{1.77}&\small{-1.8e-08}&\small{-0.0014}&\\
\hline
\end{tabular}
\end{table}
We were interested in the significance of the covariates and of the parameter $b$ in the model. To this end, we conducted hypothesis tests for several null hypotheses; the results are presented in Table \ref{tab6}.\\
\begin{table}
\caption{Parameter estimates for the reduced models}
\label{tab6}
\begin{tabular}{lcccccccccc}
\hline
Parameter & a & q & p&$\lambda_0$&$\lambda_1$& b&$\gamma_1$&$\gamma_2$&$\gamma_3$&$sup_\theta l(\theta)$\\
\hline
\small{without $\gamma_1$,$\gamma_2$,$\gamma_3$}& \small{0.0017}&\small{4.3e-09}&\small{3.01}&\small{0.50}&\small{0.0116}&\small{0.0033}& -& -& - & \small{-896.48} \\
\small{without $\gamma_1$}&\small{0.0013}&\small{9.9e-06}&\small{5.1}&\small{0.50}&\small{0.0114}&\small{0.0029}&-&\small{-0.50}&\small{-0.96}&\small{-895.54}\\
\small{without $\gamma_2$}&\small{6.1e-04}&\small{1.2e-08}&\small{10.4}&\small{0.21}&\small{0.0114}&\small{0.0036}&\small{0.14}&-&\small{-0.98}&\small{-885.02}\\
\small{without $\gamma_3$}&\small{5.0e-04}&\small{3.5e-07}&\small{7.4}&\small{0.50}&\small{0.0116}&\small{0.0036}&\small{0.13}&\small{-3.7e-04}&-&\small{-888.01}\\
\small{without b}&\small{0.0022} &\small{2.1e-09} &\small{0.39} & \small{0.0035}& \small{0.0112 }& -&\small{1.71}&\small{-0.077} &\small{ -0.52} & \small{-904.7}\\
\hline
\end{tabular}
\end{table}
\newline
The likelihood ratio test ($LRT$) has been used for the testing. For example, for the null hypothesis $H_0:\forall i,\ \gamma_i=0$ against the alternative $H_1: \exists i,\ \gamma_i\neq 0$, the statistic is
\begin{equation}
LRT=-2\log\Big(\frac{\sup_{\Theta_0} L(\theta)}{\sup_{\Theta} L(\theta)}\Big)=-2\big(\sup_{\Theta_0} l(\theta)-\sup_{\Theta} l(\theta)\big)=-2(-896.48+885.17)=22.62,
\end{equation}
which is greater than $\chi^2_{0.95,3}=7.8$, and therefore the null hypothesis is rejected. Here $\Theta$ is the full parameter space and $\Theta_0$ is the parameter space under the null hypothesis. As seen in Table \ref{tab6}, the values of $LRT$ for the model without each variable separately can be obtained in the same way. \\
For testing the null hypothesis $H_0:b=0$ against the alternative $H_1: b\neq 0$, the $LRT$ statistic is $-2(-904.7+885.17)=39.06$; comparing with $\chi^2_{0.95,1}=3.8$, we reject $H_0$. Hence, it can be said that mortality before a heart transplant is greater than after the transplant.\\
Now using the value of parameters, the probability of the number of transition until time $t$ can be calculated. These probabilities are shown in Table \ref{tab7} for different Age ( $30$ and $50$), Year (years $3$ and $5$) and Surgery ($0$ and $1$). As we can seen in this table, the probability without transition decreases as $t$ increases because the patient may die or transplant after more time of staying in disease stage. The event of no transition decreases with age, increases with year and surgery. \\
As we said before, mortality increases with age, so the probability of staying in the disease stage is reduced for older patients, who are more likely to die. It is therefore expected that the probability of one transition increases with age. Mortality decreases with year and surgery, so the probability of no transition increases as year and surgery increase. The probability of one transition increases until 6 months but decreases thereafter, because the probability of transplant decreases in the later months while the probability of death increases. The probability of two transitions increases steadily; this probability increases with age, because of its positive influence on death, and decreases with year and surgery, because of their negative influence on death.\\
\begin{table}
\caption{Probability of N(t) for different Age, Year and Surgery.}
\label{tab7}
\begin{tabular}{l|cccccccc}
\hline
Age&Year&Surgery &P[N(t)] &1 month & 3 months & 6 months & 1 year & 3 years \\
\hline
\multirow{12}{*}{30}&\multirow{6}{*}{3} &\multirow{3}{*}{0}&P[N(t)=0] & 0.6254 & 0.2441 & 0.0595 & 0.0033 & 3.5e-08\\
&&&P[N(t)=1] & 0.3714 & 0.7338 & 0.8786 & 0.8505 & 0.6082\\
& &&P[N(t)=2] & 0.0032 & 0.0221 & 0.0619 & 0.1462 & 0.3918\\\cline{4-9}
&&\multirow{3}{*}{1}&P[N(t)=0] & 0.6279 & 0.2474 & 0.0612 & 0.0035 & 4.1e-08\\
& &&P[N(t)=1] & 0.3695 & 0.7350 & 0.8892 & 0.8776 & 0.6650\\
& &&P[N(t)=2] & 0.0026 & 0.0176 & 0.0496 & 0.1190 & 0.3350\\\cline{2-9}
& \multirow{6}{*}{5}&\multirow{3}{*}{0}&P[N(t)=0] & 0.6255 & 0.2443 & 0.0596 & 0.0033 & 3.5e-08\\
&&&P[N(t)=1] & 0.3713 & 0.7339 & 0.8793 & 0.8523 & 0.6117\\
&&&P[N(t)=2] & 0.0032 & 0.0218 & 0.0611 & 0.1444 & 0.3883\\\cline{4-9}
& &\multirow{3}{*}{1}&P[N(t)=0] & 0.6280 & 0.2475 & 0.0612 & 0.0035 & 4.2e-08\\
&&&P[N(t)=1] & 0.3695 & 0.7350 & 0.8894 & 0.8783 & 0.6666\\
&&&P[N(t)=2] & 0.0026 & 0.0175 & 0.0493 & 0.1183 & 0.3334\\
\hline
\multirow{12}{*}{50}&\multirow{6}{*}{3} &\multirow{3}{*}{0}&P[N(t)=0] & 0.5955 & 0.2077 & 0.0428 & 0.0017 & 4.5e-09\\
&&&P[N(t)=1] & 0.3935 & 0.7215 & 0.7764 & 0.6360 & 0.3831\\
&&&P[N(t)=2] & 0.0110 & 0.0709 & 0.1808 & 0.3624 & 0.6169\\\cline{4-9}
&&\multirow{3}{*}{1}&P[N(t)=0] & 0.6168 & 0.2332 & 0.0542 & 0.0027 & 2.0e-08\\
& &&P[N(t)=1] & 0.3777 & 0.7299 & 0.8453 & 0.7712 & 0.4802\\
& &&P[N(t)=2] & 0.0055 & 0.0369 & 0.1005 & 0.2261 & 0.5198\\\cline{2-9}
& \multirow{6}{*}{5}&\multirow{3}{*}{0}&P[N(t)=0] & 0.5969 & 0.2093 & 0.0434 & 0.0017 & 5.0e-09\\
&&&P[N(t)=1] & 0.3925 & 0.7220 & 0.7804 & 0.6426 & 0.3844\\
&&&P[N(t)=2] & 0.0106 & 0.0688 & 0.1762 & 0.3557 & 0.6156\\\cline{4-9}
& &\multirow{3}{*}{1}&P[N(t)=0] & 0.6173 & 0.2339 & 0.0545 & 0.0027 & 2.0e-08\\
&&&P[N(t)=1] & 0.3774 & 0.7301 & 0.8472 & 0.7756 & 0.4859\\
&&&P[N(t)=2] & 0.0053 & 0.0360 & 0.0982 & 0.2217 & 0.5141\\
\hline
\end{tabular}
\end{table}
Given $J_u \in E_i$, the time of staying continuously in stage $i$ of the disease has a PH distribution with representation $(\boldsymbol{\hat{\alpha}}^u_i, \textbf{T}_i)$, where
$$\boldsymbol{\hat{\alpha}}^u_i=\dfrac{\boldsymbol{\hat{\alpha}}^\prime\,e^{\textbf{T}u}\textbf{I}_{E\,E_i}}{\boldsymbol{\hat{\alpha}}^\prime\,e^{\textbf{T}u}\textbf{1}_i},$$
$\textbf{I}_{E\,E_c}$, $c=1,\dots,k$, is an $nk \times n$ matrix whose $(n(c-1)+l,\,l)$ entries, $l=1,\dots,n$, equal 1, all other entries being 0, and $\textbf{1}_c$ is a column vector with $nk$ entries such that the elements $n(c-1)+l$, $l=1,\dots, n$, equal $1$ and all other entries are $0$.
\noindent Hence, given that the process $J$ is in the $i^{\textrm{th}}$ stage at time $u$, the expected time of continuously staying in this stage during the time interval $[0,t]$ equals:
\begin{equation}\label{EN0}
\begin{split}
E[\int_{0}^{t}\mathbf{1}_{\{J_\nu \in E_i,\,\forall \nu \in [0,\omega] | J_u\in E_i\}}d\omega]&=\int_{0}^{t}Pr[J_\nu\in E_i,\,\forall \nu \in [0,\omega] | J_u\in E_i]\, d\omega\\
&=\int_{0}^{t} \hat{\boldsymbol{\alpha}}^{u}_i\,e^{\textbf{T}_i\,\omega}\textbf{1}_i\, d\omega\\
&=\boldsymbol{\hat{\alpha}}^{u}_i\,\textbf{T}_i^{-1}(e^{\textbf{T}_i\,t}-\textbf{I})\textbf{1}.
\end{split}
\end{equation}
The integrand in \eqref{EN0} can be interpreted as the survival function of a PH distribution with representation of $(\boldsymbol{\hat{\alpha}}^u_i, \textbf{T}_i)$.
Given a start in the illness stage (stage 1), the expected times without transition for $u=0$ are shown in Table \ref{tab8} for different ages.
It is expected that a 30-year-old patient spends 24 days in the disease stage during one month. This expected time decreases as age increases, because of transitions to the death state.\\
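The closed form \eqref{EN0} is easy to evaluate through the matrix exponential. The sketch below (Python with numpy/scipy; the $2\times 2$ sub-generator shown is illustrative, not the fitted one) computes the expected sojourn time for several horizons:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative sub-generator T_i and initial vector (hypothetical values).
Ti = np.array([[-0.8,  0.5],
               [ 0.1, -0.6]])
alpha_i = np.array([1.0, 0.0])        # hat{alpha}_i^u, here for u = 0

def expected_sojourn(alpha, T, t):
    # alpha T^{-1} (exp(T t) - I) 1, the closed form above
    n = T.shape[0]
    M = np.linalg.solve(T, expm(T * t) - np.eye(n))
    return alpha @ M @ np.ones(n)

for t in (1.0, 3.0, 6.0):             # e.g. months
    print(t, expected_sojourn(alpha_i, Ti, t))
\end{verbatim}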
\begin{table}
\caption{Expected sojourn time (days) without transition}
\label{tab8}
\begin{tabular}{l|ccccc}
\hline
& \multicolumn{5}{c}{duration} \\
Age & 1 month & 3 months & 6 months & 1 year & 3 years\\
\hline
30 & 24 & 48.3 & 60.1 & 63.6 & 63.9\\
40 & 23.9 & 47.6 & 58.7 & 61.9 & 62.1\\
50 & 23.5 & 45.6 & 55.0 & 57.3 & 57.4\\
60 & 22.5 & 40.5& 46.3 & 47.3 & 47.3\\
\hline
\end{tabular}
\end{table}
Finally, the probability of transition from stage $i$ to stage $j$ in a period of time $t$ can be calculated easily as follows
\begin{equation}\label{transfromi2j}
Pr[J_{u+t} \in E_j| J_u \in E_i]=\dfrac{\hat{\boldsymbol{\alpha}}^\prime e^{\textbf{T}u}\textbf{I}_{E\,E_i}\textbf{I}^\prime_{E\,E_i}e^{\textbf{T}t}\textbf{1}_j }{\hat{\boldsymbol{\alpha}}^\prime e^{\textbf{T}u}\textbf{1}_i}.
\end{equation}
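A direct numerical transcription of \eqref{transfromi2j}, with the selector matrices built as defined above, could read as follows (a sketch; the inputs $\boldsymbol{\alpha}$ and $\textbf{T}$ are assumed to be given):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def stage_transition_prob(alpha, T, n, i, j, u, t):
    """Pr[J_{u+t} in E_j | J_u in E_i]; stages i, j are 1-based,
    each stage consisting of n states."""
    N = T.shape[0]
    I_i = np.zeros((N, n))               # selector matrix I_{E E_i}
    I_i[n*(i-1):n*i, :] = np.eye(n)
    one_j = np.zeros(N)                  # block indicator 1_j
    one_j[n*(j-1):n*j] = 1.0
    row = alpha @ expm(T * u)
    num = row @ I_i @ I_i.T @ expm(T * t) @ one_j
    den = row[n*(i-1):n*i].sum()         # alpha' e^{Tu} 1_i
    return num / den
\end{verbatim}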
For Age$=30$, Year$=3$, Surgery$=0$ and $u=0$, the probabilities \eqref{transfromi2j} are shown in Table \ref{tab9}. The probability of transition from the disease stage to transplant is greater than that to death. Mortality from the disease and transplant stages increases over time, but mortality before transplant is greater than after.
\begin{table}
\caption{Probability of transition from stage $i$ to stage $j$ for Age=30, Year=3, Surgery=0.}
\label{tab9}\label{i2j}
\begin{tabular}{lcccccc}
\hline
from & to & 1 month & 3 months & 6 months & 1 year & 3 years \\
\hline
disease& transplant & 0.2726 & 0.5338 & 0.6296 & 0.5866 & 0.3434\\
disease& death & 0.0988 & 0.2000 & 0.2490 & 0.2640 & 0.2648\\
transplant& death & 0.0032 & 0.0221 & 0.0619 & 0.1462 & 0.3918\\
\hline
\end{tabular}
\end{table}
\newpage
\noindent \textbf{Example~2}: Cancer Disease\\
After someone is diagnosed with cancer, doctors will try to figure out if it has spread, and if so, how far. This process is called staging. The stage of a cancer describes how much cancer is in the body. It helps determine how serious the cancer is and how best to treat it. Cancer staging may sometimes include the grading of the cancer. This describes how similar a cancer cell is to a normal cell.
Doctors also use a cancer's stage when talking about survival statistics. The earliest stage of cancer is called stage 0 (carcinoma in situ), and the stages then range from I (1) through IV (4). As a rule, the lower the number, the less the cancer has spread. A higher number, such as stage IV, means the cancer has spread more. Although each person's cancer experience is unique, cancers with similar stages tend to have a similar outlook and are often treated in much the same way. When cancer returns after a period of remission, it is considered a recurrence. A cancer recurrence happens because, in spite of the best efforts to get rid of the cancer, some cancer cells remain. These cells could be in the same place where the cancer first originated, or they could be in another part of the body. These cancer cells may have been dormant for a period of time, but eventually they continued to multiply, resulting in the reappearance of the cancer.\\
Also, by the end of the observation period the patient under study may not have reached an absorbing state. In survival analysis this corresponds to the individual still being alive by the end of the study, and this kind of incomplete observation is known as right-censoring.\\
To illustrate the computation of the transition probabilities, we use hypothetical transition rates between stages. In the following, we show how the PH model can be used in the modeling of recurrent events in cancers.\\
The model can be considered a development of \cite{HassanZadehetalldis}, in which the number of disease stages (called disability stages in \cite{HassanZadehetalldis}) is increased from 2 to 5. \\
We assume $5$ disease stages and one recovery stage. Each disease stage, as well as the recovery stage, includes 5 states, \textit{i.e.} $k=6$ and $n=5$. Stage $1$ corresponds to recovery (R) and stages $2,3,\dots,6$ represent cancer stages $0,1,\dots,4$, respectively. The entries of the sub-generator matrix $\textbf{T}=(t_{ij}) \,\, i,j=1,...,30$ are described in the following. \\
The rates of transition from one state to the next are given by
$$t_{i,i+1}=\lambda, \,\,\, i=1,2,...,29, i\neq 5,10,15,20,25,$$
where $\lambda$ is assumed to be $0.2$.\\
The rates of recovery from disease stage, defined based on the degree of malignancy of the disease, are
$$t_{i+5l,i-5}=\gamma(0.1)^l, ~~~ i=6,7,...,10, ~l=0,1,...,4,$$
where $\gamma$ is the coefficient of recovery; we assume $\gamma=0.1$;\\
and the rates of transition from one stage of disease to the stage before, defined based on the degree of malignancy of the disease, are given by
$$t_{i+5l,i+5(l-1)}=\beta(0.1)^{l},~~~ i=11,12,...,15, ~l=0,...,3$$
where $\beta$ is assumed to be $0.2$.\\
The rates of transition from one stage of disease to the next are
$$t_{i+5l,i+5(l+1)}=(l+1)(a+qi^p),~~~i=1,2,...,5,~l=1,...,4$$
with $a=10^{-3}$, $q=10^{-6}$ and $p=4.5$.\\
The rates of transition from recovery stage to each of the disease stages are given by
$$t_{i,i+5(l+1)}=(0.1)^l(a+qi^p),~~~i=1,2,...,5,~l=0,1,...,4,$$
whereas the mortality rates are given by
$$\mathbf{t}^\prime_0=(t_{1,0},t_{2,0},...,t_{30,0})$$
where
$$t_{i,0}=(0.1)^5+a+qi^p,~~~i=1,...,5$$
and for $l=0,1,...,4$
$$t_{i+5(l+1),0}=(0.1)^{4-l}+a+qi^p,~~~i=1,...,5.$$
All other entries of $\textbf{T}$ are zero, except the diagonal entries, which are:
$$t_{i,i}=-\sum_{j=0,j\neq i}^{30}t_{i,j}, ~~~ i=1,2,...,30.$$
Finally, the initial probability vector is $\boldsymbol{\alpha}'=(\boldsymbol{\alpha}^\prime_1,\boldsymbol{\alpha}^\prime_2,...,\boldsymbol{\alpha}^\prime_6)$, where the $\boldsymbol{\alpha}_i$'s are $5$-dimensional column vectors. After the diagnosis of cancer the initial stage, say the $i^{\textrm{th}}$, is known, so $\boldsymbol{\alpha}^\prime_i$ is $(1,0,0,0,0)$ and all other elements of $\boldsymbol{\alpha}$ are zero.\\
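To make the construction explicit, the sub-generator of this example can be assembled directly from the rates above. The following sketch (Python; 1-based state labels mapped to 0-based array indices) is one possible transcription:
\begin{verbatim}
import numpy as np

n, k = 5, 6                        # states per stage, number of stages
N = n * k                          # 30 transient states
lam, gamma, beta = 0.2, 0.1, 0.2
a, q, p = 1e-3, 1e-6, 4.5

T = np.zeros((N, N))
t0 = np.zeros(N)                   # mortality (exit) rates

def ix(s):                         # 1-based state label -> 0-based index
    return s - 1

for i in range(1, N):              # ageing: t_{i,i+1} = lambda
    if i % n != 0:
        T[ix(i), ix(i + 1)] = lam
for l in range(5):                 # recovery: t_{i+5l, i-5}
    for i in range(6, 11):
        T[ix(i + 5*l), ix(i - 5)] = gamma * 0.1**l
for l in range(4):                 # one stage back: t_{i+5l, i+5(l-1)}
    for i in range(11, 16):
        T[ix(i + 5*l), ix(i + 5*(l - 1))] = beta * 0.1**l
for l in range(1, 5):              # one stage forward
    for i in range(1, 6):
        T[ix(i + 5*l), ix(i + 5*(l + 1))] = (l + 1) * (a + q * i**p)
for l in range(5):                 # recovery -> each disease stage
    for i in range(1, 6):
        T[ix(i), ix(i + 5*(l + 1))] = 0.1**l * (a + q * i**p)
for i in range(1, 6):              # mortality rates t_{., 0}
    t0[ix(i)] = 0.1**5 + a + q * i**p
    for l in range(5):
        t0[ix(i + 5*(l + 1))] = 0.1**(4 - l) + a + q * i**p

# diagonal entries make each row of (T | t0) sum to zero
np.fill_diagonal(T, -(T.sum(axis=1) + t0))
\end{verbatim}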
\begin{table}
\caption{Probability of N(t) in Example 2.}
\label{table1}
\begin{tabular}{lccccc}
\hline
& \multicolumn{5}{c}{duration} \\
& input stage & 6 months & 12 months & 24 months & 36 months \\
\hline
P(N(t)=0) & & & & & \\
&0 &0.5382&0.2885&0.0814&0.0226\\
&1 &0.2750&0.0752&0.0055&0.0004\\
&2 &0.8046&0.6429&0.3981&0.2403\\
&3 &0.5219&0.2701&0.0698&0.0175\\
&4 &0.0025&6.04e-6&3.62e-11&2.16e-16\\
\hline
P(N(t)=1) & & & & & \\
&0 & 0.4541 &0.6880& 0.8504 & 0.8576\\
&1 & 0.5199 &0.4422 & 0.1960 & 0.0952\\
&2 & 0.1406 &0.2061 & 0.2694 & 0.2959\\
&3 & 0.4573 &0.6915 &0.8703 & 0.9132\\
&4 & 0.9975 & 0.9999& 0.9998 & 0.9998\\
\hline
P(N(t)=2) & & & & & \\
&0 & 0.0088 &0.0204 & 0.0440 &0.0669 \\
&1 & 0.2875 &0.4614 & 0.3268 & 0.1523\\
&2 & 0.0639 &0.1173 & 0.1483 & 0.1552\\
&3 & 0.0208 &0.0377 & 0.0559 & 0.0622\\
&4 & 7.94e-5 & 1.31e-4& 1.71e-4 & 1.81e-4\\
\hline
\end{tabular}
\end{table}
\noindent Table \ref{table1} shows the probability of $N(t)$, \eqref{PN0}--\eqref{PN2}, for different input stages and durations.
These probabilities behave as expected. Upon entering any stage, the probability of no transition decreases as the duration increases.
This means that the probability of staying in a given stage is higher at earlier times and decreases over time.
For example, in stage $4$ this probability is near zero at all times, because the sojourn in this stage is very short and the patient has a high chance of dying.
Accordingly, the probability of one transition until $t$ is near one, because of the transition to the death state.
As seen in Table \ref{table1}, the probability of staying in stage $2$ is the maximum compared to the other stages, due to the fact
that the chance of recovery is not as high as in stage 1 and the rate of death is not as high as in stage 3.\\
The expected time of staying in the stage at diagnosis, \eqref{EN0}, is shown in Table \ref{table2}. When a patient is diagnosed with stage $0$ cancer,
he or she is expected to stay in that stage for 4.5 months during the first six months. If the initial stage is 2, this value is 5.4 months. As noted before, in this stage the expected sojourn time is greater
than in the others, because of the low chance of recovery compared to the previous stage and the low chance of death compared to the next stage. A patient in stage 4 is expected to survive for only about 1 month.\\
\begin{table}
\caption{Expected sojourn time in first stage until $t$.}
\label{table2}
\begin{tabular}{lcccc}
\hline
& \multicolumn{4}{c}{duration}\\
input stage & 6 months & 12 months & 24 months & 36 months \\
\hline
0 &4.5 &6.9 &8.8 &9.4\\
1 &3.4 &4.3 &4.6 &4.6\\
2 &5.4 &9.7 &15.9 &19.6\\
3 &4.4 &6.7 &8.5 &9.0\\
4 &1 &1 &1 &1\\
\hline
\end{tabular}
\end{table}
Also, we can obtain the probability of one transition from stage $i$ to stage $j$, $P[N_{ij}(t)=1]$, by using Equation \eqref{transfromi2j}.
\begin{table}
\caption{Probability of one transition between stages.}
\label{tab3}
\begin{tabular}{lcccccccc}
\hline
stage &waiting time &$R$&$0$&$1$&$2$&$3$&$4$&$D$ \\
\hline
0 & 6 months& 0.4440 & 0 & 0.0050 & 0 & 0 & 0 & 0.0051 \\
&12 months& 0.6750 & 0 & 0.0046 & 0 & 0 & 0 & 0.0084 \\
1 & 6 months& 0.0334 & 0.4704 & 0 & 0.0092 & 0 & 0 & 0.0069 \\
&12 months& 0.0420 & 0.3809 & 0 & 0.0104 & 0 & 0 & 0.0089 \\
2 & 6 months& 0.0054 & 0 & 0.0592 & 0 & 0.0165 & 0 & 0.0596 \\
&12 months& 0.0096 & 0 & 0.0635 & 0 & 0.0247 & 0 & 0.1084 \\
3 & 6 months& 4.38e-04 & 0 & 0 & 0.0078 & 0 & 0.0032 & 0.4459 \\
&12 months& 6.58e-04 & 0 & 0 & 0.0103 & 0 & 0.0021 & 0.6784 \\
4 & 6 months& 9.89e-6 & 0 & 0 & 0 & 1.16e-4& 0 & 0.9973 \\
&12 months& 9.72e-6 & 0 & 0 & 0 & 6.049e-5& 0 & 0.9998 \\
\hline
\end{tabular}
\end{table}
As seen in Table \ref{tab3}, the probability of one transition from the lower stages of cancer to the recovery stage is higher than from the higher stages; this reflects the fact that the lower the stage number, the less the cancer has spread, while a higher number means the cancer has spread more. In the higher stages the probability of transition to recovery is near zero. Also, the probability of transition from the lower stages of cancer to death is lower than from the higher stages, and the probability of transition from the highest stages to death is near one. The other possible transition probabilities from one stage to another are shown in this table.\\\\
\section{Conclusion}
In this article, the multi-state models that are widely used in medical studies of many diseases have been developed, by taking advantage of phase-type distributions, to model recurrent events. It is important for decision makers and patients to be aware of information about the illness, such as the probability of relapse, the time spent in each stage of recovery or disease, the probability of recovery, and so on. Using Markov properties and phase-type distributions, we presented a formula for calculating the probability of the number of transitions between the stages of disease. These probabilities are interdependent and defined through recursive relations, so differential equations must be solved to compute them. In this article, we used two examples to calibrate our model. Example $1$ concerned 103 Stanford heart patients with two stages: heart disease and heart transplant. The mortality rate, which involves several parameters, was defined for these stages. We estimated these parameters by the maximum likelihood method in order to estimate the matrix \textbf{T}. The standard deviations and the confidence intervals of the parameters were obtained by the bootstrap technique. Finally, the probability of the number of transitions until time $t$, the expected time of continuously staying in each stage, and the probability of transition from stage $i$ to stage $j$ in a period of time $t$ were obtained. The other example is a simulated cancer with one recovery stage and five cancer stages. In this simulated example, which had a more complex model, similar calculations were carried out.\\
\noindent Calculating these probabilities for higher numbers of recurrences is time-consuming, owing to the larger number of transitions between stages. Because the formulas are recursive, the differential equations for the lower-order probabilities must be solved first; in other words, we have to start from the beginning to obtain each probability. \\
We have used the functions available in Matlab and have not developed any special algorithms for solving the differential equations. In a future study, we aim to optimize the algorithms for calculating the probability distribution of the transitions.
\newpage
\section*{Data Availability}
Data availability statement: NA
\section{Introduction}
It is important to understand mobility for a variety of reasons, including uncertainty reduction for allocation of resources such as communications and computing infrastructure usage \cite{sousaaura}, robustness and interdependence \cite{Vespignani2010}, wireless networking applications \cite{Griepp2013}, social network analysis \cite{Cho:2011:FMU:2020408.2020579}, intelligent transportation \cite{Silva2006}, economic development \cite{Helbing2004}, crisis response \cite{Chen2006, Helbing2000}, and large-scale energy consumption and CO$_2$ emissions \cite{Momoh2009, Townsend_2000, banister2011cities}, to name a few.
The data sources in recent papers in the `big' mobility-scaling literature have been dollar bill movements, mobile call-data records (CDRs), and geo-tagged social media such as Foursquare, Twitter, Gowalla, Facebook, and others \cite{Brockmann:2006uq, Gonzalez:2008uq, Noulas2011, Cho:2011:FMU:2020408.2020579}. The defining characteristic of this big mobility data is not only its size, which can be smaller than some conventional sources -- the number of trips or movements in these `big' sources can range from $10^5$ to $10^9$ or broader -- but most often they are characterized by new forms of large-scale automatic data collection using `check-ins' (phone calls, tweets, etc.), for some purpose other than their eventual research use, at relatively low cost.
The Where's George, CDRs, and social media contain either little or no categorical data about the trip or individual, or are limited due to privacy concerns. Spatial resolution can vary from cell-tower radius ($\sim3$ km) to less than a few meters in the case of GPS-based social media \cite{Brockmann:2006uq, Gonzalez:2008uq, Noulas2011, Cho:2011:FMU:2020408.2020579}. There are several challenges posed by these data sources, stemming from the large geographic- and time-scales, as well as the incidental sampling method, to be discussed below.
\vspace*{-.1cm}
\subsection{Our Contributions}
Here, we are interested in the ability of conventionally-collected (`little') mobility data to contribute to scientific research on mobility patterns. The availability of categorical information allows us to ask and address questions that are challenging using exclusively check-in mobility data. Transportation mode, city size, and trip purpose are particularly helpful to shed light on mobility patterns at an urban scale.
We say this data is conventional because it was collected as a large effort, including survey design by experts, as part of a series running over many years, with an intentional focus on understanding mobility patterns.
Based on this we ask -- how can mode, trip purpose, and other categories further our understanding of mobility generally, and especially of urban mobility?
Also, how can we begin to address the challenges faced by the big mobility research listed above? For example, in contrast to most check-in displacements, which are inherently based on misleading Euclidean distances and may not correspond to actual trip start and end, we have the reported lengths of the trips themselves.
Our main findings are:
\begin{itemize}
\item We argue that assuming that trips are i.i.d. is imprudent, and that categories matter in refining our understanding of mobility patterns.
Mode matters, helping to characterize mobility universality classes, both at the urban and inter-urban scales. E.g. there are significant differences between walking, bicycling, and automobile driving trip length distributions. In addition, looking at trip lengths rescaled by maximum length for each mode, there are significant distinguishable universal properties.
\item Even for trip lengths at and below the urban scale ($\sim 10$ km), mode differences are evident, a fact that is at odds with previous claims \cite{Noulas2011}. On a related note, it seems that city population is not a strong determinant of mean trip length, with only a slight difference found in large vs. small cities.
\item Scaling of mobility confirms previous findings, when histograms of daily trips and overnight travel are taken together, yielding a scaling exponent of $1.44$ for trip lengths within Germany.
\item We show that other categories and dependencies are also important. Trip lengths respond differently to different purposes, shopping and business, for example. It is also evident that mobility is time-dependent. E.g. trip lengths on Sunday vary from those on Wednesday.
\end{itemize}
More broadly, we see that:
Scaling in response to categories hints at the existence of different universality classes in mobility patterns;
purpose, along with mode, may give us insight into how to form a bridge between distance and intervening opportunity arguments for trip length form in cities;
since this dataset is less prone to sampling error, we believe it can also offer the ability to understand trip length changes in time, and helps elucidate some of the factors that are averaged together when using large-scale check-in data.
\thispagestyle{plain}
\pagestyle{plain}
\section{Related Work and Challenges}
There has recently been progress characterizing scaling of long-distance trips ($\sim10^2$ to $\sim10^4$ km), fitting them with power laws having scaling exponents ranging between $1.50$ and $1.75$ \cite{Noulas2011, Gonzalez:2008uq, Brockmann:2006uq}. Since at the longer scale fewer (motorized) modes dominate mobility, and they are somewhat clustered compared to non-motorized modes (see Fig.~\ref{fig:ccdf_city_trips_mode} and Table \ref{table:fits}), it is not surprising that these trips are easier to characterize. There has been some success modelling these long trips and attempting to determine major mechanisms driving them. These mechanisms -- some also found in past research using conventional data -- are based on distance (Random Walks and Levy Flights) \cite{Brockmann:2006uq, Gonzalez:2008uq}, `intervening opportunity' (place density) \cite{simini_2012, Stouffer1940}, along with social networks \cite{Cho:2011:FMU:2020408.2020579}, and others \cite{song2010modelling}.
Longer trips are very different from spontaneous, inexpensive, urban-scale mobility with its dense infrastructure and dense locations, occurring at distances less than or close to $10^1$ km \cite{Noulas2011, Scheiner2010}. Mobility research has been facing an `urban challenge', due to: A. difficulties fitting shorter distance trips, or the necessity of using distance transformations \cite{Noulas2011, Gonzalez:2008uq}, B. the limits of spatial resolution (e.g. in CDRs) \cite{Gonzalez:2008uq}, and C. the fact that heavy-tail research focuses exactly on the {\it tail} of distributions, owing to the systematic mathematical property of power law scaling of breaking down at small data values \cite{Newman2009}.
Recently, this work has attempted to address the apparent debate between two schools of thought about mechanisms influencing mobility patterns on the urban scale: A. distance-based mechanisms \cite{Gonzalez:2008uq, Song2010}, versus B. intervening opportunity \cite{Stouffer1940, Noulas2011}. Basically, their question is: Is there some inherent property of human behavior -- purely related to distance -- that leads to heavy-tailed trip-length distributions, or are these trip length patterns driven more by urban form, as seen in the density of `places'?
This mobility research faces some significant challenges:
First, these check-in sources do not usually contain very much ancillary categorical information about a trip such as mode, weather, purpose, number of passengers, etc. Thus, if one wants to know the effect of external factors, one may be limited by resources to determine all of them accurately~\cite{Reddy2010}.
Second is sampling bias. Between check-ins, it may be impossible to know actual travel patterns, and check-in rates may not be independent of factors (such as mode) that affect these patterns. Trips may not begin and end at check-ins, and very rarely follow a linear path -- the terms `travel' and `displacement' are intermingled \cite{Brockmann:2006uq, Gonzalez:2008uq, Noulas2011}, which may be appropriate at distances mostly traversed by air, but certainly can be misleading at urban scales. Shorter trip length measurements may be more sensitive to these inaccuracies.
With one or two significant exceptions, existing mobility scaling research seems to implicitly assume that `mean field', random, independent characteristics apply -- due to the large data size -- and that these approximations are sufficient to account for sampling bias \cite{Brockmann:2006uq, Gonzalez:2008uq, Noulas2011}, so that check-in displacements are assumed to reflect actual displacements or trip lengths.
Finally, related to sampling bias is the stability of mobility patterns over time. There is no question that mode share and trip length change, and that this needs to be considered.
\vspace*{-.1cm}
\section{Data and Methodology}
This work is based on the Mobility in Germany 2008 (MID 2008) survey data set, which was collected and is maintained by the Infas Institute for Applied Social Science Research and the German Aerospace Center (DLR), with the main survey between the dates of February 2008 and March 2009. The final survey involved 25,922 Households, 60,713 Individuals, 193,290 Trips and 36,182 Travel events. `Trips' describe daily journeys, where a return journey was counted as a separate trip, while `travel' data describe mobility that included an overnight stay \cite{Follmer2008a}.
MID 2008 was designed carefully, as a continuation of the West German Kontiv surveys in 1976, 1982 and 1989, and MID 2002. It included a pre-survey, pretest, and used a mixed methodology combination of computer-aided telephone interview (CATI), online, and mail surveys in order to avoid bias and maintain continuity with past surveys. Querying a large number of households from different federal states, it was the largest household survey apart from the official German microcensus.
The trip lengths $(\ell)$ in our data correspond to the {\it actual} traveled distances, reported by subjects. Hence, in contrast to check-in data, we do not have to approximate trip lengths by Euclidean displacements $(\Delta r)$ \cite{Brockmann:2006uq, Gonzalez:2008uq, Noulas2011}, which may introduce a bias to the scaling exponent, especially for short trips. This is particularly interesting, since our data features a high resolution, recorded down to the 100m scale.
If not otherwise stated, lengths shown are for trips only, not travel, and trips are counted over the entire measurement period. Categorical information describe trip origination and mode describes the main transportation mode for a trip. We define urban trips as those starting in a city (pop. $> 100,000$), and other categorical information is stated explicitly. `All modes' is composed of a weighted average of walking, bicycling, automobile drivers, automobile passengers and public transportation trips. We have removed the automobile passenger mode from figures for ease of visibility, but note that its scaling and statistical characteristics are similar to those of public transportation (Table \ref{table:fits}).
\vspace*{-.1cm}
\begin{figure}[!ht]
\vspace*{-.2cm}
\subfloat[]{%
\includegraphics[trim =9mm 55mm 9mm 51mm, clip, width=0.24\textwidth]{./_mode_trip_length.pdf}
\label{fig:ccdf_city_trips_mode}
}
\subfloat[]{%
\includegraphics[trim =9mm 55mm 9mm 51mm, clip, width=0.24\textwidth]{./_rescaled_trip_length.pdf}
\label{fig:rescaled_trips}
}
\caption{ (\ref{fig:ccdf_city_trips_mode}) Trip length distributions (CCDF) of all trips starting in German cities with population $> 100,000$ by major transportation mode. (\ref{fig:rescaled_trips}) CCDF of trip lengths rescaled by maximum trip length $(\ell/\ell_{max})$ for each respective mode.}
\label{fig:mode}
\end{figure}
\vspace*{-.1cm}
We simply use {\it best-fit} power law scaling exponents ($\alpha$) to give a sense of relative scaling in what are visibly truncated heavy-tailed distributions, not as a claim of fit. Power laws are of the form $p(\ell) = C \ell^{-\alpha},$
for normalization constant $C$, trip length $\ell$, scaling exponent $\alpha$, and $\ell > \ell_0,$ the minimum fit trip length. Here we have shown trip lengths as log-log CCDFs, $p(L > \ell)$, as is common in scaling literature \cite{Newman2009}.
Statistical fitting was carried out by a method that uses maximum likelihood estimators and Kolmogorov-Smirnov statistics to fit data with a power law. (See \cite{Newman2009}.)
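As an illustration, this type of fit can be reproduced with the open-source \texttt{powerlaw} Python package, which implements the maximum-likelihood/Kolmogorov--Smirnov procedure of \cite{Newman2009}; the input file name below is hypothetical:
\begin{verbatim}
import numpy as np
import powerlaw                       # pip install powerlaw

# one-dimensional array of reported trip lengths (km) for one mode
trip_lengths = np.loadtxt("walk_trips_km.txt")

fit = powerlaw.Fit(trip_lengths)      # MLE for alpha, KS-optimal xmin
print("alpha =", fit.power_law.alpha)
print("l0 (xmin, km) =", fit.power_law.xmin)

ax = fit.plot_ccdf()                  # empirical CCDF on log-log axes
fit.power_law.plot_ccdf(ax=ax, linestyle="--")
\end{verbatim}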
\section{The Importance of Categories}
\subsection{Mode Matters for Mobility Scaling}
\label{sec:mode}
\begin{table}[!ht]
\vspace*{-.2cm}
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Mode & Count & $\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$\\
\hline
I. All Modes & 52973 & 2.13 & 29.40 & 9.99 & 1313.79\\
\hline
A. Walk & 14303 & 3.99 & 6.37 & 1.37 & 3.77\\
\hline
B. Bicycle & 5581 & 2.72 & 6.37 & 3.47 & 30.06\\
\hline
C. Auto. Driver & 18484 & 2.29 & 39.90 & 13.06 & 1331.84\\
\hline
D. Public Trans. & 6944 & 1.97 & 27.98 & 16.34 & 2875.92\\
\hline
Auto. Passenger & 7658 & 2.00 & 24.32 & 17.69 & 2949.11\\
\hline
\end{tabular}
\caption{Sample size, best-fit scaling exponent $\alpha$, beginning of fit $\ell_0$, and moments -- mean trip length $\bar{\ell}$ and variance $\sigma^2$ for the major modes.}
\label{table:fits}
\end{table}
\vspace*{-.1cm}
Check-in-based mobility data research must average together displacements of substantially different modes. With our conventional categorical data we can distinguish between modes, seeing a visible discrepancy in scaling between walking, bicycling, automobile drivers, and those using public transport (Fig.~\ref{fig:ccdf_city_trips_mode}). For comparison of the scaling, best-fit power laws are drawn, with corresponding exponents shown in Table \ref{table:fits}.
With $\alpha > 3$ for walking (A), the first two moments -- mean and variance -- are defined. For bicycling and driving (B, C), with $2 < \alpha < 3$, the mean is defined but variance diverges. For public transport (D), with $1 < \alpha <2$, neither the mean nor variance is defined \cite{newman2005power}.
\begin{figure*}[h!t]
\subfloat[]{
\includegraphics[trim =9mm 55mm 9mm 45mm, clip, width=0.32\textwidth]{./_sgtyp_trip_length_.pdf}
\label{fig:municipality_type}}
\subfloat[]{
\includegraphics[trim =9mm 55mm 9mm 45mm, clip, width=0.32\textwidth]{./_trips_and_travel_length.pdf}
\label{fig:travels_and_trips}}
\subfloat[]{
\includegraphics[trim =9mm 55mm 9mm 45mm, clip, width=0.32\textwidth]{./_hwzweck_trip_length.pdf}
\label{fig:purpose}}
\caption{CCDFs of (\ref{fig:municipality_type}) trip length according to urban population, (\ref{fig:travels_and_trips}) daily trip and overnight travel lengths originating in Germany, taken together, (\ref{fig:purpose}) trip length according to purpose.}
\end{figure*}
Scaling exponent ($\alpha$) and mean trip length ($\bar{\ell}$) for walking and bicycling (Table \ref{table:fits}: A,B), both non-motorized, differ greatly from that of motorized modes- automobile driving and public transportation usage (C, D), as one might expect. We also note that walking and bicycling have exponents differing by more than one, and that the exponent for walking, nearly 4, implies that it behaves quite differently than other modes.
Note that trip lengths, representing daily trips originating within Germany, are truncated at approximately the diameter of Germany\footnote{Using a simplifying approximation of a disc, we calculate $\log_{10}$(diameter) $= \log_{10} (2\sqrt{\frac{A}{\pi}}) \approx 2.83$, where $A = 357,021$ km$^2$, Germany's square area.} (674 km $\approx 10^{2.83}$ km, see Fig.~\ref{fig:ccdf_city_trips_mode}). This is confirmed by intuition, since it is perhaps less likely that trips beginning in Germany, not including an overnight stay, will end in another country.
Rescaling trip lengths by the maximum trip length for each respective mode, we also observe that certain modes have somewhat similar heavy tails (Fig.~\ref{fig:rescaled_trips}), again suggesting distinct universality classes, and thus some mechanism at work causing these differences. Between $\sim 10^{-3}$ and $\sim 10^{-1}$ of maximum trip length, trips seem clustered into two groups by scaling, non-motorized -- walking and bicycling, and motorized -- auto. driving and public transport. From $10^{-1}$ to $10^{0}$ of max. trip length, scaling for the various modes seems to diverge.
Generally, correlation of mode with trip length scaling has considerable implications for human systems such as cities. A small change in a mode's scaling exponent can imply a large difference in total trips of a certain length, and therefore total energy. Mode share also implies a significantly different energy consumption budget. (E.g. walking vs. automobile modes.) Since these statistics describe system characteristics of large-scale random processes -- sometimes called `urban metabolism' \cite{Townsend_2000, banister2011cities, wolman1965metabolism, west2004life} -- and therefore substantial amounts of energy and $CO_2$ emissions, they are very important to understand.
\subsection{Urban Mobility Patterns}
\vspace*{-.2cm}
\begin{table}[!ht]\footnotesize
\centering
\begin{tabular}{|c||c|c|c|c|c|}
\hline
 & \multicolumn{5}{c|}{Mode share (\%)} \\
\hline
Length & walking & bicycling & auto. & public & auto. \\
 & & & driving & trans. & passenger \\
\hline
intra-urban & 28.30 & 11.91 & 38.06 & 6.99 & 14.73 \\
\hline
inter-urban & 0.31 & 1.83 & 58.60 & 14.57 & 24.68 \\
\hline
\end{tabular}
\caption{Mode share $(\%)$ for intra-urban ($< 10^{1.17}$ km) and inter-urban trips ($\ge 10^{1.17}$ km) for large cities (pop. $> 100,000$) in Germany.}
\label{table:short_vs_long_mode_share}
\end{table}
\vspace*{-.2cm}
For Germany's 76 cities with over 100,000 population, the average area is 174.02 km$^2$ \cite{Office}. Using a similar approximation as for Germany$^1$, this yields an urban diameter of 14.89 km ($\approx 10^{1.17}$ km).
Mode is therefore also revealing about urban scale mobility, since we can now use trip length statistics separated by mode to distinguish between patterns near and below this scale (Fig.~\ref{fig:ccdf_city_trips_mode}).
For intra-urban trip lengths below the urban diameter, non-motorized modes contribute significantly to trip statistics (Table~\ref{table:short_vs_long_mode_share}). At the inter-urban scale, trip statistics are mostly the result of motorized modes, as expected.
It is important to note these scaling differences, especially in the intra-urban region. Here, averaging together all of these modes (`all modes') is essentially averaging the heads of some trip length distributions together with the tails of others (see $\ell_0$ and $\bar{\ell}$, Table \ref{table:fits} and Fig. \ref{fig:ccdf_city_trips_mode}), and thereby aggregating the results of processes belonging to significantly distinct universality classes. It is therefore not surprising that urban scale mobility patterns have posed a challenge to those using aggregated check-in data.
As noted above, the non-motorized versus motorized modes each seem to be the product of some unique mobility process at the urban scale -- since both their absolute and rescaled trip length distributions stand apart (Figs. \ref{fig:ccdf_city_trips_mode} and \ref{fig:rescaled_trips}). These largely different exponents imply that trips by certain modes are caused by different processes and system characteristics, belonging to distinct universality classes -- plausible when comparing these groups of modes. This also suggests that we may be able to consider modes as making up separate phases of the underlying process of mobility \cite{newman2005power, Mitzenmacher2004, Stanley1999}. Also, due to the different form of rescaled trip lengths for non-motorized modes -- perhaps exponential -- this is interesting to consider in the context of mobility behavior of other organisms \cite{ramos2004levy}. Thus with this information, we can begin to investigate causal mechanisms more carefully.
It seems that mode allows us to describe trip lengths primarily by their scaling exponent within the intra-urban region, perhaps down as far as $\ell_0$ = 6.37 km ($10^{.8}$ km) (Table \ref{table:fits}). However, below that distance other factors may be at work, and the behavior may be better described primarily by something other than scaling with respect to the mode category.
\begin{table}[!ht]\footnotesize
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Urban Population & Count &$\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$ \\
\hline
small ($< 20k$) & 23433 & 2.41 & 43.32 & 10.52 & 1202.28 \\
\hline
medium ($20k$-$100k$) & 53038 & 2.35 & 30.38 & 10.62 & 1329.72 \\
\hline
large ($> 100k$) & 53011 & 2.13 & 29.40 & 9.99 & 1312.92 \\
\hline
\end{tabular}
\caption{Sample size (Count), best-fit scaling exponent ($\alpha$), beginning of fit ($\ell_0$), mean trip length ($\bar{\ell}$) and variance ($\sigma^2$) according to city population.}
\label{table:trip_length_vs_pop}
\end{table}
On a related note, trip lengths seem related to urban population, but not strongly (Fig.~\ref{fig:municipality_type} and Table~\ref{table:trip_length_vs_pop}), confirming other results \cite{Scheiner2010}. For example, there is a small difference between mean trip lengths $(\bar{\ell})$ in low-population rural municipalities versus larger urban populations. It therefore seems further investigation is needed to determine whether mean trip length scales allometrically with city population alone, as has been found for other urban parameters \cite{Helbing2007}.
Also, this indeterminate response by trip length to city population may support previous results about the independence of trip length and {\it city area} \cite{Noulas2011}, but since the Pearson correlation of urban population and area in Germany is not high $(r = 0.51)$ \cite{Office}, this cannot yet be confirmed.
\subsection{Trips taken together with overnight travel confirm previous findings}
\begin{table}[!ht]\footnotesize
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Regime & Count & $\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$\\
\hline
A & 209,045 & 1.44 & 1.81 & 48.97 & 14,727.00\\
\hline
B & 8,055 & 2.17 & 816.00 & 1,670.36 & 2,172,741.49\\
\hline
C & 380 & 5.91 & 11,000.00 & 11,312.92 & 7,047,781.70\\
\hline
\end{tabular}
\caption{Count, $\alpha$, $\ell_0$, $\bar{\ell}$ and $\sigma^2$ for the three distance regimes of trips and travel taken together.}
\label{table:trips_travel_fits}
\end{table}
\vspace*{-.1cm}
Furthermore, if we take daily trips and overnight travel together (Fig.~\ref{fig:travels_and_trips}), there seem to be three regimes: (A) Within Germany, (B) outside of Germany, and (C) near the maximum distance that can be traveled from Germany to the other side of the world.
For trips within Germany (Regime A), our best-fit gives us a scaling exponent of $\alpha = 1.44$, which is proximate to that found for Foursquare data ($\alpha = 1.50$) \cite{Noulas2011}, and for the Where's George data ($\alpha = 1.59$) \cite{Brockmann:2006uq}, though not as near to that found using call data records ($\alpha = 1.75$) \cite{Gonzalez:2008uq}. Similar to trips without overnight travel (Fig.~\ref{fig:ccdf_city_trips_mode}), this is truncated by the diameter of Germany ($\sim 10^{2.83}$ km).
For longer trips outside of Germany (Regime B), our best-fit result ($\alpha = 2.17$; Table \ref{table:trips_travel_fits}) is quite different from others. However, big mobility data sources can include trips from all possible origins. Since our data was collected differently and only includes journeys originating within Germany, it is not surprising that we see a marked decrease in the number of trips of this length. This second regime is truncated at roughly the distance of the furthest significant travel destination, Southeast Asia. (E.g. the flying distance from Germany to Thailand is approximately 8667 km $\approx 10^{3.94} $ km.) This truncation seems to agree with 2008 travel planning statistics, which show that few journeys ($ < 1\%$) were planned farther than Asia \cite{adac_reisemonitor_2008}.
\subsection{Distance-based and intervening opportunity arguments}
\begin{table}[!ht]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Purpose & Count & $\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$\\
\hline
education & 12704 & 3.06 & 31.07 & 8.15 & 574.29\\
\hline
shopping & 40322 & 2.88 & 35.15 & 5.19 & 196.73\\
\hline
work & 25808 & 2.71 & 38.95 & 17.40 & 1654.51\\
\hline
errands & 23716 & 2.51 & 45.13 & 8.06 & 593.81\\
\hline
accompanying driver & 16447 & 2.50 & 32.30 & 7.74 & 476.70\\
\hline
free time & 61152 & 2.10 & 30.38 & 13.55 & 2209.65\\
\hline
business & 2706 & 1.82 & 12.35 & 36.58 & 8011.00\\
\hline
\end{tabular}
\caption{Count, $\alpha$, $\ell_0$, $\bar{\ell}$ and $\sigma^2$ for trips by purpose.}
\label{table:purpose_fits}
\end{table}
\vspace*{-.1cm}
This mode information lets us address the central premise of a previous work, which suggested that trip length patterns cannot be distinguished at an urban scale \cite{Noulas2011}. These authors then went on to give convincing arguments that `intervening opportunity' -- using rank-distance of place -- can largely explain urban trip length patterns, rather than purely distance-based mechanisms.
Here, however, we have seen that trip lengths according to mode are distinguishable at this scale, lending credence to distance-based mechanisms. Our evidence does not necessarily contradict their conclusions, but rather allows us to hypothesize that mode, together with trip purpose -- both obviously strongly correlated with trip length (Figs. \ref{fig:mode} and \ref{fig:purpose}) -- can help elucidate the debate between these apparently disparate schools of thought. The distinct response of trip length to purpose (Fig. \ref{fig:purpose}) seems to support this line of thinking, since by necessity trip length according to purpose must respond to urban form (density and location of schools or grocery stores, for example). Another work analyzing earlier versions of our data set has also suggested that trip distance is a function of facility location (urban form), which then determines mode \cite{Scheiner2010}. Certainly, further work is needed, such as multivariate analysis and clustering.
\section{The Influence of Time}
\begin{figure}[!ht]
\subfloat[]{%
\includegraphics[trim =9mm 55mm 7mm 45mm, clip, width=0.24\textwidth]{./_hour_weekdays_2.pdf}
\label{fig:mode_share_hourly}
}
\subfloat[]{%
\includegraphics[trim =9mm 55mm 7mm 45mm, clip, width=0.24\textwidth]{./_st_stdg_trip_length_2.pdf}
\label{fig:hourly_trip_lengths}
}
\vfill
\subfloat[]{%
\includegraphics[trim =9mm 55mm 9mm 45mm, clip, width=0.24\textwidth]{./_wday_all_.pdf}
\label{fig:mode_share_day_of_week}
}
\subfloat[]{%
\includegraphics[trim =9mm 55mm 9mm 45mm, clip, width=0.24\textwidth]{./_stichtag_trip_length_2.pdf}
\label{fig:day_of_week_trip_lengths}
}
\caption{(\ref{fig:mode_share_hourly}) Weekday hourly trip frequency according to mode. (\ref{fig:hourly_trip_lengths}) CCDF of weekday hourly trip lengths. (\ref{fig:mode_share_day_of_week}) Day-of-week trip frequency according to mode. (\ref{fig:day_of_week_trip_lengths}) CCDF of trip lengths by day-of-week.}
\label{fig:time_dependence}
\end{figure}
\begin{table}[!ht]
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Time of Day & Count & $\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$\\
\hline
before 5 AM & 1670 & 2.01 & 25.27 & 32.92 & 10,123.99\\
\hline
5 to 7 AM & 7026 & 2.41 & 20.58 & 23.71 & 3,268.39\\
\hline
7 to 9 AM & 21991 & 2.33 & 16.15 & 11.29 & 1,717.64\\
\hline
9 to 11 AM & 24511 & 2.03 & 10.45 & 11.31 & 2,159.07\\
\hline
11 AM to 2 PM & 37693 & 2.32 & 31.36 & 9.49 & 1,046.26\\
\hline
2 to 5 PM & 43375 & 2.43 & 51.30 & 10.34 & 868.00\\
\hline
5 to 8 PM & 34742 & 2.55 & 31.36 & 9.68 & 705.90\\
\hline
8 to 10 PM & 7819 & 2.39 & 34.30 & 9.58 & 684.20\\
\hline
after 10 PM & 4060 & 2.89 & 30.40 & 10.94 & 550.62\\
\hline
\end{tabular}
\caption{Count, $\alpha$, $\ell_0$, $\bar{\ell}$ and $\sigma^2$ for trips by time of day.}
\label{table:hourly_fits}
\end{table}
\begin{table}[!ht]
\vspace*{-.2cm}
\centering
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Day of Week & Count & $\alpha$ & $\ell_0$ (km) & $\bar{\ell}$ (km) & $\sigma^2$\\
\hline
Sunday & 17768 & 2.11 & 32.34 & 15.84 & 2,652.07\\
\hline
Monday & 28476 & 2.42 & 34.20 & 9.66 & 1,026.79\\
\hline
Tuesday & 28449 & 2.42 & 38.81 & 9.47 & 919.62\\
\hline
Wednesday & 28649 & 2.46 & 48.45 & 9.86 & 966.94\\
\hline
Thursday & 27787 & 2.38 & 38.95 & 10.07 & 943.46\\
\hline
Friday & 27878 & 2.22 & 43.23 & 11.46 & 1,507.96\\
\hline
Saturday & 23880 & 2.23 & 32.30 & 12.60 & 1,789.28\\
\hline
\end{tabular}
\caption{Count, $\alpha$, $\ell_0$, $\bar{\ell}$ and $\sigma^2$ for trips by day of week.}
\label{table:daily_fits}
\end{table}
\vspace*{-.1cm}
Finally, we see that trip frequency, mode share, and trip lengths are clearly dependent on time. On weekdays, according to time-of-day, we see an expected daily pattern of increased trips in morning $(\sim7$ AM) and evening $(\sim5$ PM) (Fig. \ref{fig:mode_share_hourly}). We also note a change in the relative mode share at different times of day. Driving, for example, makes up a much higher proportion of trips during the day, lower in evening hours. Trip lengths are also notably responsive to time-of-day, e.g. from 5 to 7AM, trips tend to be longer (Fig. \ref{fig:hourly_trip_lengths}).
Similar observations can be made about day-of-week patterns. For example, on Sundays we see a change in trip frequency and mode share from weekday levels, with fewer overall trips and less driving relative to other modes. (Fig. \ref{fig:mode_share_day_of_week}). Trip lengths are also clearly responsive to day-of-week, with a higher proportion of long trips also on Sunday (Fig. \ref{fig:day_of_week_trip_lengths}).
Aggregation over all time periods can therefore also obscure time-dependency and potentially bias results. We must conclude that sampling time needs thorough investigation when making statements characterizing average mobility patterns.
\section{Conclusions and Further work}
We have argued that aggregate data misses important aspects of mobility patterns. As a case study, we have analyzed a category-rich set of German mobility data and found that mode, city size, population, purpose, and temporal aspects of trips can be illustrative. This conventional data can expose both inter- and intra- urban-scale mobility, and possibly address related issues such as urban metabolism, allometric scaling, and the debate between distance- and intervening-opportunity-based mechanisms for mobility patterns.
We understand our work as a first step toward a more refined understanding. In particular, we have only focused on Germany and will be interested whether other countries have similar characteristics. Our data may still have some bias and errors, and we would like to address those. Moreover, so far we have focused on data analysis only. In future work, it would be interesting to come up with models explaining the observed statistics. Based on our work, mode, purpose, urban population, and time look like useful categories to investigate. From other research, density, mode availability, and other urban parameters also seem relevant \cite{Noulas2011, Scheiner2010, Batty2010}. Further work fitting trip length along with duration, analyzing mean squared distance, and using clustering and dimensionality reduction to understand the main categories and dependencies making up the space of mobility universality classes all seem promising.
\bibliographystyle{IEEEtran}
\section{Motivation and plan}
The tidal heating of planets and moons has long been a key area of planetary science. Accurate investigation into this process requires numerical integration of dissipation
over layers of the perturbed body. At the same time, it is common to infer qualitative conclusions from approximations based on modeling the body with a homogeneous sphere
of a certain rheology. However, the simplistic nature of the approach limits the precision of the ensuing conclusions. For example, the presence of a sizable molten core,
like in Mercury, may increase the damping rate, compared to a homogeneous body. Still, estimates obtained with our simplified, homogeneous-sphere model
should be accurate within a factor of several --- thus (1) serving as a useful guidance for solar system bodies and (2) being completely legitimate for
exoplanets, as our knowledge of their structure is speculative at best.
\vspace{3mm}
In our paper, we derive from the first principles a formula for the tidal heating rate in a tidally perturbed homogeneous sphere. We compare our result with the formulae used in the literature and point out the differences.
\vspace{3mm}
In Sections \ref{prelude2} - \ref{sphere}, we present an accurate re-examination of the standard integral expression for the damping rate in a homogeneous incompressible sphere subject to tides. The check is necessary because in previous studies the expression was derived in an {\it ad hoc} manner, sometimes with demonstrable mathematical inaccuracies. The conventional derivation begins with the general formula for the power
\ba
P\;=\;
\int\,\rho_{_E}\;\vbold\,\cdot\,\nabla {V_{_E}}'\;d^3r
\nonumber
\ea
written in the Eulerian description (i.e., via coordinates associated with a deformed body). Its time average is then cast into the form of
\ba
\langle P\rangle~=~\frac{1}{4\pi G R}~\sum_{l=2}^{\infty}\,(2\,l\,+\,1)\,\int~\left\langle~W_l~\stackrel{\bf\centerdot}{U}_l~\right\rangle~dS
\nonumber
\ea
which is in the Lagrangian language (an integral over an undeformed body). In the former equation, $\,\rho_{_E}\,$ is the Eulerian density, $\,\vbold\,$ is the Eulerian velocity, $\,{V_{_E}}'\,$ is the Eulerian perturbation of the potential (perturbation assembled of the tide-raising potential and the resulting additional tidal potential of the deformed body), and $\,r\,$ is a {\it{perturbed}} position in the body frame. In the latter equation, $\,W_l\,$ and $\,U_l\,$ are the degree-$l\,$ components of the tide-raising and additional tidal potentials, $\,G\,$ is the Newton gravity constant, $\,R\,$ is the radius of the planet, and $\,dS\,$ is an element of the {\it{undeformed}} surface of the sphere.
The transition from the former formula to the latter requires the use of the boundary conditions on the free surface. At that point, integration is already carried out within the Lagrangian description (over an undeformed surface), but the boundary conditions are nonetheless imposed on the Eulerian potential and its gradient. (The boundary conditions are much simpler in the Eulerian form.) This mixed treatment requires attention, and its employment by the early authors (Zschau 1978, Platzman 1984) contained inaccuracies. However none of those turned out to be critical, and the above expression for the average power $\,\langle P\rangle\,$ is correct for small deformations.
\vspace{3mm}
In Section \ref{next} we explore the standard way of casting the above integral into a spectral sum over the tidal Fourier modes $\,\omega\,$. It is commonly assumed that the result should read as in Platzman (1984):
\ba
\langle P\rangle~=~\frac{1}{4\pi G R}~
{\sum_{\omega}}
(2l+1)\,\frac{\omega}{2}\,W^{\,2}_l(\omega)\,k_l(\omega)~\sin\epsilon_l(\omega)\,~.
\nonumber
\ea
Here $\,k_l(\omega)\,$ and $\,\epsilon_l(\omega)\,$ are the Love number and phase lag corresponding to the Fourier mode $\,\omega=\omega_{lmpq}\,$, with $\,lmpq\,$ being the four integers wherewith the Fourier modes are numbered in the Darwin-Kaula theory of tides (see Efroimsky \& Makarov 2013 and references therein). However, an accurate investigation demonstrates that the spectral sum differs from the above. The difference originates for two reasons. One is the degeneracy, i.e., the fact that several different Fourier modes $\,\omega_{lmpq}\,$ share a numerical value $\,\omega\,$, so the
structure of the above sum is more complex. \footnote{~When calculating $\,W_l\,$, one has first to group together and sum all the terms corresponding to a particular value of $\,\omega\,$. Each such sum should be squared and averaged, and only after that should the final summation over the distinct values of $\,\omega\,$ be carried out. In the original expression for
the average power,
$\,~(\textstyle 4\pi G R)^{-1}~{\sum_{\omega}} (2l+1)\,\frac{\textstyle \omega}{\textstyle 2}\,W^{\,2}_l(\omega)\,k_l(\omega)~\sin\epsilon_l(\omega)\,~$, the $W^{\,2}_l(\omega)$ term should be replaced with the squared sum of all the harmonics of $\,W\,$ that correspond to a particular value of $\,\omega\,$.} The second reason is that
the modes can be of either sign, not necessarily positive. So the resulting power will contain seemingly strange terms with $\,W_l(\omega)\,W_l(-\omega)\,k_l(\omega)~\sin\epsilon_l(\omega)\,$.
These difficulties were noticed and analysed by Peale and Cassen (1978), but their result needs correction too. Some terms in
their spectral sum (equation 31 in {\it{Ibid}}.) are not positive definite, whence an underestimate of the heat production may result.~\footnote{~In the expression for $\,\langle P \rangle\,$, an input from each value of $\,\omega_{lmpq}\,$ must be non-negative. This can be observed from the fact that the mode $\,\omega=\omega_{lmpq}\,$ and the corresponding phase lag $\,\epsilon_l(\omega)\equiv\omega\,\Delta t_l(\omega)\,$ are always of the same sign (the time lag $\,\Delta t_l(\omega)\,$
being positive definite due to causality). Thus the product $\,\omega\,\epsilon_l(\omega)\,=\,\omega_{lmpq}\,\epsilon_l(\omega_{lmpq})\,$ in the spectral sum can always be
rewritten as $\,|\omega_{lmpq}|/Q_{lmpq}\,$, with the tidal quality factor being defined via $\,1/Q_{lmpq}\,=\,|\,\sin\epsilon_l(\omega_{lmpq})\,|\,$. In their spectral sum, Peale \&
Cassen (1978, eqn 31) have just $\,\omega_{lmpq}/Q_{lmpq}\,$, ~and not $\,|\omega_{lmpq}|/Q_{lmpq}\,$. As a result, some terms come out negative and the heat production intensity may be underestimated.}
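To make the grouping prescription of the preceding footnotes concrete, a schematic sketch for a fixed degree $l$ (Python; the $\pm\omega$ cross terms mentioned above are left aside here) could read:
\begin{verbatim}
from collections import defaultdict
from math import pi, sin

def mean_power_degree_l(terms, k_l, eps_l, l, G, R):
    """terms: (omega_lmpq, W amplitude) pairs; several entries may
    share the same mode value omega -- sum those *before* squaring."""
    grouped = defaultdict(float)
    for omega, W in terms:
        grouped[omega] += W
    P = 0.0
    for omega, W_sum in grouped.items():
        # omega and eps_l(omega) always share a sign (causality),
        # so each contribution is non-negative: |omega| |sin eps| / 2
        P += ((2*l + 1) / (4*pi*G*R) * 0.5 * abs(omega)
              * W_sum**2 * k_l(omega) * abs(sin(eps_l(omega))))
    return P
\end{verbatim}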
The calculation of the power production, developed by Peale \& Cassen (1978), implies averaging not only over the tidal period but also over the apsidal period. This can be observed from the formulae (20 - 21) in their work. In our paper, however, we consider two separate cases: those with and without apsidal precession. In the first case, the period of the apsidal precession is shorter than the typical time of relaxation in the mantle (which may be identified with the Maxwell time). The argument of the pericentre of the perturber, $\,{\omega}^*\,$, cannot be treated as constant, wherefore the formula for the mean power should be averaged not only over the tidal period, but also over the period of the pericentre motion. (We assume this motion steady.) In the second case, the evolution of the line of apsides is slow, with its period being longer than the Maxwell time. The argument of the pericentre should be regarded as a constant. Accordingly, in the latter case the tidal dissipation formula is more complicated, because it includes explicit dependence of Fourier terms on the argument of pericentre.
In a subsequent work, Makarov \& Efroimsky (2014),
we apply our results in three case studies: Io, Mercury, and Kepler-10$\,$b. In that paper we, among other things, hypothesise that the tidal heating rate at spin-orbit resonances is greatly influenced by libration and, therefore, by the triaxiality of the tidally perturbed body.
\section{The Darwin-Kaula formalism in brief}
Describing linear bodily tides consists of two steps. First, it is necessary to Fourier-expand both the tide-raising potential and the induced additional potential of
the tidally perturbed body. Second, it is necessary to link each Fourier component of the additional tidal potential to an appropriate Fourier component of the tide-raising potential. This means: establishing the phase lag and the ratio of magnitudes called the {\it{dynamical Love number}}.
Due to interplay of rheology and self-gravitation, the phase lags and Love numbers have nontrivial frequency dependencies. Things are complicated even further
because different mechanisms of friction become leading over different frequency bands, wherefore the tidal response cannot be described by one simple dissipation model (Efroimsky 2012$\,$a,b).
\subsection{Generalities}
The development of the mathematical theory of bodily tides was started by Darwin (1879) who derived a partial sum of the Fourier expansion of the additional potential of a tidally perturbed sphere. A decisive contribution to this theory was offered almost a century later by Kaula (1964) who wrote down a complete series. In a previous paper (Efroimsky \& Makarov 2013), we provided a detailed presentation of the Darwin-Kaula expansion and explained how tidal friction and lagging are built
into it. We compared the Darwin-Kaula theory with the one by MacDonald (1964) and demonstrated that the former theory is superior to the latter, because it can, in principle, be combined with an arbitrary rheology. Referring the reader to the afore-cited literature for details, we present several central formulae that will be necessary.
An external body of mass $\,M^{\,*}\,$, located in $\,{\erbold}^{\;*} = (r^*,\,\lambda^*,\,\phi^*)\,$, generates the following disturbing
potential in a point $\,\Rbold = (R,\phi,\lambda)\,$ on the surface of a sphere of radius $\,R\,<\,r^*~$:
\ba
\nonumber
W(\eRbold\,,\,\erbold^{~*})~=~\sum_{{\it{l}}=2}^{\infty}~W_{\it{l}}(\eRbold\,,~\erbold^{~*})~=~-~\frac{G\;M^*}{r^{
\,*}}~\sum_{{\it{l}}=2}^{\infty}\,\left(\,\frac{R}{r^{~*}}\,\right)^{\textstyle{^{\it{l}}}}\,P_{\it{l}}(\cos \gamma)~=
~\quad~\quad~\quad~\quad~\\
\nonumber\\
\nonumber\\
-\,\frac{G~M^*}{r^{\,*}}\sum_{{\it{l}}=2}^{\infty}\left(\frac{R}{r^{~*}}\right)^{\textstyle{^{\it{l}}}}\sum_{m=0}^{\it l}
\frac{({\it l}-m)!}{({\it l}+m)!}(2-\delta_{0m})P_{lm}(\sin\phi)P_{lm}(\sin\phi^*)~\cos m(\lambda-\lambda^*)~~.~\quad~
\label{1a}
\label{101a}
\ea
Here $\,G
\,$ denotes Newton's gravity constant,
$\,\phi\,$ is the latitude reckoned from the spherical body's equator, $\,\lambda\,$ is the longitude measured from a fixed meridian, and $\gamma$ is the angular
separation between the vectors $\,{\erbold}^{\;*}\,$ and $\,\Rbold\,$ pointing from the perturbed body's centre. The definitions of the Legendre polynomials $\,P_l(\cos\gamma)\,$ and the associated Legendre polynomials $\,P_{lm}(\sin\phi)\,$ are given in Appendix \ref{appB}.
While in the above formula the location of the perturber on its trajectory is expressed through the spherical coordinates $\,{\erbold}^{\;*} = (r^*,\,\lambda^*,\,\phi^*)
\,$, a trigonometric transformation (developed by Kaula 1961) enables one to switch to the perturber's orbital elements $\,\erbold^{\;*}=(\,a^*,\,e^*,\,\inc^*,\,\Omega^*,\,\omega^*,\,{\cal M}^*\,)\,$. In terms thereof, the disturbing potential is expressed as
\ba
\nonumber
W(\eRbold\,,\;\erbold^{\;*})\;=\;\sum_{lmpq}\,W_{lmpq}\;=\;-\;
\frac{G\,M^*}{a^*}\;\sum_{{\it
l}=2}^{\infty}\;\left(\,\frac{R}{a^*}\,\right)^{\textstyle{^{\it
l}}}\sum_{m=0}^{\it l}\;\frac{({\it l} - m)!}{({\it l} + m)!}\;
\left(\,2 \right. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
\nonumber\\
\nonumber\\
\left.
~~~ -\;\delta_{0m}\,\right)\;P_{{\it{l}}m}(\sin\phi)\;\sum_{p=0}^{\it
l}\;F_{{\it l}mp}(\inc^*)\;\sum_{q=\,-\,\infty}^{\infty}\;G_{{\it l}pq}
(e^*)
\left\{
\begin{array}{c}
\cos \\
\sin
\end{array}
\right\}^{{\it l}\,-\,m\;\;
\mbox{\small even}}_{{\it l}\,-\,m\;\;\mbox{\small odd}} \;\left(
v_{{\it l}mpq}^*-m(\lambda+\theta^*) \right)
~~~,~~~~~~
\label{1b}
\label{101b}
\label{1}
\label{101}
\ea
where $\,\theta^*\,$ is the rotation angle of the tidally perturbed body,$\,$\footnote{~When the equinoctial precession may be neglected, $\,\theta^*\,$ may be regarded as the sidereal angle.} while $\,F_{lmp}(\inc^*)\,$ and $\,G_{lpq}(e^*)$ are the inclination functions and the eccentricity polynomials,
respectively. The auxiliary linear combinations $\,v_{lmpq}^*\,$ are defined by
\ba
v_{lmpq}^*\;\equiv\;(l-2p)\,\omega^*\,+\,(l-2p+q){\cal M}^*\,+\,m\,\Omega^*~~~.
\label{2}
\label{102}
\ea
Conventionally,
the letters denoting the elements of the perturber are accompanied with asterisks:
$\,a^*,\,e^*,\,\inc^*,\,\Omega^*,\,\omega^*,\,{\cal M}^*\,$. Following Kaula (1964), the sidereal angle also acquires an asterisk, when it appears in a
combination $~v_{lmpq}^*-\,m\,\theta^*~$ with the perturber's elements.
The angle $\,\theta\,$, however, does not acquire an asterisk, when it appears in a linear combination $~v_{lmpq}-\,m\,\theta~$ with the orbital elements of a test body
subject to the additional tidal potential of the perturbed body. This strange nomenclature introduced by Kaula (1964) --- two different notations for one angle --- turns out to
be helpful and convenient in the calculation of the back-reaction experienced by the perturber. For a comprehensive explanation of this obscure point, see Section 5 in Efroimsky \& Makarov (2013).
Over timescales shorter than the apsidal-motion period, the expression in round brackets in the formula (\ref{1}) can be linearised as
\ba
v_{lmpq}^*-m(\lambda+\theta^*)\,=\,\omega_{lmpq}\,(t\,-\,t_0)~-~m~\lambda~+~v_{lmpq}^*(t_0)~-~m~\theta^*(t_0)~~,
\label{3}
\label{103}
\ea
where the following quantities act as the Fourier tidal modes:
\ba
\omega_{\textstyle{_{lmpq}}}\;\equiv~\stackrel{\bf\centerdot~~~~}{v^*_{lmpq}}\,-~m\,\stackrel{\bf\centerdot\,}{\theta^*}
~=\;(l-2p)\;\dot{\omega}^*\,+\,(l-2p+q)\;{\bf{\dot{\cal{M}}}}^{\,*}\,+\,m\;(\dot{\Omega}^*\,-\,\dot{\theta}^*)~~,
\label{4a}
\label{104a}
\label{504a}
\label{504}
\ea
${\bf{\dot{\cal{M}}}}^{\,*}\,$ being the perturber's ``anomalistic" mean motion (see Section \ref{diff} below), and $\,t_0\,$ being the time of pericentre passage. (As ever, we set $\,{\cal{M}}^{\,*}=\,0\,$ in the pericentre.) The modes $\,\omega_{\textstyle{_{lmpq}}}\,$ can assume either sign, but the physical forcing frequencies are positive definite:
\ba
\chi_{\textstyle{_{lmpq}}}\,=\,|\,\omega_{\textstyle{_{lmpq}}}\,|\,~.
\label{chi}
\ea
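As a simple illustration (ours, added for orientation), take the quadrupole term with $\,lmpq=2200\,$. The definition (\ref{104a}) renders
\ba
\omega_{\textstyle{_{2200}}}\,=~2\,\dot{\omega}^*\,+\,2\,{\bf{\dot{\cal{M}}}}^{\,*}\,+\,2\,(\dot{\Omega}^*\,-\,\dot{\theta}^*)~\approx~2\,(\,{\bf{\dot{\cal{M}}}}^{\,*}\,-\,\dot{\theta}^*\,)~~,
\nonumber
\ea
the apsidal and nodal rates being neglected in the last step. This is the principal semidiurnal mode. It is positive when the perturber's mean motion exceeds the spin rate of the perturbed body, vanishes in the synchronous spin state, and is negative for a body spinning faster than the perturber's mean motion; in all three situations, the physical frequency is $\,\chi_{\textstyle{_{2200}}}=\,2\;|\,{\bf{\dot{\cal{M}}}}^{\,*}-\,\dot{\theta}^*\,|\,$.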
\subsection{Simplifying the notation: $\,$fewer asterisks}
In the preceding subsection, we obeyed the convention by Kaula (1964) and marked with an asterisk the orbital elements of the tide-raising body. Kaula introduced this notation, because within his model he also considered another exterior body that was disturbed by the tides generated on the planet by the tide-raising body. This exterior body's elements were denoted by the same letters, but without an asterisk.
When the two outer bodies coincide, the asterisks may be dropped, except on two occasions. The first is writing the masses -- while the mass of the planet is denoted with $\,M\,$, the mass of the perturber (the star) will be written as $\,M^{\,*}\,$. The other occasion requires writing the additional tidal potential of the perturbed body -- the potential will have a value $\,U(\erbold,\,\erbold^{\,*})\,$ in a point $\,\erbold\,$,
provided the perturber (the star) is located in an exterior point $\,\erbold^{\,*}\,$ (both vectors being planetocentric). The planet's rotation angle $\,\theta\,$, as well
as the orbital elements of the star as seen from the planet, will hereafter be written without asterisks.
The most important notations employed in this paper are collected in Table \ref{nota.tab}.
\subsection{Difficulties}\label{diff}
At this point, a word of warning is necessary. Deriving the equation (\ref{504a}), we differentiated the expression (\ref{2}), which gave us the terms with $\,\dot{\omega}\,$
and $\,\dot{\Omega}\,$. Including these terms in the equation (\ref{504a}) acknowledges the fact that the perturber's trajectory is disturbed, not Keplerian.
The disturbance may come solely from tides, as in Kaula (1964, eqn 38), or from both tides and other sources. One way or another, the perturber's mean anomaly
$\,{\cal{M}}\,$ is no longer equal to $\,n\,t\,$, $\,$but is now given by
$~\,
{\cal{M}}\,=~{\cal{M}}_0\,+~\int^{\,t}_{t_0} n(t)~dt
~\,$,
~where
$
n(t)\,\equiv\,\sqrt{G\,(M\,+\,M^*)~a^{-3}(t)\,}~.
\,$
Accordingly, the expression for the modes becomes:
\ba
\omega_{\textstyle{_{lmpq}}}\;\equiv~\stackrel{\bf\centerdot~~~~}{v_{lmpq}}\,-~m\,\stackrel{\bf\centerdot\,}{\theta}
~=\;(l-2p)\;(\dot{\omega}\,+\,\dot{\cal{M}}_0)\,+\,q\,\dot{\cal{M}}_0
\,+\,(l-2p+q)\;n\,+\,m\;(\dot{\Omega}\,-\,\dot{\theta})~~.\quad
\label{4b}
\label{104b}
\label{504b}
\ea
It is, of course, tempting to assume that $\,\stackrel{\bf\centerdot}{{\cal{M}}_0}\,\ll\,n\,$, thus accepting the approximation
\ba
\stackrel{\bf\centerdot}{\cal{M}\,}\,=~\stackrel{\bf\centerdot}{{\cal{M}}_0}\,+~n~\approx~n~~,
\label{105}
\ea
as Kaula (1964) did in his equations (46 - 47). Within his theory, however, this approximation could not be used.$\,$\footnote{~In his books, Kaula (1966, 1968)
corrected this oversight. There, he kept the notation $\,n\,$ for the mean motion defined as in the Kepler law, and never confused it with $\,\stackrel{\bf\centerdot}{\cal{M}\,}\,$.} This is explained in Appendix \ref{difficulty} where we consider two examples. One is the case where perturbation of an orbit of a moon is mainly due to the tides the moon creates in the planet. In that situation, $\,\dot{\omega}\,$ and $\,\dot{\cal{M}}_0\,$ are of the same order but of opposite signs, so they largely compensate one another. This suggests a simultaneous neglect of $\,${\it{both}}$\,$ rates. The second example is when the dominant perturbation of the orbit comes from the oblateness of the primary. In this case, $\,\dot{\omega}\,$ and $\,\dot{\cal{M}}_0\;$ turn out to be of the same order and of the same sign -- so keeping one of these terms requires keeping the other.
Whether one or both of these rates should be included depends on a particular setting, and each practical case must be examined separately. In general, both rates should be kept.
While keeping $\,\stackrel{\bf\centerdot\,}{\omega}\,$ complicates the formalism, the emergence of $\,\stackrel{\bf\centerdot}{{\cal{M}}_0}\,$ complicates the treatment even further. To sidestep this issue, we shall $\,${\it{define}}$\,$ the mean motion via
\ba
n\,\equiv\,{\bf{\dot{\cal{M}}}}~~.
\ea
This, the so-called $\,${\it{anomalistic}}$\,$ mean motion, differs from $\,\sqrt{G(M+M^*)\,a^{-3}(t)\,}~$.
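Combining this definition with the decomposition written above, we obtain the explicit interrelation
\ba
n~\equiv~{\bf{\dot{\cal{M}}}}~=~\stackrel{\bf\centerdot}{{\cal{M}}_0}\,+~\sqrt{G\,(M\,+\,M^*)~a^{-3}(t)\,}~~,
\nonumber
\ea
so the anomalistic and Keplerian mean motions coincide only when the rate $\,\stackrel{\bf\centerdot}{{\cal{M}}_0}\,$ is negligible.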
We shall derive the heat-production formulae for two different settings -- with a fixed pericentre $\,\omega\,$ and with $\,\omega\,$ moving uniformly.
\subsection{Lagging}
For a static tide, the incremental tidal potential of the perturbed body mimics the perturbation (\ref{1}), except that each term $\,W_l\,$ is now equipped with a
mitigating multiplier $\,k_{\textstyle{_l}}\left({R}/{r}\right)^{\,l+1}\,$, where $\,k_{\textstyle{_l}}\,$ is an $\,l-$degree Love number. With the star located in $\,\erbold^{\,*}\,$, the additional potential in a point $\,\erbold\,$ will read as
\ba
U(\erbold\,,\;\erbold^{\;*})&=&\sum_{{\it l}=2}^{\infty}~U_{\it{l}}(\erbold)~=~\sum_{{\it l}=2}^{\infty}~k_{\it
l}\;\left(\,\frac{R}{r}\,\right)^{{\it l}+1}\;W_{\it{l}}(\eRbold\,,\;\erbold^{\;*})~~.~~~~~~~~
~~~~~~~~~~~~~~~~
\label{dr}
\ea
For time-dependent tides, this expression acquires an extra amendment: the reaction must lag, compared to the action. Naively, this would imply taking each $\,W_l\,$ at an earlier instant of time. However, in reality lagging depends on frequency; so each $\,W_l\,$ must be first expanded into a Fourier series over tidal modes, whereafter each term of the series should be delayed separately. The magnitude of the tidal reaction is frequency dependent too; so each term of the Fourier series will be multiplied by a
dynamical Love number of its own. Symbolically, this may be written in a manner similar to the static expression:
\ba
U(\erbold\,,\;\erbold^{\;*})&=&\sum_{{\it l}=2}^{\infty}~U_{\it{l}}(\erbold\,,\;\erbold^{\;*})~=~\sum_{l=2}^{\infty}~\left(\,\frac{R}{r}\,\right)^{{\it l}+1}\;\hat{k}_{l}\;W_{\it{l}}(\eRbold\,,\;\erbold^{\;*})~~.~~~~~~~~
~~~~~~~~~~~~~~~~
\label{505a}
\label{107a}
\ea
The hat above $\,\hat{k}_{l}\,$ means that this is not a multiplier but a linear operator that mitigates and delays each Fourier mode of $\,W_l\,$ differently:
\ba
\nonumber
U(\erbold\,,\;\erbold^{\;*})&=&
-\;
\frac{G\,M^*}{a}\;\sum_{{\it
l}=2}^{\infty}\;\left(\,\frac{R}{r}\,\right)^{\textstyle{^{l+1}}}
\left(\,\frac{R}{a}\,\right)^{\textstyle{^{\it
l}}}\sum_{m=0}^{\it l}\;\frac{(l - m)!}{(l + m)!}\;
\left(\,2~-\;\delta_{0m}\,\right)\;P_{lm}(\sin\phi)~\sum_{p=0}^{\it
l}\;F_{lmp}(i)~~~~\\
\nonumber\\
\nonumber\\
&~&\left.~~~\right.\sum_{q=\,-\,\infty}^{\infty}\;G_{lpq}(e)~k_l(\omega_{lmpq})~ \left\{
\begin{array}{c}
\cos \\
\sin
\end{array}
\right\}^{{\it l}\,-\,m\;\;
\mbox{\small even}}_{l\,-\,m\;\;\mbox{\small odd}} \;\left(\,v_{lmpq}~-\,m\,(\lambda~+~\theta)~-~\epsilon_{l}
\, \right)~~,~~\qquad~\,
\label{505b}
\label{107b}
\ea
where the Love numbers $\,k_l(\omega_{lmpq})\,$ and the phase lags $\,\epsilon_{\textstyle{_{l}}}(\omega_{lmpq})\,$ are functions of the Fourier modes. The lags emerge as the products
\bs
\ba
\epsilon_l(\omega_{\textstyle{_{lmpq}}})~=~\omega_{\textstyle{_{lmpq}}}~\Delta t_l(\omega_{\textstyle{_{lmpq}}})~~,
\label{506a}
\label{108a}
\ea
where $\Delta t_l(\omega_{\textstyle{_{lmpq}}})\,$ is the time delay at the mode $\,\omega_{\textstyle{_{lmpq}}}\,$. In reality, the time delays are functions not
of the Fourier modes (which can assume either sign), but of the actual physical forcing frequencies $~\chi_{\textstyle{_{lmpq}}}\,=\,|\,\omega_{\textstyle{_{lmpq}}}\,|~$
which are positive definite. Thus it is more accurate to write the delays not as $\Delta t_l(\omega_{\textstyle{_{lmpq}}})\,$ but as $\Delta t_l(\chi_{\textstyle{_{lmpq}}})\,$. Accordingly, the phase lags become
\ba
\epsilon_l(\omega_{\textstyle{_{lmpq}}})~=~\omega_{\textstyle{_{lmpq}}}~\Delta t_l(\chi_{\textstyle{_{lmpq}}})~~.
\label{506b}
\label{108b}
\ea
The time delays are positive definite due to causality, so the sign of the phase lag always coincides with that of the corresponding Fourier mode. Thus we finally have:
\ba
\epsilon_l(\omega_{\textstyle{_{lmpq}}})~=~|\,\omega_{\textstyle{_{lmpq}}}\,|~\,\mbox{Sgn}\,(\omega_{\textstyle{_{lmpq}}})~\,\Delta t_l(\chi_{\textstyle{_{lmpq}}})~=~
\chi_{\textstyle{_{lmpq}}}~\,\mbox{Sgn}\,(\omega_{\textstyle{_{lmpq}}})~\,\Delta t_l(\chi_{\textstyle{_{lmpq}}})~~,
\label{506c}
\label{108c}
\ea
\label{506}
\label{108}
\es
where $\,\chi_{\textstyle{_{lmpq}}}\equiv\,|\,\omega_{\textstyle{_{lmpq}}}\,|\,$ are the positive definite forcing frequencies.
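Two immediate consequences of (\ref{108c}) deserve to be spelled out. First, the lags are odd functions of the modes, $\,\epsilon_l(-\,\omega_{\textstyle{_{lmpq}}})\,=\,-\,\epsilon_l(\omega_{\textstyle{_{lmpq}}})\,$, because neither $\,\chi_{\textstyle{_{lmpq}}}\,$ nor $\,\Delta t_l(\chi_{\textstyle{_{lmpq}}})\,$ is sensitive to the sign of the mode. Second, for a body spinning faster than its perturber's mean motion, the semidiurnal mode $\,\omega_{\textstyle{_{2200}}}\approx\,2\,({\bf{\dot{\cal{M}}}}-\dot{\theta})\,$ is negative, and so is the lag $\,\epsilon_2(\omega_{\textstyle{_{2200}}})\,$, in agreement with the familiar picture of the Earth's tidal bulge leading the Moon.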
The dynamical Love number $\,k_l(\omega_{\textstyle{_{lmpq}}})\,$ and the phase lag $\,\epsilon_l(\omega_{\textstyle{_{lmpq}}})\,$ are the absolute value and the negative
phase of the complex Love number $\,\bar{k}_l(\omega_{\textstyle{_{lmpq}}})\,$ whose functional dependence upon the Fourier mode is solely determined by $\,l\,$, provided the body is spherical.$\,$\footnote{~For oblate celestial bodies, the functional form of the complex $\,\bar{k}_{\textstyle{_{l}}}(\omega_{\textstyle{_{lmpq}}})\,$ is also determined by the order $\,m\,$. In that situation, the right notation for the complex Love number is: $\,\bar{k}_{\textstyle{_{lm}}}(\omega_{\textstyle{_{lmpq}}})\,$. Its absolute value and negative phase will then be denoted with $\,{k}_{\textstyle{_{lm}}}(\omega_{\textstyle{_{lmpq}}})\,$ and $\,\epsilon_{\textstyle{_{lm}}}(\omega_{\textstyle{_{lmpq}}})\,$.}
\subsection{Physics behind the Love numbers and phase lags}
As we saw above, to obtain the decomposition (\ref{107b}) from the Fourier series (\ref{101b}), each $\,lmpq\,$ term of the latter had to be endowed with its own mitigating factor $\,k_{\textstyle{_l}}=k_{\textstyle{_l}}(\omega_{\textstyle{_{lmpq}}})\,$ and phase lag $\,\epsilon_{\textstyle{_l}}=\epsilon_{\textstyle{_l}}(\omega_{\textstyle{_{lmpq}}})\,$. In the past, some authors enquired whether this mitigate-and-lag method was general enough to describe tides. It is, as long as the tides are linear. This is explained in Appendix \ref{universality} below.
The expression (\ref{107b}) for the additional tidal potential contains both sines and cosines of the phase lags, and so does the ensuing expression for the surface elevation. However, the resulting expression for the tidal dissipation rate turns out to contain only the combination $\,{k}_{\textstyle{_l}}(\omega)\;\sin\epsilon_{\textstyle{_l}}(\omega)\,$ which is the negative imaginary part of the complex Love number:
\ba
{k}_{\textstyle{_l}}(\omega)\;\sin\epsilon_{\textstyle{_l}}(\omega)\;=\;|\bar{k}_{\textstyle{_l}}(\omega)|\;\sin\epsilon_{\textstyle{_l}}(\omega)\;=
\;-\;{\cal{I}}{\it{m}}\left[\bar{k}_{\textstyle{_l}}(\omega)\right]~~,~~~\mbox{where}\quad\omega=\omega_{\textstyle{_{lmpq}}}~~.
\label{gfr}
\ea
This quantity is often denoted as $\,k_l/Q\,$, although it would be more reasonable to employ the notation $\,k_l/Q_l\,$, with the tidal quality factors defined through $\,1/Q_l\,\equiv\,\sin|\epsilon_l|\,$.
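For a numerical touch: a quality factor $\,Q_l\,=\,100\,$ corresponds to $\,\sin|\epsilon_l|\,=\,10^{-2}\,$, i.e., to a phase lag of about $\,10^{-2}\,$ rad $\,\approx\,0.6^{\circ}\,$. In this small-lag regime, $\,\sin\epsilon_l\,$, $\,\tan\epsilon_l\,$ and $\,\epsilon_l\,$ itself are interchangeable to a high accuracy, wherefore the slightly different definitions of the quality factor circulating in the literature entail no practical disagreement.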
A dynamical Love number $\,{k}_{\textstyle{_l}}(\omega_{\textstyle{_{lmpq}}})\,$ is an even function of the tidal mode $\,\omega_{\textstyle{_{lmpq}}}\,$, while a phase lag $\,\epsilon_{\textstyle{_l}}(\omega_{\textstyle{_{lmpq}}})\,$ is odd, as can be observed from the equation (\ref{108b}). Thus the expression for the product $\,{k}_{\textstyle{_l}}\,\sin\epsilon_{\textstyle{_l}}\,$ as a function of the physical frequency $\,\chi=\chi_{\textstyle{_{lmpq}}}\,\equiv\,|\omega_{\textstyle{_{lmpq}}}|\,$ is:
\ba
{k}_{\textstyle{_l}}(\omega)\;\sin\epsilon_{\textstyle{_l}}(\omega)\;=\;{k}_{\textstyle{_l}}(\chi)\;\sin\epsilon_{\textstyle{_l}}(\chi)\;\,\mbox{Sgn}\,(\omega)~~,
\label{ggffrr}
\ea
where $\,\epsilon_l(\chi)\,$ is non-negative, because non-negative is the physical frequency $\,\chi\,$.
The frequency dependence of $\,{k}_{\textstyle{_l}}/Q_l={k}_{\textstyle{_l}}(\chi)\,\sin\epsilon_{\textstyle{_l}}(\chi)\,$ is defined by two major physical circumstances: self-gravitation of the planet and the rheology of its mantle. A rheological law is expressed by a constitutive equation, i.e., by an equation interconnecting the strain and the stress. A particular form of this equation is determined by the friction mechanisms present in the considered medium. A realistic rheological law should contain contributions from elasticity, viscosity, and inelastic processes (mainly, dislocation unjamming). Self-gravitation suppresses the tidal bulge. At low frequencies this effectively adds to the mantle's rigidity, whereas at higher frequencies the interplay of rheology and gravity is more complex (Efroimsky 2012$\,$b, Figure 2).
The calculation of the frequency dependence $\,{k}_{\textstyle{_l}}(\chi)\,\sin\epsilon_{\textstyle{_l}}(\chi)\,$ for a homogeneous body of a known size, mass and rheology is presented in detail in Efroimsky (2012a,b). See also the Appendix to Makarov \& Efroimsky (2014).
While quadrupole ($\,l=2\,$) terms are sufficient in most problems, exceptions are known. For the orbital evolution of Phobos, the $\,l=3\,$ and, possibly, even $\,l=4\,$ terms of the Martian tidal potential may be of relevance (Bills et al. 2005). Studying close binary asteroids, Taylor \& Margot (2010) took into account the Love numbers up to $\,l=6\,$.
The question of how rapidly $\,l>2\,$ terms fall off with the increase of the degree $\,l\,$ is also interesting. Most authors only rely on the geometric factor $\,(R/a)^{2l+1}\,$ to answer this question. As was explained in Efroimsky (2012$\,$b), the $\,l$-dependence of $\,k_l(\omega_{lmpq})\,\sin\epsilon(\omega_{lmpq})\,$, too, comes into play and changes the result considerably.
\section{The Eulerian and Lagrangian descriptions
}\label{prelude2}
\vspace{2.4mm}
\begin{flushright}
{\small \it What we hope ever to do with ease,}\\
{\small \it we must learn first to do with diligence.}\\
{\small \it ~~~Samuel Johnson}
\end{flushright}
\subsection{Notations and definitions}
To compare the varying shape of a deformable body against some benchmark configuration, we use $\,\Xbold\,$ to denote the initial position occupied by a particle at $\,t=0\,$. At another time $\,t\,$, the particle finds itself in a new place
\ba
\xbold~=~\fbold(\Xbold,\,t)\,~,
\label{change}
\nonumber
\ea
where the function $\,\fbold(\Xbold,\,t)\,$ is a trajectory, i.e., a solution to the equation of motion, with the initial condition $\,\xbold=\Xbold~$ set at $\,t=0\,$.
The current values of all physical and kinematic properties of the medium can be expressed as functions of the instantaneous coordinates $\,\xbold\,$ of a point where these properties are being measured at the present moment $\,t\,$. When referred to the present time and position, such properties are named {\it{Eulerian}} and are equipped with a subscript {\it{E}}$\,$; for example: $\,q_{_E}(\xbold,\,t)\,$. The Eulerian description is fit to answer the question ``where" and therefore is convenient in fluid dynamics where the displacement $\,\xbold-\Xbold\,$ can become arbitrarily large and the initial position $\,\Xbold\,$ is soon forgotten.
While $\,\xbold\,$ denotes a place in space, the initial condition $\,\Xbold\,$ acts as the ``number" of a particle presently residing at the place $\,\xbold\,$. Although located now at $\,\xbold\,$, the particle originally came from $\,\Xbold\,$ and will carry the label $\,\Xbold\,$ forever.
Knowing the trajectories of all particles, we can express the properties as functions of the time $\,t\,$ and the initial conditions $\,\Xbold~$. To that end, we employ the change of variables $\,\xbold~=~\fbold(\Xbold,\,t)\,$. Expressed through the initial conditions, a property $\,q\,$ will be termed as {\it{Lagrangian}} and equipped with the subscript $\,L~$:
\bs
\ba
q_{_L}(\Xbold,\,t)~\equiv~q_{_E}(\xbold\,,~t)\,~~~~~~~~~
\label{109a}
\label{416a}
\ea
or, in more detail:
\ba
q_{_L}(\Xbold,\,t)~\equiv~q_{_E}(\,\fbold(\Xbold\,,\,t)\,,~t\,)\,~.
\label{109b}
\label{416b}
\ea
\label{109}
\label{416}
\es
So $\,q_{_L}\,$ has the same value as $\,q_{_E}\,$, but has a different functional form, as it is now understood as a function of the initial conditions
(the particles' $\,$`numbers') $\,\Xbold\,$, and not of the present-time coordinates $\,\xbold\,$. Relating the quantities to the initial positions $\,\Xbold\,$, the Lagrangian description tells us ``which particle" and is thus practical in description of deformable solids.
In anticipation of perturbative treatment, we regard the trajectory $\,\xbold~=~\fbold(\Xbold,\,t)\,$ as fiducial and equip the appropriate functional dependencies with a superscript $\,0\,$:
\bs
\ba
q^0_{_L}(\Xbold,\,t)~\equiv~q^0_{_E}(\xbold\,,~t)~~~~~~~~~~~
\label{equality_a}
\label{110a}
\label{417a}
\ea
which is:
\ba
q^0_{_L}(\Xbold,\,t)~\equiv~q^0_{_E}(\,\fbold(\Xbold,\,t)\,,~t\,)\,~.
\label{equality_b}
\label{110b}
\label{417b}
\ea
\label{110}
\label{417}
\label{equality}
\es
\subsection{Perturbative approach}\label{approach}
\noindent
Under disturbance, two changes will take place in a point $\,\erbold\,$ at a time $\,t~$:
\begin{itemize}
\item[\bf 1.~] Properties will now assume different values in this point at this time.
So we substitute the unperturbed Eulerian dependencies $\,q^0_{_E}(\rbold\,,~t)\,$ with
\ba
q_{_E}(\rbold\,,~t)~=~q^0_{_E}(\rbold\,,~t)~+~q\,'_{_E}(\rbold\,,~t)\,~.
\label{111}
\ea
This equality, in fact, serves as a definition of the variation $\,q\,'_{_E}(\rbold,~t)~$: the variation is a change in the functional dependence of a physical property upon the present position $\,\erbold\,$.
~\\
\item[\bf 2.~] A different particle will now appear in the point $\,\erbold\,$ at the time $\,t\,$. It will not be the same particle as the one expected there at the time $\,t\,$ in the absence of perturbation.\\
~\\
Accordingly, a particle, which starts in $\,\Xbold~$ at $~t=0\,$, will show up, at the time $\,t\,$, not in the point $\,\xbold=\fbold(\Xbold,\,t)\,$ but in some other location displaced by $\,\ubold~$:
\ba
\erbold~=~\xbold~+~\ubold
~=~\fbold(\Xbold,\,t)~+~\ubold(\Xbold,\,t)\,~.
\label{112}
\ea
\end{itemize}
Both of these changes, {\bf{1}} and {\bf{2}}, will affect the Lagrangian dependencies of the properties upon the initial conditions, so the dependency of each property will acquire a variation $~q\,'_{_L}(\Xbold,\,t)~$:
\ba
q_{_L}(\Xbold,\,t)~=~q^0_{_L}(\Xbold,\,t)~+~q\,'_{_L}(\Xbold,\,t)\,~.
\label{113}
\ea
In Appendix \ref{EL}, we provide a self-sufficient introduction into the perturbative treatment of a deformable body, both in the Eulerian and Lagrangian languages. There we derive a relation between the perturbations of the Lagrangian and Eulerian quantities:
\ba
q\,'_{_L}(\Xbold,\,t)~=~q\,'_{_E}(\,\fbold(\Xbold,\,t)\,,~t\,)~+~\ubold(\Xbold,\,t)~\nabla_{\textstyle{_x\,}} q_{_E}~+~O(\ubold^2)~~,
\label{duga}
\ea
with the gradient in the second term acting on the unperturbed history: \footnote{~To derive (\ref{duga}), we expanded $\,q_{_E}(\erbold,\,t)=q_{_E}(\xbold+\ubold,\,t)$ into the Taylor series near the unperturbed $q_{_E}(\xbold,\,t)$.}
\ba
\nabla_{\textstyle{_{x\,}}} q_{_E}\,\equiv\,\nabla_{\textstyle{_x\,}}q_{_E}(\xbold,\,t)~\,~,\,\quad\mbox{where}\qquad \xbold\,=\,\fbold(\Xbold,\,t)~~.
\label{116}
\ea
In the formula (\ref{duga}), the first term on the right-hand side, $~q\,'_{_E}(\xbold,\,t)~$, accounts for the change of the final spatial distribution of properties. The other two terms show up because
perturbation alters the mapping from $\,\Xbold\,$ to the present position.
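A one-dimensional toy example, offered here solely for illustration, may help. Let the unperturbed motion be trivial, $\,\fbold(\Xbold,\,t)=\Xbold\,$, let a property be distributed as $\,q^0_{_E}(x)\,=\,\alpha\,x\,$, and let the perturbation shift every particle by a small constant $\,u\,$. The particle labelled $\,X\,$ then resides at $\,x=X+u\,$ and samples the value $\,q_{_E}(X+u,\,t)\,=\,\alpha\,(X+u)\,+\,q\,'_{_E}\,$. Hence
\ba
q\,'_{_L}~=~q_{_L}(\Xbold,\,t)~-~q^0_{_L}(\Xbold,\,t)~=~q\,'_{_E}~+~\alpha\,u~=~q\,'_{_E}~+~u~\nabla_{\textstyle{_x\,}}q^0_{_E}\,~,
\nonumber
\ea
in accordance with the formula (\ref{duga}).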
\subsection{Summary of linearised formulae for the density\\
of a periodically deformed solid}
We need several formulae for density perturbations, which are obtained in Appendix \ref{EL}.
\vspace{3mm}
${\bf{
\divideontimes
}}~~~$ {\underline{In the Eulerian description:}}
\ba
\rho_{_E}(\rbold,\,t)\,=~\rho^{\,0}_{_E}(\rbold)~+~{\rho_{_E}}'(\rbold,\,t)\,~,~\qquad~\quad~
\label{141}
\ea
\ba
\rho_{_E}\,'\,+\,\nabla_{\textstyle{_r\,}}\cdot(\rho^{\,0}_{_E}\,\ubold)\,=\,0\,~,~\qquad~\quad~
\label{142}
\ea
Formula (\ref{141}) renders the interrelation between the functions of the same variable. The unperturbed density $\,\rho^{\,0}_{_E}\,$ appears here as a function of the perturbed present positions $\,\erbold\,$, not of the unperturbed reference positions $\,\xbold\,$. This can be traced through the derivation (\ref{132} - \ref{135}). There, the unperturbed density initially shows up as a function of $\,\xbold=\erbold-\ubold\,$. It then ends up as a function of $\,\erbold\,$, after the Taylor expansion around $\,\rbold\,$ over powers of $\,\ubold\,$ is performed.
Accordingly, the symbol $\,\nabla_{\textstyle{_r\,}}\,$ denotes differentiation with respect to the perturbed position $\,\erbold\,$ upon which $\,\rho^{\,0}_{_E}\,$ is
set to depend in the above equations. Also remember that in $\,\ubold(\xbold,\,t)=\ubold(\erbold,\,t)\,+\,O(\ubold^2)\,$ we can neglect $\,O(\ubold^2)\,$, in the linear approximation. Thus the Lagrangian and Eulerian values of the displacement coincide in the first order. Specifically, in the equation (\ref{142}), our $\,\ubold\,$ can be treated as a function of $\,\erbold\,$. So all entities in that equation are functions of the same variable, the perturbed location.
\vspace{3mm}
${\bf{
\divideontimes
}}~~~$ {\underline{In the Lagrangian description:}}\vspace{3mm}
\ba
\rho_{_L}(\Xbold,\,t)\,=~\rho^{\,0}_{_E}(\Xbold)~+~{\rho_{_L}}'(\Xbold,\,t)\,~,~\quad~\quad
\label{143}
\ea
\ba
\rho_{_L}\,'\,+\,\rho^{\,0}_{_E}\,\nabla_{\textstyle{_X\,}}\cdot\ubold\,=\,0\,~.~\qquad~~\quad~
\label{144}
\ea
Recall that this is an interrelation between functions of the same variable. This time, it is the initial position
$\,\Xbold\,$. Had we altered the notation from $\,\Xbold\,$ to $\,\erbold\,$, nothing would have changed (except that we would write $\,\nabla_{\textstyle{_r\,}}\,$
instead of $\,\nabla_{\textstyle{_X\,}}\,$) --- it would still be the same relation between three functions of the same argument.
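A homogeneous dilatation offers a quick check of the formula (\ref{144}); the example is ours and is not needed for the subsequent development. Under the displacement $\,\ubold\,=\,\varepsilon\,\Xbold\,$ with $\,\varepsilon\ll 1\,$, we have $\,\nabla_{\textstyle{_X\,}}\cdot\,\ubold\,=\,3\,\varepsilon\,$, wherefore (\ref{144}) yields $\,\rho_{_L}\,'\,=\,-\,3\,\varepsilon\,\rho^{\,0}_{_E}\,$. This is exactly what mass conservation dictates: each volume element grows by the factor $\,(1+\varepsilon)^3\,\approx\,1+3\,\varepsilon\,$, so the density must drop to $\,\rho^{\,0}_{_E}\,(1-3\,\varepsilon)\,$, in the first order.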
\vspace{3mm}
${\bf{
\divideontimes
}}~~~$ {\underline{Relation between the increments $\,\rho_{_L}\,'\,$ and $\,\rho_{_E}\,'\,\;$:}}
\vspace{4mm}~\\
This relation originates from the general formula (\ref{duga}). In our case the reference trajectory $\,\xbold=\fbold(\Xbold,\,t)\,$ stays identical to the initial position
$\,\Xbold\,$, so we obtain:
\ba
\rho_{_L}\,'(\xbold,\,t)~=~\rho_{_E}\,'(\xbold,\,t)~+~\ubold(\xbold,\,t)\cdot\nabla_{\textstyle{_x\,}}\rho^{\,0}_{_E}(\xbold,\,t)\,~.
\label{145}
\ea
Once again, we are dealing with a relation between several functions taken all at one and the same point. Here the point is denoted with $\,\xbold\,$. Had we denoted it with $\,\erbold\,$, the only change would be a switch from $\,\nabla_{\textstyle{_x}}\,$ to $\,\nabla_{\textstyle{_r}}\,$,
no matter what meaning we instill into these $\,\xbold\,$ and $\,\erbold\,$.
For an initially homogeneous body, $\,\nabla\rho^{\,0}_{_E}\,=\,0\,$; so the forms (\ref{142}) and (\ref{144}) of the linearised conservation law coincide and can both
be conveniently written as
\ba
\rho^{\,0}~\nabla_{\textstyle{_r}}\cdot{\vbold}\,+\,\frac{\textstyle \partial \rho}{\textstyle\partial t\,}\,=\,0\,~,
\label{146}
\ea
where $~\rho^{\,0}\equiv\rho^{\,0}_{_E}~$ and the velocity is
\ba
\vbold~=~\frac{\partial \ubold}{\partial t}~~.
\label{147}
\ea
\subsection{Potentials and their increments}
In each point, the density $\,\rho\,$
and potential $\,V\,$ comprise a mean value and a perturbation:
\begin{subequations}
\ba
\mbox{density:}\qquad\quad\rho&=&\rho^{\,0}~+~\rho\,'\,~,\,\qquad\qquad
\label{8a}
\label{148a}\\
\nonumber\\
\mbox{potential:}\qquad\quad V&=&V^{\,0}\,+~V\,'\,=~V^{\,0}\,+~(\,W~+~U\,)\,~,
\label{8d}
\label{148b}
\ea
\label{8}
\label{148}
\end{subequations}
where $\,V^{\,0}\,$ is the constant-in-time spherically symmetrical potential of an undeformed body, while $\,V\,'\,$ denotes the potential's perturbation. The
perturbation consists of the external tide-raising potential $\,W\,$ and the resulting additional potential $\,U\,$ of the perturbed body:
\ba
V\,'~=~W~+~U\,~.
\label{9}
\label{149}
\ea
The potentials and densities will be endowed with a subscript {\it{$\,$``$\,L\,$"$\,$}} or {\it{$\,$``$\,E\,$"$\,$}} pointing at the Lagrangian or Eulerian descriptions, accordingly.
Owing to the general expression (\ref{duga}), we have:
\ba
{V_{_E}}'(\erbold,\,t)~=~{V_{_L}}'(\xbold,\,t)~-~\ubold\cdot\nabla_{\textstyle{_x\,}} V^{\,0}_{_E}\,~,
\label{150}
\ea
the same being valid for
$\,\rho\,$, see the equation (\ref{145}).
For unperturbed properties, however, subscripts may be dropped without causing any confusion:
\ba
V^{\,0}~\equiv~V^{\,0}_{_E}\,\quad,\qquad\rho^{\,0}~\equiv~\rho^{\,0}_{_E}\,~.
\label{151}
\ea
\subsection{The Poisson equation in the Eulerian description}
In both the perturbed and unperturbed settings, the density and potential are always linked through the Poisson equation:
\begin{subequations}
\ba
\nabla_{\textstyle{_r\,}}^{\,2}\,V_{_E}&=&-~4\,\pi\,G\,\rho_{_E}~~,
\label{152a}\\
\nonumber\\
\nabla_{\textstyle{_r\,}}^{\,2}\,V^{\,0}_{_E}&=&-~4\,\pi\,G\,\rho^{\,0}_{_E}~~,
\label{152b}
\ea
while the perturbing potential $\,W\,$ obeys the Laplace equation outside the perturber:
\ba
\nabla_{\textstyle{_r\,}}^{\,2}\,W_{_E}&=&0~~.~\qquad\qquad
\label{152c}
\ea
\label{152}
\end{subequations}
Subtraction of (\ref{152b}) from (\ref{152a}) results in a Poisson equation for the density perturbation:
\ba
\left.~\quad~\right. \nabla_{\textstyle{_r\,}}^{\,2}\,{V_{_E}}'~=~-~4~\pi~G~{\rho_{_E}}'~~.~~
\label{153}
\ea
The Poisson equation in the Lagrangian description is presented in Appendix \ref{EL}.
\section{The power produced by the tidal force}
\subsection{In the Eulerian description}
The power $\,P\,$ exerted on the perturbed body is an integral, over its volume, of the rate of working by tidal forces on displacements.
In the Eulerian language, the power reads as
\ba
P\;=\;
\int\,\rho_{_E}\;{\vbold}\,\cdot\,\nabla_{\textstyle{_r\,}} {V_{_E}}'\;d^3r\,~,
\label{157}
\ea
the integration being performed over an instantaneous, deformed volume. Together with
\ba
\rho_{_E}~{\vbold}\cdot\nabla_{\textstyle{_r\,}} {V_{_E}}'~=~
\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,{\vbold}~{V_{_E}}'\,)~-~{V_{_E}}'~\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,{\vbold}\,)\,~,~~
\label{13}
\label{158}
\ea
the mass-conservation law
\ba
\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,\vbold\,)\,+\,\frac{\textstyle \partial \rho_{_E}}{\textstyle\partial t\,}\,=\,0\,~
\label{14}
\label{159}
\ea
simplifies the expression under the integral to the following form:
\bs
\ba
\rho_{_E}~{\vbold}\cdot\nabla_{\textstyle{_r\,}} {V_{_E}}'
~=~\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,{\vbold}\,{V_{_E}}'\,)\,+\,{V_{_E}}'~\frac{\,\partial {\rho_{_E}}'}{\partial t\,}
\;\;~.
\label{160a}
\ea
Further employment of the Poisson equation in the Eulerian form, (\ref{153}), gives us
\ba
\rho_{_E}~{\vbold}\cdot\nabla_{\textstyle{_r\,}} {V_{_E}}'
~=~\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,{\vbold}\,{V_{_E}}'\,)~-~\frac{1}{4\,\pi\,G}~{V_{_E}}'~\frac{\partial}{\partial t}\,\nabla_{\textstyle{_r\,}}^2 {V_{_E}}'\;\;~.
\label{160b}
\ea
\label{15}
\label{160}
\es
So the power becomes
\bs
\ba
P&=&
\int\,\nabla_{\textstyle{_r\,}}\cdot(\,\rho_{_E}\,{\vbold}\,{V_{_E}}'\,)\;d^3r
~+~
\int\,{V_{_E}}'~\frac{\,\partial {\rho_{_E}}'}{\partial t\,}~d^3r
\label{161a}\\
\nonumber\\
\nonumber\\
&=&
\int_{\Sigma^{t}}\,\rho_{_E}~{V_{_E}}'\;{\vbold}\cdot d\Sbold^t
~-~\frac{1}{4\,\pi\,G}~
\int\,{V_{_E}}'~\frac{\partial}{\partial t\,}\,\nabla_{\textstyle{_r\,}}^2 {V_{_E}}'~d^3r
\,~,
\label{161b}
\ea
\label{161}
\es
where $\,d\Sbold^t\,\equiv~ {\bf{\hat{n}}}^{t}\;d\Sigma^{t}\,$, ~with
$\,{\bf{\hat{n}}}^{t}\,$ and $\,d\Sigma^{t}\,$ being a unit normal to the deformed surface and an element of area on that surface, both taken at the time $\,t\,$.
Correct to the first order in the displacement $\,\ubold\,$, these are related to their unperturbed analogues via
\ba
{\bf{\hat{n}}}^{t}\,=\;(\,1~-~\nabla^{\Sigma}\otimes\ubold\,)\,{\bf{\hat{n}}}\qquad~\mbox{and}~\qquad d\Sigma^{t}\,=\;\left(\,1~+~\nabla^{\Sigma}\cdot\ubold\,\right)\,d\Sigma~~,
\label{162}
\ea
where the surface gradient is defined as
\ba
\nabla^{\Sigma}\,\equiv\,\nabla_{\textstyle{_x\,}}\,-~{\bf{\hat{n}}}~\partial_{{\bf{\hat{n}}}}\,~,
\label{163}
\ea
so $\,\nabla^{\Sigma}\otimes\ubold\,$ is a three-dimensional second-rank tensor (Dahlen \& Tromp 1998). Altogether,
\bs
\ba
d\Sbold^t\,\equiv~ {\bf{\hat{n}}}^{t}\;d\Sigma^{t}\,=\,\left(\,1\,+\,\nabla^{\Sigma}\cdot\ubold\,\right)\,{\bf{\hat{n}}}~d\Sigma~-~(\nabla^\Sigma\otimes\ubold)\,{\bf{\hat{n}}}~d\Sigma~=~
\left(\,1\,+\,\nabla^{\Sigma}\cdot\ubold\,\right)\,d{\bf S}~-~(\nabla^\Sigma\otimes\ubold)\,d{\bf S}
\,~,~~\,~
\label{164a}
\ea
with $d{\bf S}\equiv{\bf{\hat{n}}}d\Sigma$ pertaining to the unperturbed surface. In a shorter form, the above reads as
\ba
d\Sbold^t\,=~{\mathbb{J}}~d{\bf S}~~,
\label{164b}
\ea
\label{164}
\es
where the three-dimensional second-rank tensor
\ba
{\mathbb{J}}~\equiv~(\,1\,+\,\nabla^{\Sigma}\cdot\ubold\,)\,{\mathbb{I}}\,-\,\nabla^\Sigma\otimes\ubold
\label{165}
\ea
is, loosely speaking, playing the role of a Jacobian for elements of area. This is fully analogous to the formula
\ba
d^3r\,=\,J\,d^3x\,=\,(1\,+\,\nabla_{\textstyle{_x}}\cdot\ubold)\,d^3x\,=\,[1\,+\,\nabla_{\textstyle{_r}}\cdot\ubold\,+\,O(u^2)\,]\,d^3x
\label{uu}
\ea
linking the deformed volume $\,d^3r\,$ to the undeformed volume $\,d^3x\,$. (See Appendix \ref{continuity}.)
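The expansion (\ref{uu}) is nothing but the first-order behaviour of the determinant of the deformation gradient. With $\,\erbold\,=\,\xbold\,+\,\ubold\,$, the deformation gradient is $\,\partial r_i/\partial x_j\,=\,\delta_{ij}\,+\,\partial u_i/\partial x_j\,$, whence
\ba
J~=~\det\left(\,{\mathbb{I}}~+~\nabla_{\textstyle{_x}}\otimes\,\ubold\,\right)~=~1~+~\mbox{tr}\left(\,\nabla_{\textstyle{_x}}\otimes\,\ubold\,\right)~+~O(u^2)~=~1~+~\nabla_{\textstyle{_x}}\cdot\,\ubold~+~O(u^2)~~,
\nonumber
\ea
the trace of the displacement gradient being its divergence.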
\subsection{In the Lagrangian description}
Applied to the density, the general formula (\ref{416}) renders:
\ba
\rho_{_L}(\Xbold,\,t)\,\equiv\,\rho_{_E}(\erbold,\,t)\,~~.
\label{445}
\ea
This, together with the formula (\ref{150}) for the potential perturbation, enables us to express the power in the Lagrangian description:
\ba
P\;=\;
\int\,\rho_{_L}\;{\vbold}\,\cdot\,\nabla_{\textstyle{_x\,}}{V_{_L}}'\;d^3x~-~\int\,\rho_{_L}\;{\vbold}\,\cdot\,\nabla_{\textstyle{_x\,}}(\,\ubold\cdot
\nabla_{\textstyle{_x\,}} V^0\,)\;d^3x\,~,
\label{166}
\label{446}
\ea
the integral now being taken over the undeformed body. Be mindful that $\,d^3 r\,\nabla_{\textstyle{_r\,}}\,=\,d^3x\,\nabla_{\textstyle{_x\,}}\,$, so no Jacobian shows up
on the right-hand side.
The velocity and displacement being in quadrature, the second term should be dropped after time averaging (denoted with angular brackets):
\bs
\ba
\langle P\rangle\;=\;
\int\,\rho_{_L}\;{\vbold}\,\cdot\,\nabla_{\textstyle{_x\,}}{V_{_L}}'\;d^3x\,~.
\label{167a}
\label{447a}
\ea
For a periodically deformed solid, we set the equilibrium state to play the role of the unperturbed configuration, for which reason
\footnote{~The mass is conserved along both trajectories, perturbed and unperturbed. So both $\,\rho_{_E}(\erbold,\,t)\,d^3\erbold\,$ and $\,\rho^{\,0}_{_E}(\xbold,\,t)\,d^3\xbold\,$ must be equal to the initial mass $\,\rho^{\,0}_{_E}(\Xbold)\,d^3\Xbold\,$, and therefore to one another:
$\,\rho_{_E}(\erbold,\,t)\,d^3\erbold\,=\,\rho^{\,0}_{_E}(\xbold,\,t)\,d^3\xbold\,$. Thence,
$\,\rho_{_E}(\rbold\,,~t)~J~=~\rho^{\,0}_{_E}(\Xbold)\,$, where $\,J\equiv d^3\erbold/d^3\xbold\,$. In combination with (\ref{445}), this yields:
\ba
\rho_{_L}(\Xbold\,,~t)~J~=~\rho^{\,0}_{_E}(\xbold\,,~t)~~.
\nonumber
\ea
When the unperturbed configuration is the equilibrium state, $\,\xbold=\fbold(\Xbold,\,t)\,$ coincides with $\,\Xbold\,$ at all times. So $\,\rho^{\,0}_{_E}(\xbold,\,t)\,$ bears no dependence on time, and the above equality becomes simply $\,\rho_{_L}(\Xbold\,,~t)~J~=~\rho^{\,0}_{_E}(\xbold)\,$.
See Appendix \ref{continuity} for a detailed discussion.
\label{7}}
$\,~\rho_{_L}(\Xbold\,,~t)~J~=~\rho^{\,0}_{_E}(\xbold)\,$. Insertion of this equality into the expression (\ref{167a}) gives us:
\ba
\langle P\rangle\;=\;
\int\,\rho^{\,0}\;\,{\vbold}\,\cdot\,\nabla_{\textstyle{_x\,}}{V_{_L}}'\;J^{-1}\,d^3x\,~.
\label{167b}
\label{447b}
\ea
\label{167}
\label{447}
\es
The dot-product can be easily rearranged via the formulae analogous to (\ref{158} - \ref{160}). Due to
\ba
\rho^{\,0}~{\vbold}\cdot\nabla_{\textstyle{_x\,}} {V_{_L}}' ~=~
\nabla_{\textstyle{_x\,}}\cdot(\,\rho^{\,0}\,{\vbold}\,{V_{_L}}'\,)\,-\,{V_{_L}}'\,\nabla_{\textstyle{_x\,}}\cdot(\,{\vbold}\,\rho^{\,0}\,)
\label{168}
\label{448}
\ea
and
\ba
\rho^{\,0}~\nabla_{\textstyle{_x\,}}\cdot{\vbold}\,+\,\frac{\textstyle \partial \rho}{\textstyle\partial t\,}\,=\,0\,~,
\label{169}
\label{449}
\ea
the expression under the integral becomes
\ba
\rho^{\,0}~{\vbold}\cdot\nabla_{\textstyle{_x\,}} {V_{_L}}'
~=~\nabla_{\textstyle{_x\,}}\cdot(\,\rho^{\,0}\,{\vbold}\,{V_{_L}}'\,)\,+\,{V_{_L}}'~\frac{\,\partial \rho_{_L}\,'}{\partial t\,}
\;\;~,
\label{170}
\label{450}
\ea
provided we set $\,\nabla_{\textstyle{_x\,}}\rho^{\,0}\,=0\,$, i.e., provided we assume that the unperturbed body is homogeneous.$\,$\footnote{~No such assumption was required to obtain the Eulerian analogue (\ref{159}) of the Lagrangian formula (\ref{169}).} Then the time-averaged power, for an initially homogeneous body, acquires the form of
\ba
\langle P\rangle\;=\;
\int\,\nabla_{\textstyle{_x\,}}\cdot(\,\rho^{\,0}\,{\vbold}\,{V_{_L}}'\,)\;d^3x
~+~
\int\,{V_{_L}}'~\frac{\,\partial \rho_{_L}\,'}{\partial t\,}\;d^3x
\,~,
\label{171}
\ea
where we approximated the Jacobian with unity, thus neglecting higher-order terms.
\section{Tidal dissipation rate in a homogeneous sphere}\label{sphere}
Although the Eulerian and Lagrangian descriptions are equivalent,
the boundary conditions look simpler in the Eulerian picture. On the other hand, for periodic deformations, practical calculations are easier carried out in the
Lagrangian description, as it implies integrations over the unperturbed volume and surface corresponding to the equilibrium shape. It is, unfortunately, not unusual for the authors to refrain from pointing out which description is employed, leaving this to the discernment of the readers. The easiest way to trace an author's choice is to look at the way they write the expression for the power and the Poisson equation.
The often-cited authors Zschau (1978) and Platzman (1984) started in the Eulerian language and then switched to the Lagrangian description. This can be seen from the fact that the time-average power was eventually written by both of them as an integral over the $\,${\it{undeformed}}$\,$ body. Both works contained some mathematical omissions which, fortunately, did not influence the final form of the integral.
Below we present these authors' method in a more mathematically complete manner. While our expression for the power, written as an integral over the unperturbed surface, will coincide with the integrals derived by the said authors, our final result (the power written as a spectral sum over the Fourier modes) will differ. In one important detail, our result also differs from that by Peale \& Cassen (1978).
\subsection{A mixed, Eulerian-Lagrangian treatment}\label{zp}\label{p}\label{z}
Following Zschau (1978, eqn. 2), we begin with the formula (\ref{157}) for the power in the Eulerian variables. The next natural step is (\ref{161}), whereafter integration by parts renders:
\begin{subequations}
\ba
P~=\int\rho_{_E}\,{V_{_E}}'\;{\vbold}\cdot d{\bf{S}}^t\,-~\frac{1}{4\pi G}\,\int d^3r~ \nabla_{\textstyle{_r}}\cdot\left({V_{_E}}'~\,\frac{\partial \nabla_{\textstyle{_r\,}} {V_{_E}}'}{\partial t}\,\right)
~+~\frac{1}{4\pi G}\,\int d^3r~\frac{\partial \nabla_{\textstyle{_r\,}} {V_{_E}}'}{\partial t}\,\cdot\,\nabla_{\textstyle{_r\,}} {V_{_E}}'~~~~~~~
\label{172a}\\
\nonumber\\
\nonumber\\
=\int\rho_{_L}\,{V_{_E}}'\,{\vbold}\cdot\left(\,{\mathbb{J}}~d{\bf{S}}\,\right)\,-~\frac{1}{4\pi G}\,\int {V_{_E}}'~\frac{\partial \nabla_{\textstyle{_r\,}} {V_{_E}}'}{\partial t}\,\cdot\left(\,{\mathbb{J}}\,d{\bf{S}}\,\right)\,+~\frac{1}{8\pi G}~\frac{\partial}{\partial t}\int (J\,d^3x)~\nabla_{\textstyle{_r\,}} {V_{_E}}'\cdot\nabla_{\textstyle{_r\,}} {V_{_E}}'\,~.~~~~~
\label{172b}
\ea
\label{172}
\end{subequations}
{\it{En route}} from the former expression to the latter, we switch from $\,d{\bf{S}}^t\,$ and $\,d^3r\,$ to $\,{\mathbb{J}}~d{\bf{S}}\,$ and $\,J\,d^3x\,$, respectively. Thereby we switch from integration over a deformed body to that over the undeformed one. So $\,\rho_{_E}\,$ becomes $\,\rho_{_L}\,$,
see the equation (\ref{445}). A similar switch from $\,{V_{_E}}'\,$ to $\,{V_{_L}}'\,$ can be performed using the equation (\ref{150}), but we prefer to stick to $\,{V_{_E}}'\,$ for some time, for it will be easier to impose the boundary conditions on the Eulerian potential.
In a leading-order calculation, both the Jacobian and its tensorial analogue may be set unity: $\;{\mathbb{J}}\approx{\mathbb{I}}\;$ and $\;J\approx 1~$,
$\,$as evident from the formulae (\ref{165}) and (\ref{uu}). In the same order, we can substitute $\,\nabla_{\textstyle{_r\,}}\,$ with $\,\nabla_{\textstyle{_x\,}}\,$. In addition, as was explained in Footnote \ref{7}, we can substitute $\,\rho_{_L}\,=\,\rho^{\,0}/J\,$ with $\,\rho^{\,0}\,$, and can treat the latter as time-independent. Thus the time average of the power becomes:
\begin{subequations}
\ba
\langle P\rangle &=&\int\,\left\langle\,\rho^{\,0}~{V_{_E}}'\;{\vbold}\,\right\rangle\,\cdot\,d{\bf{S}}~-~\frac{1}{4\pi G}~\int ~\left\langle\,{V_{_E}}'~\,\frac{\partial \,\nabla_{\textstyle{_x}} {V_{_E}}'}{\partial t}\,\right\rangle\,\cdot\,d{\bf{S}}
\label{19a}
\label{173a}
\ea
\ba
&=&-~\frac{1}{4\pi G}~\int ~\left\langle~{V_{_E}}'~\,\frac{\partial}{\partial t}\,\left(\,\nabla_{\textstyle{_x\,}} {V_{_E}}'\,-\,4\,\pi\,G\,\rho^{\,0}\,\ubold
\,\right)\;\right\rangle\,\cdot\,d{\bf{S}}\,~,
\label{19b}
\label{173b}
\ea
with the volume integral dropped.$\,$\footnote{~As previously agreed, in our approximation the Jacobian is set unity. The potential variation $\,{V_{_E}}'\,$ is a sum of sinusoidal harmonics, and so is its gradient $\,\nabla_{\textstyle{_x\,}} {V_{_E}}'\,$. After time averaging of (\ref{172b}), the cross terms in the product $\,\nabla_{\textstyle{_x\,}} {V_{_E}}'\,\cdot\,\nabla_{\textstyle{_x\,}} {V_{_E}}'\,$ will vanish, while the products of harmonics of the same frequency will render constants.} The potential $\,{V_{_E}}'\,$ in the above developments was the $\,${\it{interior}}$\,$ potential,
so the above formula should, rigorously speaking, have been written as
\ba
\langle P\rangle~=~-~\frac{1}{4\pi G}~\int ~\left\langle~{{V_{_E}}'}^{\textstyle{^{\,(interior)}}}~\frac{\partial}{\partial t}\,\left(\,\nabla_{\textstyle{_x\,}} {{V_{_E}}'}^{\textstyle{^{\,(interior)}}}\,-\,4\,\pi\,G\,\rho^{\,0}\,\ubold
\,\right)\;\right\rangle\,\cdot\,d{\bf{S}}\,~.
\label{19c}
\label{173c}
\ea
\label{19}
\label{173}
\end{subequations}
The expression (\ref{19c}) is somewhat formal. On the one hand, it contains integration over an undeformed surface, an operation appropriate to the Lagrangian description. On the other hand, the quantity under the integral is Eulerian, i.e., is a function of the perturbed positions. Thus, to employ the expression (\ref{173c}) in practical calculations, one would first have to express the integrated average product $~\left\langle~{{V_{_E}}'}^{\textstyle{^{\,(interior)}}}~\frac{\textstyle\partial}{\textstyle\partial t}\,\left(\,\nabla_{\textstyle{_x\,}} {{V_{_E}}'}^{\textstyle{^{\,(interior)}}}\,-\,4\,\pi\,G\,\rho^{\,0}\,\ubold
\,\right)\;\right\rangle~$ as a function of the unperturbed positions, i.e., of the coordinates on the undeformed surface. Simply speaking, one would have to switch from a Eulerian function under the integral to a Lagrangian function, using the formula (\ref{150}). The reason for our procrastination with this step is the convenience of the Eulerian description for imposing boundary conditions.
\subsection{Comparing the intermediate result (\ref{173c}) with analogous\\ formulae from Zschau (1978) and Platzman (1984)}
Our expression (\ref{173c}) is equivalent to the formula (12) in Zschau (1978). The sole difference is how we justify the substitution of the Lagrangian density $\,\rho_{_L}\,$ with the unperturbed $\,\rho^{\,0}\,$. Whereas we approximated the Jacobian with $\,1+O(|\ubold |)\,$, Zschau (1978, eqn. 10) employed a clever trick that did not rely on the smallness of disturbance. In our notation, the trick looks like this: if in the first term of our expression (\ref{173a}) we also keep the first-order perturbation $\,{\rho_{_L}}'\,$ of the density, the time average of the product $\,{\rho_{_L}}'\,\vbold\,{V_{_E}}'\,\,$ will always be zero, provided all three oscillate at the same frequency. While elegant, Zschau's argument works only for a perturbation at one frequency, not for a spectrum of frequencies.
The treatment by Platzman (1984) contains more inaccuracies. The author's formula (2) looks like our equation (\ref{167b}), with the actual density substituted from the beginning by its unperturbed value $\,\rho^{\,0}\,$. Such a start indicates the use of the Lagrangian description. This, however, comes into contradiction with the way the author writes down the conservation law. Platzman's form of that law is equivalent to our equation (\ref{142}), i.e., is written in the Eulerian language. The following Poisson equation, too, is Eulerian. That the author eventually arrives at the right integral expression (equation 5 in {\it{Ibid.}}) is more due to luck than to accuracy. In the subsequent derivation, the author's formulae (7) and (10) are incorrect, because they neglect the fact that the Fourier modes in the Darwin-Kaula theory can be of either sign. We shall address this point at the end of Section \ref{foll}.
\subsection{Employment of the boundary conditions}
The Eulerian boundary conditions mimic those from electrostatics (see Appendix \ref{appA}):
\ba
\label{174}
\label{20}
{{V_{_E}}'}^{\,\textstyle{^{(interior)}}}~=~{{V_{_E}}'}^{\,\textstyle{^{(exterior)}}}~~
\ea
and
\ba
\left[~\frac{\partial~}{\partial\hat{\bf{n}}}\, {V_{_E}}'~-~4~\pi~G~\rho^{\,0}~{\bf u}~\right]^{\,\textstyle{^{(exterior)}}}\,=~
\left[~\frac{\partial~}{\partial\hat{\bf{n}}}\, {V_{_E}}'~-~4~\pi~G~\rho^{\,0}~{\bf u}~\right]^{\,\textstyle{^{(interior)}}}\,~.
\label{21}
\label{175}
\ea
Insertion thereof into the equation (\ref{173c}) casts the power into the form
\ba
\langle P\rangle~=~-~\frac{1}{4\pi G}~\int ~\left\langle~{{V_{_E}}'}^{\textstyle{^{\,(exterior)}}}~\frac{\partial}{\partial t}~\nabla_{\textstyle{_x\,}} {{V_{_E}}'}^{\textstyle{^{\,(exterior)}}}\;\right\rangle\,\cdot\,d{\bf{S}}\,~.
\label{176}
\ea
It is now high time to write the expression under the integral (\ref{176}) as a function of the coordinates on the unperturbed surface, the one over which we integrate. The formula (\ref{150}) prescribes us to substitute $\,{{V_{_E}}'}\,$ with $\,{V_{_L}}'-\,\ubold\cdot\nabla_{\textstyle{_x\,}} V_0\,$. As $\,\ubold\,$ is zero outside the body, we get~\footnote{~For the first multiplier under the integral (\ref{176}), we simply substitute $\,{{V_{_E}}'}^{\textstyle{^{\,(exterior)}}}\,$ with $\,{{V_{_L}}'}^{\textstyle{^{\,(exterior)}}}\,$, omitting the term $\,\left[\,-\,\ubold\cdot\nabla_{\textstyle{_x\,}} V_0\,\right]^{\textstyle{^{\,(exterior)}}}\,$ because $\,\ubold\,$ is zero outside the body.\\
$\left.~~~~\right.$ The case of the second multiplier, $\,\frac{\textstyle\partial}{\textstyle\partial t}~\nabla_{\textstyle{_x\,}} {{V_{_E}}'}^{\textstyle{^{\,(exterior)}}}\,$, is less obvious. Employment of the formula (\ref{150}) furnishes us $\,\frac{\textstyle\partial}{\textstyle\partial t}\,\left[\,\nabla_{\textstyle{_x\,}}{V_{_L}}'\,-\,\nabla_{\textstyle{_x}}\cdot(\ubold\, V^{\,0})\,\right]^{\textstyle{^{\,(exterior)}}}$.
The vanishing of $\,\ubold\,$ on the exterior side of the boundary does not imply the vanishing of its gradient there. On the contrary, $\,\nabla_{\textstyle{_x}}\cdot(\ubold\, V^{\,0})\,$ performs a finite step -- but so also does the gradient of $\,{V_{_L}}'\,$, so that altogether the gradient of $\,{V_{_E}}'\,$ remains continuous. To sidestep these intricacies, we can expand the volume of integration slightly outward from the actual volume of the planet (Platzman 1984, p. 74).}
\ba
\langle P\rangle~=~-~\frac{1}{4\pi G}~\int ~\left\langle~{{V_{_L}}'}^{\textstyle{^{\,(exterior)}}}~\frac{\partial}{\partial t}~\nabla_{\textstyle{_x\,}} {{V_{_L}}'}^{\textstyle{^{\,(exterior)}}}\;\right\rangle\,\cdot\,d{\bf{S}}\,~.
\label{177}
\label{22}
\ea
To analyse the behaviour of $\,V\,'\,$ outside the perturbed body, recall that its two components, $\,U\,$ and $\,W\,$, scale differently with the planetocentric radius. As can be seen from (\ref{1a}), the degree-$l\,$ Legendre component of the perturbing potential changes as \footnote{~Do not be misled by the planetocentric distance in (\ref{1a}) being denoted with $\,R\,$. There we needed the value of $\,W\,$ on the surface, whereas here we need to know $\,W\,$ at an arbitrary planetocentric distance.} ~$~W_l\,\propto\,\,r^{\,l}\,$. According to (\ref{505a}), the degree-$l\,$ component of the tidal potential obeys $~U_l\,\propto\,r^{-(l+1)}\,$. All in all, the exterior $\,V\,'\,$ assumes the form of
\ba
{{V_{_L}}'}^{\,\textstyle{^{(exterior)}}}=~\sum_{l=2}^{\infty}\,\left[\,\left(\,\frac{r}{R}\,\right)^{\textstyle{^{l}}}\,W_l(R)\,+\,\left(\,\frac{r}{R}\,
\right)^{\textstyle{^{-(l+1)}}}\,U_l(R)\,\right]\,~,
\label{23}
\label{178}
\ea
while the normal part of its gradient on the free surface is
\ba
\frac{\partial\,}{\partial r}~{{V_{_L}}'}^{\,\textstyle{^{(exterior)}}}~=~R^{-1}\,\sum_{l=2}^{\infty}\,\left[\,l~W_l\,-\,(l+1)~U_l\,\right]\,~.
\label{24}
\label{179}
\ea
Plugging it into (\ref{22}), and benefitting from the orthogonality of surface harmonics, we obtain:~\footnote{~On the boundary, we have: $\,{{V_{_L}}'}(R)\,=\,\sum_{l=2}^{\infty}\,\left[\,W_l(R)\,+\,U_l(R)\,\right]~$, as evident from (\ref{23}). Together with (\ref{24}), this expression was inserted in (\ref{22}). By doing so, we omitted the diagonal products $\,W_l\,\dot{W}_l\,$ and $\,U_l\,\dot{U}_l\,$ that vanish after time averaging. (Indeed, $\,W_l\,$ is in quadrature with $\,\dot{W}_l\,$, while $\,U_l\,$ is in quadrature with $\,\dot{U}_l\,$.) {\it{En route}} from
(\ref{25a}) to (\ref{25b}), we took into account that the time averages of $\,\partial (U_l\,W_l)/\partial t\,$ also vanish.}
\begin{subequations}
\ba
\langle P\rangle &=&-~\frac{1}{4\pi G R}~\sum_{l=2}^{\infty}\,\int~\left\langle~l~U_l\,\stackrel{\bf\centerdot}{W}_l\,-\,(l+1)~W_l\,\stackrel{\bf\centerdot}{U}_l~
\right\rangle~dS
\label{25a}
\label{180a}
\ea \ba
&=&\frac{1}{4\pi G R}~\sum_{l=2}^{\infty}\,(2\,l\,+\,1)\,\int~\left\langle~W_l~\stackrel{\bf\centerdot}{U}_l~\right\rangle~dS
\,~,
\label{25b}
\label{180b}
\ea
\label{25}
\label{180}
\end{subequations}
which is equivalent to the formulae (18) in Zschau (1978) and (5) in Platzman (1984). This, however, is the last point on which we are still in agreement with our predecessors.
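For the reader's convenience, we spell out the step from (\ref{25a}) to (\ref{25b}), the one justified in the preceding footnote. The time average of a full time derivative of a bounded periodic quantity vanishes; so $\,\langle\,\partial(U_l\,W_l)/\partial t\,\rangle\,=\,0\,$ entails $\,\langle\,U_l\stackrel{\bf\centerdot}{W}_l\,\rangle\,=\,-\,\langle\,W_l\stackrel{\bf\centerdot}{U}_l\,\rangle\,$, and therefore
\ba
l~\langle\,U_l\,\stackrel{\bf\centerdot}{W}_l\,\rangle~-~(l+1)~\langle\,W_l\,\stackrel{\bf\centerdot}{U}_l\,\rangle~=~-~(2\,l+1)~\langle\,W_l\,\stackrel{\bf\centerdot}{U}_l\,\rangle~~,
\nonumber
\ea
which, with the overall factor $\,-\,(4\,\pi\,G\,R)^{-1}\,$ restored, is exactly (\ref{25b}).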
\subsection{Writing the integral as a spectral sum}\label{foll}
Bringing in the dynamical Love numbers $\,k_l\,$
and
the phase lags defined in (\ref{505b}), one can express the products $\,W_l(t)\,\dot{U}_l(t)\,$
via the spectral components of the disturbance $\,W(t)~$.~\footnote{~In this subsection, $\,\omega\,$ is a shortened notation for the mode $\,\omega_{lmpq}\,$, not the argument of the pericentre.}
Although the formula
\ba
\bcancel{
\sum_{l=2}^{\infty}(2l+1)\,\left\langle W_l(t)\,\dot{U}_l(t)\right\rangle ~=~
\sum_{\omega}\,(2l+1)\,\frac{\omega}{2}\,W^{\,2}_l(\omega)\,k_l(\omega)~\sin\epsilon_l(\omega)
}
~
\label{26}
\ea
is often used in the literature (Zschau 1978, Platzman 1984, Segatz et al. 1988),
$\,$\footnote{~Our expression (\ref{26}) is identical to the upper line of the equation (10) in Platzman (1984).
(Note a misprint on that line of Platzman's equation: a missing factor of $\,\omega\,$.)\\
$~\quad~~$ Our formula (\ref{26}), when truncated to $\,l=2\,$, also becomes equivalent to the equation (22) in Zschau (1978) and to the equation (12) in Segatz et al. (1988). ~(Both authors kept only the degree-2 terms.)}$\,$
accurate examination demonstrates that it is incorrect. To appreciate this, one simply has to insert the expansions (\ref{1b}) and (\ref{505b}) into the formula (\ref{25b}) and see what happens.
That the answer differs from (\ref{26}) was noticed by Peale \& Cassen (1978). However, their development also needs correction. Below, we dwell upon this matter in great detail and provide a full inventory of the terms emerging in the spectral expansion for the damping rate. At this point, we only mention the two key circumstances:
\begin{itemize}
\item[\bf(a)~] The conventional expression (\ref{26}) ignores the degeneracy of modes, i.e., a situation where several modes $\,\omega_{\textstyle{_{lmpq}}}\,$
with different sets $\,lmpq\,$ take the same numerical value $\,\omega\,$. As will be demonstrated in Section \ref{accurate}, the sum over modes $\,\omega\,$ in (\ref{26}) should be substituted with a sum over $\,${\it{distinct$\,$ values}}$\,$ of the modes:
\ba
\nonumber
\mbox{instead~of}~~ \sum_\omega W_l^2(\omega)~~\mbox{in~(\ref{26})$\,$,~use}~~ \sum_\omega\left[\,\sum_{\omega_{\textstyle{_{lmpq}}}=\,\omega} W_{l}(\omega_{lmpq})\,\right]^2~~,
\ea
where $\,\sum_{\omega_{\textstyle{_{lmpq}}}=\,\omega} W_{l}(\omega_{lmpq})\,$ denotes the sum of all the terms for which $\,\omega_{lmpq}\,$ takes the value $\,\omega\,$.
In short: first sum all the terms corresponding to one value of $\,\omega~$, $\,$then square the sum, and only thereafter sum over all the values of $\,\omega\,$ (a concrete example follows this list).\\
\item[\bf(b)~] Much less intuitive is the fact that the spectral sum will contain extra terms missing completely in the expression (\ref{26}). As we shall see in Appendix \ref{sketch}, these terms look (up to some caveats) like $\,W_l(\omega)\,W_l(-\omega)\,$. They show up because two modes of opposite values, $\,\omega\,$ and $~-\omega\,$, correspond to the same physical frequency $\,|\,\omega\,|\,$.\\
\end{itemize}
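To make the degeneracy from item {\bf(a)} tangible, neglect for a moment the apsidal and nodal rates, so that $\,\omega_{\textstyle{_{lmpq}}}\,\approx\,(l-2p+q)\,n\,-\,m\,\dot{\theta}\,$, and let the rotation be synchronous, $\,\dot{\theta}\,=\,n\,$. Then
\ba
\omega_{\textstyle{_{2201}}}~\approx~3\,n\,-\,2\,n~=~n\qquad\mbox{and}\qquad
\omega_{\textstyle{_{2011}}}~\approx~n\,-\,0~=~n~~,
\nonumber
\ea
so two distinct quadrupole terms share one and the same value $\,\omega=n\,$. Their $\,W\,$ amplitudes must be summed $\,${\it{before}}$\,$ squaring, exactly as prescribed above.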
For the time being, we use the notation $\,\sum^{\,\textstyle\sharp}~$:
\ba
\sum_{l=2}^{\infty}(2l+1)\,\left\langle W_l(t)\,\dot{U}_l(t)\right\rangle ~=~
{\sum_{\omega}}^{\textstyle{\sharp}~}(2l+1)\,\frac{\omega}{2}\,W^{\,2}_l(\omega)\,k_l(\omega)~\sin\epsilon_l(\omega)\,~,
\label{27}
\ea
where the superscript $\,^{\textstyle{\sharp}~}\,$ reminds the reader that the spectral sum needs to be amended down the road.
Insertion of (\ref{27})
into (\ref{25b}) results in:$\,$\footnote{~Were we using complex potentials, we would
have $\,W_l\,\dot{U}_l^*\,$ instead of $\,W_l\,\dot{U}_l\,$ in (\ref{25b}), and would have $\,W_l\,W_l^*\,$ instead of $\,W_l\,W_l\,$ in (\ref{27}).}
\ba
\langle\,P\,\rangle~=~\frac{1}{8\pi G R}~{\sum_{\omega}}^{\textstyle{\sharp}~}\,(2\,l\,+\,1)~\omega\,k_l(\omega)~\sin\epsilon_l(\omega)\int~W^{{\,2}}_l(\omega)~dS
\,~ ~.~~
\label{30a}
\label{30b}
\label{30}
\label{183}
\ea
If not for the superscript $\,^{\textstyle{\sharp}~}\,$, this expression would coincide with the results by Zschau (1978) and Platzman (1984).$\,$\footnote{~Our expression (\ref{30}) should be compared to the equation (22) from Zschau (1978), with the understanding that our expression furnishes the mean damping rate summed over the entire spectrum, whereas Zschau's formula renders the energy loss over a period, at a certain frequency. With these details taken into account, the formulae are equivalent. They are also equivalent to the formulae (10) and (12) in Segatz et al. (1988) and (10) in Platzman (1984). Note, however, that in the first line of Platzman's formula a factor of $\,\omega\,$ is missing. \label{foot}}
The superscript reminds us of the important caveat in the evaluation of the sum: the factors $\,W^{\,2}_l(\omega)\,$ should be substituted with more complicated expressions, whereas the sum should be carried not over all modes $\,\omega=\omega_{\textstyle{_{lmpq}}}\,$, but over all $\,${\it{distinct$\,$ values$\,$}} of $\,\omega\,$, see Section \ref{next}.
\section{Heat production over tidal modes}\label{next}\label{accurate}
We must insert the expansions (\ref{1b}) and (\ref{505b}) into the formula (\ref{25b}) for the heating rate, in order to obtain a comprehensive version of the
somewhat symbolic sum (\ref{27}) and to see what the modified sum $\,\sum^{\textstyle{^{\,\sharp}}}\,$ actually means. A sketchy version of this calculation (which takes into account that the modes may have either sign, but neglects the degeneracy of modes) is given in Appendix \ref{sketch}. Extraordinarily laborious, the full calculation is presented in Appendix \ref{appD}. Here we provide the final results.
In the case of a $\,${\it{uniformly}}$\,$ moving pericentre, the average dissipation rate is:
\ba
\nonumber
\langle\,P\,\rangle~=~
\ea
\ba
\frac{G\,{M^*}^{\,2}}{a }\sum_{l=2}^{\infty}\left(\frac{R\,}{a}\right)^{\textstyle{^{2l+1}}}\sum_{m=0}^{l}
\frac{(l - m)!}{({\it l} + m)!}
\left(2-\delta_{0m}\right)
\sum_{p=0}^{l}F^{\,2}_{lmp}(i)
\sum_{q\,=-\infty}^{\infty}G^{\,2}_{lpq}(e)
\,\chi_{\textstyle{_{lmpq}}}\,
k_l(\chi_{\textstyle{_{lmpq}}})~\sin\epsilon_l(\chi_{\textstyle{_{lmpq}}})\,~,~~~
\label{196}
\ea
where the physical frequencies are the absolute values of the Fourier modes:
\ba
\chi_{\textstyle{_{lmpq}}}\,=~|\,\omega_{\textstyle{_{lmpq}}}\,|~=~|\,
(l-2p)\;\dot{\omega}\,+\,(l-2p+q)\;{\bf{\dot{\cal{M}}}}\,+\,m\;(\dot{\Omega}\,-\,\dot{\theta})
\,|~\approx~|\,(l-2p+q)\;n~-~m~\dot{\theta}\,|\,~,~~~
\label{freq}
\ea
and $\,\sin\epsilon_l(\chi_{\textstyle{_{lmpq}}})\,$ is what is often called $\,1/Q_{\textstyle{_{l}}}\,$ in the literature.$\,$\footnote{~It would not hurt to reiterate that the Fourier modes $\,\omega_{\textstyle{_{lmpq}}}\,$ can be of either sign, while the physical forcing frequencies (\ref{freq}) are positive definite. Obviously,
$~\chi_{\textstyle{_{lmpq}}}\,k_l(\chi_{\textstyle{_{lmpq}}})~\sin\epsilon_l(\chi_{\textstyle{_{lmpq}}})\,=\,\omega_{\textstyle{_{lmpq}}}\,
k_l(\omega_{\textstyle{_{lmpq}}})~\sin\epsilon_l(\omega_{\textstyle{_{lmpq}}})\,$, because the dynamical Love numbers are even functions, whereas the phase lags are odd and of the same sign as their argument. This is why the tidal quality factors may be expressed as $\,1/Q_{\textstyle{_{l}}}\,=\,\sin\epsilon_l(\chi_{\textstyle{_{lmpq}}})\,$ and also as $\,1/Q_{\textstyle{_{l}}}\,=\,|\,\sin\epsilon_l(\omega_{\textstyle{_{lmpq}}})\,|\,$, with the absolute value symbols being redundant in the former formula and needed in the latter.}
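As a simple illustration of this sign convention (an example of ours, with the synchronous configuration chosen only for definiteness), take the quadruple $\,lmpq=220\,{\mbox{-}}1\,$:
\ba
\omega_{\textstyle{_{220\,-1}}}\,\approx~(2-0-1)\,n\,-\,2\,\dot{\theta}~=~n\,-\,2\,\dot{\theta}\,~,
\nonumber
\ea
a quantity which is negative for a synchronous rotator ($\,\dot{\theta}\approx n\,$). The corresponding physical frequency is nonetheless positive, $\,\chi_{\textstyle{_{220\,-1}}}=n\,$, and the product $\,\chi\,k_2(\chi)\,\sin\epsilon_2(\chi)\,$ coincides with $\,\omega\,k_2(\omega)\,\sin\epsilon_2(\omega)\,$, in accordance with the footnote above.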
In the Appendix \ref{appD}, we also derive a formula for an idle pericentre; but the applicability realm of that formula is limited.$\,$\footnote{~For an idle pericentre, the time-averaged tidal-heating power reads as:
\ba
\nonumber
\langle\,P\,\rangle~=~
\frac{G\,{M^*}^{\,2}}{a}\,
\sum_{l=2}^{\infty}\,\left(\frac{R}{a}\right)^{\textstyle{^{2\,l\,+\,1}}}
\sum_{m=0}^{l}~
\frac{(l - m)!}{({\it l} + m)!}\;
\left(\,2 -\delta_{0m}\,\right)
\,\sum_{p=0}^{l}F_{lmp}(i)\;\sum_{p\,'=0}^{l}F_{lmp\,'}(i) \qquad\,\qquad\qquad\qquad\qquad\qquad\qquad
~\\
\nonumber\\
\left.\qquad\qquad\qquad\right.
\sum_{q\,=-\infty}^{\infty} G_{lpq}(e)~ \left[\,G_{lp\,'q\,'}(e)\,\right]_{\textstyle{_{q\,'\,=\,q\,-\,2\,(p-p\,')}}}~\cos\left(\,2\,(p\,'\,-\,p)\,\omega_0\,\right)
\,~\omega_{\textstyle{_{lmpq}}}\,
\,k_l(\omega_{\textstyle{_{lmpq}}})~\sin\epsilon_l(\omega_{\textstyle{_{lmpq}}})~
\,~,~~\qquad
\nonumber
\ea
$\omega_0\,$ being the value of the pericentre. This formula is of limited practical value, since $\,\omega_0\,$ seldom stays idle. For example, if we are computing tidal damping in a planet perturbed by the star, $\,\omega_0\,$ of the star as seen from the planet will be evolving due to the equinoctial precession of the planet's equator.
}
Our formula (\ref{196}) differs from the appropriate expression in Kaula (1964, Eqn 28), which contains a redundant factor $\,(1+k_{\,l})/2\,$.
\noindent
In the special situation where
\begin{itemize}
\item[(a)~] $~l=2\,$,
\item[(b)~] the body is incompressible, $\,$so $\,k_2\,=\,3\,h_2/5\,$, $\,$\footnote{~Static Love numbers of an incompressible spherical planet satisfy the relation $\;(2l+1)\,k_l\,=\,3\,h_l\;$. As explained in Appendix \ref{love}, an analogue of this equality for dynamical Love numbers is $\;(2l+1)\,k_l(\omega_{\textstyle{_{lmpq}}})\,=3\,h_l(\omega_{\textstyle{_{lmpq}}})\,$.}
\item[(c)~] the spin is synchronous, with no libration,
\end{itemize}
the expression (\ref{196}) should be compared to the formula (31) from Peale \& Cassen (1978). This comparison is carried out in Appendix \ref{appD}. In our expression, all terms are positive-definite, because the factors $\,\omega_{\textstyle{_{2mpq}}}\,k_2(\omega_{\textstyle{_{2mpq}}})\,\sin\epsilon_2(\omega_{\textstyle{_{2mpq}}})\,$ are even functions of the tidal mode $\,\omega_{\textstyle{_{2mpq}}}\,$. Peale \& Cassen (1978), however, have their terms proportional to the products \footnote{~In the notation of Peale \& Cassen (1978), these products are written as $~\,\frac{\textstyle 3}{\textstyle 5}\,h_2\,\frac{\textstyle{2-2p+q-m}}{\textstyle{Q_{2mpq}}}\,$.} $\,\omega_{\textstyle{_{2mpq}}}\,k_2(\omega_{\textstyle{_{2mpq}}})\,\sin|\epsilon_2(\omega_{\textstyle{_{2mpq}}})|\,~$ which are negative for negative
$\,\omega_{\textstyle{_{2mpq}}}\,$. In the considered setting, the largest such terms were of the order of $\,e^4\,$. Such inputs lead to an underestimation of the heat production rate in situations where the eccentricity is sufficiently high (as in the case of the Moon, whose eccentricity may have attained high values in the past due to a three-body resonance with the Sun).
\section{Conclusions}
We have derived from first principles a formula for the tidal dissipation rate in a homogeneous spherical body. $\,${\it{En$\,$ route}}$\,$ to that formula, we compared our intermediate results with those by Zschau (1978) and Platzman (1984). When restricted to the special case of an incompressible spherical planet spinning synchronously without libration, our final formula can be compared with the commonly used expression from Peale \& Cassen (1978, Eqn. 31). The two turn out to differ. In our expression, the contributions from all Fourier modes are positive-definite, which is not the case for the formula from {\it{Ibid}}. As a result, employment of the formula from Peale \& Cassen (1978) may cause underestimation of tidal heat production. For example, our calculations for the rate of energy dissipation in the Moon with its current parameters yield a value approximately a factor of 2 greater than the result derived from the classic equation, with the difference coming from the non-resonant terms having the correct sign.
We therefore propose to use our equation (\ref{196}) for rocky planets and moons, instead of the classic formula from Peale \& Cassen (1978, eqn 31), because (a) our expression is more accurate for the basic case of objects captured into the 1:1 resonance, and (b) it correctly captures the frequency dependence of tidal dissipation for objects outside the 1:1 resonance.
Several applications are provided in the work by Makarov and Efroimsky (2014).
\section*{Acknowledgments}
M.E. is indebted to Jeroen Tromp and Mikael Beuthe for pointing out the advantage of the Lagrange description, and to Gabriel Tobie for a very useful exchange on tidal heating.
The authors are grateful to James G. Williams for meticulous reading of the manuscript and very judicious comments that were of great help.
The authors' special thanks go to the referee, Patrick A. Taylor, whose thoughtful and comprehensive report enabled the authors to improve the quality of the paper significantly.
This research has made use of NASA's Astrophysics Data System.\\
\pagebreak
1,108,101,565,719 | arxiv |
Let $R$ be a commutative ring with unity and let $\mathbb N=\{1,2,3,\ldots\}$ be the set of positive integers. The set of functions
$\Omega(\mathbb N,R):=\{\alpha:\mathbb N \rightarrow R\}$ has a natural structure of a $R$-algebra, with the operations:
\begin{eqnarray*}
& (\alpha+\beta)(n):=\alpha(n)+\beta(n),\;\forall n\in \mathbb N, \\
& (\alpha \cdot \beta)(n):=\sum_{ab=n}\alpha(a)\beta(b),\;\forall n\in \mathbb N.
\end{eqnarray*}
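For instance, for $n=6$, the second of these operations (the Dirichlet convolution) reads:
$$(\alpha \cdot \beta)(6)=\alpha(1)\beta(6)+\alpha(2)\beta(3)+\alpha(3)\beta(2)+\alpha(6)\beta(1).$$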
Moreover, if $R$ is a domain, then $\Omega(\mathbb N,R)$ is a domain as well. Assume $R=\mathbb C$.
The algebraic properties of the ring $\Omega(\mathbb N,\mathbb C)$, known as the Dirichlet ring or the ring of arithmetic functions, were intensively studied in the literature, see for instance \cite{berb}, \cite{cash} and \cite{cash2}.
Let $k\in\mathbb R$. We denote $\mathcal O_k$ the domain of holomorphic functions on $\operatorname{Re} z>k$. We let
$$O(n^k)=\{ \alpha\in\Omega(\mathbb N,\mathbb C)\;|\;\exists C>0, \text{ such that} |\alpha(n)|\leq Cn^{k},\;\forall n\geq 1 \},$$
the set of arithmetic functions of order $n^k$. We denote $\mathcal D_k:=\bigcap_{\varepsilon>0}O(n^{k+\varepsilon})$. It is easy to check
that $\mathcal D_k$ is a $\mathbb C$-subalgebra of $\Omega(\mathbb N,\mathbb C)$. For any $\alpha\in \mathcal D_k$,
the \emph{Dirichlet series} $$F(\alpha)(z):=\sum_{n=1}^{+\infty}\frac{\alpha(n)}{n^z},$$
converges uniformly and absolutely on the compact subsets of $\operatorname{Re} z> k+1$, hence $F(\alpha)\in \mathcal O_{k+1}$.
Also, $F(\alpha)$ is identically zero if and only if $\alpha=0$ (see \cite[Theorem 11.3]{apostol}). Therefore,
by straightforward computations, the map
$F:\mathcal D_k \rightarrow \mathcal O_{k+1},\; \alpha\mapsto F(\alpha)$,
is an injective morphism of $\mathbb C$-algebras.
Consequently, if $\alpha_1,\ldots,\alpha_r\in\mathcal D_k$ are linearly independent (algebraically independent) over $\mathbb C$,
then $F(\alpha_1),\ldots,F(\alpha_r)$ are also linearly independent (algebraically independent) over $\mathbb C$.
This remark has very important consequences in analytic number theory, see for instance \cite{florin}, \cite{kac} and \cite{cim}.
The aim of our paper is to study the association $\alpha\mapsto F(\alpha)$ in a larger context. Our approach follows the methods used in
\cite{cim}.
Let $k\in \mathbb R$. We consider the subsets
\begin{eqnarray*}
& \mathcal C_k:=\{\alpha\in \Omega(\mathbb N, \mathcal O_k) :\forall \varepsilon>0, \exists M_{\varepsilon}:\{\operatorname{Re} z>k\}\rightarrow [0,+\infty) \text{ continuous }\\
& \text{ such that } |\alpha(n)(z)|<n^{k+\varepsilon}M_{\varepsilon}(z),\forall n\geq 1,\operatorname{Re} z>k \},\\
& \mathcal E_k:=\{\operatorname{L}\in \Omega(\mathbb N,\mathcal O_{k})\;:\; \exists c>0,\;C:\{\operatorname{Re} z>k\}\rightarrow [0,+\infty) \text{ continuous }\\
& \text{ such that } |\operatorname{L}(n)(z)| \leq C(z)n^{- c(\operatorname{Re} z)}, \;\forall n\geq 1, \operatorname{Re} z>k\}.
\end{eqnarray*}
We prove that $\mathcal C_k$ has a natural structure of $\mathcal O_k$-algebra, see Proposition $1.1$.
In Proposition $1.4$, given $\alpha\in\mathcal C_k$ and $L\in\mathcal E_k$, we prove that there exists $k'\geq k$ such that the series of functions
$$F_L(\alpha)(z):=\sum_{n=1}^{+\infty}\alpha(n)(z)\operatorname{L}(n)(z)$$
is uniformly absolutely convergent on the compact subsets of $\operatorname{Re} z>k'$, hence $F_L(\alpha)\in \mathcal O_{k'}$.
In the main result of the first section, Theorem $1.5$, we prove that the map $\alpha\mapsto F_L(\alpha)$ is $\mathcal O_k$-linear and, moreover, is a morphism of
$\mathcal O_k$-algebras, if $L:(\mathbb N,\cdot)\rightarrow (\mathcal O_k,\cdot)$ is a monoid morphism.
In the $\mathbb C$-vector space $\mathcal E_k$ we consider the subset
\begin{eqnarray*}
& \widetilde{\mathcal E}_k:=\{\operatorname{L}\in \mathcal E_k\;:\; \operatorname{L}(n)(z)\neq 0,\;\forall n\geq 1,\;\operatorname{Re} z>k \text{ and } \\
& \forall n_0\geq 1,\; \exists C(n_0)>0, \text{ such that } \frac{|L(n)(z)|}{|L(n_0)(z)|} \leq n^{-C(n_0)\operatorname{Re} z},
\;\forall n\geq n_0+1,\;\operatorname{Re} z>k \}.
\end{eqnarray*}
In Remark $1.6$, we note that the (general) Dirichlet series, see \cite{hardy}, and the classical Dirichlet series, see \cite{apostol}, are
particular cases of the type $F_{L}(\alpha)$, where $\alpha\in \Omega(\mathbb N,\mathbb C)$ has polynomial growth and $L\in \widetilde{\mathcal E}_0$ has a special form.
At the beginning of the second section, similarly to \cite{cim}, we define
\begin{eqnarray*}
& \mathcal B_{k} := \{f \in \mathcal O_k \;:\;\forall a>0,\; \lim_{x\rightarrow +\infty}e^{-ax}|f(x)|=0 \}, \\
& \mathcal I_{k} := \{f\in \mathcal O_k \;:\exists a>0 \text{ such that} \;\lim_{x \rightarrow +\infty}e^{ax}|f(x)|=0 \},
\end{eqnarray*}
where the limits are taken on the real line.
In Proposition $2.1$ we show that $\mathcal B_{k}$ is a subdomain of $\mathcal O_k$ and $\mathcal I_{k}$ is an ideal in $\mathcal B_{k}$.
Moreover, in Proposition $2.5$ we prove that $\mathcal I_{k}$ does not contain non-zero entire functions of order $<1$.
Let $\mathcal A_{k} \subset (\mathcal B_{k}\setminus \mathcal I_{k})\cup \{0\}$ be a
$\mathbb C$-subalgebra of $\mathcal B_{k}$. We consider the subset
\begin{eqnarray*}
& \mathcal H_{k}:=\{ \alpha\in \Omega(\mathbb N,\mathcal A_{k})\;:\;\exists M:\{\operatorname{Re} z> k\}\rightarrow [0,+\infty) \text{ continuous, such that } \\
& \text{(i)} |\alpha(n)(z)|\leq M(z)n^{k},\;\forall n\geq 1,\operatorname{Re} z>k,\text{ (ii) }\lim_{x\rightarrow+\infty} e^{-ax}M(x) = 0,\;\forall a>0\},
\end{eqnarray*}
which is an $\mathcal A_{k}$-subalgebra of $\Omega(\mathbb N,\mathcal A_{k})$.
The main result of our paper is Theorem $2.7$, where we prove that for any $L\in \widetilde{\mathcal E}_k$, there exists a constant $k'\geq k$ which
depends on $L$, such that the map $F_L:\mathcal H_k \rightarrow \mathcal O_{k'}$, $\alpha\mapsto F_L(\alpha)$, is an injective morphism of $\mathcal A_k$-modules.
Moreover, if $L:(\mathbb N,\cdot)\rightarrow (\mathcal O_k,\cdot)$ is a monoid morphism, then $F_L$ is an injective morphism of $\mathcal A_k$-algebras.
Note that Theorem $2.7$, in light of Remark $1.6$, generalizes the identity theorem for (general) Dirichlet series, see \cite[Theorem 6]{hardy}.
Let $\alpha_1,\ldots,\alpha_r \in \mathcal H_{k}$ and assume there exists a nondiscrete subset $S\subset \{\operatorname{Re} z\gg 0\}$ such that
the numerical sequences $n\mapsto \alpha_j(n)(z)$, $1\leq j\leq r$, are linearly independent over $\mathbb C$, for any $z\in S$.
Let $L\in\widetilde{\mathcal E}_{k}$. In Corollary $2.8$ we prove that $F_L(\alpha_1),\ldots,F_L(\alpha_r)\in \mathcal O_{k'}$ are linearly independent over
$\mathcal F_{k'}:=$ the quotient field of $\mathcal A_{k'}$. Also, if $L$ is a monoid morphism between $(\mathbb N,\cdot)$ and $(\mathcal O_k,\cdot)$ and
$n\mapsto \alpha_j(n)(z)$, $1\leq j\leq r$, are algebraically independent over $\mathbb C$, then $F_L(\alpha_1),\ldots,F_L(\alpha_r)$ are algebraically
independent over $\mathcal A_k$. The case of (general) Dirichlet series, discussed in Remark $1.6$, is recovered as a particular case of Corollary $2.9$.
In the third section, we give an application to Dirichlet series associated to multiplicative arithmetic functions. Given $\alpha_1,\ldots,\alpha_r\in \mathcal D_k$ multiplicative functions
and an integer $m\geq 0$, such that $e,\alpha_1,\ldots,\alpha_r$ are pairwise non-equivalent, in the sense of \cite{kac}, we prove that the derivatives of order $\leq m$, $F^{(i)}(\alpha_j)$, $1\leq j\leq r$,
$0\leq i\leq m$, are linearly independent over $\mathcal F_k$, and, in particular, over the field of meromorphic functions of order $<1$, see Proposition $3.2$.
This generalizes the main result of \cite{kac}. Moreover, if $\alpha_1,\ldots,\alpha_r$ are algebraically independent over $\mathbb C$, then $F(\alpha_1),\ldots,F(\alpha_r)$ are algebraically
independent, see Proposition $3.3$. We also note in Remark $3.4$ the connections with the theory of Artin L-functions, see \cite{artin1}, and the main results of \cite{florin} and \cite{cim}. We note that further, independent, results on the independence of suitable families
of Dirichlet series are proved in \cite{molt}.
\section{Algebras of sequences of holomorphic functions}
Let $R$ be a commutative ring with unity and denote $\mathbb N$ the set of positive integers. In the set of functions
$\Omega(\mathbb N,R):=\{\alpha:\mathbb N \rightarrow R\}$, we consider two operations
\begin{eqnarray*}
& (\alpha+\beta)(n):=\alpha(n)+\beta(n),\;\forall n\in \mathbb N, \\
& (\alpha \cdot \beta)(n):=\sum_{ab=n}\alpha(a)\beta(b),\;\forall n\in \mathbb N.
\end{eqnarray*}
We denote $0,e \in \Omega(\mathbb N,R)$ the functions $0(n)=0$ for all $n\geq 1$, $e(1)=1$, $e(n)=0$, for all $n\geq 2$.
It is well known, see for instance \cite{cash} and \cite{cash2}, that $(\Omega(\mathbb N,R),+,\cdot)$ is a commutative ring with the unity $e$.
Moreover, if $R$ is a domain then $\Omega(\mathbb N,R)$ is also a domain.
Let $\mathcal O:=\mathcal O(\mathbb C)$ be the domain of holomorphic (entire) functions, with the usual operations of addition and multiplication of functions.
For any real number $k$, let $\mathcal O_k:=\mathcal O(\{\operatorname{Re} z>k\})$ be the domain of holomorphic functions defined on the open half plane $\operatorname{Re} z>k$.
Note that, the natural map
$$i_k:\mathcal O \rightarrow \mathcal O_k, \; f\mapsto f|_{\operatorname{Re} z>k},$$ is an injective morphism of $\mathbb C$-algebras.
Indeed, if $f,g\in \mathcal O$ such that $f|_{\operatorname{Re} z>k}=g|_{\operatorname{Re} z>k}$, then, by the identity theorem of holomorphic functions, $f=g$.
If we see the maps $i_k$'s as inclusions, then $\mathcal O = \bigcap_{k\in\mathbb R} \mathcal O_k$.
If $k\leq k'$ then the natural map $$i_{k,k'}:\mathcal O_{k} \rightarrow \mathcal O_{k'},\; f\mapsto f|_{\operatorname{Re} z>k'},$$
is an injective morphism of $\mathbb C$-algebras.
Moreover, $i_{k,k}$ is the identity map on $\mathcal O_k$ and for any $k<k'<k''$,
we have $i_{k,k''}=i_{k',k''}\circ i_{k,k'}$. Hence, $(\{\mathcal O_k\}_{k\in\mathbb R}, \{i_{k,k'}\}_{k\leq k'})$ is a direct system.
We denote $$\mathcal O_{\infty}:=\lim_{\longrightarrow}\mathcal O_k = \bigcup_{k\in\mathbb R}\mathcal O_k,$$
the direct limit of the above system. For the last equality, we see the maps $i_{k,k'}$'s as inclusions. Therefore
$ \mathcal O \subset \mathcal O_k \subset \mathcal O_{\infty}$, for all $k\in \mathbb R$.
On the other hand, for any $k,k'\in\mathbb R$,
the map
\begin{equation}\label{tkk}
T_{k-k'}:\mathcal O_{k'}\rightarrow \mathcal O_{k},\; T_{k-k'}(f)(z):=f(z+k'-k),\;\forall f\in\mathcal O_{k'},\operatorname{Re} z>k,
\end{equation}
is a $\mathbb C$-algebra isomorphism. The above construction can be naturally extended as follows. For any $k<k'$, we have the natural maps of $\mathcal O_k$-algebras
$$ i_{k,k'}:\Omega(\mathbb N,\mathcal O_k)\rightarrow \Omega(\mathbb N,\mathcal O_{k'}),\; i_{k,k'}(\alpha)(n):=\alpha(n)|_{\operatorname{Re} z> k'},\;\forall n\geq 1.$$
If we see $i_{k,k'}$'s as inclusions, we can define $\Omega^f(\mathbb N,\mathcal O_{\infty}):=\bigcup_{k\in\mathbb R} \Omega(\mathbb N,\mathcal O_k)$.
Since $\alpha(n)\in \mathcal O_k$, for all $\alpha\in \Omega(\mathbb N,\mathcal O_k)$, $k\in\mathbb R$ and $n\in \mathbb N$,
it follows that $\Omega(\mathbb N,\mathcal O)=\bigcap_{k\in\mathbb R}\Omega(\mathbb N,\mathcal O_k)$.
Note that
$$ \Omega(\mathbb N,\mathcal O) \subset \Omega(\mathbb N,\mathcal O_k) \subset \Omega^f(\mathbb N,\mathcal O_{\infty})\subset \Omega(\mathbb N,\mathcal O_{\infty}),
\forall k\in\mathbb R, $$
all the inclusions being strict. For any $k\in\mathbb R$, we consider
$$\mathcal C_k:=\{\alpha\in \Omega(\mathbb N, \mathcal O_k) :\forall\varepsilon>0, \exists M_{\varepsilon}:\{\operatorname{Re} z>k\}\rightarrow [0,+\infty) \text{ continuous }$$
\begin{equation}\label{ceka}
\text{ such that } |\alpha(n)(z)|<n^{k+\varepsilon}M_{\varepsilon}(z),\forall n\geq 1,\operatorname{Re} z>k \}.
\end{equation}
As before, we can define $\mathcal C_{\infty}:=\bigcup_{k\in\mathbb R}\mathcal C_k$ and $\mathcal C:=\bigcap_{k\in\mathbb R}\mathcal C_k$. Note that
$$\mathcal C\subset \mathcal C_k \subset \mathcal C_{\infty} \subset \Omega^f(\mathbb N,\mathcal O_{\infty}),\;\forall k\in\mathbb R,$$
where the inclusions are strict.
\begin{prop}
With the above notations, $\mathcal C_k$ is an $\mathcal O_k$-subalgebra of the domain $\Omega(\mathbb N,\mathcal O_k)$.
\end{prop}
\begin{proof}
Let $\alpha,\beta\in \mathcal C_k$ and let $\varepsilon>0$. Let $M_{\varepsilon},M'_{\varepsilon}:\{\operatorname{Re} z>k\} \rightarrow [0,+\infty)$ such that
$$|\alpha(n)(z)|\leq n^{k+\varepsilon}M_{\varepsilon}(z),\; |\beta(n)(z)|\leq n^{k+\varepsilon}M'_{\varepsilon}(z),\; \forall n\geq 1,\operatorname{Re} z>k.$$
It follows that $$|(\alpha+\beta)(n)(z)| \leq |\alpha(n)(z)|+|\beta(n)(z)|\leq n^{k+\varepsilon}(M_{\varepsilon}(z)+M'_{\varepsilon}(z)),\; \forall n\geq 1,\operatorname{Re} z>k,$$
hence $\alpha+\beta \in\mathcal C_k$. On the other hand, for $n\geq 1$ and $\operatorname{Re} z>k$, we have that
\begin{equation}\label{pula}|(\alpha\cdot \beta)(n)(z)|
\leq \sum_{ab=n} |\alpha(a)(z)||\beta(b)(z)| \leq
\sum_{ab=n} a^{k+\varepsilon}M_{\varepsilon}(z)b^{k+\varepsilon}M'_{\varepsilon}(z) = d(n)n^{k+\varepsilon}M_{\varepsilon}(z)M'_{\varepsilon}(z),
\end{equation}
where $d(n)$ is the number of (positive) divisors of $n$. For any $\varepsilon'>\varepsilon$, there exists a constant $C>0$ such that
$d(n)< C n^{\varepsilon'-\varepsilon}$, for all $n\geq 1$, see \cite[Page 296]{apostol}. Let $\overline{M}_{\varepsilon'}(z)=C \cdot M_{\varepsilon}(z)M'_{\varepsilon}(z)$. By \eqref{pula} it follows that
$$|(\alpha\cdot \beta)(n)(z)|\leq \overline{M}_{\varepsilon'}(z)n^{k+\varepsilon'},\;\forall n\geq 1,\operatorname{Re} z>k,$$
hence $\alpha\cdot\beta\in \mathcal C_k$. If $f\in\mathcal O_k$ and $\alpha\in \mathcal C_k$, then for any $\varepsilon>0$, we have that
$$|(f\cdot \alpha)(n)(z)| = |f(z)||\alpha(n)(z)| \leq n^{k+\varepsilon}M_{\varepsilon}(z)|f(z)|,\forall n\geq 1,\operatorname{Re} z>k,$$
hence $f\cdot \alpha \in \mathcal C_k$.
\end{proof}
\begin{cor}
With the above notations, it holds that
\begin{enumerate}
\item[(1)] $\mathcal C_{\infty}$ is an $\mathcal O_{\infty}$-subalgebra of $\Omega^f(\mathbb N,\mathcal O_{\infty})$.
\item[(2)] $\mathcal C$ is an $\mathcal O$-subalgebra of $\Omega(\mathbb N,\mathcal O)$.
\item[(3)] The inclusions $\Omega(\mathbb N, \mathcal O) \subset \Omega(\mathbb N, \mathcal O_k) \subset \Omega^f(\mathbb N,\mathcal O_{\infty})$ induce
the inclusions $\mathcal C \subset \mathcal C_k\subset \mathcal C_{\infty}$, i.e. $\mathcal C_k=\mathcal C_{\infty}\cap \Omega(\mathbb N, \mathcal O_k)$ and
$\mathcal C=\mathcal C_{\infty} \cap \Omega(\mathbb N, \mathcal O) = \mathcal C_{k} \cap \Omega(\mathbb N, \mathcal O)$.
\end{enumerate}
\end{cor}
\begin{proof}
It follows from Proposition $1.1$ and the above considerations by easy checking.
\end{proof}
Let $k\in \mathbb R$. We define
$$\mathcal E_k:=\{\operatorname{L}\in \Omega(\mathbb N,\mathcal O_{k})\;:\; \exists c>0,\;C:\{\operatorname{Re} z>k\}\rightarrow [0,+\infty) \text{ continuous } $$
\begin{equation}\label{ek}
\text{ such that } |\operatorname{L}(n)(z)| \leq C(z)n^{- c(\operatorname{Re} z)}, \;\forall n\geq 1,\operatorname{Re} z>k\}.
\end{equation}
We let $\mathcal E_{\infty}:=\bigcup_{k\in\mathbb R}\mathcal E_k$ and $\mathcal E:=\bigcap_{k\in\mathbb R}\mathcal E_k$, where the intersection and the union
are naturally defined.
\begin{prop}
For any $k\in\mathbb R$, $\mathcal E_k$ has a structure of a $\mathbb C$-vector space, hence $\mathcal E_{\infty}$
and $\mathcal E$ have also structures of $\mathbb C$-vector spaces.
\end{prop}
\begin{proof}
The proof is straightforward.
\end{proof}
\begin{prop}
Let $k\in \mathbb R$ and let $\operatorname{L}\in \mathcal E_k$. There exists a constant $k'=k'(\operatorname{L})\geq k$ such that,
for any $\alpha\in \mathcal C_k$, the series of functions
$$F_{\operatorname{L}}(\alpha)(z) := \sum_{n=1}^{+\infty}\alpha(n)(z)\operatorname{L}(n)(z)$$
is uniformly absolutely convergent on the compact subsets of $\operatorname{Re} z>k'$, hence $F_{\operatorname{L}}(\alpha)\in \mathcal O_{k'}$.
\end{prop}
\begin{proof}
Let $\varepsilon>0$. From $(1.2)$ and \eqref{ek}, there exist a constant $c>0$ and two continuous maps $M_{\varepsilon},C:\{\operatorname{Re} z>k\}\rightarrow [0,+\infty)$ such that
\begin{equation}\label{cucu1}
|\alpha(n)(z)\operatorname{L}(n)(z)|\leq M_{\varepsilon}(z)n^{k+\varepsilon}C(z)n^{-c(\operatorname{Re} z)} = M_{\varepsilon}(z)C(z)n^{k+\varepsilon-c\operatorname{Re} z}
,\;\forall\, n\geq 1,\operatorname{Re} z>k.
\end{equation}
Let $k':=\max\{\frac{k+1}{c},k\}$ and let $K\subset \{\operatorname{Re} z>k'\}$ be a compact subset.
Let $r:=\inf \{\operatorname{Re} z\;:\;z\in K\}$. Since $K$ is compact and $\varepsilon>0$ can be arbitrarily chosen, we can assume that
$\delta:=c(r-k')>\varepsilon$. From \eqref{cucu1} it follows that
\begin{equation}\label{cucu2}
|\alpha(n)(z)\operatorname{L}(n)(z)|\leq M_{\varepsilon}(z)C(z) n^{-1-\delta+\varepsilon},\;\forall n\geq 1,\; z\in K.
\end{equation}
Let $M_K:=\sup\{M_{\varepsilon}(z)C(z):\; z\in K \}$. From \eqref{cucu2} it follows that
$$|\alpha(n)(z)\operatorname{L}(n)(z)|\leq \frac{M_K}{n^{1+\delta-\varepsilon}},\;\forall n\geq 1,\;z\in K,$$
hence $F_{\operatorname{L}}(\alpha)$ is uniformly absolutely convergent on $K$. Consequently, $F_{\operatorname{L}}(\alpha)\in \mathcal O_{k'}$.
\end{proof}
\begin{teor}
Let $\operatorname{L}\in \mathcal E_{\infty}$.
\begin{enumerate}
\item[(1)] The map $F_{\operatorname{L}}:\mathcal C_{\infty} \rightarrow \mathcal O_{\infty}$, $\alpha\mapsto F_{\operatorname{L}}(\alpha)$,
is a linear map of $\mathcal O_{\infty}$-modules. Moreover, if $\operatorname{L}(1)(z)\neq 0$ for all $\operatorname{Re} z\gg 0$, then $F_{\operatorname{L}}$
is surjective.
\item[(2)] If $\operatorname{L}(ab)=\operatorname{L}(a)\operatorname{L}(b)$, for all $a,b\in\mathbb N$, and $\operatorname{L}\neq 0$, then $F_{\operatorname{L}}$
is a surjective morphism of $\mathcal O_{\infty}$-algebras with $F_{\operatorname{L}}(f\cdot e)=f$, for all $f\in\mathcal O_{\infty}$.
\end{enumerate}
\end{teor}
\begin{proof}
$(1)$ Let $\alpha,\beta \in \mathcal C_{\infty}$, $f \in \mathcal O_{\infty}$ and $z\in \mathbb C$ with $\operatorname{Re} z\gg 0$.
We have that
$$ F_{\operatorname{L}}(\alpha+\beta)(z) = \sum_{n=1}^{+\infty}(\alpha+\beta)(n)(z)\operatorname{L}(n)(z) = $$
$$ = \sum_{n=1}^{+\infty}\alpha(n)(z)\operatorname{L}(n)(z) + \sum_{n=1}^{+\infty}\beta(n)(z)\operatorname{L}(n)(z) =
F_{\operatorname{L}}(\alpha)(z)+F_{\operatorname{L}}(\beta)(z) \text{ and } $$
$$ F_{\operatorname{L}}(f\cdot \alpha)(z) = \sum_{n=1}^{+\infty}(f\cdot \alpha)(n)(z)\operatorname{L}(n)(z) =
f(z)\sum_{n=1}^{+\infty}\alpha(n)(z)\operatorname{L}(n)(z) = f(z)F_{\operatorname{L}}(\alpha)(z),$$
thus $F_{\operatorname{L}}$ is $\mathcal O_{\infty}$-linear.
Now, assume $\operatorname{L}(1)(z)\neq 0$, for all $\operatorname{Re} z\gg 0$ and let
$$g(z):=f(z)\operatorname{L}(1)(z)^{-1},\; \operatorname{Re} z\gg 0.$$
It follows that $F_{\operatorname{L}}(g\cdot e)=f$, hence $F_{\operatorname{L}}$ is surjective.
$(2)$ Let $\alpha,\beta \in \mathcal C_{\infty}$ and $z\in \mathbb C$ with $\operatorname{Re} z\gg 0$. We have that
$$ F_{\operatorname{L}}(\alpha\cdot \beta)(z)= \sum_{n=1}^{+\infty}(\alpha\cdot \beta)(n)(z)\operatorname{L}(n)(z) =
\sum_{n=1}^{+\infty}\sum_{ab=n}\alpha(a)(z)\beta(b)(z)\operatorname{L}(ab)(z) = $$
$$ = \sum_{a=1}^{+\infty}\alpha(a)(z)L(a)(z)\sum_{b=1}^{+\infty}\beta(b)(z)L(b)(z)=F_{\operatorname{L}}(\alpha)(z)F_{\operatorname{L}}(\beta)(z),$$
therefore $F_{\operatorname{L}}$ is a morphism of $\mathcal O_{\infty}$-algebras. Moreover, the hypothesis implies
$\operatorname{L}(1)(z)=\operatorname{L}(1)(z)^2$ so either $\operatorname{L}(1)(z)=0$ or $\operatorname{L}(1)(z)=1$. If the zero set of $\operatorname{L}(1)$ is nondiscrete,
then $\operatorname{L}(1)$ is identically zero, which implies $\operatorname{L}(n)=\operatorname{L}(1)\operatorname{L}(n)$ is identically zero, for all $n\geq 1$, a
contradiction. Therefore, $\operatorname{L}(1)$ takes the value $1$ in a nondiscrete subset of the half-plane $\operatorname{Re} z\gg 0$, hence
$\operatorname{L}(1)(z)=1$ on its whole domain of definition.
\end{proof}
Let $k\in \mathbb R$. We consider the following subset
$$
\widetilde{\mathcal E}_k:=\{\operatorname{L}\in \mathcal E_k\;:\; \operatorname{L}(n)(z)\neq 0,\;\forall n\geq 1,\;\operatorname{Re} z>k \text{ and }
$$
\begin{equation}\label{ekt}
\forall n_0\geq 1,\; \exists C(n_0)>0, \text{ such that } \frac{|L(n)(z)|}{|L(n_0)(z)|} \leq n^{-C(n_0)\operatorname{Re} z},\;\forall n\geq n_0+1,\; \operatorname{Re} z>k \}.
\end{equation}
We consider also $\widetilde{\mathcal E}_{\infty}:=\bigcup_{k\in\mathbb R}\widetilde{\mathcal E}_k
\text{ and } \widetilde{\mathcal E}:=\bigcap_{k\in\mathbb R}\widetilde{\mathcal E}_k$, where the intersection and the union
are naturally defined.
\begin{obs} \emph{
Let $\lambda:\mathbb N \rightarrow (0,+\infty)$ be an increasing sequence of positive real numbers such that
\begin{equation}\label{lambda}
\limsup_{n\rightarrow+\infty}\frac{\log n}{\lambda (n)} < + \infty.
\end{equation}
We define
$\operatorname{L}:=e^{-\lambda}:\mathbb N \rightarrow \mathcal O,\; \operatorname{L}(n)(z):=e^{-\lambda(n)z}, \;z\in\mathbb C.$
The condition \eqref{lambda} implies that there exists a constant $c>0$ such that
\begin{equation}\label{lambda2}
\lambda(n)\geq c\log n,\;\forall n\geq 0.
\end{equation}
From \eqref{lambda2} it follows that
$$ |\operatorname{L}(n)(z)|\leq e^{-c(\operatorname{Re} z)\log n} = n^{-c\operatorname{Re} z},\;\forall z\in\mathbb C,n\geq 1,$$
hence $\operatorname{L}\in \mathcal E$. Also $L(n)(z)\neq 0$, for all $z\in \mathbb C$. Let $n_0\in\mathbb N$ and $n\geq n_0+1$.
It holds that
\begin{equation}
\left|\frac{\operatorname{L}(n)(z)}{\operatorname{L}(n_0)(z)}\right| = e^{-(\lambda(n)-\lambda(n_0))\operatorname{Re} z},\;\forall z\in\mathbb C.
\end{equation}
By \eqref{lambda2} and the monotonicity of $\lambda$, there exists a constant $C(n_0)>0$ such that
$$ e^{-(\lambda(n)-\lambda(n_0))\operatorname{Re} z} \leq e^{-C(n_0)\log n \operatorname{Re} z} = n^{-C(n_0)\operatorname{Re} z},\;\forall z\in \mathbb C,\;\operatorname{Re} z>0,$$
hence, by \eqref{ekt}, it follows that $\operatorname{L}\in \widetilde{\mathcal E}_0$.}
\emph{We let
$$O(n^k)=\{ \alpha\in\Omega(\mathbb N,\mathbb C)\;|\;\exists C>0, \text{ such that} |\alpha(n)|\leq Cn^{k},\;\forall n\geq 1\},$$
the set of arithmetical functions of order $n^k$. We consider the $\mathbb C$-algebras
\begin{equation}
\mathcal D :=\mathcal C\cap \Omega(\mathbb N,\mathbb C), \mathcal D_k := \mathcal C_k\cap \Omega(\mathbb N,\mathbb C) \text{ and } \mathcal D_{\infty} := \mathcal C_{\infty} \cap \Omega(\mathbb N,\mathbb C).
\end{equation}
It is easy to check that
$$\mathcal D_k=\bigcap_{\varepsilon>0} O(n^{k+\varepsilon}), \; \mathcal D_{\infty} = \bigcup_{k\in \mathbb R} O(n^k) \text{ and } \mathcal D=\bigcap_{k\in\mathbb R} O(n^k).$$
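For instance, the divisor function $d(n):=\sum_{ab=n}1$ satisfies $d(n)<C_{\varepsilon}n^{\varepsilon}$ for every $\varepsilon>0$ and a suitable constant $C_{\varepsilon}>0$, see \cite[Page 296]{apostol}, hence $d\in\mathcal D_0$; on the other hand, $d$ is unbounded, so $d\notin O(n^0)$.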
If $\alpha\in \mathcal D_k$ then the \emph{general Dirichlet series}
$$
F_{\lambda}(\alpha)(z):=F_{\operatorname{L}}(\alpha)(z)=\sum_{n=1}^{+\infty}\alpha(n)e^{-\lambda(n)z},
$$
defines a holomorphic function on $\operatorname{Re} z>k'$.}
\emph{
It is well known \cite[Theorem 6]{hardy} that $F_{\lambda}(\alpha)$ is identically zero
if and only if $\alpha(n)=0$, for all $n\geq 1$. Consequently, from Theorem $1.5(1)$, the map
$F_{\lambda}:\mathcal D_{\infty}\rightarrow \mathcal O_{\infty},\; \alpha\mapsto F_{\lambda}(\alpha)$,
is an injective morphism of $\mathbb C$-vector spaces.
In particular, for $\lambda(n)=\log n$, the \emph{(classical) Dirichlet series}
$$
F(\alpha)(z):=\sum_{n=1}^{+\infty}\frac{\alpha(n)}{n^z},
$$
defines a holomorphic function on $\operatorname{Re} z>k+1$. From the above remarks or as a consequence of the uniqueness theorem for Dirichlet series (\cite[Theorem 11.3]{apostol}),
$F(\alpha)$ is identically zero if and only if $\alpha(n)=0$, for all $n\geq 1$. Hence, from Theorem $1.5(2)$, the map
F:\mathcal D_{\infty}\rightarrow \mathcal O_{\infty},\; \alpha\mapsto F(\alpha),$
is an injective morphism of $\mathbb C$-algebras.}
\end{obs}
\section{Main results}
Let $k\in\mathbb R$. Similarly to \cite[page 2]{cim}, we define
\begin{equation}
\mathcal B_{k} := \{f \in \mathcal O_k \;:\;\forall a>0,\; \lim_{x\rightarrow +\infty}e^{-ax}|f(x)|=0 \},
\end{equation}
\begin{equation}
\mathcal I_{k} := \{f\in \mathcal O_k \;:\exists a>0
\text{ such that} \;\lim_{x \rightarrow +\infty}e^{ax}|f(x)|=0 \}.
\end{equation}
We also let
\begin{equation}
\mathcal B_{\infty}:=\bigcup_{k\in\mathbb R}\mathcal B_{k} \subset \mathcal O_{\infty},\; \mathcal B:=\bigcap_{k\in\mathbb R}\mathcal B_{k}\subset \mathcal O,\; \mathcal I_{\infty}:=\bigcup_{k\in\mathbb R}\mathcal I_{k} \subset \mathcal O_{\infty}
\text{ and }\;\mathcal I :=\bigcap_{k\in\mathbb R}\mathcal I_{k} \subset \mathcal O.
\end{equation}
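For instance (all by straightforward verifications): $e^{-z}\in\mathcal I\subset\mathcal I_{k}$, since $\lim_{x\rightarrow+\infty}e^{ax}e^{-x}=0$ for any $0<a<1$; the restriction of $\sin z$ to $\operatorname{Re} z>k$ belongs to $\mathcal B_{k}\setminus\mathcal I_{k}$, since $|\sin x|\leq 1$ for real $x$, while $e^{ax}|\sin x|$ does not tend to $0$ for any $a>0$; and $e^{z}\notin\mathcal B_{k}$, since $e^{-ax}e^{x}\rightarrow+\infty$ for $0<a<1$.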
\begin{prop}
With the above notations, it holds that
\begin{enumerate}
\item[(1)] $\mathcal B_{k}$ is a subdomain of $\mathcal O_k$ and $\mathcal I_{k}$ is an ideal of $\mathcal B_{k}$.
\item[(2)] $\mathbb C[z]\subset \mathcal B_{k}$ and $\mathcal I_{k} \cap \mathbb C[z] = \{0\}$. In particular, $\mathcal B_{k}$ is a $\mathbb C[z]$-subalgebra of $\mathcal O_k$.
\item[(3)] If $k,k'\in\mathbb R$, then $T_{k-k'}:\mathcal B_{k} \rightarrow \mathcal B_{k'}$, $T_{k-k'}(f)(z):=f(z+k-k')$, for all $f\in\mathcal B_{k}$, is a $\mathbb C$-algebra isomorphism.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)$
Let $f,g\in \mathcal B_{k}$ and $a>0$. We have
\begin{equation}
0\leq e^{-ax}|f(x)+g(x)|\leq e^{-ax}|f(x)| + e^{-ax}|g(x)|,\;\forall x>k.
\end{equation}
By $(2.1)$ and $(2.4)$ it follows that
$$ \lim_{x\rightarrow +\infty}e^{-ax}|f(x)+g(x)|=0, $$
hence $f+g\in \mathcal B_{k}$. On the other hand, by $(2.1)$, we have
$$\lim_{x\rightarrow +\infty}e^{-ax}|f(x)g(x)| = \lim_{x\rightarrow +\infty}e^{-\frac{a}{2}x}|f(x)|
\lim_{x\rightarrow +\infty}e^{-\frac{a}{2}x}|g(x)| = 0,$$
hence $f\cdot g \in \mathcal B_{k}$. Therefore $\mathcal B_{k}$ is a subdomain of $\mathcal O_k$.
Let $f,g\in \mathcal I_{k}$. From $(2.2)$, there exists $a>0$ such that
$$\lim_{x\rightarrow +\infty}e^{ax}|f(x)| = \lim_{x\rightarrow +\infty}e^{ax}|g(x)| = 0.$$
Since $e^{ax}|f(x)+g(x)|\leq e^{ax}|f(x)|+e^{ax}|g(x)|$, it follows that
$$\lim_{x\rightarrow +\infty}e^{ax}|f(x)+g(x)|=0,$$
hence $f+g\in\mathcal I_{k}$. Let $f\in \mathcal I_{k}$ and $g\in \mathcal B_{k}$. Let $a>0$ such that
\begin{equation}
\lim_{x\rightarrow +\infty}e^{ax}|f(x)|=0.
\end{equation}
By $(2.5)$ and $(2.1)$ it follows that
$$\lim_{x \rightarrow +\infty} e^{\frac{a}{2}x}|f(x)g(x)| = \lim_{x \rightarrow +\infty} e^{ax}|f(x)| e^{-\frac{a}{2}x} |g(x)| = 0,$$
hence $f\cdot g\in \mathcal I_{k}$.
$(2)$ Let $f\in \mathbb C[z]$. It is clear that
$$ \lim_{x\rightarrow +\infty}e^{-ax}|f(x)| = 0,\;\forall a>0, $$
hence $f\in\mathcal B_{k}$. On the other hand, if $f\neq 0$, then $\lim_{x\rightarrow +\infty}|f(x)|>0$, therefore $f\notin\mathcal I_{k}$.
$(3)$ It follows by straightforward computations, similarly to formula \eqref{tkk}.
\end{proof}
\begin{cor}
With the above notations, it holds that
\begin{enumerate}
\item[(1)] $\mathcal B_{\infty}$ is a $\mathbb C[z]$-subalgebra of $\mathcal O_{\infty}$ and $\mathcal I_{\infty}$ is an ideal in $\mathcal B_{\infty}$ with $\mathcal I_{\infty}\cap \mathbb C[z]=\{0\}$.
\item[(2)] $\mathcal B$ is a $\mathbb C[z]$-subalgebra of $\mathcal O$ and $\mathcal I$ is an ideal in $\mathcal B$ with $\mathcal I\cap \mathbb C[z]=\{0\}$.
\end{enumerate}
\end{cor}
\begin{proof}
It follows immediately from $(2.3)$ and Proposition $2.1$.
\end{proof}
\begin{prop}
For any $a>0$, let $f_a(z):=e^{-az}$, $z\in\mathbb C$. It holds that
\begin{enumerate}
\item[(1)] If $f\in \mathcal B_{k}$ then $f\in \mathcal I_{k} \Leftrightarrow$ there exists $a>0$ such that $g=\frac{f}{f_a}\in \mathcal I_{k}$.
\item[(2)] If $a<b$ then $f_a\mathcal I_{k} \supsetneq f_b\mathcal B_{k}$.
\item[(3)] $\mathcal I_{k} = \sum_{a>0}f_a\mathcal B_{k} = \bigcup_{a>0}f_a\mathcal B_{k}$.
\item[(4)] Let $(a_n)_{n\geq 1}$ be a sequence of positive real numbers with $\liminf_n a_n=0$. Then
$\{f_{a_n}:n\geq 1\}$ is a system of generators of the ideal $\mathcal I_{k}$.
\item[(5)] The ideal $\mathcal I_{k}$ is not finitely generated.
\end{enumerate}
\end{prop}
\begin{proof}
$(1)$ First, note that $f_a \in \mathcal I_{k}$, for any $a>0$. Let $f\in \mathcal I_{k}$. Then there exists $a>0$ such that
\begin{equation}
\lim_{x\rightarrow +\infty}e^{2ax}|f(x)| = 0.
\end{equation}
Let $g=\frac{f}{f_a}$, that is $g(z)=e^{az}f(z)$. From $(2.6)$ it follows that
$$ \lim_{x\rightarrow +\infty}e^{ax}|g(x)| = 0, $$
thus $g \in\mathcal I_{k}$. The other implication is obvious, since $\mathcal I_{k}$ is an ideal.
$(2)$ Let $f\in f_b\mathcal B_{k}$. It follows that there exists $g\in\mathcal B_{k}$ such that $f(z)=e^{-bz}g(z)$.
Let $h(z)=e^{(a-b)z}g(z)$. We have $h\in \mathcal I_{k}$ and $f(z)=f_a(z)h(z)$. Thus $f\in f_a\mathcal I_{k}$.
On the other hand, $f_{\frac{a+b}{2}}\in f_a\mathcal I_{k} \setminus f_b\mathcal B_{k}$.
$(3)$ The set $\{ f_a\mathcal B_{k}\;:\;a>0\}$ is totally ordered by inclusion, hence the sum of these ideals coincides with their union; the equality with $\mathcal I_{k}$ follows from $(1)$.
$(4)$ Is a direct consequence of $(2)$ and $(3)$.
$(5)$ Assume that $\{f_1,\ldots,f_m\}$ is a minimal set of generators of $\mathcal I_{k}$. Then there exists $a>0$ such
that $f_i(z)=f_a(z)g_i(z)$ with $g_i\in \mathcal I_{k}$, for $1\leq i\leq m$. It follows that $(f_1,\ldots,f_m)\mathcal B_{k} \subset f_a\mathcal B_{k} \subsetneq \mathcal I_{k}$,
a contradiction.
\end{proof}
\begin{obs}\emph{
$(1)$ The ideal $\mathcal I_{k}$ is not prime.
Let $f(z):=\sin(\sin z + 1 + e^{-z})$, $z\in\mathbb C$. It is easy to see that
$$ \limsup_{x\rightarrow +\infty} |f(x)| = 1\;\text{and } \liminf_{x\rightarrow +\infty} |f(x)| = 0,$$
hence $f\in \mathcal B_k \setminus \mathcal I_k$. Let $g:\mathbb R\rightarrow (0,+\infty)$,
$$ g(x)=\begin{cases} \frac{e^{-x}}{f(x)},& x\geq 0 \\ \frac{1}{\sin (2)},& x<0 \end{cases}.$$
The function $g$ is continuous on $\mathbb R$. According to a theorem of Carleman \cite{carleman},
there exists an entire function $h:\mathbb C\rightarrow \mathbb C$ such that
\begin{equation}\label{carle1}
|h(x)-g(x)| < e^{-x},\;\forall x\in\mathbb R.
\end{equation}
We prove that $h\in \mathcal B_k$ and $fh\in \mathcal I_k$. By straightforward computations, we get
\begin{equation}\label{carle2}
\limsup_{x\rightarrow +\infty} g(x) = 1\;\text{and } \liminf_{x\rightarrow +\infty} g(x) = 0.
\end{equation}
By \eqref{carle1} and \eqref{carle2} it follows that
$$ \limsup_{x\rightarrow +\infty} |h(x)| = 1\;\text{and } \liminf_{x\rightarrow +\infty} |h(x)| = 0,$$
hence $h\in \mathcal B_{k} \setminus \mathcal I_{k}$. On the other hand, from \eqref{carle1} it follows that
$$ |f(x)h(x)| < |f(x)g(x)| + e^{-x}|f(x)| = e^{-x}\left(1+|f(x)|\right) \leq 2e^{-x},\;\forall x\geq 0,$$
hence $fh\in \mathcal I_k$ as required.}
\emph{$(2)$ If $f\in\mathcal B_{k}$ is invertible, then $f(z)\neq 0$, for all $\operatorname{Re} z>k$, and $f\notin \mathcal I_{k}$. The first condition must be satisfied in order for
$\frac{1}{f}$ to be defined on $\{\operatorname{Re} z>k\}$. The second condition follows from the fact that $\mathcal I_{k}$ is a proper ideal of $\mathcal B_{k}$.
On the other hand, if $f(z):=e^{-(\sin z+1)z}$, $z\in\mathbb C$, then $f\in \mathcal B_{k}\setminus \mathcal I_{k}$, $f(z)\neq 0$, for all $z\in\mathbb C$, but $\frac{1}{f}\notin \mathcal B_{k}$.}
\emph{$(3)$ The results of Proposition $2.3$ and the previous remarks are valid for $\mathcal B_{\infty}, \mathcal I_{\infty}, \mathcal B$ and $\mathcal I$.}
\end{obs}
Let $f\in \mathcal O$ be an entire function. If there exist a positive number $\rho$ and
constants $A,B > 0$ such that
\begin{equation}\label{ord}
|f(z)|\leq Ae^{B|z|^{\rho}} \text{ for all }z\in\mathbb C,
\end{equation}
then we say that f has an \emph{order of growth} $\leq\rho$. We define the \emph{order of growth} of $f$ as
$$\rho(f)=\inf\{\rho>0\;:\;f \text{ has an order of growth }\leq\rho \}.$$
For any $\rho>0$, we let $\mathcal O_{<\rho}$ be the set of entire functions of order of growth $<\rho$.
It is easy to check that $\mathcal O_{<\rho}$ is a $\mathbb C$-subalgebra of the algebra of entire functions $\mathcal O$.
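For instance (these are standard examples), every polynomial has order of growth $0$, the entire function $\sum_{m=0}^{+\infty}(-1)^m\frac{z^m}{(2m)!}=\cos\sqrt{z}$ has order of growth $\frac{1}{2}$, while $e^z$ has order of growth $1$; the first two belong to $\mathcal O_{<1}$, and $e^z$ does not.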
The following result was proved, in a different context, in \cite[Proposition 5]{cim}. For
completeness, we present the proof here.
\begin{prop}
With the above notations, $\mathcal O_{<1}$ is a $\mathbb C$-subalgebra of $\mathcal B$.
Moreover, $\mathcal O_{<1}\cap \mathcal I = \{0\}$.
\end{prop}
\begin{proof}
Let $f\in\mathcal O_{<1}$ and assume that $f$ has an order of growth $\leq\rho<1$.
Let $a>0$ and $x>0$. According to \eqref{ord}, there exist two constants $A,B>0$ such that
$$ e^{-ax}|f(x)|\leq Ae^{-ax+Bx^{\rho}},\;\text{ so } \lim_{x\rightarrow+\infty}e^{-ax}|f(x)|=0,$$
hence $f\in\mathcal B$. If $f$ is a polynomial then, by Corollary $2.2(2)$, $f\in \mathcal I \Leftrightarrow f=0$.
Assume $f$ is not polynomial. Then, by Hadamard's Theorem, there exists $D\in\mathbb C^*$ such that
\begin{equation}\label{cutu1}
f(z)=Dz^mE(z),\;\text{ with }E(z)=\prod_{n=1}^{+\infty}(1-\frac{z}{z_n}),z\in\mathbb C,
\end{equation}
where $m$ is the multiplicity of $z_0=0$ as zero of $f$ and $z_1,z_2,\ldots$ are the non-zero zeros of $f$.
According to \cite[Corollary 5.4]{stein}, there exists a strictly increasing sequence $(x_k)_{k\geq 1}$ of positive numbers with
$\lim_{k\rightarrow+\infty}x_k=+\infty$ and a constant $B'>0$ such that
\begin{equation}\label{cutu2}
|E(x_k)|\geq e^{-B'x_k^{\rho}},\;\forall k\geq 1.
\end{equation}
Let $a>0$. From \eqref{cutu1} and \eqref{cutu2} it follows that
$$e^{ax_k}|f(x_k)| = e^{ax_k}|D|x_k^m|E(x_k)| \geq |D|x_k^me^{ax_k-B'x_k^{\rho}} \rightarrow +\infty,$$
hence $f\notin \mathcal I$.
\end{proof}
\noindent
The following lemma, which generalizes \cite[Lemma 1]{cim}, is a key part in the proof of Theorem $2.7$.
\begin{lema}
Let $\alpha\in\Omega^f(\mathbb N,\mathcal O_{\infty})=\bigcup_{k\in\mathbb R}\Omega(\mathbb N,\mathcal O_k)$ with $\alpha(n)\notin \mathcal I_{\infty}\setminus\{0\}$, for all $n\geq 1$,
such that there exists $k\in \mathbb R$ and a continuous function $M:\{\operatorname{Re} z> k\}\rightarrow [0,+\infty)$
which satisfies
\begin{enumerate}
\item[(i)] $|\alpha(n)(z)|\leq M(z)n^{k}$, for all $n\geq 1,\operatorname{Re} z>k$.
\item[(ii)] $\lim_{x\rightarrow+\infty} e^{-ax}M(x) = 0$, for all $a>0$.
\end{enumerate}
The following hold
\begin{enumerate}
\item[(1)] $\alpha\in\mathcal C_k \cap \Omega(\mathbb N,\mathcal B_k)$.
\item[(2)] Let $L \in\widetilde{\mathcal E}_{\infty}$. If $\alpha(n)\notin \mathcal I_{k}\setminus\{0\}$, for all $n\geq 1$,
and $F_L(\alpha)(z):=\sum_{n=1}^{+\infty}\alpha(n)(z)L(n)(z)$ is identically zero, then $\alpha=0$.
\end{enumerate}
\end{lema}
\begin{proof}
$(1)$ The hypothesis $(i)$ implies $\alpha\in \mathcal C_k$. Also, $(i)$ and $(ii)$ imply $\alpha(n) \in\mathcal B_k$, for all $n\geq 1$.
$(2)$ Note that, according to Proposition $1.4$, $F_L(\alpha)$ is defined on $\operatorname{Re} z\gg 0$.
Assume, by contradiction, that $\alpha\neq 0$, and let $n_0$ be the smallest integer with $\alpha(n_0)\neq 0$. Since $F_L(\alpha)=0$, for any $x\gg 0$ we have that
\begin{equation}\label{coor}
|\alpha(n_0)(x)| = \left| \sum_{n=n_0+1}^{+\infty}\alpha(n)(x)\frac{L(n)(x)}{L(n_0)(x)} \right| \leq \sum_{n=n_0+1}^{+\infty}|\alpha(n)(x)|\left|\frac{L(n)(x)}{L(n_0)(x)} \right|.
\end{equation}
Since $L \in\widetilde{\mathcal E}_{\infty}$, by \eqref{ekt}, there exists $C(n_0)>0$ such that
\begin{equation}\label{pulas}
\left|\frac{L(n)(x)}{L(n_0)(x)}\right| \leq n^{-C(n_0)x}, \forall x \gg 0,\;n\geq n_0+1
\end{equation}
From \eqref{coor} and \eqref{pulas} it follows that
\begin{equation}\label{pulaas}
|\alpha(n_0)(x)| \leq M(x) \sum_{n=n_0+1}^{+\infty} e^{(-C(n_0)x+k)\log n}, \forall x\gg 0.
\end{equation}
Let $0<2a<C(n_0)$. From \eqref{pulaas} it follows that
\begin{equation}\label{pulaaa}
e^{ax}|\alpha(n_0)(x)| \leq e^{-ax}M(x) \sum_{n=n_0+1}^{+\infty} e^{((2a-C(n_0))x+k)\log n}, \forall x\gg 0.
\end{equation}
Taking $\lim_{x\rightarrow +\infty}$ in \eqref{pulaaa}, by hypothesis $(ii)$, it follows that
$$\lim_{x\rightarrow +\infty}e^{ax}|\alpha(n_0)(x)| = 0,$$
hence $\alpha(n_0)\in \mathcal I_{\infty}$, which contradicts the hypothesis $\alpha(n_0)\notin \mathcal I_{\infty}\setminus\{0\}$. Therefore $\alpha=0$.
\end{proof}
Let $\mathcal A_{\infty} \subset (\mathcal B_{\infty}\setminus \mathcal I_{\infty})\cup \{0\}$ be a
$\mathbb C$-subalgebra of $\mathcal B_{\infty}$. (According to Proposition $2.5$ we can choose $\mathcal A_{\infty}$ to be the domain of entire functions of order $<1$). Let $\mathcal A_k:=\mathcal A_{\infty}\cap \mathcal O_k$,
$k\in\mathbb R$, and $\mathcal A:=\mathcal A_{\infty}\cap \mathcal O$.
In the $\mathcal A_{k}$-algebra $\Omega(\mathbb N,\mathcal A_{k})$ we consider the $\mathcal A_{k}$-subalgebra defined by
$$
\mathcal H_{k}:=\{ \alpha\in \Omega(\mathbb N,\mathcal A_{k})\;:\;\exists M:\{\operatorname{Re} z> k\}\rightarrow [0,+\infty) \text{ continuous }
$$
\begin{equation}
\text{ such that (i) } |\alpha(n)(z)|\leq M(z)n^{k},\; \forall n\geq 1,\operatorname{Re} z>k,\text{ (ii) }\lim_{x\rightarrow+\infty} e^{-ax}M(x) = 0,\;\forall a>0\}.
\end{equation}
Let $\mathcal H_{\infty}:=\bigcup_{k\in\mathbb R}\mathcal H_k$ and $\mathcal H:=\bigcap_{k\in\mathbb R}\mathcal H_k$.
Note that, from $(1.10)$ and $(2.16)$ it follows that $\mathcal D_k\subset \mathcal H_k$, for all $k\in\mathbb R$, hence $\mathcal D_{\infty}\subset \mathcal H_{\infty} \text{ and }
\mathcal D\subset \mathcal H$.
\begin{teor}
Let $\operatorname{L} \in \widetilde{\mathcal E}_{\infty}$. It holds that
\begin{enumerate}
\item[(1)] The map $F_{L}:\mathcal H_{\infty} \rightarrow \mathcal O_{\infty}$ is an injective morphism of $\mathcal A_{\infty}$-modules.
\item[(2)] If $\operatorname{L}(ab)=\operatorname{L}(a)\operatorname{L}(b)$, for all $a,b\in\mathbb N$, and $\operatorname{L}\neq 0$, then
the map $F_L$ is an injective morphism of $\mathcal A_{\infty}$-algebras.
\end{enumerate}
\end{teor}
\begin{proof}
It follows from Theorem $1.5$, $(2.16)$ and Lemma $2.6$.
\end{proof}
Let $\mathcal F_{\infty}$ be the quotient field of $\mathcal A_{\infty}$. Let $\mathcal F_k$ be the quotient field of $\mathcal A_k$
and $\mathcal F$ be the quotient field of $\mathcal A$. It is easy to check that $\mathcal F_{\infty}=\bigcup_{k\in\mathbb R}\mathcal F_k$ and $\mathcal F =\bigcap_{k\in\mathbb R}\mathcal F_k$.
Note that, if $\mathcal A_{\infty}=\mathcal A=\mathcal O_{<1}$ is the $\mathbb C$-algebra of entire functions of order $<1$, then $\mathcal F_{\infty}=\mathcal F$ is the field of meromorphic functions of order $<1$.
\begin{cor}
Let $\alpha_1,\ldots,\alpha_r \in \mathcal H_{\infty}$. Assume there exists a nondiscrete subset $S\subset \{\operatorname{Re} z\gg 0\}$ such that
the numerical sequences $n\mapsto \alpha_j(n)(z)$, $1\leq j\leq r$, are linearly independent over $\mathbb C$, for any $z\in S$.
Let $ L\in\widetilde{\mathcal E}_{\infty}$. Then
\begin{enumerate}
\item[(1)] $F_L(\alpha_1),\ldots,F_L(\alpha_r)\in \mathcal O_{\infty}$ are linearly independent over $\mathcal F_{\infty}$.
\item[(2)] Moreover, if $L(ab)=L(a)L(b)$, for all $a,b\in\mathbb N$, $L\neq 0$, and $n\mapsto \alpha_j(n)(z)$, $1\leq j\leq r$, are algebraically independent over $\mathbb C$, for any $z\in S$,
then $F_L(\alpha_1),\ldots,F_L(\alpha_r)\in \mathcal O_{\infty}$ are algebraically independent over $\mathcal F_{\infty}$.
\end{enumerate}
\end{cor}
\begin{proof}
$(1)$ Let $g_1,\ldots,g_r \in \mathcal F_{\infty}$ such that
g_1\alpha_1 + \cdots + g_r\alpha_r = 0$. Multiplying this by a common multiple of the denominators of $g_1,\ldots,g_r$, we
can assume that $g_1,\ldots,g_r\in \mathcal A_{\infty}$. It follows that
\begin{equation}\label{kuku}
g_1(z)\alpha_1(n)(z)+\cdots+g_r(z)\alpha_r(n)(z) = 0,\;\forall \operatorname{Re} z\gg 0, n\geq 1.
\end{equation}
The hypothesis and \eqref{kuku} imply $g_1(z)=\cdots=g_r(z)=0$, for all $z\in S$, hence, by the identity
theorem of holomorphic functions, it follows that $g_1=\cdots=g_r=0$, therefore $\alpha_1,\ldots,\alpha_r$
are linearly independent over $\mathcal A_{\infty}$. Now, Theorem $2.7(1)$ implies $(1)$.
$(2)$ Let $I\subset \mathbb N^r$ be a finite subset of indices and let $g_i\in\mathcal A_{\infty}$, $i\in I$.
Assume that $$\sum_{(i_1,\ldots,i_r)\in I}g_i(z) (\alpha_1^{i_1}\cdots \alpha_r^{i_r})(n)(z) = 0,\;\forall \operatorname{Re} z\gg 0,n\geq 1.$$
The hypothesis implies $g_i(z)=0$, for all $z\in S$ and $i\in I$, hence the holomorphic functions $g_i$'s are identically zero.
The conclusion follows from Theorem $2.7(2)$.
\end{proof}
\begin{cor}
Let $\alpha_1,\ldots,\alpha_r\in \mathcal D_{\infty}$, linearly independent over $\mathbb C$, and
let $L\in\widetilde{\mathcal E}_{\infty}$. Then $F_L(\alpha_1),\ldots$, $F_L(\alpha_r)\in \mathcal O_{\infty}$ are linearly independent over
$\mathcal F_{\infty}$. Moreover, if $L(ab)=L(a)L(b)$, for all $a,b\in\mathbb N$, $L\neq 0$, and
$\alpha_1,\ldots,\alpha_r\in \mathcal D_{\infty}$ are algebraically independent over $\mathbb C$,
then $F_L(\alpha_1),\ldots$, $F_L(\alpha_r)\in \mathcal O_{\infty}$ are algebraically independent over
$\mathcal F_{\infty}$.
\end{cor}
\begin{proof}
It follows from Corollary $2.8$ and the inclusion $\mathcal D_{\infty}\subset \mathcal H_{\infty}$.
\end{proof}
Note that, according to Remark $1.6$, the case of (general) Dirichlet series is contained in Corollary $2.9$.
\section{Applications to Dirichlet series associated to multiplicative functions}
Given an arithmetic function $\alpha\in\Omega(\mathbb N,\mathbb C)$ and a non-negative integer $j$, its arithmetic $j$-derivative is
$$\alpha^{(j)}(n) := (-1)^j \alpha(n) \log^j n,\;\forall n\geq 1.$$
Assume $\alpha\in\mathcal D_k$. Since $\log^j n=O(n^{\varepsilon})$ for any $\varepsilon>0$,
it follows, by straightforward computations, that $\alpha^{{(j)}}\in\mathcal D_k$. Moreover,
the $j$-derivative of the Dirichlet series $F(\alpha)=\sum_{n=1}^{+\infty} \frac{\alpha(n)}{n^z}$ is
\begin{equation}
F^{(j)}(\alpha)(z)=F(\alpha^{(j)})(z),\;\forall \operatorname{Re} z>k+1.
\end{equation}
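For example, writing $\mathbf 1\in\Omega(\mathbb N,\mathbb C)$ for the constant function $\mathbf 1(n)=1$ (a notation introduced here only for this illustration), we have $\mathbf 1\in\mathcal D_0$, $F(\mathbf 1)(z)=\zeta(z)$, the Riemann zeta function, and $(3.1)$ with $j=1$ reads
$$\zeta'(z)=F(\mathbf 1^{(1)})(z)=-\sum_{n=1}^{+\infty}\frac{\log n}{n^z},\;\forall \operatorname{Re} z>1.$$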
An arithmetic function $\alpha\in\Omega(\mathbb N,\mathbb C)$ is called \emph{multiplicative}, if
$\alpha(1)=1$ and
$$ \alpha(nm)=\alpha(n)\alpha(m),\;\forall n,m\in\mathbb N\;\text{with }\operatorname{gcd}(n,m)=1.$$
Two multiplicative arithmetic functions $\alpha,\beta \in \Omega(\mathbb N,\mathbb C)$ are
\emph{equivalent}, see \cite{kac}, if
$\alpha(p^j)=\beta(p^j)$ for all integers $j\geq 1$ and all but finitely many
primes $p$. We recall that $e\in \Omega(\mathbb N,\mathbb C)$, defined by $e(1)=1$ and $e(n)=0$ for $n\geq 2$, is the identity function. Obviously, $e$ is multiplicative.
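For instance, the constant function $\mathbf 1$ and the M\"obius function $\mu$ are multiplicative and non-equivalent, since $\mathbf 1(p)=1\neq -1=\mu(p)$ for every prime $p$; on the other hand, $e$ is equivalent to any multiplicative function vanishing at all prime powers $p^j$, with at most finitely many exceptional primes $p$.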
We recall the following result of Kaczorowski, Molteni and Perelli \cite{kac}.
\begin{lema}(\cite[Lemma 1]{kac})
Let $\alpha_1,\ldots,\alpha_r \in \Omega(\mathbb N,\mathbb C)$ be multiplicative functions such that
$e,\alpha_1,\ldots,\alpha_r$ are pairwise non-equivalent, and let
$m$ be a non-negative integer. Then the functions
$$\alpha_1^{(0)},\ldots,\alpha_1^{(m)}, \alpha_2^{(0)},\ldots,\alpha_2^{(m)},
\ldots, \alpha_r^{(0)},\ldots,\alpha_r^{(m)} \in \Omega(\mathbb N,\mathbb C)$$
are linearly independent over $\mathbb C$.
\end{lema}
\begin{prop}(See also \cite[Corollary 4]{cim} and \cite[Corollary 6]{cim})
Let $\alpha_1,\ldots,\alpha_r \in \mathcal D_{k}$ be multiplicative functions
such that $e,\alpha_1,\ldots,\alpha_r$ are pairwise non-equivalent. Let $m$ be a non-negative integer.
Then
$$ F^{(0)}(\alpha_1),\ldots,F^{(m)}(\alpha_1),F^{(0)}(\alpha_2),\ldots,F^{(m)}(\alpha_2), \ldots, F^{(0)}(\alpha_r),\ldots,F^{(m)}(\alpha_r) $$
are linearly independent over $\mathcal F_{k+1}$, hence, in particular,
over the field of meromorphic functions of order $<1$.
\end{prop}
\begin{proof}
It follows from $(3.1)$, Lemma $3.1$ and Corollary $2.9$.
\end{proof}
Note that Proposition $3.2$, combined with \cite[Lemma 2]{kac}, generalizes the main result in \cite{kac}.
\begin{prop}
If $\alpha_1,\ldots,\alpha_r\in \mathcal D_k$ are multiplicative functions, algebraically independent over $\mathbb C$, then
$F(\alpha_1),\ldots,F(\alpha_r)\in \mathcal O_{k+1}$ are algebraically independent over $\mathcal F_{k+1}$, hence, in particular,
over the field of meromorphic functions of order $<1$.
\end{prop}
\begin{proof}
It is a special case of the second part of Corollary $2.9$.
\end{proof}
\begin{obs}\emph{
Let $K/\mathbb Q$ be a finite Galois extension. Let $\chi_1,\ldots,\chi_h$ be the irreducible characters of the Galois group.
It was proved in \cite[Corollary 5]{florin} that the Artin L-functions, see \cite{artin1}, $L(z,\chi_1),\ldots, L(z,\chi_h)$ associated to $\chi_1,\ldots,\chi_h$
are algebraically independent over $\mathbb C$. This result was extended in \cite[Corollary 9]{cim} to the field of meromorphic functions
of order $<1$. Writing $L(z,\chi_j)=F(\alpha_j)(z)$, $1\leq j\leq h$, $\operatorname{Re} z>1$, the key point in the proof of the above results was to show that $\alpha_1,\ldots,\alpha_h\in \mathcal D_{\varepsilon}$,
where $\varepsilon>0$ can be arbitrarily chosen, are in fact algebraically independent over $\mathbb C$. Therefore, $\alpha_1,\ldots,\alpha_h$ satisfy the hypothesis of Proposition $3.3$ for $k=\varepsilon$.}
\end{obs}
\noindent
\textbf{Acknowledgment.} I express my gratitude to Florin Nicolae for valuable discussions regarding the results of this paper.
\section*{Compliance with Ethical Standards.}
This article was not funded. The author declares that he has no conflict of interest.
1,108,101,565,720 | arxiv |
\section{Introduction}
\label{section:introduction}
Non-significant empirical results (usually in the form of $t$-statistics smaller than 1.96) relative to some null hypotheses of interest (usually zero coefficients) are notoriously hard to publish in professional/scientific journals \citep[see, e.g.,][]{ziliak2008cult}. This state of affairs is in part maintained by the widespread notion that non-significant results are non-informative. After all, lack of statistical significance derives from the absence of extreme or surprising outcomes under the null hypothesis. In this article, we argue that this view of statistical inference is misguided. In particular, we show that non-significant results are informative, and argue that they are more informative than significant results in scenarios that are common, even prevalent, in empirical practice.
To discuss the informational content of different statistical procedures, we formally adopt a limited information Bayes perspective. In this setting, agents representing journal readership or the scientific community have priors, $\mathcal P$, over some parameters of interest, $\theta\in\Theta$. That is, a member $p$ of $\mathcal P$ is a probability density function (with respect to some appropriate measure) on $\Theta$. While agents are Bayesian, we will consider a setting where journals report frequentist results, in particular, statistical significance. Agents construct limited information Bayes posteriors based on the reported results of significance tests. We will deem a statistical result informative when it has the potential to substantially change the prior of the agents over a large range of values for $\theta$.
Notice that, like \cite{ioannidis2005why} and others, we restrict our attention to the effect of statistical significance on beliefs. We adopt this framework not because we believe it is (always) representative of empirical practice (in fact, journals typically report additional statistics, beyond statistical significance), but because isolating the informational content of statistical significance has immediate implications for how we should interpret its occurrence or lack of it. Correct interpretation of statistical significance is important because, while many other statistics are reported in practice, the scientific discussion of empirical results is often framed in terms of statistical significance of some parameters of interest, and non-significant results may be under-reported, as discussed above.
\section{A Simple Example}\label{section:normal}
In this section, we consider a simple example with Normal priors and data that captures the essence of our argument. In section \ref{section:General_Case} we will consider the case where the priors and the distribution of the data are not restricted to be in a particular parametric family. Assume an agent has a prior $\theta\sim N(\mu,\sigma^2)$ on $\theta$, with $\sigma^2>0$. A researcher observes $n$ independent measurement of $\theta$ with Normal errors mutually independent and independent of $\theta$, and with variance normalized to one. That is, $x_1,\ldots, x_n$ are independent $N(\theta,1)$. Let
\[
\widehat\theta=\frac{1}{n}\sum_{i=1}^n x_i\sim N(\theta,1/n).
\]
$\theta$ is deemed significant if $\sqrt{n}|\widehat\theta|> c$, for some $c>0$. In empirical practice, $c$ is often equal to 1.96, the $0.975$-quantile of the Standard Normal distribution. Suppose a journal reports on statistical significance. We will calculate the limited information posteriors of the agents conditional on significance and lack thereof. These posteriors are the distributions of $\theta$ conditional on $\sqrt{n}|\widehat\theta|> c$ and $\sqrt{n}|\widehat\theta|\leq c$. First, notice that
\begin{align*}
\Pr(\sqrt{n}|\widehat\theta|> c|\theta)&=\Pr(\widehat\theta> c/\sqrt{n}|\theta)+\Pr(-\widehat\theta> c/\sqrt{n}|\theta)\\
&=\Phi(\sqrt{n}\theta-c)+\Phi(-\sqrt{n}\theta-c).
\end{align*}
Therefore,\footnote{This calculation uses the following fact of integration
\[
\int \Phi\left(\frac{\lambda-\theta}{\xi}\right)\frac{1}{\sigma}\phi\left(\frac{\theta-\mu}{\sigma}\right)d\theta
=\Phi\left(\frac{\lambda-\mu}{\sqrt{\sigma^2+\xi^2}}\right)
\]
for arbitrary real $\lambda$ and $\mu$ and positive $\sigma$ and $\xi$. Alternatively, the result can be easily derived after noticing that the distribution of $\widehat\theta$ integrated over the prior is Normal with mean $\mu$ and variance $\sigma^2+1/n$.}
\begin{equation}
\label{equation:ProbRej}
\Pr(\sqrt{n}|\widehat\theta|> c)=\Phi\Bigg(\frac{\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)+\Phi\Bigg(\frac{-\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg).
\end{equation}
The limited information posteriors given significance and non-significance are:
\[
p\big(\theta\big|\sqrt{n}|\widehat\theta|>c\big)
=\frac{\displaystyle\frac{1}{\sigma}\phi\Bigg(\displaystyle\frac{\theta-\mu}{\sigma}\Bigg)\Big(\Phi(\sqrt{n}\theta-c)+\Phi(-\sqrt{n}\theta-c)\Big)}
{\Phi\Bigg(\displaystyle\frac{\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)+\Phi\Bigg(\displaystyle\frac{-\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)},
\]
and
\[
p\big(\theta\big|\sqrt{n}|\widehat\theta|\leq c\big)
=\frac{\displaystyle\frac{1}{\sigma}\phi\Bigg(\displaystyle\frac{\theta-\mu}{\sigma}\Bigg)\Big(1-\Phi(\sqrt{n}\theta-c)-\Phi(-\sqrt{n}\theta-c)\Big)}
{1-\Phi\Bigg(\displaystyle\frac{\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)-\Phi\Bigg(\displaystyle\frac{-\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)}.
\]
The two posteriors, along with the Normal prior, are plotted in Figure
\ref{fig:posterior} for $\mu=1$, $\sigma=1$, $c=1.96$, and $n=10$. This figure illustrates the informational value of a significance test. Rejection of the null carves probability mass around zero in the limited information posterior, while failure to reject concentrates probability mass around zero. Notice that failure to reject carries substantial information, even in the rather under-powered setting generated by the values of $\mu$, $\sigma$, $c$, and $n$ adopted for Figure \ref{fig:posterior}, which imply $\Pr\big(\sqrt n|\widehat\theta|> c\big)=0.7028$.
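For illustration only, the rejection probability and the two limited information posteriors can be reproduced numerically. The following sketch is ours (not part of the original analysis) and assumes NumPy and SciPy are available; it evaluates equation (\ref{equation:ProbRej}) and the posterior densities on a grid:
\begin{verbatim}
from scipy.stats import norm
import numpy as np

mu, sigma, c, n = 1.0, 1.0, 1.96, 10

# Marginal rejection probability, equation (ProbRej): the power
# averaged over the N(mu, sigma^2) prior.
s = np.sqrt(1 + n * sigma**2)
p_rej = norm.cdf((np.sqrt(n) * mu - c) / s) \
      + norm.cdf((-np.sqrt(n) * mu - c) / s)
print(round(p_rej, 4))  # 0.7028

# Limited information posteriors on a grid of theta values.
theta = np.linspace(-3.0, 5.0, 801)
prior = norm.pdf(theta, loc=mu, scale=sigma)
power = norm.cdf(np.sqrt(n) * theta - c) \
      + norm.cdf(-np.sqrt(n) * theta - c)
post_sig = prior * power / p_rej                  # p(theta | significant)
post_nonsig = prior * (1 - power) / (1 - p_rej)   # p(theta | non-significant)
\end{verbatim}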
\begin{figure}
\centering
\includegraphics[width=4.6in,keepaspectratio=1 ]{significance_crop.pdf}
\caption{Posterior Distributions After a Significance Test}
\label{fig:posterior}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=4.6in,keepaspectratio=1 ]{prior_posterior_crop.pdf}
\caption{Prior and Posterior with Significance for Different Sample Sizes}
\label{fig:postN}
\end{figure}
Figure \ref{fig:postN} shows how prior and posteriors after significance compare as a function of the sample size. When $n$ is small, significance affects the posterior over a large range of values. When $n$ is large, significance provides only local-to-zero information. That is, significance is not informative in large samples. This is explained by the fact that the probability of rejection in equation (\ref{equation:ProbRej}) converges to one as the sample size increases. By the law of total probability, it follows that conditional on non-significance probability mass concentrates around zero as $n$ increases. That is, the occurrence of an event that is very unlikely given the prior has a large effect on beliefs.
The full information posterior is
\[
p\big(\theta|x_1,\ldots, x_n\big)= \frac{1}{\sigma_n}\phi\Bigg(\displaystyle\frac{\theta-\mu_n}{\sigma_n}\Bigg),
\]
where
\[
\mu_n
=\frac{\mu+n\sigma^2\widehat\theta}{1+n\sigma^2},
\]
and
\[
\sigma^2_n=\frac{\sigma^2}{1+n\sigma^2}.
\]
So, in this very particular context, knowledge of the $t$-ratio ($\sqrt{n}\widehat\theta$) is sufficient to go back to the full information posterior. The same is true for the combined information given by the $P$-value, $2\Phi(-\sqrt n |\widehat\theta|)$, and the sign of $\widehat\theta$.
These results have immediate counterparts in large-sample settings with asymptotically Normal distributions. They can also be generalized to non-parametric settings, as we demonstrate in the next section.
\section{General Case}
\label{section:General_Case}
To extend the results of the previous section beyond Normal priors and data, we will consider a test statistic, $\widehat T_n$, such that
\begin{align*}
\Pr\big(\widehat T_n > c\big| \theta=0\big)&\rightarrow \alpha,\\
\intertext{and}
\Pr\big(\widehat T_n > c\big| \theta, \theta\neq 0\big)&\rightarrow 1.
\end{align*}
That is, we consider significance tests that are consistent under fixed alternatives and have asymptotic size equal to $\alpha$. Let $p(\cdot)$ be a prior on $\theta$, and $p(\cdot | \widehat T_n > c)$ and $p(\cdot | \widehat T_n \leq c)$ be the limited information posteriors under significance and non-significance, respectively.
\subsection{Continuous Prior}
We will first assume a prior that is absolutely continuous with respect to the Lebesgue measure, with a version of the density that is positive and continuous at zero. By dominated convergence, we obtain:
\[
\Pr\big(\widehat T_n > c\big)\rightarrow 1.
\]
We first derive the posterior densities under significance,
\begin{align*}
p(0|\widehat T_n > c)&=\frac{\Pr\big(\widehat T_n > c\big| \theta=0\big)}{\Pr\big(\widehat T_n > c\big)}p(0)\rightarrow \alpha\, p(0),\\
\intertext{and }
p(\theta|\widehat T_n > c)&=\frac{\Pr\big(\widehat T_n > c\big| \theta\big)}{\Pr\big(\widehat T_n > c\big)}p(\theta)\rightarrow p(\theta),
\end{align*}
for $\theta\neq 0$.
So, again, significance only changes beliefs locally around zero. The posterior densities after non-significance are
\begin{align*}
p(0|\widehat T_n \leq c)&=\frac{\Pr\big(\widehat T_n \leq c\big| \theta=0\big)}{\Pr\big(\widehat T_n \leq c\big)}p(0)\rightarrow \infty,\\
\intertext{and}
p(\theta|\widehat T_n \leq c)&=\frac{\Pr\big(\widehat T_n \leq c\big| \theta\big)}{\Pr\big(\widehat T_n \leq c\big)}p(\theta)
\end{align*}
for $\theta\neq 0$. Typically, for $\theta\neq 0$ (using large deviation results)
\[
-\frac{1}{n} \log\left(\Pr\big(\widehat T_n \leq c\big| \theta\big)\right)\rightarrow d_\theta,
\]
with $0<d_\theta<\infty$. Therefore, $\Pr\big(\widehat T_n \leq c\big| \theta\big)$ converges to zero exponentially for $\theta\neq 0$. Let
\[
\beta_n(\theta) = \Pr(\widehat T_n\leq c|\theta)
\]
be the probability of Type II error (one minus the power). Assume that
\[
\int \liminf\limits_{n\rightarrow\infty}\beta_n(z/\sqrt n)\, dz>0.
\]
This rules out perfect local asymptotic power. Then, by change of variable $z=n^{1/2}\theta$ and Fatou's lemma, we obtain\footnote{For the second to last equality, notice that if $a_n\geq 0$ and $b_n\rightarrow b>0$ as $n\rightarrow \infty$, then
\[
\liminf\limits_{n\rightarrow\infty} (a_n b_n) = \liminf\limits_{n\rightarrow\infty} a_n \lim\limits_{n\rightarrow\infty} b_n.
\]
}
\begin{align*}
\liminf\limits_{n\rightarrow\infty}n^{1/2}\Pr(\widehat T_n\leq c) &= \liminf\limits_{n\rightarrow\infty} n^{1/2}\int \beta_n(\theta)\, p(\theta)\, d\theta\\
&=\liminf\limits_{n\rightarrow\infty}\int \beta_n(z/\sqrt n)\, p(z/\sqrt n)\, dz\\
&\geq \int \liminf\limits_{n\rightarrow\infty}(\beta_n(z/\sqrt n)\, p(z/\sqrt n))\, dz\\
&= \int \liminf\limits_{n\rightarrow\infty}\beta_n(z/\sqrt n)\, \lim\limits_{n\rightarrow \infty}p(z/\sqrt n)\, dz\\
&=p(0) \int \liminf\limits_{n\rightarrow\infty}\beta_n(z/\sqrt n)\, dz >0.
\end{align*}
It follows that
\[
p(\theta|\widehat T_n \leq c)\rightarrow 0,
\]
for $\theta\neq 0$.
That is, like in the Normal case of section \ref{section:normal}, conditional on non-significance the posterior converges to a degenerate distribution at zero.
\begin{figure}[h!]
\centering
\includegraphics[width=4.8in,keepaspectratio=1 ]{probfactor_crop.pdf}
\caption{Limit of $p(\theta|\widehat T_n>c)/p(\theta)$ as a function of $q$ ($\theta\neq 0$, $\alpha=0.05$)}
\label{fig:probfactor}
\end{figure}
\subsection{Prior with Probability Mass at Zero}
We now consider the case when the prior has probability mass $q$ at zero, with $0<q<1$. Then
\begin{align*}
\Pr\big(\widehat T_n > c\big)&\rightarrow q\alpha+(1-q)\in (\alpha, 1).
\end{align*}
Now, the posterior after significance is,
\begin{align*}
p(0|\widehat T_n > c)&=\frac{\Pr\big(\widehat T_n > c\big| \theta=0\big)}{\Pr\big(\widehat T_n > c\big)}p(0)\rightarrow \left(\frac{\alpha}{q\alpha+(1-q)}\right)q\leq q,\\
\intertext{and}
p(\theta|\widehat T_n > c)&=\frac{\Pr\big(\widehat T_n > c\big| \theta\big)}{\Pr\big(\widehat T_n > c\big)}p(\theta)\rightarrow \left(\frac{1}{q\alpha+(1-q)}\right)p(\theta)\geq p(\theta),
\end{align*}
for $\theta\neq 0$. Now, in contrast to the continuous prior case, significance changes beliefs away from zero in large samples. In particular, if we start with a prior that assigns a large probability to $\theta=0$, then significance greatly affects beliefs for values of $\theta$ different from zero. Notice, however, that for moderate values of $q$ the effect of significance on beliefs may be negligible. Figure \ref{fig:probfactor} shows the limit of $p(\theta|\widehat T_n>c)/p(\theta)$ as a function of $q$, for $\theta\neq 0$ and $\alpha=0.05$. This limit is close to one for modest values of $q$. In order for significance to at least double the probability of $\theta\neq 0$ we need $q\geq 1/(2(1-\alpha))=0.5263$. Notice that reducing the value of $\alpha$ does not substantially change the value of the limit of $p(\theta|\widehat T_n>c)/p(\theta)$, except for very large values of $q$. For example, with $\alpha = 0.005$ \citep[as advocated in][]{benjamin2017redefine}, for significance to at least double the probability of $\theta\neq 0$ we need $q\geq 1/(2(1-\alpha))= 0.5025$. In fact, regardless of the size of the test, $q$ needs to be bigger than 0.5 in order for significance to double the probability density function of beliefs at non-zero values of $\theta$.
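The threshold on $q$ follows from a one-line computation: since the limit of $p(\theta|\widehat T_n>c)/p(\theta)$ for $\theta\neq 0$ is $1/(q\alpha+(1-q))$,
\[
\frac{1}{q\alpha+(1-q)}\geq 2
\;\Longleftrightarrow\;
1-q(1-\alpha)\leq \frac{1}{2}
\;\Longleftrightarrow\;
q\geq \frac{1}{2(1-\alpha)}.
\]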
The posterior after non-significance is,
\begin{align*}
p(0|\widehat T_n \leq c)&=\frac{\Pr\big(\widehat T_n \leq c\big| \theta=0\big)}{\Pr\big(\widehat T_n \leq c\big)}p(0)\rightarrow \frac{1-\alpha}{q(1-\alpha)}q=1,\\
\intertext{and for $\theta\neq 0$,}
p(\theta|\widehat T_n \leq c)&=\frac{\Pr\big(\widehat T_n \leq c\big| \theta\big)}{\Pr\big(\widehat T_n \leq c\big)}p(\theta)\rightarrow 0.
\end{align*}
Again, non-significance seems to have a stronger effect on beliefs than significance.
Some remarks about priors with probability mass at a point null are in order. First, prior beliefs that assign probability mass to point nulls may not be adequate in certain settings. For example, beliefs on the average effect of an anti-poverty intervention may sometimes concentrate probability smoothly around zero, but more rarely in such a way that a large probability mass at zero is a good description of a reasonable prior. Moreover, priors with probability mass at a point null generate a drastic discrepancy, known as Lindley's paradox, between frequentist and Bayesian testing procedures \citep[see, e.g.,][]{berger1985statistical}. Lindley's paradox arises in settings with a fixed value of $\widehat T_n$ and a large $n$. In those settings, frequentists would reject the null hypothesis when $\widehat T_n>c$. Bayesians, however, would typically find that the posterior probability of the point null far exceeds the posterior probability of the alternative. Lindley's paradox can be explained by the fact that, as $n$ increases, the distribution of the test statistic under the alternative diverges. Therefore, a fixed value of the test statistic as $n$ increases can only be explained by the null hypothesis.
Notice that conditioning on the event $\{\widehat T_n\leq c\}$ (as opposed to conditioning on the value of $\widehat T_n$) is not subject to Lindley's paradox,
and it may be the natural choice to evaluate a testing procedure for which significance depends only on whether the event $\{\widehat T_n\leq c\}$ occurs.
\section{Testing an Interval Null}
In view of the lack of informativeness of non-significance in large samples (under a point null), one could instead try to reinterpret significance tests as tests of the implicit null ``$\theta$ is close to zero''.
To accommodate this possibility, we will now concentrate on the problem of testing the null that the parameter $\theta$ is in some interval around zero. Under the null hypothesis, $\theta\in [-\delta,\delta]$, where $\delta$ is some positive number. Under the alternative hypothesis, $\theta\not\in [-\delta,\delta]$. Consider the Normal model of section \ref{section:normal}. To obtain a test of size $\alpha$ we control the supremum of the probability of Type I error:
\begin{align*}
\Pr(\sqrt{n}|\widehat\theta|> c\,|\,|\theta|=\delta)&=\Phi(\sqrt{n}\delta-c)+\Phi(-\sqrt{n}\delta-c).
\end{align*}
Therefore, we choose $c$ such that $\Phi(\sqrt{n}\delta-c)+\Phi(-\sqrt{n}\delta-c)=\alpha$. While there is no closed-form solution for $c$, its value can be calculated numerically for any given value of $\sqrt n \delta$, and a very accurate approximation for large $\sqrt n\delta$ is given by
\[
c= \Phi^{-1}(1-\alpha)+\sqrt n\delta.
\]
That is, controlling size in this setting implies that the critical value has to increase with the sample size at a root-$n$ rate. In turn, this implies that the probability of rejection, $\Pr(\sqrt{n}|\widehat\theta|> c|\theta)=\Phi(\sqrt{n}\theta-c)+\Phi(-\sqrt{n}\theta-c)$, converges to one if $\theta\not\in [-\delta,\delta]$, and converges to zero if $\theta\in (-\delta,\delta)$. As a result, the large sample posterior distributions with and without significance are truncated versions of the prior, with the prior restricted to $(-\infty,-\delta)\cup (\delta,\infty)$ under significance, and to $(-\delta,\delta)$ under no significance. If $\delta$ is large both significance and non-significance are informative. If, however, $\delta$ is small, we go back to the setting where significance carries only local-to-zero information. Figure \ref{figure:interval} reports posterior distributions for $\delta=\{0.5,1,2\}$, $\alpha=0.05$ and $n=10000$.
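As an illustrative numerical sketch (ours, assuming SciPy; not part of the original analysis), the critical value $c$ can be computed by root finding and compared with the large-$\sqrt n\delta$ approximation above:
\begin{verbatim}
from scipy.optimize import brentq
from scipy.stats import norm
import numpy as np

def critical_value(n, delta, alpha=0.05):
    """Solve Phi(sqrt(n)*delta - c) + Phi(-sqrt(n)*delta - c) = alpha."""
    rd = np.sqrt(n) * delta
    size = lambda c: norm.cdf(rd - c) + norm.cdf(-rd - c) - alpha
    # size(c) is decreasing in c, and size(0) = 1 - alpha > 0,
    # so the root is bracketed between 0 and a large value.
    return brentq(size, 0.0, rd + 10.0)

for delta in (0.5, 1.0, 2.0):
    c = critical_value(10000, delta)
    approx = norm.ppf(0.95) + np.sqrt(10000) * delta
    print(delta, round(c, 3), round(approx, 3))  # exact c vs. approximation
\end{verbatim}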
\begin{figure}
\centering
\includegraphics[width=4.8in,keepaspectratio=1 ]{significance_interval_crop.pdf}
\caption{Posterior After a Test of the Null $\theta\in [-\delta,\delta]$ ($n=10000$, $\alpha=0.05$)}
\label{figure:interval}
\end{figure}
\section{Conditioning on the sign of the estimated coefficient}
In previous sections we have shown that statistical significance may carry very little information in large samples. As a result, the values of other statistics should be taken into account along with significance when the null is rejected in a significance test. As discussed above, in a Normal (or asymptotically Normal) setting it does not take much to go back to full information (e.g., $P$-value and the sign of $\widehat\theta$). Here we consider the question of whether minimally augmenting the information on significance with the sign of $\widehat\theta$ results in informativeness when the null is rejected. This exercise is motivated by the possibility that the sign of the estimated coefficient is implicitly taken into account in many discussions of results from significance tests.
For concreteness, we will concentrate on the case of a positive coefficient estimate, $\widehat\theta>0$. That is, the limited information posterior under significance and positive $\widehat\theta$ conditions on the event $\sqrt n\widehat\theta>c$. The case with negative $\widehat\theta$ is analogous. Using similar calculations as in section \ref{section:normal}, we obtain:
\[
p\big(\theta\big|\sqrt{n}\widehat\theta>c\big)
=\frac{\displaystyle\frac{1}{\sigma}\phi\Bigg(\displaystyle\frac{\theta-\mu}{\sigma}\Bigg)\Phi(\sqrt{n}\theta-c)}
{\Phi\Bigg(\displaystyle\frac{\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)},
\]
and
\[
p\big(\theta\big|0<\sqrt{n}\widehat\theta\leq c\big)= \frac{\displaystyle\frac{1}{\sigma}\phi\Bigg(\displaystyle\frac{\theta-\mu}{\sigma}\Bigg)\Big(1-\Phi(\sqrt{n}\theta-c)-\Phi(-\sqrt n \theta)\Big)}
{1-\Phi\Bigg(\displaystyle\frac{\sqrt{n}\mu-c}{\sqrt{1+n\sigma^2}}\Bigg)-\Phi\Bigg(\displaystyle\frac{-\sqrt{n}\mu}{\sqrt{1+n\sigma^2}}\Bigg)}.
\]
\begin{figure}
\centering
\includegraphics[width=4.6in,keepaspectratio=1 ]{significance_sign_crop.pdf}
\caption{Posterior Distributions Conditional of Significance and Coefficient Sign}
\label{fig:posterior_sign}
\end{figure}
Figure \ref{fig:posterior_sign} reproduces the setting of Figure \ref{fig:posterior} but for the case when the posterior is conditional on the sign of the estimate in addition to significance. Like in Figure \ref{fig:posterior}, failure to reject carries substantial information. In fact, both outcomes of the significance test carry additional information, with respect to the setting in Figure \ref{fig:posterior}, which of course is explained by the additional information in the sign of $\widehat\theta$.
Notice that, in this case, under significance, the ratio between the posterior and the prior converges to
\[
\lim\limits_{n\rightarrow \infty}\,\frac{p(\theta|\sqrt n\widehat\theta>c)}{p(\theta)}=\left\{\begin{array}{ll}0&\mbox{ if } \theta<0,\\
\Phi(-c)/\Phi(\mu/\sigma)&\mbox{ if } \theta=0,\\
1/\Phi(\mu/\sigma)&\mbox{ if } \theta>0.\end{array}\right.
\]
Without significance, the ratio between the posterior and the prior converges to
\[
\lim\limits_{n\rightarrow \infty}\,\frac{p(\theta|0<\sqrt n\widehat\theta\leq c)}{p(\theta)}=\left\{\begin{array}{ll}0&\mbox{ if } \theta\neq 0,\\
\infty&\mbox{ if } \theta=0.\end{array}\right.
\]
That is, as $n\rightarrow\infty$ non-significance is highly informative. Under significance, the posterior of $\theta$ converges to the prior truncated at zero. As a result, in this case the informational content of significance depends on the value of $\Pr(\theta>0)=\Phi(\mu/\sigma)$. If this quantity is small, significance with a positive sign is highly informative. Unsurprisingly, when $\mu/\sigma$ is large (that is, in cases where there is little uncertainty about the sign of the parameter of interest), a positive sign of $\widehat\theta$ does not add much to the informational content of the test. Moreover, the limit of $p(\theta|\sqrt n\widehat\theta>c)$ cannot be more than double the value of $p(\theta)$ as long as $\mu$ is non-negative. This is relevant to many instances where there are strong beliefs about the sign of the estimated coefficients (e.g., the slope of the demand function, or the effect of schooling on wages) and specifications reporting ``wrong'' signs for the coefficients of interest are rarely reported or published.
\section{Conclusions}
Significance testing on a point null is the most extended form of inference. In this article, we have shown that rejection of a point null often carries very little information, while failure to reject is highly informative. This is especially true in empirical contexts where data sets are large and where there are no reasons to put substantial prior probability on a point null. Our results challenge the usual practice of conferring point null rejections a higher level of scientific significance than non-rejections. In consequence, we advocate a visible reporting and discussion of non-significant results in empirical practice \citep[e.g., as in][]{angrist2017maimonides,cantoni2018id}.
\nocite{*}
\bibliographystyle{chicago}
|
1,108,101,565,721 | arxiv | \section{Introduction}
The functional correctness of industrial software systems is of utmost importance as a system failure may incur significant financial or even human life losses. Testing of such industrial systems is further complicated due to the lack of a test oracle~\cite{weyuker1982testing}. Metamorphic testing (MT) was introduced by Chen et al.~\cite{chen2020metamorphic} as a solution for testing systems when the expected output, or test oracle, of the \textit{system under test} (SUT) is not available for comparing the actual output of the SUT against its expected output. In MT, the behavioural or functional properties of the system are defined using generic relations known as \emph{metamorphic relations (MRs)} between different sets of inputs and their expected outputs. These relations are used to verify functional correctness instead of mapping specific inputs to their expected outputs.
However, recent surveys on MT highlight that several questions remain open for further investigation: how to create the MRs, how to define the follow-up test cases, and how to automate different phases of the process. In this paper, we propose a two-phase MT approach to circumvent the need for a traditional pre-defined test oracle:
a) \textbf{exploration}: extracting the MRs from system specifications, generating source test cases automatically from real-execution data by analyzing data logs, and generating follow-up test cases automatically from source test cases, and b) \textbf{exploitation}: identifying fault patterns via random test generation and exploring them in more detail via guided test generation.
In our approach, the definition of MRs is manual, based on domain expert knowledge, but the test generation, execution, and verdict assignment are automatic. In fact, studies have shown that manual testing with domain expert involvement can be more effective than fully automated testing \cite{DBLP:conf/ast/ZafarAE22}. We exemplify our approach on a position control system that determines the position of a hanging load attached to the hoisting frame of a crane.
\section{Overview of the approach}
Metamorphic testing proceeds as follows \cite{liu2012new}: a) identify MRs based on system properties defined in the software specification, b) generate a \emph{source test case} passing the seed input to the system, c) generate \emph{follow-up test cases} from the \emph{source test case} based on MRs and execute them, and d) check whether the results of the \emph{source and follow-up test cases} satisfy the MR.
An MR is composed of two parts: an \emph{input relation} and an \emph{output relation} \cite{liu2012new}. An input relation represents the relation between the inputs of source and follow-up test cases, whereas an output relation represents the relation between the outputs of the source and follow-up test cases. A \textit{source test case} is the first set of tests performed using \emph{seed inputs}. The seed inputs are transformed into \emph{morphed inputs}. The \textit{follow-up test cases} are performed using these \emph{morphed inputs}. In addition, an \textit{implication} between the outputs of the \emph{source and follow-up test cases} is needed to specify the impact of input transformations on their corresponding outputs.
Chen et al. \cite{chen2018metamorphic} presented the MT methodology and defined MR as follows:
\begin{definition}
\textbf{(Metamorphic relation):} Let \textit{f} be a target function or algorithm. An MR is a necessary property of \textit{f} over a sequence of two or more inputs $ \langle x_1, x_2, ..., x_n \rangle $ where $n \geq 2$, and their corresponding outputs $ \langle f(x_1), f(x_2),..., f(x_n) \rangle $. It can be expressed as a relation $ \mathbf{R} \subseteq X^n \times Y^n $, where $X^n$ and $Y^n$ are the Cartesian products of \textit{n} input and \textit{n} output spaces, respectively.
\label{def:MR_Chen}
\end{definition}
We extend the above definition by refining $\mathbf{R}$ into an input relation $R_{in}$ and an output relation $R_{out}$, where the satisfaction of the output relation $R_{out}$ by the outputs $Y_s$ and $Y_m$ is required whenever the corresponding seed and morphed inputs satisfy the input relation $R_{in}$. That is,
given $f(x_i) = y_i$ and $f(x_j) = y_j$ for all pairs $(x_i,x_j)$, we require $R_{in}(x_i, x_j) \implies R_{out}(y_i, y_j)$, where
$f$ denotes the function that produces the outputs $(y_i, y_j)$ in response to the inputs $(x_i, x_j)$.
Concretely, for a given sequence of inputs $X^c \subseteq X$ under a constraint $C$, an MR \textit{R} should hold for any corresponding output of the system, that is $f(X^c_s) \mathbf{R} f(X^c_m)$, where $X^c_s, X^c_m \subseteq X^c$. Furthermore, we consider $\mathbf{R}$ to be of any of the types defined in \cite{segura2017metamorphic}: equivalence, equality, subset, disjoint, complete, and difference.
Our MT approach is shown in Figure~\ref{fig:fig2mtapproach}.
It is applied in two phases: an \textit{exploration phase} and an \textit{exploitation phase}. In the exploration phase, $X_s, X_m$ are extracted/created from $X$ satisfying a set of constraints $C_s, C_m$ specific to the SUT. Then $X_s, X_m$ are executed against the SUT, the corresponding \textbf{seed output} $Y_s$ and \textbf{morphed output} $Y_m$ are collected, and the satisfiability of $R_{out}(Y_s,Y_m)$ is checked, where $Y_s =f(X_s)$ and $Y_m =f(X_m)$.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Figures/MT_overview_0412.pdf}
\caption{Overview of the metamorphic testing approach } \label{fig:fig2mtapproach}
\end{figure}
From those pairs of seed and morphed inputs $(X_{s_i}, X_{m_i})$ which fail the initial MR, we manually extract, in the exploitation phase, \textbf{fault-inducing patterns} of the input space. Based on them, we define $C_m'$ as a more restrictive constraint to be satisfied by morphed inputs
$X_m'$ which we use to verify the output metamorphic relation $R_{out}(Y_s,Y_m')$, where $ Y_s =f(X_s)$ and ${Y_m'} = f({X_m'})$.
The novelty of our approach lies in the fact that $C_m'$ allows us to define a refined morphed input that tests the system with more precision and effectiveness, by focusing the testing on the parts of the input space with a higher probability of revealing faults.
\subsection{Running Example}
The industrial control system (ICS) under test is a \textit{Position Control System} (PCS), which determines the position of a hanging load using markers attached to the hoisting frame. The PCS regularly receives up to 26 markers as [x,y] pixel coordinates from a camera module. The input may contain three markers on the hoisting frame attached to the load, as well as different light reflections in the environment (water, rain, snow, dust, etc.) which the camera filter was not able to remove. Only the markers corresponding to the three markers placed on the hoisting frame carrying the load are the \emph{true markers} that determine the position of the load (see Figure~\ref{fig:pcs}). The two markers placed on the sides of the hoisting frame are referred to as \emph{side markers}. The \emph{top marker} is used to detect the tilt of the load and to increase the probability for the find algorithm to identify the true markers.
\begin{wrapfigure}{r}{0.4\linewidth}
\vspace{-1pc}
\centering
\includegraphics[width=0.6\linewidth]{Figures/PCS_overview_2.png}
\caption{Positional markers in PCS} \label{fig:pcs}
\vspace{-1pc}
\end{wrapfigure}
For each set of markers, the PCS tries to identify the true markers and to discard the markers corresponding to reflections. The PCS produces two outputs: a Boolean value \textit{found} indicating whether the \textit{true markers} are identified and a vector of three integers \textit{[$I_{tm_1},I_{tm_2}, I_{tm_3}$]}, indicating the indexes in the input marker array of the positional markers identified as \textit{true markers}. Whenever the PCS is not able to identify the true markers consistently, the entire system can potentially move to an unsafe state and requires human intervention.
In the above context, we define the output of the PCS as follows:
\begin{definition}
The \textbf{output} of the Position Control System, $f(\{m_1, m_2,...,m_n\})$ is a pair $(found, [I_{tm_1},I_{tm_2}, I_{tm_3}])$, where $m_i$ and $tm_i$ are positional markers with two coordinates $x_i$ and $y_i$, $\{tm_1,tm_2, tm_3\} \subseteq \{m_1, m_2,...,m_n\}$, $found = TRUE | FALSE$ and $[I_{tm_1},I_{tm_2}, I_{tm_3}]$ is the vector of indexes of true markers provided that $found = TRUE$.
\label{def_outPCS}
\end{definition}
\subsection{Metamorphic relation}
In our approach, we extract the MR from the requirements of the SUT as follows:
"\textit{Assuming that the system is able to classify correctly a set of markers received from camera module in the absence of reflections (noise), the system should be able to classify correctly the same inputs in the presence of reflections}".
This can be formulated as the following metamorphic relation: $f(X_s) \equiv f(X_s \cup X_n)$.
\subsection{Creating the seed input}
In our approach, we choose the seed input as a series of true marker triplets $ X_s = \{s_1, s_2, \dots, s_k\}$, where $s_i = \{tm_1^i,tm_2^i, tm_3^i\}$. We extract the seed input from previous executions of the PCS by extracting those log entries that only contain three positional markers and which were classified correctly as true markers. The spatial continuity of the image coordinates is validated via checking the distance between consecutive image coordinate values against a predefined allowed range of movement. When the seed input data set is extracted from the execution trace, we run an initial test session against the SUT to confirm that all the input marker positions are classified correctly. In case execution logs are not available, the seed input data can be collected from the simulation environment of the PCS that is validated against a real crane for the set of inputs the seed is extracted from.
\subsection{Creating the morphed input}
In our case, the morphing transformation takes each sample in the seed input $X_s$ and adds markers corresponding to reflections, which we denote as \textit{noise}.
\begin{definition}
\label{def:noise}
\textbf{(Noise)}: A series of noise markers corresponding to environment reflections $X_n \equalhat \{n_1,n_2,n_3,...,n_{j} \} $, where $n_i \in X$.
\end{definition}
In the \textbf{exploration phase}, we use randomly generated noise to perform an initial exploration of the SUT in order to collect observations and identify fault patterns.
To this end, we create random noise coordinate pairs of marker vectors of different lengths ranging from 1 to 23. These noise vectors are appended to the seed input one at a time. The algorithm for generating morphed input draws random (x, y) coordinates with values in the range $[0,131072]$, which is the size of the camera frame.
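A minimal sketch of such a generator is shown below; this is our own illustration, where the variable names are hypothetical and the ten repetitions per noise length are inferred from the test counts reported in Section~\ref{sec:experiments}:
\begin{verbatim}
import random

FRAME = 131072  # camera frame size in pixels (both axes)

def morph(seed_triplet, n_noise, rng=random):
    """Append n_noise uniformly random reflection markers to one sample."""
    noise = [(rng.randint(0, FRAME), rng.randint(0, FRAME))
             for _ in range(n_noise)]
    return list(seed_triplet) + noise  # true markers first, noise after

def exploration_inputs(seed_inputs, repeats=10, max_noise=23):
    # 625 seeds x 10 repeats x 23 noise lengths = 143,750 follow-up tests
    for triplet in seed_inputs:
        for n_noise in range(1, max_noise + 1):
            for _ in range(repeats):
                yield morph(triplet, n_noise)
\end{verbatim}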
\begin{definition}
\label{def:morphedinput}
\textbf{(Morphed input)}: A series of markers
$X_m$ $ = \{m_1, m_2,..., m_k \}$,
where each sample $m_i = \{tm_1^i, tm_2^i, tm_3^i ,n_1^i,n_2^i,n_3^i,...,n_{j}^i \}$ with $ j \leq 23 $, is the combination of seed input markers $X_s$ and noise markers $X_n$.
\end{definition}
In the \textbf{exploitation phase}, we analyze noise patterns in the \textit{morphed input} that caused the system to make incorrect classifications. This led to the following observation: \textit{a set of markers containing two or three noise markers with the same geometrical pattern as the true markers can trigger faulty behavior of the system}. Therefore, we refine the morphed input to a more constrained version of the input space to exploit the above mentioned fault patterns.
\begin{definition}
\label{def:refinedmorphedinput}
\textbf{(Refined Morphed input)}: A series of markers samples
${X_m'}= \{m_1, m_2,..., m_k \}$, where each sample
$ m_i = \{tm_1^i, tm_2^i, tm_3^i ,n_1^i,n_2^i \} $ in the first follow-up test and
$m_i = \{tm_1^i, tm_2^i, tm_3^i ,n_1^i,n_2^i,n_3^i\} $ in the second follow-up test. $C_m'$ is the restrictive constraint used to refine the added noise to two and three noise markers. The series ${X_m'}$ is the combination of seed input markers $X_s$ and restricted set of noise markers $X_n'$ generated using the constraint $C_m'$.
\end{definition}
In order to automate the creation of the noise markers, we create replicas of the true markers, thus obtaining a similar geometrical pattern in the noise. For each sample of true markers in the seed input we distribute the noise markers in a rectangular grid pattern in the camera frame, in order to obtain a uniform sampling of the input space.
\begin{wrapfigure}{r}{0.5\linewidth}
\vspace{-1pc}
\centering
\includegraphics[width=1.1\linewidth]{Figures/guided_star_640.pdf}
\caption{Test data distribution for the guided star approach}
\label{fig:GuidedDataDistribution}
\vspace{-1pc}
\end{wrapfigure}
In addition, each replica of a true marker pair placed on the grid is rotated by angles $45^\circ$, $90^\circ$, and $135^\circ$ to distribute the noise markers in a star-like pattern (see Figure~\ref{fig:GuidedDataDistribution}). We note that the approach is completely automated and allows us to change the number of generated tests by changing the density of the grid and the number of rotations of the noise markers.
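A sketch of this guided generator is given below; again this is our own illustration, in which the grid step is a free parameter and marker replicas are assumed to rotate around their grid anchor point:
\begin{verbatim}
import math

FRAME = 131072

def rotate(point, center, angle_deg):
    """Rotate one (x, y) point around a center by angle_deg degrees."""
    a = math.radians(angle_deg)
    dx, dy = point[0] - center[0], point[1] - center[1]
    return (center[0] + dx * math.cos(a) - dy * math.sin(a),
            center[1] + dx * math.sin(a) + dy * math.cos(a))

def guided_noise(true_markers, grid_step, angles=(0, 45, 90, 135)):
    """Replicate the true-marker pattern on a grid, at several rotations."""
    cx = sum(x for x, _ in true_markers) / len(true_markers)
    cy = sum(y for _, y in true_markers) / len(true_markers)
    for gx in range(0, FRAME + 1, grid_step):
        for gy in range(0, FRAME + 1, grid_step):
            for angle in angles:
                replica = [rotate((x - cx + gx, y - cy + gy), (gx, gy), angle)
                           for x, y in true_markers]
                # discard replicas falling outside the camera frame
                if all(0 <= x <= FRAME and 0 <= y <= FRAME
                       for x, y in replica):
                    yield replica
\end{verbatim}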
\subsection{Test execution}
The tests are generated in offline mode. Test execution is performed using the \texttt{Pytest} testing framework \cite{hunt2019pytest} by setting up an adapter to connect to the SUT. Open Platform Communications Unified Architecture (OPC UA) \cite{leitner2006opc} is used as the adapter to connect to the CODESYS development environment where the PLC application program resides. The test suite contains the functions to set up the OPC server-client connection. It also has a one-time setup to read the input from an input file, send it to the SUT, and collect the execution results to verify whether the MR defined between the source and follow-up test outputs holds.
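A skeletal version of such a test could look as follows. This is our own sketch, using the Python \texttt{opcua} client; the endpoint URL, node identifiers, and marker values are hypothetical placeholders rather than the actual CODESYS symbols:
\begin{verbatim}
import pytest
from opcua import Client  # FreeOpcUa python-opcua client

URL = "opc.tcp://localhost:4840"       # hypothetical endpoint
IN_NODE = "ns=2;s=PCS.Markers"         # placeholder node ids
FOUND_NODE = "ns=2;s=PCS.Found"
IDX_NODE = "ns=2;s=PCS.TrueMarkerIdx"

SEED = [(60000, 52000), (72000, 52000), (66000, 44000)]  # one sample
NOISE = [(10000, 90000), (120000, 15000)]                # two reflections

@pytest.fixture(scope="session")
def sut():
    client = Client(URL)
    client.connect()
    yield client
    client.disconnect()

def run_case(sut, markers):
    """Send one flattened marker list to the PLC, read back its verdict."""
    sut.get_node(IN_NODE).set_value([c for m in markers for c in m])
    return (sut.get_node(FOUND_NODE).get_value(),
            sut.get_node(IDX_NODE).get_value())

def test_metamorphic_relation(sut):
    # MR: f(X_s) == f(X_s U X_n); true markers occupy (0-based) indices 0..2
    assert run_case(sut, SEED) == (True, [0, 1, 2])
    assert run_case(sut, SEED + NOISE) == (True, [0, 1, 2])
\end{verbatim}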
\section{Experiments and evaluation}
\label{sec:experiments}
For all experiments in this section, we use a seed input with 625 samples, each containing a sequence of three true markers extracted from execution logs. Each entry in the seed input is first verified to be classified correctly as the true markers by the system.
Based on the outcomes of the executed follow up test cases we can categorize the test cases as follows: \textit{true positive} (TP) -- the system identifies the actual positional markers as true markers - expected behavior, \textit{false positive} (FP) -- the system identifies reflection markers as true markers - unexpected behavior, \textit{false negative} (FN) -- the system fails to identify the true markers even if they were present in the input - unexpected behavior, and \textit{true negative} (TN) -- the system does not identify true markers when the input does not contain true markers. This last category is not applicable in our approach since the test input always contains true markers.
The TP classification of true markers in the morphed output satisfies the MR and counts as tests that do not fail. The failed tests include FPs, which indicate an incorrect identification and FNs, which indicate missed identification of true markers placed in the first three positions in the morphed input.
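For illustration, the verdict assignment can be expressed as a small helper; this is our own sketch, assuming 0-based indices and true markers placed first in the morphed input:
\begin{verbatim}
def verdict(found, indices, true_idx=(0, 1, 2)):
    """Categorize one follow-up test outcome."""
    if found and tuple(indices) == true_idx:
        return "TP"  # correct identification: the MR is satisfied
    if found:
        return "FP"  # reflections mistaken for the true markers
    return "FN"      # true markers present but not identified
\end{verbatim}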
\begin{figure*}[!t]
\centering
\subfloat[Exploitation: FPs for markers=5]{
\includegraphics[width=.33\linewidth]{Figures/guided_star_FP_n5.pdf}}
\subfloat[Exploitation: FNs for markers=5]{
\includegraphics[width=.33\linewidth]{Figures/guided_star_FN_n5.pdf} }
\subfloat[Exploitation: FPs for markers=6]{
\includegraphics[width=.33\linewidth]{Figures/guided_star_FP_n6.pdf} }
\caption{Input distribution for failed tests in the exploitation phase}
\label{fig:FPresults}
\end{figure*}
In the exploration phase, the test generation algorithm produces $625 \times 10 \times 23 = 143\,750$ follow-up tests. From the total of 143\,750 executed tests, 143\,615 satisfy the metamorphic relationship, while 135 do not. From the failed tests, 39 selected the wrong combination of inputs as true markers, whereas 96 could not find true markers among the inputs although they were present.
The geometric distribution of incorrectly classified data points is shown in Figure~\ref{fig:Rclassification}. Further analysis of the failed tests, provides us with the following observations. FP results occurred when the input set contained either a set of \textit{two noise markers} resembling the pattern of true side markers or a set of \textit{three noise markers} resembling the geometrical pattern of the true marker triplet. FN results occurred when the input set contained either a number of markers greater than or equal to 6 or a set of \textit{two noise markers} resembling the pattern of the true side markers.
\begin{figure}[h]
\vspace{-1.2pc}
\centering
\subfloat[morphed input w.r.t FP output]{
\includegraphics[width=0.5\linewidth]{Figures/Random_FP_eri23_t1.pdf}
}
\subfloat[morphed input w.r.t FN output]{
\includegraphics[width=0.5\linewidth]{Figures/Random_FN_eri23_t1.pdf}
}
\caption{Morphed input corresponding to incorrect classifications in the exploration phase}
\label{fig:Rclassification}
\end{figure}
In the exploitation phase, we ran two separate testing sessions in which the refined morphed input contains two and three noise markers respectively, besides the true markers. In both cases, the test generation algorithm produced 2400 refined morphed test inputs. These refined morphed inputs are created by replicating and rotating the seed input markers: this yields $625 \times 4 = 2500$ noise marker sets, from which the samples that do not fit in the $131072 \times 131072$ frame after the \textit{rotate} action are discarded. The results of the test execution are shown in Table~\ref{tab:test-exec}. For the test session with five input markers, 7 FP and 74 FN classifications are identified, whereas, for the subsequent test with six input markers, no FN and 92 FP classifications are identified. The distribution of the noise markers in the input space for failed tests in the exploitation phase is shown in Figure~\ref{fig:FPresults}.
The test execution results of the guided method where the number of markers is 5 contain more FNs, indicating that the system is not identifying the true markers when a replica of the two side markers is added as noise. However, the FP results of the follow-up test where the number of markers is 6 reveal that a replica of the 3 true markers can trigger an incorrect identification and compromise the functional safety of the system. It is also observed that a replica of the two side markers has low chances of causing FPs when compared to the noise created with a replica of the 3 true markers. Moreover, only the noise markers corresponding to the exact replica of the true markers triggered the incorrect identification of true markers. In addition, we can observe that the noise markers rotated by angles $45^\circ$, $90^\circ$, $135^\circ$ resulted in TP test cases, where the system correctly identified the true markers despite the presence of noise.
Table \ref{tab:test-exec} also shows the corresponding Fault Detection Ratio (FDR) \cite{segura2016survey} for each phase, computed as the fraction of tests that found a fault in the entire test suite. As expected, the exploration phase has a very low FDR due to the random test generation, whereas in the exploitation phase, the FDR has increased around 33 to 44 fold.
\begin{table}[!h]
\centering
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
Method & No. of & No. of & TPs & FPs & FNs & FDR \\
& markers & tests & & & & \\ \hline
Exploration & 4 - 26 & 143750 & 143615 & 39 & 96 & 0.0009 \\ \hline
\multirow{2}{*}{Exploitation} & 5 & 2400 & 2319 & 7 & 74 & 0.03 \\ \cline{2-7}
& 6 & 2400 & 2308 & 92 & - & 0.04
\\ \hline
\end{tabular}
\caption{Test execution results}
\label{tab:test-exec}
\end{table}
\section{Discussion and Conclusions}
In this work, metamorphic testing is effectively applied to detect faulty behavior in industrial control systems. The identification of a metamorphic relation is done manually based on the specification of the system. A known challenge in the identification of MRs is the need for domain expertise to assess the expected input and output behaviour of the system. As future work, we plan to automate the identification of MRs for an ICS from its specification and to explore the applicability of MT for fault localization and program repair in industrial systems. It is also of interest to combine metamorphic and mutation-based approaches for testing ICS and to apply heuristic techniques for the minimization of test suites.
|
1,108,101,565,722 | arxiv | \section{Introduction}
The accurate prediction of mutational effects on protein stability is of utmost importance in many fields ranging from biotechnology to medicine. In rational protein engineering applications, for example, the targeted redesign of proteins makes it possible to optimize the biotechnological and biopharmaceutical processes in which they are involved \cite{korendovych2020novo,coluzza2017computational}.
Stability prediction also plays a key role in interpreting the impact of human genetic variants and may provide a better understanding of how these variants lead to disease conditions \cite{kopanos2019varsome,gunning2020assessing}. Note that stability is all the more important as it is the dominant factor in protein fitness \cite{tokuriki2009stability}.
For these reasons, many studies have been devoted over the last decade to the development of computational tools that aim to predict in a fast and reliable way the change in protein stability
upon mutations \cite{dehouck2011popmusic,dehouck2009fast,savojardo2016inps,quan2016strum,capriotti2005mutant2,pires2014mcsm,pires2014duet,schymkowitz2005foldx,delgado2019foldx,kellogg2011role,cheng2006prediction,chen2020premps,li2020predicting,laimer2015maestro,cao2019deepddg,masso2014auto,huang2007iptree,witvliet2016elaspic,giollo2014neemo,chen2020istable,montanucci2019ddgun,benevenuta2021antisymmetric,li2021saafec}. These methods use information about protein sequence, structure and evolution, which are combined through a variety of machine learning methods ranging from simple linear regression to more complex models. \textcolor{black}{For more information, we refer to excellent recent reviews \cite{sanavia2020limitations,marabotti2021predicting} and comparative tests \cite{kepp2015towards,fang2020critical,iqbal2021assessing}}.
\textcolor{black}{ It has to be noted} that, although recent advances in the field of artificial intelligence
and more specifically in deep learning have considerably improved feature selection and combination in multiple bioinformatics problems such as three-dimensional (3D) protein structure prediction \cite{li2019deep,torrisi2020deep}, so far they are not often used in predicting the effects of mutations on protein stability. Indeed, the majority of current predictors use shallow algorithms, probably because the amount of experimental training data is too limited to allow for deeper algorithms.
\textcolor{black}{In this review, we concisely present the protein stability prediction methods that are available and functional, and test their performance on an independent set of experimentally characterized point mutations, which are not part of any of the training sets. Our main goal here is to take a critical look at the predictors by investigating their algorithms, limitations, and biases. We also discuss} the main challenges the field will have to face in the years to come in order to strengthen the role of computational approaches in protein design and personalized medicine.
\section{Brief overview \textcolor{black}{and benchmark} of the current computational models}
We collected \textcolor{black}{
existing computational methods predicting the change in protein thermodynamic stability upon point mutations, defined by the change in folding free energy $\Delta\Delta G$.
We restricted ourselves to predictors that are commonly used and currently available through a working web server or downloadable code.} These methods, listed in Table \ref{AWA}, are almost all based on the 3D protein structure and use a series of features such as the relative solvent accessible surface area (RSA) of the mutated residue, the change in folding free energy ($\Delta\Delta W$) estimated by various types of energy functions, the change in volume of the mutated residue ($\Delta$Vol), and the change in residue hydrophobicity ($\Delta$Hyd). They also often use evolutionary information either extracted from multiple sequence alignments of the query protein or from substitution matrices such as BLOSUM62 \cite{henikoff1992blosum}.
Several machine learning algorithms are used to combine the different features. These are most often algorithms that have become classical such as artificial neural networks, support vector machines or random forests.
Only a few very recent predictors use novel deep learning approaches \cite{li2020predicting,cao2019deepddg,benevenuta2021antisymmetric}.
At the other extreme, a predictor published this year uses a very simple model consisting of a linear combination of only three features \cite{caldararu2021three}.
It is a difficult task to rigorously evaluate the accuracy of predictors \cite{fang2020critical,iqbal2021assessing}. Indeed, performances depend on the training and test sets as well as on the evaluation metric. \textcolor{black}{ Here, we have chosen to benchmark the collected methods by estimating their accuracy in terms of} the root mean square error (RMSE) and the Pearson correlation coefficient ($r$) between experimental and predicted values for 830 mutations inserted in the 56-residue $\beta$1 extracellular domain of streptococcal protein G (PDB code 1PGA) \cite{nisthal2019protein}. \textcolor{black}{It has to be underlined that this set of mutations is not included in the training sets of the methods tested, and is thus a truly independent set.}
The RMSE of the predictors varies between 0.9 and 1.4 kcal/mol, and the correlation coefficients between 0.3 and 0.7, as shown in Table \ref{AWA}. We observe low correlation between these two metrics: the method with the worst RMSE (1.42 kcal/mol) has the best $r$ (0.66). This follows from the fact that Pearson correlation coefficients are essentially driven by the points that are far from the mean, in contrast to RMSE which takes all points equally into account.
\textcolor{black}{Note that these results must be interpreted with care. Indeed, both RMSE and $r$ values } depend on the distribution of experimental $\Delta\Delta G$s and more specifically, on its variance \cite{montanucci2019natural}. The ranking of the prediction methods and their scores thus crucially depend on the metric used and on the test $\Delta\Delta G$ distribution.
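For reference, the two metrics can be computed as follows; this is our own sketch (assuming NumPy and SciPy), and it makes apparent why the squaring in the RMSE and the centering in $r$ weight the tails of the $\Delta\Delta G$ distribution differently:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

def evaluate(pred, exp):
    """RMSE (kcal/mol) and Pearson r between predicted and
    experimental ddG values."""
    pred, exp = np.asarray(pred, float), np.asarray(exp, float)
    rmse = np.sqrt(np.mean((pred - exp) ** 2))
    r, _ = pearsonr(pred, exp)
    return rmse, r
\end{verbatim}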
\textcolor{black}{In addition, we also tested two other widely known stability predictors, FoldX \cite{delgado2019foldx} and Rosetta \cite{kellogg2011role}, which are physics-based rather than AI-based and employ full-atom representations rather than simplified descriptions of protein structures. These two methods reach reasonable correlations with $r$ values of 0.36 and 0.44, respectively, slightly lower than AI-based methods ($\langle r\rangle=0.48$). In contrast, their RMSE values are above 3 kcal/mol, which is much worse than the average RMSE of 1.02 kcal/mol of AI-based methods. The lesser performance of these two methods has already been observed \cite{kepp2015towards} and could be due to the use of detailed atomic representation which makes them sensitive to resolution defects.}
\section{ \textcolor{black}{Evolution of
predictor performance over time}}
We have analyzed the average performance of all the methods according to their year of development. We clearly see in Fig. \ref{time}.a that the average accuracy has not improved in the last 15 years, but basically remains constant, despite all efforts and the improved performances claimed by the authors of the newly published methods. This is strikingly different from the situation in the field of protein structure prediction, for example, which has experienced an impressive improvement during the same period \cite{alquraishi2021machine}.
Whether the accuracy limit
on predicted $\Delta \Delta G$s is due to the relatively low number of mutations in the training set, to more fundamental reasons, or to uncontrolled biases in the predictors is currently a topic of debate \cite{montanucci2019natural,caldararu2020systematic,sanavia2020limitations}. We discuss this issue more extensively in the next sections. It must again be noted that the RMSE threshold and the ranking of the methods' performance can be somewhat different on other test mutations \cite{kepp2015towards,fang2020critical,iqbal2021assessing}. However, the lower limit on RMSE is basically always around 1 kcal/mol.
It is instructive to
look at the correlations between the predictions of the different methods, shown in Fig. \ref{time}.b. They are all reasonably good, with an average correlation coefficient of 0.5. This reflects that the different methods use roughly the same information, but that there is room for improvement and for further boosting the prediction accuracy by selecting informative features that have not yet been combined.
Another important characteristic of a prediction method is its speed. Indeed, as many current projects require investigating protein stability properties at a large, proteome, scale \cite{schwersensky2020large}, the predictors have to be able to run fast enough to scan the proteome in a reasonable time. All the methods tested are relatively fast with some extremely fast such as PoPMuSiC, SimBa, MAESTRO and AUTOMUTE (see Table \ref{AWA}).
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{REVIEW04.pdf}
\centering
\caption{Evaluation of the $\Delta\Delta G$ prediction methods listed in Table \ref{AWA} on the basis of the experimentally characterized mutations in the $\beta$1-extracellular domain of streptococcal protein G \cite{nisthal2019protein}. (a) Average RMSE of the predictors as a function of their development date. (b) Correlation coefficients $r$ between the $\Delta\Delta G$s predicted by the different methods.}
\label{time}
\end{figure*}
\begin{table}[h!]
\tabcolsep=0.11cm
\centering
\scalebox{0.8}{
\begin{tabular}{ccccccc}
\hline
\rowcolor{LightCyan} Method & 3D & Feature & RMSE (kcal/mol) & Run time& AI & Ref. \\
\rowcolor{LightCyan} Year & & type & $r$ & (min) &method & \\\hline
\href{https://www.ics.uci.edu/~baldig/mutation.html}{MUpro} & & Neighbors
& 1.17 & $<$ 1 & Support vector & \cite{capriotti2005mutant2} \\
(2006) & & & $r$=0.26 & & regression & \\ \hline
\href{http://gpcr2.biocomp.unibo.it/cgi/predictors/I-Mutant3.0/I-Mutant3.0.cgi}{I-Mutant 3.0} &\Checkmark& Residue type, RSA,
& 0.92 & $\sim$ 400 & Support vector & \cite{capriotti2005mutant2} \\
(2007) & & Residue environment & $r$=0.38 & & Regression & \\ \hline
\href{www.dezyme.com}{PoPMuSiC v2.1} &\Checkmark& Statistical potentials, & 0.95 & $<$ 1 & Artificial neural & \cite{dehouck2011popmusic} \\
(2011) & &$\Delta$Vol, RSA & $r$=0.56 & & network & \\ \hline
\href{http://marid.bioc.cam.ac.uk/sdm2}{SDM} &\Checkmark& RSA, Environment-specific & 0.95 & $\sim$ 250 & Linear & \cite{worth2011sdm} \\
(2011) && substitution frequencies& $r$=0.46 && combination& \\ \hline
\href{http://biosig.unimelb.edu.au/mcsm}{mCSM} &\Checkmark& Graph-based
signatures, & 1.10 & $\sim$ 250 &Regression via & \cite{pires2014mcsm} \\
(2014) && Atomic distance patterns & $r$=0.44& & Gaussian process \\ \hline
\href{https://pbwww.services.came.sbg.ac.at/maestro/web}{MAESTRO} &\Checkmark& Statistical potentials,
& 0.91 &$<$ 1 & Linear regression + & \cite{laimer2015maestro} \\
(2014) && PSize, ASA, SS, $\Delta$Hyd, $\Delta$IP & $r$=0.58& & ANN + SVM &\\ \hline
\href{http://binf.gmu.edu/automute/}{AUTOMUTE 2.0} &\Checkmark& 4-Body statistical potential
& 1.16 &$\sim$ 1& Random forest, & \cite{masso2014auto} \\
(2014) && ASA, depth, SS, Vol & $r$=0.30&& Tree regression & \\ \hline
\href{https://inpsmd.biocomp.unibo.it/inpsSuite/default/index3D}{INPS-3D} &\Checkmark& Contact potential, RSA, EvolInfo, & 0.96 & $\sim$4 &Support vector & \cite{savojardo2016inps}\\ (2016) && Bl62, $\Delta$Hyd, $\Delta$MW, MutI & $r$=0.52 & & regression&\\ \hline
\href{https://zhanglab.ccmb.med.umich.edu/STRUM/}{STRUM} &\Checkmark& Energy functions, Homology modeling, & 1.05 & $\sim$200 & Gradient boosting & \cite{quan2016strum} \\
(2016) && $\Delta$Hyd, $\Delta$Vol, $\Delta$IP, $\Delta$MW,
EvolInfo& $r$=0.49 && regression & \\ \hline
PoPMuSiC$^{\rm{sym}}$ &\Checkmark& Statistical potentials & 0.98 & $<$ 1 & Artificial neural & \cite{pucci2018quantification} \\
(2018) && $\Delta$Vol, RSA & $r$=0.54 & & network & \\ \hline
\href{https://github.com/biofold/ddgun}{DDGun3D} &\Checkmark& BL62, $\Delta$Hyd, RSA, & 0.94 & $\sim$ 30 & Non-linear & \cite{montanucci2019ddgun} \\
(2019) && Statistical potentials & $r$=0.57 & & regression & \\ \hline
\href{http://protein.org.cn/ddg.html}{DeepDDG} &\Checkmark& ASA, SS, H-bonds, EvolInfo, & 1.42 &$\sim$ 5 & Shared residue pair & \cite{cao2019deepddg} \\
(2019) && Residue distances/orientations & $r$=0.66 & & deep neural network & \\ \hline
\href{https://github.com/gersteinlab/ThermoNet}{ThermoNet} & \Checkmark & Aromatic, Positive, Negative, & 1.01 & $\sim$ 100 & 3D convolutional & \cite{li2020predicting} \\
(2020) && Hyd, H-bond donor/acceptor& $r$=0.29 &
& neural network & \\ \hline
\href{https://lilab.jysw.suda.edu.cn/research/PremPS/}{PremPS} &\Checkmark& EvolInfo, RSA, $\Delta$Hyd, Hyd, & 0.95 & $\sim$4 & Random & \cite{chen2020premps} \\
(2020) && Aromatic, Charged, Leu& $r$=0.57 & &forest& \\ \hline
SimBa &\Checkmark& RSA, $\Delta$Vol & 0.99 & $<$ 1 & Linear & \cite{caldararu2021three} \\
(2021) && $\Delta$Hyd & $r$=0.53 && regression& \\ \hline
\href{http://compbio.clemson.edu/SAAFEC-SEQ/}{SAAFEC-SEQ} & & EvolInfo, Neighbors, $\Delta$Vol, & 0.91 & $\sim$ 30 & Gradient boosting & \cite{li2021saafec} \\
(2021) && $\Delta$Hyd, $\Delta$Flex, PSize, H-bond & $r$=0.49 && decision tree & \\ \hline
\rowcolor{LightCyan}
& &$\langle$\textbf{RMSE}$\rangle$ = & 1.02 $\pm$ 0.13 && & \\
\rowcolor{LightCyan} && $\langle$\bf{$r$}$\rangle$ = & 0.48 $\pm$ 0.12 && & \\
\rowcolor{LightCyan}&& $\bm\sigma${\bf (Exp)} = & 0.98 && & \\
\hline
\end{tabular}
}
\caption{List of \textcolor{black}{AI-based} $\Delta\Delta G$ predictors studied. The RMSE and linear correlation coefficient $r$ are computed for the experimentally characterized mutations in the $\beta$1-extracellular domain of streptococcal protein G \cite{nisthal2019protein}; $\sigma$(Exp) is the standard deviation of the experimental $\Delta\Delta G$ distribution (in kcal/mol). Abbreviations used: ASA: solvent accessible surface area; RSA: relative ASA; Depth: surface, undersurface, or buried; PSize: protein size; Vol: residue volume; $\Delta$Vol: change in residue volume upon mutation; $\Delta$MW: change in molecular weight; $\Delta$Flex: change in flexibility; Hyd: residue hydrophobicity; $\Delta$Hyd: change in Hyd; SS: secondary structure; MutI: mutability index of the native residue \cite{dayhoff1978mutability}; BL62: BLOSUM62 matrix \cite{henikoff1992blosum}; Neighbors: type of residues in the neighborhood along the sequence; EvolInfo: evolutionary information from protein families;
ANN: artificial neural network; SVM: support vector machine.}
\label{AWA}
\end{table}
\section{Limitations and prediction biases}
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{REVIEW03.pdf}
\centering
\caption{Schematic representation of the challenges and limitations that protein stability prediction methods have to address in the coming years.}
\label{MutaFrame}
\end{figure*}
The generalization property in machine learning is the ability of the algorithm to correctly predict unseen data. The protein stability predictors, like all machine learning-based methods, tend however to be biased towards the data sets on which they are trained. The majority of the methods analyzed here \cite{dehouck2009fast,pires2014mcsm,worth2011sdm,pucci2018quantification,savojardo2016inps,laimer2015maestro,caldararu2021three,chen2020premps,li2021saafec} were trained on the data set known as S2648 \cite{dehouck2009fast}. It contains 2,648 mutations with experimental $\Delta \Delta G$ values collected from the literature and the ProTherm database \cite{gromiha2010protherm}, which were thoroughly checked and curated. Other predictors use subsets of S2648 or a slightly larger data set known as Q3421 \cite{quan2016strum}.
Multiple hidden biases, such as feature and hyperparameter selection biases that are difficult to control, can affect the generalization properties of the predictors trained on these data sets. These problems are even more severe when complex algorithms are used or when the training sets are small and unbalanced.
In the following, we quantitatively analyze a series of biases that often affect stability predictors and are primarily caused by various imbalances in the training data sets, and discuss the strategies used to limit their impact.
\subsubsection*{\textbf{Cross-validation biases}}
Often, prediction performance is evaluated using a $k$-fold cross-validation procedure.
This is not always sufficient to estimate the accuracy of the methods, and assessments on independent test sets are therefore also provided, even though these sets are usually small.
Going back to cross-validation, there are different ways to perform the random split of the data set into $k$ folds: at the level of the mutation, position, protein and even protein cluster. Random splitting at mutation level introduces some distortions, since knowing the effect of a mutation at a given position makes the prediction of another substitution at the same position easier. Splitting at position level can also introduce some biases. To obtain more reliable estimations, cross-validation has to be performed at protein level, or even at protein cluster level, where all proteins that are similar to the target protein one wants to predict are removed from the training set.
It should be noted that the extent to which the type of data set splitting affects prediction performances is highly dependent on the prediction model. For example, the drop in performance of predictors that do not use complex machine learning like PoPMuSiC and SimBa is almost negligible when passing from residue level to protein level \cite{ancien2018SNP}.
In contrast, a substantial decrease in accuracy is undergone by STRUM,
with correlation coefficients and RMSE between experimental and predicted $\Delta \Delta G$s that
pass from (0.77, 0.94 kcal/mol) for a 5-fold cross validation at mutation level to (0.64, 1.14 kcal/mol) at position level and (0.54, 1.25 kcal/mol) at protein level \cite{quan2016strum}. A similar drop in performance of about 20-30\% when strict cross validation procedures are employed has also been observed in \cite{chen2020premps}.
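As an illustration, the following minimal Python sketch shows how a protein-level split can be enforced with scikit-learn's \texttt{GroupKFold}; the arrays \texttt{X}, \texttt{ddg} and \texttt{protein\_id} are hypothetical placeholders for a curated mutation table, and a generic regressor stands in for an actual stability predictor.
\begin{verbatim}
# Minimal sketch of protein-level cross-validation (placeholder data).
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(100, 5)                        # feature matrix (placeholder)
ddg = np.random.randn(100)                        # experimental ddG (placeholder)
protein_id = np.random.randint(0, 10, size=100)   # one protein id per mutation

rmses = []
for train, test in GroupKFold(n_splits=5).split(X, ddg, groups=protein_id):
    model = RandomForestRegressor().fit(X[train], ddg[train])
    pred = model.predict(X[test])
    rmses.append(np.sqrt(np.mean((pred - ddg[test]) ** 2)))
print(np.mean(rmses))   # protein-level estimate of the RMSE
\end{verbatim}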
\subsubsection*{\textbf{Bias towards destabilizing mutations}}
At fixed environmental conditions, the change in folding free energy upon mutation is antisymmetric by definition. More precisely, if protein $B$ is a mutant of protein $A$, we have that $\Delta \Delta G (A\rightarrow B) = - \Delta \Delta G (B\rightarrow A)$. However, the majority of the stability predictors violate this relation, as shown by a series of studies \cite{pucci2015symmetry,pucci2018quantification,usmanova2018self,caldararu2020systematic}. This is mainly because training data sets are dominated by destabilizing mutations, which in turn results from the vast majority of mutations in a given protein being destabilizing. For example, the ratio between the numbers of destabilizing and stabilizing mutations in the data sets S2648 \cite{dehouck2009fast} and Q3421 \cite{quan2016strum}, \textcolor{black}{which are widely used as training sets,} are equal to 3.7 and 3.2, respectively, with a mean $\langle \Delta \Delta G\rangle$ of about 1 kcal/mol in both sets.
Some recent prediction methods eliminate this bias and satisfy the antisymmetry property by construction \cite{chen2020premps,benevenuta2021antisymmetric,savojardo2016inps}. To check the extent to which this is the case, a balanced data set such as
$S^{\rm{sym}}$ \cite{pucci2018quantification} must be considered, which contains, for each mutation $A\rightarrow B$, the backward mutation $B\rightarrow A$ and thus an even number of stabilizing and destabilizing mutations. The deviation from antisymmetry $\delta=\Delta \Delta G (A\rightarrow B) + \Delta \Delta G (B\rightarrow A)$ is an important measure for the evaluation of the lack of bias.
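As a minimal illustration, the deviation $\delta$ can be computed as in the following Python sketch, where \texttt{predict} is a placeholder for any $\Delta\Delta G$ predictor and the forward/backward mutation lists are assumed to be paired as in $S^{\rm{sym}}$.
\begin{verbatim}
# Minimal sketch: quantify the antisymmetry bias on a symmetric data set.
import numpy as np

def antisymmetry_deviation(predict, fwd_mutations, bwd_mutations):
    d_fwd = np.array([predict(m) for m in fwd_mutations])
    d_bwd = np.array([predict(m) for m in bwd_mutations])
    delta = d_fwd + d_bwd   # exactly zero for a perfectly antisymmetric method
    return np.mean(delta), np.sqrt(np.mean(delta ** 2))
\end{verbatim}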
\subsubsection*{\textbf{Protein and mutation biases}}
Another type of bias arises from the fact that training data sets do not provide a good sampling of the types of mutations and proteins, as recently discussed in \cite{caldararu2020systematic}. Often, mutation data sets are dominated by a few proteins which contain most of the entries and are therefore likely to bias the prediction towards them. For example, the 10 proteins from S2648 and Q3421 that contain the largest number of mutations represent 50\% and 40\% of the entries, respectively.
The types of substitutions are also not well sampled: among the 20$\times$19=380 possible amino acid substitutions, 78 and 38 are not sampled at all in S2648 and Q3421, respectively. The 10 most frequent substitution types are all substitutions into alanine, and together account for 25\% of the entries in the data sets.
The way in which different methods are affected by this bias is extensively evaluated in \cite{caldararu2020systematic} by introducing an unbiased test set with respect to mutation types.
The majority of the prediction methods are shown to be biased. They are able to correctly predict the effect of certain types of
mutations, while they completely miss others.
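The degree of under-sampling of substitution types is straightforward to quantify, as in the following sketch, where \texttt{mutations} is a hypothetical list of (wild-type, mutant) one-letter amino acid codes.
\begin{verbatim}
# Minimal sketch: coverage of the 380 possible substitution types.
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"

def substitution_coverage(mutations):
    counts = Counter(mutations)                              # (wild, mutant) tuples
    all_types = [(w, m) for w in AA for m in AA if w != m]   # 380 types
    missing = [t for t in all_types if counts[t] == 0]
    return len(missing), counts.most_common(10)
\end{verbatim}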
\section{Current and future challenges}
\subsection*{\textbf{Deep learning approaches}}
Deep learning algorithms such as convolutional neural networks have provided spectacular improvements in a series of bioinformatics problems such as protein structure prediction \cite{alquraishi2021machine}. Such methods are starting to be used in the prediction of the impact of mutations on protein stability \cite{li2020predicting,cao2019deepddg,zhoudnpro,benevenuta2021antisymmetric}, but the majority of the current methods still use standard shallow machine learning approaches. This is due to the fact that deep learning methods require large amounts of input data for training \cite{lecun2015deep}, while standard training data sets such as S2648 \cite{dehouck2011popmusic} or Q3421 \cite{quan2016strum} only include a few thousand entries and are thus too small for these approaches. New mutation data have recently been collected \cite{nikam2021prothermdb,xavier2021thermomutdb,stourac2021fireprotdb}, which will certainly increase the size of the training data sets after proper curation. However, these sets will probably remain too limited, with the consequence that deep learning is unlikely to outperform standard machine learning approaches without overfitting issues in the near future, even though unsupervised pre-training can help prevent these issues to some extent \cite{lecun2015deep,benevenuta2021antisymmetric}.
\subsection*{\textbf{Prediction model complexity and interpretability}}
The application of a wide variety of AI algorithms with different complexity to the prediction of protein stability is very informative. These algorithms range from deep learning approaches such as 3D convolutional neural networks \cite{li2020predicting} to extremely simple models such as linear regression \cite{caldararu2021three}.
Complex algorithms can capture the intricate relationships between input features and training data better than simpler models, but they are in general more prone to overfitting. Moreover, most of them act as black boxes, which makes their results more difficult to interpret.
Note that both over- and underfitting are serious problems for generalization. Therefore, the development of a prediction model must be a trade-off between these two extremes. We would like to point out that the best current methods are not always those that use the most complex AI techniques (see Table \ref{AWA}).
The interpretability of the model at biophysical and biochemical levels can be another characteristic to be considered in the model design. For example, it has been shown in \cite{caldararu2021three} that just three simple features, \emph{i.e.} the RSA of the mutated residue and the changes in residue volume and hydrophobicity upon mutation, combined using a linear model, can achieve performances similar to state-of-the-art prediction methods that use up to a hundred features and complex machine learning.
Novel techniques for interpreting model predictions, such as SHAP (SHapley Additive exPlanations) \cite{lundberg2017unified}, have recently been introduced in the AI field. Their application to protein stability predictors helps to identify the relative importance of features and can lead to more accurate prediction models that retain interpretability.
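As an illustration only, the following sketch applies the \texttt{shap} package to a toy tree-based $\Delta\Delta G$ regressor trained on synthetic data; the three features mimic RSA, $\Delta$Vol and $\Delta$Hyd but carry no biophysical meaning.
\begin{verbatim}
# Minimal SHAP sketch on a synthetic ddG regressor (toy data only).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

X = np.random.rand(200, 3)                          # toy RSA, dVol, dHyd
ddg = X @ np.array([1.0, -0.5, 2.0]) + 0.1 * np.random.randn(200)
model = RandomForestRegressor().fit(X, ddg)

explainer = shap.TreeExplainer(model)               # tree-specific explainer
shap_values = explainer.shap_values(X)              # per-mutation attributions
print(np.abs(shap_values).mean(axis=0))             # global feature importance
\end{verbatim}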
\subsection*{\textbf{Are we stuck with the limit of 1 kcal/mol RMSE?}}
Surprisingly enough, all the methods developed over the past fifteen years have an accuracy, evaluated by the RMSE, slightly greater than 1 kcal/mol, while most validations on independent test sets give even worse results, with RMSEs between 1.5 and 2.5 kcal/mol \cite{fang2020critical,iqbal2021assessing}. On the test protein we used here, the situation is somewhat more favorable, with a lower value of 0.9 kcal/mol (Table \ref{AWA}); this is, however, related to the particularly low standard deviation of the experimental $\Delta\Delta G$ distribution in this case (1 kcal/mol).
The idea that 1 kcal/mol represents a hard limit for the prediction accuracy has already been suggested in \cite{caldararu2020systematic}.
Several reasons can explain this limit. First, all the predictors are based on a series of approximations, such as using the wild-type structure but not the mutant structure. They thus neglect the possible structural modifications caused by the mutations to the folded structure and, moreover, also overlook perturbations to the unfolded state \cite{caldararu2020systematic}.
In addition, entropy contributions to the folding free energy are largely overlooked, even though the methods based on statistical potentials of mean force do not neglect them completely.
Another reason comes from the intrinsic errors on experimental $\Delta \Delta G$ values. In particular, both thermal and chemical measurements of $\Delta \Delta G$ generally involve approximations \cite{pucci2016high}. In addition, all the $\Delta \Delta G$ values in the data sets have not been determined under the same conditions, and the dependence of $\Delta \Delta G$ on, \emph{e.g.}, temperature or pH can be important.
Whether the value of 1 kcal/mol is a true limit that cannot be circumvented, as suggested in \cite{montanucci2019natural,benevenuta2019upper} through a theoretical estimation of the experimental $\Delta \Delta G$ distribution and noise, is an open question. Our observation that the methods' performance does not increase with time (Fig.~\ref{time}a) supports this view. This question must be further investigated to understand if and how the current state-of-the-art predictors can be significantly improved.
To address these issues, a systematic blinded experiment fully dedicated to the evaluation of protein stability changes upon mutations would be of great benefit, in the same way that CASP (\href{https://predictioncenter.org/}{predictioncenter.org}) and CAPRI (\href{www.capri-docking.org}{capri-docking.org}) are for structure predictions.
\subsection*{\textbf{Metagenomic data}}
Metagenomic sequence data is a valuable source of sequence information that has been used in protein structure prediction since the seminal paper of \cite{ovchinnikov2017protein}, and is now also extensively used in enzyme discovery \cite{D1NP00006C}. For example, the majority of methods used such information as input in the last round of the CASP experiment (CASP14) \cite{ovchinnikov2017protein}.
Indeed, the enrichment of sequence data from metagenomic databases, even though they are often noisy, can improve protein sequence alignments and thus provide a more accurate assessment of how evolution shapes families of homologous proteins.
Metagenomic sequence data is not yet used in the field of protein stability prediction, not even by the methods that include sequence conservation among their features. This could be a way to boost the prediction accuracy.
\subsection*{\textbf{Multiple mutations versus single-point mutations}}
Another challenge is to predict the effect of multiple mutations. It is of particular interest in protein design because multiple mutations can clearly lead to a higher degree of protein stabilization or destabilization \cite{campeotto2017one,musil2017fireprot}. Yet, the vast majority of computational methods predict only the effect of single-site substitutions \cite{sanavia2020limitations}. Point mutations can of course be combined to model multiple mutations, but this leads to neglecting any direct or indirect epistatic interactions between mutated residues \cite{schmiedel2019determining,rollins2019inferring}.
The scarcity of experimental data on multiple mutations in a variety of proteins, together with their higher degree of complexity compared to point mutations, are the current limitations that prevent obtaining satisfactory prediction accuracy.
\section*{Supplementary Material}
\section{Full Gaussian process priors and automated inference \label{sec:full-gp}}
\citet{nguyen-bonilla-nips-2014} developed an automated variational
inference framework for a class of models with Gaussian process priors
and generic i.i.d\xspace likelihoods. Although such an approach is an important step
towards black-box inference with \name{gp} priors, assuming i.i.d\xspace observations
is, by definition, unsuitable for structured models.
One way to generalize such an approach to structured models of the types
described in \S \ref{sec:linear-chain} is to differentiate between \name{gp} priors over
latent functions on unary nodes and \name{gp} priors over
latent functions on pairwise nodes. More importantly,
rather than considering i.i.d\xspace likelihoods over all observations, we assume likelihoods that factorize
over sequences, while allowing for statistical dependences within a sequence.
Therefore, our prior model for linear chain structures is given by:
\begin{align}
\label{eq:full-prior}
p(\f) = p(\f_{\un}) p(\f_{\bin}) = \left( \prod_{j=1}^{\Vsize} \Normal(\f_{\un \cdot j}; \vec{0}, \mat{K}_j) \right) \Normal(\f_{\bin}; \vec{0}, \K_{\bin}) \text{,}
\end{align}
where $\f$ is the vector of all latent function values of unary nodes $\f_{\un}$ and the function values of pairwise nodes $\f_{\bin}$.
Accordingly, $\f_{\un \cdot j}$ is the vector of unary functions of latent process $j$, corresponding
to the $j\mth$ label in the vocabulary, which is drawn from a zero-mean \name{gp} with
covariance function $\kernel_j(\cdot, \cdot; \vecS{\theta}_j)$. This covariance function, when evaluated
at all the input pairs in $\{\indexdata{\x}{n}\}$, induces the
$\n \times \n$ covariance matrix $\mat{K}_j$, where $\n = \sum_{n=1}^\nseq \tn$ is the total
number of observations.
Similarly, $\f_{\bin}$ is a zero-mean $\Vsize^2$-dimensional Gaussian random variable with
covariance matrix given by $\K_{\bin}$. We note here that while the unary functions
are draws from a \name{gp} indexed by $\mat{X}$, the distribution over pairwise functions is a finite
Gaussian (not indexed by $\mat{X}$).
Given the latent function values, our conditional likelihood is defined by:
\begin{align}
\label{eq:likelihood}
p(\y | \f) = \prod_{n=1}^\nseq p(\yn | \fn) \text{,}
\end{align}
where, omitting the dependency on the input $\X$ for simplicity, each individual
conditional likelihood term is computed using a valid likelihood function for sequential
data such as that defined by the structured softmax function in Equation \eqref{eq:struct-softmax};
$\yn$ denotes the labels of sequence $\yn$; and
$\fn$ is the corresponding vector of latent (unaries and pairwise) function values.
We now have all the necessary definitions to state our first result.
\newtheorem{theorem1}{Theorem}
\begin{theorem1}
The model class defined by the prior in Equation \eqref{eq:full-prior} and
the likelihood in Equation \eqref{eq:likelihood} contains the structured
\name{gp} model proposed by \citet{bratieres-et-al-tpami-2015}.
\end{theorem1}
The proof of this is trivial and can be done by (i) setting all the
covariance functions of the unary latent process ($\kappa_j$) to be the same;
(ii) making $\K_{\bin} = \mat{I}$; and (iii) using the structured softmax function in
Equation \eqref{eq:struct-softmax} as each of the individual terms $p(\yn | \fn)$
in Equation \eqref{eq:likelihood}. This yields exactly the same model as
specified by \citet{bratieres-et-al-tpami-2015}, with
prior covariance matrix with block-diagonal structure described
in \S \ref{sec:linear-chain} above.
\QEDA
The practical consequences of the above theorem is that we can now leverage
the results of \citet{nguyen-bonilla-nips-2014} in order to develop a variational inference (\name{vi}) framework
for structured \name{gp} models that can be carried out without knowing the details of the
conditional likelihood. Furthermore, as we shall see in the next section, in order to deal
with the intractable nonlinear expectations inherent to \name{vi}, the proposed method only requires expectations
over low-dimensional Gaussian distributions.
\subsection{Automated variational inference}
In this section we develop a method for estimating the posterior over the latent
functions given the prior and likelihood models defined in Equations \eqref{eq:full-prior}
and \eqref{eq:likelihood}. Since the posterior is analytically intractable and the
prior involves a large number of coupled latent variables, we resort to approximations
given by variational inference \citep[\name{vi}; ][]{jordan-etl-al-book-1998}. To this end, we start by
defining our variational approximate posterior distribution:
\begin{align}
\label{eq:q-full-init}
q(\f) &= q(\fun) q(\fbin)
\text{,} \quad \text{ with } \\
q(\fun) &= \sum_{k=1}^K \pi_k q_k(\f_{\un} | \qfmean{k}, \qfcov{k}) = \sum_{k=1}^K \pi_k \prod_{j=1}^{\Vsize} \Normal(\f_{\un \cdot j}; \qfmean{kj}, \qfcov{kj})
\quad \text{ and } \\
\label{eq:q-full-end}
q(\fbin) & = \Normal(\f_{\bin}; \postmean{\bin}, \postcov{\bin}) \text{,}
\end{align}
where $q(\fun)$ and $q(\fbin)$ are the approximate posteriors over
the unary and pairwise nodes respectively;
each $q_k(\f_{\un \cdot j})= \Normal(\f_{\un \cdot j}; \qfmean{kj}, \qfcov{kj})$ is a $\n$-dimensional full Gaussian distribution; and
$q(\fbin)$ is a $\Vsize^2$-dimensional Gaussian.
In order to estimate the parameters of the above distribution, variational inference entails the
optimization of the so-called evidence lower bound ($\calL_{\text{elbo}}$), which can be shown to be a lower bound
of the true marginal likelihood, and is composed of a KL-divergence term ($\calL_{\text{kl}}$), between the
approximate posterior and the prior, and an expected log likelihood term ($\calL_{\text{ell}}$):
\begin{equation}
\label{eq:elbo-full}
\calL_{\text{elbo}} = - \kl{q(\f)}{p(\f)} + \Eb{\log p(\y | \f) }_{q(\f)} \text{,}
\end{equation}
where the angular bracket notation $\Eb{\cdot}_{q}$ indicates an expectation over the distribution $q$. Although
the approximate posterior is an $\n$-dimensional distribution,
the expected log likelihood term can be estimated efficiently using expectations over much lower-dimensional
Gaussians.
\begin{theorem1}
\label{th:efficient-full}
For the structured \name{gp} model defined in Equations \eqref{eq:full-prior} and
\eqref{eq:likelihood},
the expected log likelihood over the variational distribution defined in
Equations \eqref{eq:q-full-init} to
\eqref{eq:q-full-end}
and its gradients can
be estimated using expectations over $\tn$-dimensional Gaussians and
$\Vsize^2$-dimensional Gaussians, where
$\tn$ is the length of each sequence and $\Vsize$ is the vocabulary size.
\end{theorem1}
The proof is constructive and can be found in the supplementary material. Here we
state the final result on how to compute these estimates:
\begin{align}
\label{eq:estimates-full-begin}
\calL_{\text{ell}} &= \sum_{n=1}^{\nseq} \sum_{k=1}^{K} \pi_k \Eb{\log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,} \\
\gradient_{\bs{\lambda}_{k}^{\un}} \elltermkn &= \Eb{\gradient_{\bs{\lambda}_{k}^{\un}} \log q_{k(n)}(\funn) \log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,} \\
\label{eq:estimates-full-end}
\gradient_{\bs{\lambda}_{\bin}} \elltermkn &= \Eb{\gradient_{\bs{\lambda}_{\bin}} \log q(\fbin) \log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,}
\end{align}
where $ q_{k(n)}(\funn)$ is a $(\tn \times \Vsize)$-dimensional Gaussian with block-diagonal covariance $ \qfcov{k(n)}$, each
block of size $\tn \times \tn$. Therefore, we can estimate the above term by sampling from
$\tn$-dimensional Gaussians independently. Furthermore, $q(\fbin)$ is a $\Vsize^2$-dimensional Gaussian, which
can also be sampled independently. In practice, we can
assume that the covariance of $q(\fbin)$ is diagonal and we only sample from univariate Gaussians for the
pairwise functions.
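A minimal \texttt{numpy} sketch of this sampling scheme is as follows; the arguments are placeholders for the per-process posterior means and covariances of sequence $n$ and for the (diagonal) posterior over the pairwise functions.
\begin{verbatim}
# Minimal sketch: sampling the latent functions of one sequence, per
# Theorem 2 -- T_n-dimensional Gaussians for each of the V unary
# processes, univariate Gaussians for the pairwise functions.
import numpy as np

def sample_latents(b_kjn, S_kjn, m_bin, s_bin_diag):
    # b_kjn, S_kjn: lists over j of (T_n,) means and (T_n, T_n) covariances
    f_un = [np.random.multivariate_normal(b, S) for b, S in zip(b_kjn, S_kjn)]
    f_bin = m_bin + np.sqrt(s_bin_diag) * np.random.randn(len(m_bin))
    return np.stack(f_un), f_bin        # shapes (V, T_n) and (V*V,)
\end{verbatim}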
It is important to emphasize the practical consequences of Theorem \ref{th:efficient-full}. Although we
have a fully correlated prior and a fully correlated approximate posterior over $\n = \sum_{n=1}^{\nseq} \tn$
unary function values, yielding full $\n$-dimensional covariances, we have shown that for these classes of models
we can estimate $\calL_{\text{ell}}$ by only using expectations over $\tn$-dimensional Gaussians. We refer
to this result as that of \emph{statistical efficiency} of the inference algorithm.
Nevertheless, even when having only one latent function and
using a single Gaussian approximation ($K=1$), optimization
of the $\calL_{\text{elbo}}$ in Equation \eqref{eq:elbo-full} is completely impractical for any realistic dataset concerned
with structured prediction problems,
due to its high memory requirements $\calO(\n^2)$ and time complexity $\calO(\n^3)$.
In the following section we
will use a sparse \name{gp} approach within our variational framework in order to develop a practical algorithm
for structured prediction.
\section{Conclusion \& discussion}
We have presented a Bayesian structured prediction model with
\name{gp} priors and linear-chain likelihoods. We have
developed an automated variational inference algorithm that is statistically efficient in
that only requires expectations over very low-dimensional Gaussians in order to
estimate the expected likelihood term in the variational objective. We have exploited
these types of theoretical insights as well as practical statistical and optimization
tricks to make our inference framework scalable and effective.
Our model generalizes recent advances in
\name{crf}{s} \citep{koltun2011efficient} by allowing
general positive-definite kernels to define their energy functions, and opens new directions for combining
deep learning with structured models \citep{zheng2015conditional}.
As mentioned in the introduction, for general structured prediction problems
one may need to set up the configuration of
the latent functions (e.g.~the
unary and pairwise functions in the linear-chain case). Thus, the process of developing an inference procedure for a different structure (e.g.~when going from linear chains to skip-chains) requires some human intervention.
Nevertheless, when applied to fixed structures
our approach is entirely ``black box'' with respect to the choice of likelihood, inasmuch as different likelihoods can be used without any manual change to the inference engine.
Furthermore, we have already seen in our small-scale experiments a possible
way to extend our method to more general structured likelihoods,
where the exact likelihood is replaced by
a piecewise pseudo-likelihood.
Such an approach might be considered for using our
framework in models such as grids or skip-chains, for which the evaluation of the true structured likelihood would be intractable.
The performance of our small-scale experiments in which the true likelihood was approximated by its pseudo-likelihood was very encouraging and
we leave a more
in-depth investigation of the efficacy of this approach for future work.
We also leave to future work the challenging task of automating the very procedure that turns a structured specification into a likelihood-agnostic inference procedure.
Overall, we believe our approach is a fundamental step to developing automated inference methods
for general structured prediction problems.
\section{Experiments \label{sec:experiments}}
For comparison purposes, we used the same benchmark dataset suite as that used by \citet{bratieres-et-al-tpami-2015},
which targets several standard \name{nlp} problems and is part of the \name{crf}{++} toolbox\footnote{This was developed by Taku Kudo and
can be found at \url{https://taku910.github.io/crfpp/}.}.
This includes
noun phrase
identification (\name{base np}); chunking, i.e.~shallow parsing, which labels sentence constituents (\name{chunking});
identification of word segments in sequences of Chinese ideograms (\name{segmentation});
and Japanese named entity recognition (\name{japanese ne}). As we will see,
on these tasks our approach is on par with
competitive benchmarks which,
unlike our method, exploit the structure of the likelihood.
For more details of these datasets and
the experimental set-up for reproducibility of the results see the supplementary material.
\subsection{Small-scale experiments}
\input{table-small-expts}
Table \ref{tab:error-small} shows the error rates on the small
experiments across the different datasets considered. Overall,
we observe that our method in batch mode (\name{gp-var-b}) is consistently
better than \name{svm} and compares favorably with \name{crf}. When compared
to \name{gp-ess}, both versions of our method, the batch and the stochastic,
also have similar performance with the notable exception of \name{gp-var-s} on
\name{chunking}. However, we do note that \name{gp-var-s} has the smallest standard deviation among all compared
methods over all datasets. We credit this desirable property to the combined use of two variance-reduction techniques (\name{saga} together with standard control variates), as well as to the conservative learning rates chosen for these tests.
From these results we can conclude that, despite not knowing the details of
the conditional likelihood, our method is very competitive with other methods
that exploit this knowledge and has similar performance to \name{gp-ess}.
\subsubsection{Accelerating inference with
a piecewise pseudo-likelihood}
In order to demonstrate the flexibility of our approach, we also tested the performance of our framework
when the true likelihood is approximated by a piecewise pseudo-likelihood \citep{SutMcC07} that only takes in consideration the
local interactions within a single factor between the variables in our model.
We emphasize that this change did not require
any modification to our inference engine
and we simply used this pseudo-likelihood
as a drop-in replacement for the exact
likelihood.
As we can see from the results in
Table \ref{tab:error-small} (\name{gp-var-p}),
the performance of our model under this regime is comparable to the one for \name{gp-var-s}.
Furthermore,
every step of stochastic optimization ran roughly twice as fast in \name{gp-var-p} as in \name{gp-var-s}. This speed-up reflects the fact that, for a linear-chain structure, the forward-backward computation required by the exact likelihood is quadratic in the label cardinality, while the cost of the piecewise pseudo-likelihood is linear.
Such an approach might be considered for extending our
framework to models such as grids or skip-chains, for which the evaluation of the true structured likelihood would be intractable.
Alternatively, a structured mean field approximation using tractable approximating families of sub-graphs (linear chains, for instance) might be used for the same purpose.
\subsection{Larger-scale experiments}
Here we report the results on an experiment that used the largest dataset in our
benchmark suite (\name{base np}). For this dataset we used a five-fold cross-validation
setting and $\nseq= 500$ training sequences. This amounts to
roughly $11,611$ words on average. For testing we used the remaining ($323$) sequences.
In this setting \name{gp-ess} is completely impractical. We compare the results of
our model with \name{crf}, which from our previous experiment was the most competitive
baseline.
Unlike the small experiments where the regularization parameter was learned through
cross-validation, because of the large execution times, here we report the error rates
for two values of this parameter $\lambda_\text{reg} \in \{0.1, 1\}$, where we obtained
$5.13\%$ and $4.50\%$ respectively. Our model (\name{gp-var-s}) attained an error rate of
$5.14\%$, which is comparable to \name{crf}'s performance. As in the small experiments, we
conclude that our model, despite not knowing the details of the likelihood, performs
on par with methods that were hard-coded for these types of likelihoods.
See the supplementary material for more analysis.
\section{Introduction}
Developing automated inference methods for complex probabilistic models
has become arguably one of the most exciting areas of research in machine learning,
with notable examples in the probabilistic programming community given by
\name{stan} \citep{hoffman-gelman-jmlr-2014} and \name{church} \citep{goodman-et-al-2008}.
One of the main challenges for these types of approaches is to formulate expressive
probabilistic models and develop generic yet efficient inference methods for them.
From a variational inference perspective, one particular approach that has
addressed such a challenge is the black-box variational inference framework
of \citet{ranganath-et-al-aistats-2014}.
While the works of \citet{hoffman-gelman-jmlr-2014} and \citet{ranganath-et-al-aistats-2014}
have been successful with a wide range of priors and likelihoods,
their direct application to models with Gaussian process (\name{gp}) priors is cumbersome,
mainly due to the large number of highly coupled latent variables in such models.
In this regard, very recent work has investigated automated inference methods
for general likelihood models when the prior is given by a sparse Gaussian process
\citep{hensman-et-al-nips-2015, dezfouli-bonilla-nips-2015}.
While these advances have opened up opportunities for applying \name{gp}-based models
well beyond regression and classification settings, they have focused
on models with i.i.d\xspace observations and, therefore, are
unsuitable for addressing the more challenging task of \emph{structured prediction}.
Structured prediction refers to the problem where there are interdependencies
between the outputs and it is necessary to model these dependencies explicitly.
Common examples are found in natural
language processing (\name{nlp}) tasks, computer vision
and bioinformatics. By definition, observation models in these problems are not i.i.d\xspace
and standard learning frameworks have been extended to consider the
constraints imposed by structured prediction tasks. Popular
structured prediction frameworks are
conditional random fields \citep[\name{crf}{s};][]{lafferty-et-al-icml-2001},
maximum margin Markov networks \citep{taskar-et-al-nips-2004} and
structured support vector machines \citep[\name{svm}-struct,][]{tsochantaridis-et-al-jmlr-2005}.
From a non-parametric Bayesian modeling perspective, in general, and from a \name{gp} modeling
perspective, in particular, structured prediction problems present incredibly
hard inference challenges because of the rapid explosion of the number of latent variables
with the size of the problem. Furthermore, structured likelihood functions are usually
very expensive to compute. In an attempt to build non-parametric Bayesian approaches
to structured prediction, \citet{bratieres-et-al-tpami-2015} have proposed
a framework based on a \name{crf}-type modeling approach with
\name{gp}{s}, and use elliptical slice sampling
\citep[\name{ess};][]{murray-eyt-al-aistats-2010} as part of their inference method.
Unfortunately, although their method can be applied to linear chain
structures in a generic way without considering the details of the likelihood model,
it is not scalable as it involves sampling from the full \name{gp} prior.
In this paper we present an approach for automated inference
in structured \name{gp} models with linear chain likelihoods that builds upon
the structured \name{gp} model of \citet{bratieres-et-al-tpami-2015} and the
sparse variational frameworks of \citet{hensman-et-al-nips-2015}
and \citet{dezfouli-bonilla-nips-2015}.
In particular, we show that the model of \citet{bratieres-et-al-tpami-2015} can
be mapped onto a generalization of the automated inference framework
of \citet{dezfouli-bonilla-nips-2015}.
Unlike the work of \citet{bratieres-et-al-tpami-2015},
by introducing sparse \name{gp} priors in structured prediction models, our approach
is scalable to a large number of observations. More importantly,
this approach is also generic in that it does not need to know the details
of the likelihood model
in order to carry out posterior inference.
Finally, we show that our inference method is statistically efficient as, despite
having a Gaussian process prior over a large number of latent functions,
it only requires expectations over low-dimensional Gaussian distributions in order
to carry out posterior approximation.
Our experiments on a set of \name{nlp} tasks, including noun phrase
identification, chunking, segmentation, and named entity recognition,
show that our method can be as good as (and
sometimes better than) hard-coded approaches including \name{svm}-struct and \name{crf}{s}, and overcomes the
scalability limitations of previous inference algorithms based on
sampling.
We refer to our approach as ``gray-box'' inference
since, in principle, for general structured prediction problems it may require some human
intervention. Nevertheless, when applied
to fixed structures, our proposed inference method
is entirely ``black box''.
\section{Learning}
We learn the parameters of our model, i.e.~the parameters of our approximate variational posterior as
well as the hyperparameters ($\{ \bs{\lambda}, \vecS{\theta} \}$), through gradient-based optimization of
the variational objective ($\calL_{\text{elbo}}$). One of the main advantages of our method is the decomposition
of the $\calL_{\text{ell}}$ in Equation \eqref{eq:ellhat} and its gradients as a sum of expectations of the individual
likelihood terms for each sequence. This result enables us to use parallel computation and
stochastic optimization in order to make our algorithms useful in practice.
Therefore, we consider batch optimization for small-scale problems (exploiting parallel computation) and
stochastic optimization techniques for larger problems. Nevertheless, from a statistical perspective, learning
in both settings is still hard due to the noise introduced by the empirical expectations (in both the
batch and the stochastic setting) and the noisy gradients when using stochastic learning
frameworks such as stochastic gradient descent (\name{sgd}). In order to address these issues, we
use variance reduction techniques such as control variates in the batch case. In the stochastic
setting, in addition to standard control variates used in sampling methods and
some stochastic variational frameworks \citep{ranganath-et-al-aistats-2014}, we use
the recently developed \name{saga} method for optimization. We describe
in section \ref{sec:var-reduction} why these two
approaches, standard control variates and \name{saga}, are complementary and should improve
learning in our method.
\paragraph{Computational complexity}
The time-complexity of our stochastic optimization is dominated by the
computation of the posterior's entropy, Gaussian sampling, and running the forward-backward
algorithm, which yields an overall
cost of $O(\m^3 + \tn^3 + S \tn|\V|^2)$.
The space complexity is dominated by
storing inducing-point covariances,
which is $O(\m^2)$. To put this in the perspective of other available methods,
the existing Bayesian structured model with
\name{ess} sampling \citep{bratieres-et-al-tpami-2015} has time and memory complexity of $O(N^3)$ and $O(N^2)$ respectively, where $N$ is the total number of observations (e.g.~words). \name{crf}'s time and space complexity with stochastic optimization depends on the feature dimensionality,
i.e.~it is $O(D)$. The actual running time of \name{crf} also depends on the cost of model selection via a cross-validation procedure. \name{ess} sampling makes the method of \citet{bratieres-et-al-tpami-2015} completely unfeasible for large datasets and \name{crf} has high running times for problems with high dimensions and many hyperparameters. Our work aims to make Bayesian structured prediction practical for large datasets, while being able to use
infinite-dimensional feature spaces as well as
sidestepping a costly cross-validation procedure.
\subsection{Variance reduction techniques
\label{sec:var-reduction}}
Our goal is to approximate an expectation of a function $g(\f)$
over the random variable
$\f$ that follows a distribution $q(\f)$, i.e. $\mathbb{E}_q[g(\f)]$ via Monte Carlo samples.
The simplest way to reduce the variance of the empirical estimator $\bar{g}$ is to
subtract from $g(\f)$ another function $h(\f)$ that is highly correlated with $g(\f)$.
That is, the function $\tilde{g}(\f):= g(\f) - \hat{a} h(\f)$ will have the
same expectation as $g(\f)$ i.e. $\mathbb{E}_q[\tilde{g}] = \mathbb{E}_q[g]$,
provided that $\mathbb{E}_q[h] = 0$ \footnote{We note that,
in general, to ensure unbiasedness, $\mathbb{E}_q[h]$,
if easily and efficiently computable, can be subtracted from $h$ to form an estimator
$\tilde{g}:= g - h + \mathbb{E}_q[h]$.}.
More importantly, as the variance of the new function is
$\text{Var}[\tilde{g}] = \text{Var}[g] + \hat{a}^2\text{Var}[h] - 2 \hat{a}\text{Cov}[g, h]$,
our problem boils down to finding suitable $\hat{a}$ and $h$ so as to minimize
$\text{Var}[\tilde{g}]$.
The following two techniques are based on this simple principle and their main
difference lies upon the distribution over which we want to reduce the variance.
\paragraph{Standard control variates for reducing the variance w.r.t.~the variational distribution.}
Here $q(\f)$ is the variational distribution and $g(\f) =\gradient_{\lambda} \log q(\f) \log p(\yn | \fn)$
(see supplementary material). Previous work \citep{ranganath-et-al-aistats-2014,dezfouli-bonilla-nips-2015} has found that a suitable correction term
is given by $h(\f) =\gradient_{\lambda} \log q(\f)$, which has expectation zero. Given this, the optimal $\hat{a}$ can
be computed as $\hat{a} = {\text{Cov}[g,h]}/{\text{Var}[h]}$.
The use of control variates is essential
for the effectiveness of our framework. For example, in our experiments
described in \S \ref{sec:experiments} we have found
that, in the batch setting, their use reduces the error rate for the Japanese named entity recognition task from about 46\% to around 5\%.
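For concreteness, a minimal sketch of this correction is given below, where \texttt{g} and \texttt{h} are hypothetical arrays holding Monte Carlo samples of $\gradient_{\lambda} \log q(\f) \log p(\yn | \fn)$ and of the zero-mean score $\gradient_{\lambda} \log q(\f)$, respectively.
\begin{verbatim}
# Minimal sketch of the standard control-variate correction.
import numpy as np

def control_variate_estimate(g, h):
    # g, h: arrays of shape (S,) with S Monte Carlo samples
    a_hat = np.cov(g, h)[0, 1] / np.var(h)   # optimal coefficient
    return np.mean(g - a_hat * h)            # same mean, lower variance
\end{verbatim}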
\paragraph{SAGA for reducing the variance w.r.t.~the data distribution.}
The fast incremental gradient method (\name{saga}) has been recently proposed as a better alternative to
existing stochastic optimization algorithms. Here the $q$ distribution we want to reduce the variance over is
the data distribution $p(\mat{X},\mat{Y})$; $g(\f)$ is the per-sample gradient direction;
and $h(\f)$ is the past stored gradient direction at the same sample point.
Since the expectation of the past stored gradient will be non-zero,
\name{saga} \citep{defazio2014saga} uses the general estimator $\tilde{g}(\f) := g(\f) - h(\f) + \mathbb{E}_q[h(\f)]$.
The quantity $\mathbb{E}_q[h(\f)]$ is an average over past gradients. We note that, crucially
for our model, this average can be cached instead of re-calculated at each iteration.
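The following sketch illustrates a single \name{saga} update under these assumptions; \texttt{grad\_n} is a placeholder for the per-sequence gradient, and the gradient table and its running average are cached across iterations.
\begin{verbatim}
# Minimal sketch of one SAGA step (per-sequence gradients).
def saga_step(params, n, grad_n, table, table_avg, lr):
    g = grad_n(params, n)                      # fresh gradient at sequence n
    update = g - table[n] + table_avg          # variance-reduced direction
    table_avg += (g - table[n]) / len(table)   # refresh cached average
    table[n] = g                               # store the new gradient
    return params - lr * update
\end{verbatim}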
\section{Related work}
Recent advances in sparse \name{gp} models for regression
\citep{titsias-aistats-2009,hensmangaussian} have allowed the applicability of such models
to very large datasets, opening opportunities for the extension of these
ideas to classification and to problems with generic i.i.d\xspace likelihoods
\citep{hensman-et-al-aistats-2015, nguyen-bonilla-nips-2014, dezfouli-bonilla-nips-2015,hensman-et-al-nips-2015}.
However, none of these approaches is actually applicable to structured prediction problems, which inherently deal with
non-i.i.d\xspace likelihoods.
Twin Gaussian processes \citep{bo2010twin} address
structured continuous-output problems by forcing input kernels to be similar to output kernels. In contrast, here we deal
with the harder problem of structured \emph{discrete}-output problems, where one usually requires computing
expensive likelihoods during training.
The structured
continuous-output problem is somewhat related to the area of multi-output regression with \name{gp}{s}
for which, unlike discrete structured prediction with \name{gp}{s}, the literature is relatively mature
\citep{alvarez2010efficient,alvarez2011computationally,alvarez-lawrence-nips-08,bonilla-et-al-nips-08}.
The original structured Gaussian process model \citep[\name{gpstruct};][]{bratieres-et-al-tpami-2015} uses
Markov Chain Monte Carlo (\name{mcmc}) sampling as the inference method and is not equipped with sparsification techniques
that are crucial for scaling to large data.
\citet{bratieres-et-al-icml-2014} have explored a distributed
version of \name{gpstruct} based on the pseudo-likelihood approximation \citep{Besag1975} where several weak learners are
trained on subsets of \name{gpstruct}'s latent variables and bootstrap data.
However, within each weak learner, inference is still done via \name{mcmc}. A variational alternative for \name{gpstruct} inference \citep{Srijith-et-al-2014} is also available.
However, it relies on pseudo-likelihood approximations and was only evaluated on small-scale problems.
Unlike this work, our approach can deal with both pseudo-likelihoods and generic (linear-chain) structured likelihoods,
and we rely on our sparse approximation procedure and our automated variational inference technique -- rather than on bootstrap aggregation -- to achieve good performance on larger datasets.
\section{Sparse Approximation}
In this section we describe a scalable approach to inference in the
structured \name{gp} model defined in \S \ref{sec:full-gp} by introducing the so-called sparse \name{gp} approximations
\citep{quinonero2005unifying} into our variational framework. Variational approaches to sparse \name{gp}
models were developed by \citet{titsias-aistats-2009} for Gaussian i.i.d\xspace likelihoods, then
made scalable to large datasets and generalized to non-Gaussian (i.i.d\xspace) likelihoods by
\citet{hensman-et-al-aistats-2015, hensman-et-al-nips-2015,dezfouli-bonilla-nips-2015}.
The main idea of such approaches is to introduce a set of $\m$ \emph{inducing variables}
$\{ \uj \}_{j=1}^\m$ for each latent process,
which lie in the same space as $\{ \fj \}$ and are drawn from the same \name{gp} prior.
These inducing variables are the latent function values of their corresponding set of \emph{inducing inputs}
$\{ \mat{Z}_j \} $. Subsequently, we redefine our prior in terms of these inducing inputs/variables.
In our structured \name{gp} model, only the unary latent functions are drawn from \name{gp}{s} indexed by $\X$. Hence
we assume a \name{gp} prior over the inducing variables and a conditional prior over the unary latent functions,
which both factorize over the latent processes, yielding the joint distribution over
unary functions, pairwise functions and inducing variables given by:
\begin{equation}
\label{eq:prior-sparse}
p(\f, \u) = p(\u) p(\f_{\un} | \u) p(\f_{\bin}) \text{, with }
p(\f_{\un} | \u) = \prod_{j=1}^{\Vsize} \Normal(\f_{\un \cdot j}; \priormean, \widetilde{\K}_j) \text{ and }
p(\u) = \prod_{j=1}^{\Vsize} p(\uj) \text{,}
\end{equation}
with the prior over the pairwise functions defined as before, i.e.~$p(\f_{\bin}) = \Normal(\f_{\bin}; \vec{0}, \K_{\bin})$,
and the means and covariances of the conditional distributions over the unary functions are given by:
\begin{align}
\priormean &=\mat{A}_j \uj \text{ and } \widetilde{\K}_j = \kernel_j(\X,\X) - \mat{A}_j \kernel(\Z_j, \X) \text{, with } \mat{A}_j = \kernel(\X,\Z_j) \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \text{.}
\end{align}
By keeping an explicit representation of the inducing variables, our goal is to estimate the joint
posterior over the unary functions, pairwise functions and inducing variables given the observed data.
To this end, we assume that our variational approximate posterior is given by:
\begin{equation}
\label{eq:posterior-sparse}
q(\f, \u | \bs{\lambda}) = p(\f_{\un} | \u) q(\u | \bs{\lambda}_{\un}) q(\f_{\bin} | \bs{\lambda}_{\bin}) \text{,}
\end{equation}
where $\bs{\lambda} = \{ \bs{\lambda}_{\un}, \bs{\lambda}_{\bin}\}$ are the variational parameters;
$p(\f_{\un} | \u)$ is defined in Equation \eqref{eq:prior-sparse};
$q(\f_{\bin} | \bs{\lambda}_{\bin})$ is defined as in Equation \eqref{eq:q-full-end}, i.e.~a Gaussian
with parameters $\bs{\lambda}_{\bin} = \{\postmean{\bin}, \postcov{\bin} \}$;
and
\begin{equation}
q(\u | \bs{\lambda}_{\un}) = \sum_{k=1}^{\k} \pi_k q_k(\u | \postmean{k}, \postcov{k}) = \sum_{k=1}^{\k} \pi_k \prod_{j=1}^{\Vsize} \Normal(\uj; \postmean{kj}, \postcov{kj}) \text{,}
\end{equation}
with $\bs{\lambda}_{\un} = \{\pi_k, \postmean{k}, \postcov{k} \}$ and
$\postmean{kj}, \postcov{kj}$ denoting the posterior mean and covariance
of the inducing variables corresponding to mixture component $k$ and latent function $j$.
\subsection{Evidence lower bound}
The KL term in the evidence lower bound now considers a KL divergence between
the joint approximate posterior in Equation \eqref{eq:posterior-sparse} and the joint prior
in Equation \eqref{eq:prior-sparse}. Because of the structure of the approximate posterior,
it is easy to show that the term $ p(\f_{\un} | \u)$ vanishes from the KL, yielding
an objective function that is composed of a KL between the distributions over the inducing variables;
a KL between the distributions over the pairwise functions, and the expected log likelihood
over the joint approximate posterior:
\begin{align}
\label{eq:elbo-sparse}
\calL_{\text{elbo}}(\bs{\lambda}) = - \kl{q(\u)}{p(\u)} - \kl{q(\f_{\bin})}{p(\f_{\bin})} + \Eb{\sum_{n=1}^\nseq \log p(\yn | \fn)}_{ q(\f, \u | \bs{\lambda}) } \text{,}
\end{align}
where $\kl{q(\f_{\bin})}{p(\f_{\bin})}$ is a straightforward KL divergence between two Gaussians and
$\kl{q(\u)}{p(\u)}$ is a KL divergence between a Mixture-of-Gaussians and a Gaussian, which we bound
using Jensen's inequality. The expressions for these terms are given in the supplementary material.
Let us now consider the expected log likelihood term in Equation \eqref{eq:elbo-sparse}, which is an expectation
of the conditional likelihood over the joint posterior $q(\f,\u | \bs{\lambda})$. The following result tells us that, as in
the full (non-sparse) case, these expectations can still be estimated efficiently by using expectations over low-dimensional
Gaussians.
\begin{theorem1}
\label{th:efficient-sparse}
The expected log likelihood term in Equation \eqref{eq:elbo-sparse}, with
a generic structured conditional likelihood $p(\yn | \fn)$ and
variational distribution $q(\f,\u | \bs{\lambda})$ defined in Equation
\eqref{eq:prior-sparse}, and its gradients can
be estimated using expectations over $\tn$-dimensional Gaussians and
$\Vsize^2$-dimensional Gaussians, where
$\tn$ is the length of each sequence and $\Vsize$ is the vocabulary size.
\end{theorem1}
As in the full (non-sparse) case, the proof is constructive and can be found in the
supplementary material.
This means that, in the sparse
case,
the expected log likelihood and its gradients can also be computed using Equations
\eqref{eq:estimates-full-begin} to
\eqref{eq:estimates-full-end}, where the mean and covariances
of each $q_{k(n)}(\funn)$ are determined by the means and covariances of the
posterior over the inducing variables.
Thus, as before, $q_{k(n)}(\funn)$ is a ($\tn \times \Vsize$)-dimensional Gaussian with block-diagonal
structure, where each of the $j=1, \dots, \Vsize$ blocks has mean and covariance given by:
\begin{eqnarray}
\label{eq:bkjn-skjn}
\qfmean{kj(n)} = \mat{A}_{jn} \postmean{kj} \text{,} &
\qfcov{kj(n)} = \widetilde{\K}_j^{(n)} + \mat{A}_{jn} \postcov{kj} \mat{A}_{jn}^T \text{, where} \\
\mat{A}_{jn} \stackrel{\text{\tiny def}}{=} \kernel(\X_n, \Z_j) \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \quad \text{ and } &
\widetilde{\K}_j^{(n)} \stackrel{\text{\tiny def}}{=} \kernelj(\X_n, \X_n) - \mat{A}_{jn} \kernel(\Z_j, \X_n) \text{,}
\end{eqnarray}
where, as mentioned in \S \ref{sec:linear-chain},
$\indexdata{\x}{n}$ is the $\tn \times \dim$ matrix of feature descriptors corresponding to sequence $n$.
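A minimal \texttt{numpy} sketch of Equation \eqref{eq:bkjn-skjn} follows; the kernel matrices are assumed to be precomputed and the function name is illustrative only.
\begin{verbatim}
# Minimal sketch: marginal posterior of the unary functions of sequence n
# for latent process j, given the posterior (m_kj, S_kj) over inducing points.
import numpy as np

def sparse_marginal(Knn, Knm, Kmm, m_kj, S_kj):
    A = np.linalg.solve(Kmm, Knm.T).T          # A_jn = K(X_n, Z_j) Kmm^{-1}
    b = A @ m_kj                               # posterior mean
    S = Knn - A @ Knm.T + A @ S_kj @ A.T       # posterior covariance
    return b, S
\end{verbatim}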
\subsection{Expectation estimates}
In order to estimate the expectations in Equations \eqref{eq:estimates-full-begin} to
\eqref{eq:estimates-full-end}, we use a simple Monte Carlo approach where we draw
samples from our approximate distributions and compute the empirical expectations.
For example, for the $\calL_{\text{ell}}$ we have:
\begin{align}
\label{eq:ellhat}
\elltermhat = \frac{1}{S} \sum_{n=1}^{\nseq} \sum_{k=1}^K \pi_k \sum_{i=1}^{S} \log p(\yn | {\fun}_{n \mydot}^{(k,i)}, \f_{\bin}^{(i)}) \text{,}
\end{align}
\text{with } ${\fun}_{n \mydot}^{(k,i)} \sim \Normal( \qfmean{k(n)}, \qfcov{k(n)})$ and
$\f_{\bin}^{(i)} \sim \Normal(\postmean{\bin}, \postcov{\bin})$,
for $i = 1, \dots, S$, where $S$ is the number of samples used,
and each of the individual blocks of $\qfmean{k(n)}$ and $\qfcov{k(n)}$ are given
in Equation \eqref{eq:bkjn-skjn}. We use a similar approach for estimating the gradients
of the $\calL_{\text{ell}}$ and they are given in the supplementary material.
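For illustration, the estimator in Equation \eqref{eq:ellhat} can be sketched as follows, with \texttt{log\_lik} a placeholder for the structured likelihood (e.g.~the linear-chain softmax evaluated via forward-backward) and the sampling routines drawing from the low-dimensional Gaussians above.
\begin{verbatim}
# Minimal sketch of the Monte Carlo estimate of the expected log likelihood.
import numpy as np

def ell_hat(sequences, pi, sample_q_kn, sample_q_bin, log_lik, S=20):
    total = 0.0
    for n, y_n in enumerate(sequences):
        for k, pi_k in enumerate(pi):
            samples = [log_lik(y_n, sample_q_kn(k, n), sample_q_bin())
                       for _ in range(S)]
            total += pi_k * np.mean(samples)
    return total
\end{verbatim}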
\section{Gaussian process models for structured prediction}
Here we are interested in structured prediction problems where we observe input-output pairs
$\calD = \{ \indexdata{\x}{n}, \yn \}_{n=1}^\nseq$, where $\nseq$ is the total number of observations,
$\indexdata{\x}{n} \in \calX$ is a descriptor of observation $n$ and $\yn \in \calY$ is a structured object such as
a sequence, a tree or a grid that reflects the interdependences between its individual constituents. Our goal
is, given a new input descriptor $\indexdata{\x}{\star}$, to predict its corresponding structured labels $\ystar$ and,
more generally, a distribution over these labels.
A fairly general approach to address this problem with
Gaussian process (\name{gp}) priors was proposed by \citet{bratieres-et-al-tpami-2015} based on
\name{crf}-type models, where the distribution of the output given the input is defined in terms of cliques,
i.e.~sets of fully connected nodes. Such a distribution is given by:
\begin{align}
\label{eq:struct-softmax}
p(\y | \X, \f) = \frac{\exp\left(\sum_c f(c, \indexclique{\x}{c}, \indexclique{\y}{c}) \right)}{\sum_{\y^\prime \in \calY} \exp \left( \sum_c f(c, \indexclique{\x}{c}, \yc^\prime) \right) }
\text{,}
\end{align}
where $\indexclique{\x}{c}$ and $\indexclique{\y}{c}$ are tuples of nodes belonging to clique $c$; $f(c, \indexclique{\x}{c}, \indexclique{\y}{c})$
is their corresponding latent variable; and $\f$ is the collection of all these latent variables,
which are assumed to be drawn from a zero-mean \name{gp} prior with covariance function $\kappa(\cdot, \cdot; \vecS{\theta})$,
with $\vecS{\theta}$ being the hyperparameters.
It is clear that such a model is a generalization of vanilla \name{crf}{s} where the potentials
are draws from a \name{gp} instead of
being linear functions of the features. %
\subsection{Linear chain structures \label{sec:linear-chain}}
In this paper we focus on linear chain structures where both the input and the output
corresponding to datapoint $n$ are linear chains of length $\tn$, whose corresponding constituents
stem from a common set. In other words, $\indexdata{\x}{n}$ is a $\tn \times \dim$ matrix of feature descriptors and
$\yn$ is a sequence of $\tn$ labels drawn from the same vocabulary $\V$. In this case, in order to completely define
the prior over the clique-dependent latent functions in Equation \eqref{eq:struct-softmax}, it is necessary to specify
covariance functions over the cliques. To this end, \citet{bratieres-et-al-tpami-2015} propose a kernel
that is non-zero only when two cliques are of the same type, i.e.~both are unary cliques or both are pairwise cliques. Furthermore, these kernels are defined as:
\begin{align}
\kappa_{\text{u}}((t, \vec{x}_t, y_t),(t^\prime, \vec{x}_t^\prime,y_{t^\prime}))
& = \indicator{y_t = y_{t^\prime}} \kappa(\vec{x}_t, \vec{x}_{t}^{\prime}) \\
\kappa_{\text{bin}}((y_t, y_{t+1}),(y_{t^\prime}, y_{t^\prime +1}))
& = \indicator{y_t = y_{t^\prime} \wedge y_{t+1} = y_{t^\prime +1} } \text{,}
\end{align}
where $\kappa_{\text{u}}$ is the covariance on unary functions and
$\kappa_{\text{bin}}$ is the covariance on pairwise functions. With a suitable ordering of these
latent functions, we obtain a prior covariance matrix that is block-diagonal,
with the first $\Vsize$ blocks corresponding to the unary covariances, each of
size $\tn$; and the last block, corresponding to the pairwise covariances, being a
diagonal (identity) matrix of size $\Vsize^{2}$, where $\Vsize$ denotes the vocabulary size.
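The resulting block-diagonal structure can be assembled explicitly, as in the following illustrative sketch, where \texttt{Kx} stands for the input covariance $\kappa(\vec{x}_t, \vec{x}_{t'})$ evaluated on a sequence of length $T$ and shared across the unary processes.
\begin{verbatim}
# Minimal sketch: block-diagonal prior covariance for one linear chain.
import numpy as np
from scipy.linalg import block_diag

def chain_prior_cov(Kx, V):
    # Kx: (T, T) input covariance, shared across the V unary processes
    unary_blocks = [Kx for _ in range(V)]   # one T x T block per label
    pairwise_block = np.eye(V * V)          # identity over pairwise functions
    return block_diag(*unary_blocks, pairwise_block)
\end{verbatim}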
To carry out inference in this model,
\citet{bratieres-et-al-tpami-2015} propose a sampling scheme based on
elliptical slice sampling \citep[\name{ess};][]{murray-eyt-al-aistats-2010}. In the following section,
we show an equivalent formulation of this model that leverages the general
class of models with i.i.d\xspace likelihoods presented by \citet{nguyen-bonilla-nips-2014}.
Understanding structured \name{gp} models from such a perspective will allow us to generalize
the results of \citet{nguyen-bonilla-nips-2014,dezfouli-bonilla-nips-2015} in order to develop
an automated variational inference framework. The advantages of such a framework are
that of (i) dealing with generic likelihood models; and (ii) enabling stochastic optimization
techniques for scalability to large datasets.
\section{Proof of Theorem 2}
Here we prove the result that we can estimate the expected log likelihood and its
gradients using expectations over low-dimensional Gaussians.
\subsection{Estimation of $\calL_{\text{ell}}$ in the full (non-sparse) model}
For the $\calL_{\text{ell}}$ we have that:
\begin{align}
\calL_{\text{ell}} &= \Eb{\sum_{n=1}^\nseq \log p(\yn | \fn)}_{q(\fun) q(\fbin)} \\
&= \sum_{n=1}^{\nseq} \int_{\f_{\bin}} \int_{\f_{\un}} q(\fun) q(\fbin) \log p(\yn | \fn) \ d\f_{\un} d\f_{\bin} \\
& = \sum_{n=1}^{\nseq} \int_{\f_{\bin}} \int_{\fun^{(n)}} \int_{\fun^{\backslash n}} q(\fun^{\backslash n} | \fun^{(n)}) q(\fun^{(n)}) q(\fbin)
\log p(\yn | \fn) \ d\fun^{\backslash n} d\fun^{(n)} d\f_{\bin} \\
&= \sum_{n=1}^{\nseq} \Eb{\log p(\yn | \fn)}_{q(\fun^{(n)}) q(\fbin) } \\
\label{eq:ell-full-end}
& = \sum_{n=1}^{\nseq} \sum_{k=1}^{K} \pi_k \Eb{\log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,}
\end{align}
where $ q_{k(n)}(\funn)$ is a $(\tn \times \Vsize)$-dimensional Gaussian with block-diagonal covariance $ \qfcov{k(n)}$, each
block of size $\tn \times \tn$. Therefore, we can estimate the above term by sampling from
$\tn$-dimensional Gaussians independently. Furthermore, $q(\fbin)$ is a $\Vsize^2$-dimensional Gaussian, which
can also be sampled independently. In practice, we can
assume that the covariance of $q(\fbin)$ is diagonal and we only sample from univariate Gaussians for the
pairwise functions.
\QEDA
\subsection{Gradients \label{sec:proof-grad-full}}
Taking the gradients of the k$\mth$ term for the n$\mth$ sequence in the $\calL_{\text{ell}}$:
\begin{align}
\elltermkn &= \Eb{\log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \\
& = \int_{\f_{\bin}} \int_{\fun^{(n)}} q_{k(n)}(\funn) q(\fbin) \log p(\yn | \fn) \ \text{d} \funn \text{d} \fbin \\
\gradient_{\bs{\lambda}_{k}^{\un}} \elltermkn &= \int_{\f_{\bin}} \int_{\fun^{(n)}} q_{k(n)}(\funn) q(\fbin) \gradient_{\bs{\lambda}_{k}^{\un}} \log q_{k(n)}(\funn) \log p(\yn | \fn) \ \text{d} \funn \text{d} \fbin \\
\label{eq:grad-unary-full}
& = \Eb{\gradient_{\bs{\lambda}_{k}^{\un}} \log q_{k(n)}(\funn) \log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,}
\end{align}
where we have used the fact that $\nabla_{\vec{x}} f(\vec{x}) = f(\vec{x}) \nabla_{\vec{x}} \log f(\vec{x})$ for any nonnegative function
$f(\vec{x})$.
Similarly, the gradients of the parameters of the distribution over pairwise functions can be estimated using:
\begin{equation}
\gradient_{\bs{\lambda}_{\bin}} \elltermkn = \Eb{\gradient_{\bs{\lambda}_{\bin}} \log q(\fbin) \log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{.}
\end{equation}
\QEDA
\section{KL terms in the sparse model}
The KL term ($\calL_{\text{kl}}$) in the variational objective ($\calL_{\text{elbo}}$) is composed of a KL divergence between
the approximate posteriors and the priors over the inducing variables and pairwise functions:
\begin{equation}
\calL_{\text{kl}} = \underbrace{ - \kl{q(\u)}{p(\u)} }_{\klterm^{\un}} \underbrace{ - \kl{q(\f_{\bin})}{p(\f_{\bin})} }_{\klterm^{\bin}}\text{,}
\end{equation}
where, as the approximate posterior and the prior over the pairwise functions are Gaussian, the KL
over pairwise functions can be computed analytically:
\begin{align}
\klterm^{\bin} = - \kl{q(\f_{\bin})}{p(\f_{\bin})} &= - \kl{\Normal(\f_{\bin}; \postmean{\bin}, \postcov{\bin})}{\Normal(\f_{\bin}; \vec{0}, \K_{\bin})} \\
&=
- \frac{1}{2} \left( \log \det{\K_{\bin}} - \log \det{\postcov{\bin}}
+ \postmean{\bin}^T \K_{\bin}^{-1} \postmean{\bin} + \mbox{ \rm tr }{\K_{\bin}^{-1} \postcov{\bin}} - \Vsize^2 \right) \text{.}
\end{align}
For the distributions over the unary functions we need to compute a KL divergence between
a mixture of Gaussians and a Gaussian.
For this we consider the decomposition of the KL divergence as follows:
\begin{equation}
\klterm^{\un} = - \kl{q(\u)}{p(\u)} = \underbrace{\mathbb{E}_q[- \log q(\u)]}_{\calL_{\text{ent}}} + \underbrace{\mathbb{E}_q[\log p(\u)]}_{\calL_{\text{cross}}} \text{,}
\end{equation}
where the entropy term ($\calL_{\text{ent}}$) can be lower bounded using Jensen's inequality:
\begin{equation}
\label{eq:entropy}
\calL_{\text{ent}} \geq - \sum_{k=1}^{\k} \pi_k \log \sum_{\ell=1}^\k \pi_{\ell} \Normal(\postmean{k}; \postmean{\ell}, \postcov{k} + \postcov{\ell})
\stackrel{\text{\tiny def}}{=} \hat{\calL}_{\text{ent}} \text{,}
\end{equation}
and the negative cross-entropy term ($\calL_{\text{cross}}$) can be computed exactly:
\begin{equation}
\calL_{\text{cross}} = - \frac{1}{2} \sum_{k=1}^\k \pi_k \sum_{j=1}^{\Vsize} [\m \log 2 \pi + \log \det{\kernel(\mat{Z}_j, \mat{Z}_j)}
+ \postmean{kj}^T \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \postmean{kj} + \mbox{ \rm tr }{\kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \postcov{kj}}] \text{.}
\end{equation}
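For concreteness, the lower bound $\hat{\calL}_{\text{ent}}$ of Equation \eqref{eq:entropy} is a simple double sum over mixture components, as in the following illustrative Python sketch (names are our own):
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def entropy_lower_bound(pis, means, covs):
    # hat{L}_ent = - sum_k pi_k log sum_l pi_l N(m_k; m_l, S_k + S_l)
    bound = 0.0
    for k in range(len(pis)):
        z_k = sum(pis[l] * multivariate_normal.pdf(means[k], mean=means[l],
                                                   cov=covs[k] + covs[l])
                  for l in range(len(pis)))
        bound -= pis[k] * np.log(z_k)
    return bound
\end{verbatim}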
\section{Proof of Theorem 3}
To prove Theorem 3 we will express the expected log likelihood term in the same form as that
given in Equation \eqref{eq:ell-full-end}, showing that the resulting
$q_{k(n)}(\funn)$ is also a $(\tn \times \Vsize)$-dimensional Gaussian with block-diagonal covariance,
having $\Vsize$ blocks each of dimensions $\tn \times \tn$.
We start by taking the given $\calL_{\text{ell}}$, where the expectations are
over the joint posterior $q(\f, \u | \bs{\lambda}) = p(\f_{\un} | \u) q(\u) q(\f_{\bin})$:
\begin{align}
\calL_{\text{ell}} &= \Eb{\sum_{n=1}^\nseq \log p(\yn | \fn)}_{p(\f_{\un} | \u) q(\u) q(\f_{\bin}) } \\
& = \int_\f \log p(\y | \f) \underbrace{\int_\u q(\u) p(\f_{\un} | \u) \text{d}\u }_{q(\f_{\un})}q(\f_{\bin})\text{d}\f \text{,}
\end{align}
where our approximating distribution is:
\begin{align}
q(\f) &= q(\f_{\un}) q(\f_{\bin}) \\
q(\f_{\un}) & = \int_\u q(\u) p(\f_{\un} | \u) \text{d}\u \text{,}
\end{align}
which can be computed analytically:
\begin{align}
\label{eq:qfunsparse}
q(\f_{\un}) &= \sum_{k=1}^{K} \pi_k q_k(\f_{\un}) = \sum_{k=1}^{K} \pi_k \prod_{j=1}^{\Vsize} \Normal(\f_{\un \cdot j}; \qfmean{kj}, \qfcov{kj}) \\
\label{eq:meanqfsparse}
\qfmean{kj} & = \mat{A}_j \postmean{kj} \\
\label{eq:covqfsparse}
\qfcov{kj} &= \widetilde{\K}_j + \mat{A}_j \postcov{kj} \mat{A}_j^T \text{.}
\end{align}
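These expressions are the usual sparse-GP marginalization, which the following Python sketch makes explicit (illustrative only; \texttt{kernel} is an assumed covariance-function handle, not a library call):
\begin{verbatim}
import numpy as np

def sparse_marginal(kernel, X, Z, b_kj, S_kj, jitter=1e-8):
    # Implements Eqs. (meanqfsparse)-(covqfsparse):
    #   A_j = K(X,Z) K(Z,Z)^{-1},  mean = A_j b_kj,
    #   cov = K(X,X) - A_j K(Z,X) + A_j S_kj A_j^T
    Kxz = kernel(X, Z)
    Kzz = kernel(Z, Z) + jitter * np.eye(len(Z))
    A = Kxz @ np.linalg.solve(Kzz, np.eye(len(Z)))
    mean = A @ b_kj
    cov = kernel(X, X) - A @ Kxz.T + A @ S_kj @ A.T
    return mean, cov
\end{verbatim}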
We note in Equation \eqref{eq:qfunsparse} that $q_k(\f_{\un})$ has a block diagonal structure, which implies that
we have the same expression for the $\calL_{\text{ell}}$ as in Equation \eqref{eq:ell-full-end}. Therefore, we obtain
analogous estimates:
\begin{align}
\calL_{\text{ell}} &= \sum_{n=1}^{\nseq} \sum_{k=1}^{K} \pi_k \Eb{\log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{.}
\end{align}
Here, as before, $q_{k(n)}(\funn)$ is a $(\tn \times \Vsize)$--dimensional Gaussian with block-diagonal covariance $ \qfcov{k(n)}$, each
block of size $\tn \times \tn$. The main difference in this (sparse) case is that
$\qfmean{k(n)}$ and $\qfcov{k(n)}$ are constrained by the expressions in Equations
\eqref{eq:meanqfsparse} and \eqref{eq:covqfsparse}.
Hence, the proof for the gradients follows the same derivation as
in \S \ref{sec:proof-grad-full} above.
\QEDA
\section{Gradients of $\calL_{\text{elbo}}$ for sparse model}
Here we give the gradients of the variational objective wrt the parameters for the variational
distributions over the inducing variables, pairwise functions and hyper-parameters.
\subsection{Inducing variables}
\subsubsection{KL term}
As the structured likelihood does not affect the KL divergence term,
the gradients corresponding to this term are similar to those in the non-structured case
\citep{dezfouli-bonilla-nips-2015}.
Let $\Kzzall$ be the block-diagonal covariance with $\Vsize$ blocks $\kernel(\mat{Z}_j, \mat{Z}_j)$, $j=1, \ldots, \Vsize$. Additionally,
let us assume the following definitions:
\begin{align}
\label{eq:Ckl}
\C_{kl} & \stackrel{\text{\tiny def}}{=} \postcov{k} + \postcov{\ell} \text{,}\\
\Normal_{k \ell} & \stackrel{\text{\tiny def}}{=} \Normal(\postmean{k}; \postmean{\ell}, \C_{kl}) \text{,} \\
z_k &\stackrel{\text{\tiny def}}{=} \sum_{\ell=1}^\k \pi_{\ell} \Normal_{k \ell} \text{.}
\end{align}
The gradients of
$\calL_{\text{kl}}$ wrt the posterior mean and posterior covariance for component $k$ are:
\begin{align}
\gradient_{\postmean{k}} \calL_{\text{cross}} &= - \pi_k \Kzzallinv \postmean{k} \text{,}\\
\gradient_{\postcov{k}} \calL_{\text{cross}} &= - \frac{1}{2} \pi_k \Kzzallinv \\
\gradient_{\pi_k} \calL_{\text{cross}} &= - \frac{1}{2} \sum_{j=1}^{\Vsize} [\m \log 2 \pi + \log \det{\kernel(\mat{Z}_j, \mat{Z}_j)}
+ \postmean{kj}^T \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \postmean{kj} + \mbox{ \rm tr }{\kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \postcov{kj}}]
\text{,}
\end{align}
where we note that we compute $\Kzzallinv$ by inverting the corresponding blocks $\kernel(\mat{Z}_j, \mat{Z}_j)$ independently.
The gradients of the entropy term wrt the variational parameters are:
\begin{align}
\label{eq:grad-ent-init}
\gradient_{\postmean{k}} \hat{\calL}_{\text{ent}} &= \pi_k \sum_{\ell=1}^\k \pi_{\ell} \left( \frac{\Normal_{k \ell}}{z_{k}} + \frac{\Normal_{k \ell}}{z_{\ell}} \right) \C_{kl}^{-1} (\postmean{k} - \postmean{\ell}) \text{,} \\
\gradient_{\postcov{k}} \hat{\calL}_{\text{ent}} &= \frac{1}{2} \pi_k \sum_{\ell=1}^{\k} \pi_{\ell} \left( \frac{\Normal_{k \ell}}{z_{k}} + \frac{\Normal_{k \ell}}{z_{\ell}} \right)
\left[ \C_{kl}^{-1} - \C_{kl}^{-1} (\postmean{k} - \postmean{\ell}) (\postmean{k} - \postmean{\ell})^T \C_{kl}^{-1} \right] \text{,} \\
\nonumber
\label{eq:grad-ent-end}
\gradient_{\pi_k} \hat{\calL}_{\text{ent}} &= - \log z_{k} - \sum_{\ell=1}^{\k} \pi_{\ell} \frac{\Normal_{k \ell}}{z_{\ell}} \text{.}
\end{align}
\subsubsection{Expected log likelihood term}
Retaking the gradients of the full model in Equation \eqref{eq:grad-unary-full}, we have that:
\begin{align}
\gradient_{\bs{\lambda}_{k}^{\un}} \elltermkn & = \Eb{\gradient_{\bs{\lambda}_{k}^{\un}} \log q_{k(n)}(\funn) \log p(\yn | \fn)}_{q_{k(n)}(\funn) q(\fbin) } \text{,}
\end{align}
where the variational parameters $\bs{\lambda}_{k}^{\un}$ are the posterior means and covariances
($\{ \postmean{kj} \}$ and $\{ \postcov{kj} \}$)
of
the inducing variables. As given in Equation \eqref{eq:qfunsparse}, $q_k(\fun)$ factorizes over the latent processes
($j=1, \ldots, \Vsize$), and so do the marginals $q_{k(n)}(\funn)$; hence:
\begin{align}
\label{eq:gradlogqkun}
\gradient_{\bs{\lambda}_{k}^{\un}} \log q_{k(n)}(\funn) = \gradient_{\bs{\lambda}_{k}^{\un}} \sum_{j=1}^{\Vsize} \log \Normal(\f_{\text{u} n j}; \qfmean{kj(n)}, \qfcov{kj(n)}) \text{,}
\end{align}
where each of the distributions in Equation \eqref{eq:gradlogqkun} is a $\tn$--dimensional Gaussian.
Let us assume the following definitions:
\begin{align}
\mat{X}_n &: \text{all feature vectors corresponding to sequence }n\\
\mat{A}_{jn} &\stackrel{\text{\tiny def}}{=} \kernel(\X_n, \Z_j) \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \\
\widetilde{\K}_j^{(n)} &\stackrel{\text{\tiny def}}{=} \kernelj(\X_n, \X_n) - \mat{A}_{jn} \kernel(\Z_j, \X_n) \text{, therefore:} \\
\qfmean{kj(n)} &= \mat{A}_{jn} \postmean{kj} \text{,}\\
\qfcov{kj(n)} & = \widetilde{\K}_j^{(n)} + \mat{A}_{jn} \postcov{kj} \mat{A}_{jn}^T \text{.}
\end{align}
Hence, the gradients of $\log q_k(\fun)$ wrt the variational parameters of the unary posterior distributions over
the inducing points are:
\begin{align}
\gradient_{ \postmean{kj}} \log q_{k(n)}(\funn) & = \mat{A}_{jn}^T \qfcov{kj(n)}^{-1} \left( \f_{\text{u} n j} - \qfmean{kj(n)} \right) \text{,} \\
\gradient_{ \postcov{kj}} \log q_{k(n)}(\funn) &= \frac{1}{2}
\mat{A}_{jn}^T \left[
\qfcov{kj(n)}^{-1} (\f_{\text{u} n j} - \qfmean{kj(n)}) (\f_{\text{u} n j} - \qfmean{kj(n)})^T \qfcov{kj(n)}^{-1} - \qfcov{kj(n)}^{-1}
\right] \mat{A}_{jn}
\end{align}
Therefore, the gradients of $\calL_{\text{ell}}$ wrt the parameters of the distributions over unary functions are:
\begin{align}
\gradient_{\postmean{kj}} \calL_{\text{ell}} & = \frac{\pi_k}{S} \kernel(\mat{Z}_j, \mat{Z}_j)^{-1} \sum_{n=1}^{\nseq} \kernel(\Z_j, \X_n) \qfcov{kj(n)}^{-1}
\sum_{i=1}^S (\f_{\text{u} n j}^{(k,i)} - \qfmean{kj(n)} ) \log p(\yn | {\fun}_{n \mydot}^{(k,i)} , \f_{\bin}^{(i)}) \text{,} \\
\gradient_{\postcov{kj}} \calL_{\text{ell}} & = \frac{\pi_k}{2 S} \sum_{n=1}^{\nseq} \mat{A}_{jn}^T
\Big\{
\sum_{i=1}^S \big[ \qfcov{kj(n)}^{-1} (\f_{\text{u} n j}^{(k,i)} - \qfmean{kj(n)})(\f_{\text{u} n j}^{(k,i)} - \qfmean{kj(n)})^T \qfcov{kj(n)}^{-1} \\
\nonumber
& \quad\qquad\qquad\qquad\qquad - \qfcov{kj(n)}^{-1}\big] \log p(\yn | {\fun}_{n \mydot}^{(k,i)} , \f_{\bin}^{(i)})
\Big\}
\mat{A}_{jn}
\end{align}
\subsubsection{Pairwise functions}
The gradients of the $\klterm^{\bin}$ wrt the parameters of the posterior over pairwise functions are given by:
\begin{align}
\gradient_{\postmean{\bin}} \klterm^{\bin} &= - \K_{\bin}^{-1} \postmean{\bin} \\
\gradient_{\postcov{\bin}} \klterm^{\bin} &= \frac{1}{2} \left( \postcov{\bin}^{-1} - \K_{\bin}^{-1} \right)
\end{align}
The gradients of the $\calL_{\text{ell}}$ wrt the parameters of the posterior over pairwise functions are given by:
\begin{align}
\gradient_{\postmean{\bin}}\calL_{\text{ell}} &= \frac{1}{S} \sum_{n=1}^\nseq \sum_{k=1}^K \pi_k \sum_{i=1}^S
\postcov{\bin}^{-1}(\f_{\bin}^{(i)} - \postmean{\bin}) \log p(\yn | {\fun}_{n \mydot}^{(k,i)} , \f_{\bin}^{(i)}) \\
\gradient_{\postcov{\bin}}\calL_{\text{ell}} &= \frac{1}{2S} \sum_{n=1}^\nseq \sum_{k=1}^K \pi_k \sum_{i=1}^S
[\postcov{\bin}^{-1}(\f_{\bin}^{(i)} - \postmean{\bin})(\f_{\bin}^{(i)} - \postmean{\bin})^T \postcov{\bin}^{-1} - \postcov{\bin}^{-1} ] \log p(\yn | {\fun}_{n \mydot}^{(k,i)} , \f_{\bin}^{(i)})
\end{align}
\section{Experiments}
\subsection{Experimental set-up}
Details of the benchmarks used in our experiments can be seen in Table \ref{tab:datasets}.
\begin{table}
\centering
\caption{Datasets used in our experiments.\label{tab:datasets} For each dataset we report the
number of categories (i.e.\ the vocabulary size $\Vsize$), the number of features ($\dim$), the number of
training sequences used in the small experiments ($\nseq$ small), and the average (across folds)
number of training words in the small experiments ($\bar{\n}$ small).
}
\begin{tabular}{ccccc}
Dataset & $\Vsize$ & $\dim$ & $\nseq$ small & $\bar{\n}$ small \\
\toprule
\name{base np} &3 &6,438 &150 & 3739.8\\
\name{chunking} &14 &29,764 &50 & 1155.8\\
\name{segmentation} &2 &1,386 &20 & 942 \\
\name{japanese ne} &17 &102,799 &50 & 1315.4\\
\bottomrule
\end{tabular}
\end{table}
For the experiments with batch optimization, we optimized the three sets of parameters separately in a global loop (variational parameters for unary nodes, variational parameters for pairwise nodes, and hyper-parameters). In each global iteration, each set of parameters was optimized while keeping the rest of the parameters fixed: variational parameters for unary nodes were optimized for 50 iterations, variational parameters for pairwise nodes for 10 iterations, and hyper-parameters for 5 iterations. We used the L-BFGS algorithm for optimizing each set of parameters, running for a maximum of 5 1/2 hours or until convergence, whichever came first. Convergence was declared when the change in the objective function between two consecutive global iterations was less than 1e-05, or the average change in the variational parameters for unary nodes was less than 0.001. The reported results are the predictions based on the best objective function achieved during the optimization. We used 10,000 samples ($S=10{,}000$) for approximating the expected log likelihood and its gradients, and $10\%$ of these samples were
used to estimate the optimal $\hat{a}$ in the control-variate calculation. For all the experiments 500 inducing points were used ($M=500$).
In the experiments with stochastic optimization, as in the batch case, each set of parameters was optimized separately. In each global iteration, variational parameters for unary nodes were updated for 3,000 iterations and variational parameters for pairwise nodes for 1,000 iterations (hyper-parameters were not optimized in the stochastic experiments and were fixed to 1). We used 4,000 samples for estimating the expected log likelihood and its gradients ($S=4{,}000$) and, as in the batch case, 500 inducing points ($M=500$). The step-size for updating the means of the inducing variables was set to 1e-4, and the step-size for updating their covariances to 1e-5.
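For reference, the control-variate step mentioned above can be sketched as follows (our reading of the procedure, not the authors' code): $h$ is a statistic with known zero mean under $q$ (e.g. the score function), and $\hat{a}$ is fitted on a held-out $10\%$ of the samples before being applied to the remainder.
\begin{verbatim}
import numpy as np

def control_variate_mean(g, h, holdout_frac=0.1):
    # g: raw per-sample estimates; h: control variate with E_q[h] = 0.
    n_fit = max(2, int(holdout_frac * len(g)))
    a_hat = (np.cov(g[:n_fit], h[:n_fit], ddof=0)[0, 1]
             / np.var(h[:n_fit]))
    return np.mean(g[n_fit:] - a_hat * h[n_fit:])
\end{verbatim}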
\subsection{Performance profiles}
Figure \ref{fig:-perf-profile} shows the performance of our algorithm as a function of time.
We see that the test likelihood improves very regularly in all the folds, and so, overall, does the error rate, albeit with more variability.
The bulk of the optimization, both with respect to the test likelihood and with respect to the error rate, occurs during the first 120 minutes. This suggests that the kind of approach described in this paper may be particularly suited to cases in which speed of convergence is a priority.
\begin{figure}
\includegraphics[width=0.5\textwidth]{nlp-basenp-large}
\includegraphics[width=0.5\textwidth]{error-basenp-large}
\caption{The test performance of \name{gp-var-s} on \name{base np} for the large scale experiment as a function of time.
\label{fig:-perf-profile}
}
\end{figure}
QCD at the finite temperature $T$ and/or baryon chemical potential
$\mu_B$ is of fundamental importance, since it describes the
relevant features of particle physics in the early universe, in
neutron stars and in heavy--ion collisions (see e.g.
\cite{W00,K02}). With the relativistic heavy ion collision
experiments at AGS, SPS and RHIC accelerators one explores the
phase diagram of strongly interacting matter in a broad parameter
range of temperature and baryon density. Lattice QCD results on
the Equation of State (EoS) of QCD matter provide a basic input
for the analysis of experimental signatures of a possible
quark--gluon plasma formation in heavy--ion collisions. Directly
addressing the EoS, hydrodynamics realizes the connection between
the matter properties and observables. The hydrodynamic treatment
of the whole time-evolution of colliding nuclei requires knowledge
of the nuclear EoS within a large interval of its thermodynamic
variables covering both quark--gluon and hadronic sectors.
In recent years significant progress has been made in
understanding the phase diagram of QCD at non-zero baryon chemical
potential as the
nonperturbative lattice QCD methods were extended to access the
relevant regions of the phase diagram. Recently, the first lattice
calculations have been performed for a non-vanishing $T$ and
$\mu_B$ for
systems with $N_f=2$~\cite{Allton} and $N_f=2+1$~\cite{Fodor02,Fodor04}
flavors. However, due to a set of approximations, the Lattice Gauge
Theory (LGT)
is still not able to provide results on the properties of the
hadronic matter in the confined phase. LGT is also restricted to
moderate values of the baryon chemical potential $\mu_B$ such that
$\mu_B \lsim T$. That is why different phenomenological models
are required to describe thermodynamic properties and equation of
state of QCD matter for larger baryon densities. Obviously, such
models depend on a set of parameters that are usually fixed to
reproduce existing LGT results as well as the basic
phenomenological properties of the nuclear matter obtained from
the experimental data.
Recently, the thermodynamics of
the quark--gluon phase was interpreted quite successfully within
the QCD inspired massive quasi-particle
models~\cite{LH98,Pesh96,Szabo03,Rebhan03,Cleymans,Bla,Weise01,%
Weise05,IST05}.
On the other hand, the rapid
growth of the energy density $\varepsilon$ and pressure $p$ when
approaching the critical temperature $T_c$, demonstrated by lattice
calculations, was shown to be reproduced in terms of the hadron
resonance gas model with scaled masses \cite{KRT-1,KRT-2}.
describe lattice QCD thermodynamics both above and below $T_c$ in
terms of a field theoretical model, including features of both
deconfinement and chiral symmetry restoration~\cite{RTW05}, as
well as within some phenomenological models that are based on
lattice QCD results for the quark--gluon partition
function~\cite{ADK05}. Some unique parametrization of the QCD EoS
below and above $T_c$ was also presented in Ref. \cite{BKS04}.
The phenomenological equation of state should be not only
thermodynamically consistent~\cite{TNFNR03} but should also be
capable of reproducing the global behavior of the nuclear matter
near the ground state and its saturation properties. In addition,
there are experimental restrictions coming from the flow analysis
in heavy--ion collisions which limit the acceptable theoretical
values of pressure in a finite interval of baryon densities $n_B$
at $T=0$~\cite{Dan02}. Some constraints on the EoS are also
imposed through the analysis of cold charge--neutral baryonic
matter in $\beta$-equilibrium compact stars~\cite{IKKSTV,KB06}.
There are also essential constraints on the model properties
coming from the recent LGT results.
In this paper, we will construct the EoS of strongly
interacting QCD matter with a deconfinement phase transition that
satisfies the above mentioned hadronic constraints and those
imposed by the recent lattice QCD results obtained for the $(2+1)$
-- flavor system at finite $T$ and non-vanishing baryonic
chemical potential.
The paper is organized as follows: In Section 2 we introduce the
quasi-particle model for the EoS with deconfinement phase
transition. In Section 3 the model predictions are compared with
the recent lattice data obtained in (2+1)--flavor QCD at the
finite $T$ and $\mu_B$. Our results and comments on the
properties of the QCD equation of state and thermodynamics are
summarized in the last Section.
\section{The Equation of State}
Lattice results show that even at temperatures $T$ much larger
than the deconfinement temperature $T_c$, thermodynamical
observables like the pressure or the entropy, baryon number and energy
densities still deviate by $\gsim 20\%$ from their asymptotic
ideal gas values. Such deviations observed at $T>2T_c$ were shown
to be well understood by a systematic contribution in the
self--consistent implementation of quasi-particle masses in the
HTL--resummed perturbative QCD \cite{jp}. On the other hand,
the LGT thermodynamics below $T_c$ was shown to be well
reproduced by the hadron resonance gas partition function
\cite{KRT-1,KRT-2}. To possibly describe the thermodynamics at
$T=T_c$ or near the phase transition an additional model
assumptions are required \cite{quasi,Szabo03}.
It is clear from the above that the straightforward model for the
QCD EoS can be constructed by connecting a non-interacting
hadron resonance gas in the low temperature phase with an ideal
quark gluon--plasma in some non-perturbative bag for the color
deconfined phase~\cite{Cleymans}. These phases are matched at the
phase transition boundary by means of the Gibbs phase equilibrium
condition. By construction, this approach yields the first order
phase transition. Such MIT bag-like model~\cite{CJJT74} is so far
the simplest method to implement the confinement phenomenon in
the EoS, though it has some serious shortcomings.
A more complete method to model QCD EoS is based on the effective
Hamiltonian that includes interactions of the constituents. In
the quasi-particle approximation such
Hamiltonian can be modelled through density--dependent
mean--field interactions ~\cite{TNFNR03,NST98,TNS98}:
\begin{eqnarray}
H &=& \sum_{j\in h,q,g} \sum_s \int d{\bf r} \ \psi^+_j({\bf r},s) \nonumber \\
&& \times \left( \ \sqrt{-\nabla^2 + m^2_j}+U_j(\rho) \ \right) \
\psi_j({\bf r},s) - C(\rho )\, V \ , \label{eqH}
\end{eqnarray}
where $j$ enumerates the different species of quasi-particles (hadrons
and/or unbound quarks and gluons) and $s$ stands for their
internal degrees of freedom. Here $U_j(\rho )$ is the
density-dependent mean--field acting on the quasi-particle $j$
described by the field operator $\psi_j $ with $m_j$ being the
current mass of quarks and gluons or the free mass of hadrons.
Applying the density-dependent Hamiltonian (\ref{eqH}) in the
partition function requires some additional constraints that are
needed to fulfill the thermodynamic consistency
condition~\cite{shan}~:
\begin{equation}
\langle \frac{\partial H}{\partial T} \rangle\, = \,0\, , \quad
\langle \frac{\partial H}{\partial \rho_{j}} \rangle\, = \,0 \;\;
, \label{eq2}
\end{equation}
where $\langle A \rangle$ denotes the average value of the
operator $A$ over the statistical ensemble. With the Hamiltonian
(\ref{eqH}) the conditions (\ref{eq2}) can be also expressed as
~\cite{shan}
\begin{eqnarray}
\sum\limits_{j}\;\rho_{j}\,
\frac{\partial U_{j}}{\partial \rho_{i}}\; - \;
\frac{\partial C}{\partial \rho_{i}}
\;=\;0
\,,\, \,
\sum\limits_{j}\;\rho_{j}\,
\frac{\partial U_{j}}{\partial T}\;-\;
\frac{\partial C}{\partial T}\;=\;0\,.
\label{eq3}\end{eqnarray}
It can be shown ~\cite{TNFNR03,NST98,TNS98} that the conditions
(\ref{eq3}) are satisfied only if the mean field $U_j(\rho)$ and
the correcting function $C(\rho )$ are
temperature independent.
In the following, we consider the basic structure of the
effective Hamiltonian (\ref{eqH}) to model the EoS of hadronic and
quark--gluon plasma phase.
\subsection{The hadronic phase}
The hadronic phase is considered as a gas of hadrons and
resonances in the thermodynamic equilibrium. In general, the
particle density of species $j$ is obtained from
\begin{eqnarray}
n_j&\equiv& n_j(T,\mu_j-U_j )=\nonumber \\ &&\frac{d_j}{2\pi^2}\int_0^{\infty} dk\
k^2 \ f_j(k,T,\mu_j-U_j)~, \label{eqt1}
\end{eqnarray}
where the one-particle distribution function with an argument $z$
is
\begin{eqnarray}
f_j(k,T,z) = \left[ \ \exp \left( \frac{\sqrt{k^2 +m_j^2}-z}{T} \right)
+ {\cal L}_j \right]^{-1}
\label{eqt2}
\end{eqnarray}
with ${\cal L}_j=+1$ for fermions and ${\cal L}_j=-1$ for
bosons, while $d_j$ is the spin--isospin degeneracy factor. The
chemical potential $\mu_j$ is related to the baryon ($\mu_B$) and
strangeness ($\mu_S$) chemical potentials
\begin{equation}
\mu_j = b_j \ \mu_B+s_j \ \mu_S\,, \label{eqt1a}
\end{equation}
with $b_j$ and $s_j$ being the baryon number and strangeness of
the particle $j$. The hadronic potential $U_{j} \equiv
U_{j}^{(h)}$ is described by a non-linear mean--field
model~\cite{Zim}
\begin{eqnarray}
U_{j}^{(h)}\;= g_{r,j}\;\varphi_1 (x) + g_{a,j}\;\varphi_2 (y)\;,
\label{eqZ}
\end{eqnarray}
where $g_{r,j} > 0$ and $g_{a,j} < 0$ are repulsive and attractive
coupling constants, respectively. The effect of interactions
results also in an additional density-dependent term $C(\rho)$
that contributes to the thermodynamic pressure and energy
densities. If the particle interaction is taken in form of
(\ref{eqH}), the thermodynamic consistency implies that the
functions $\varphi_1(x)$ and $\varphi_2(y)$ depend only on
particle densities. In Ref. \cite{Zim} these functions were chosen
such that
\begin{eqnarray} b_1
\varphi_1 = x, \quad -b_1 (\varphi_2 + b_2 \varphi_2^3 ) = y
\label{eq22}
\end{eqnarray}
where
$$ x=\sum\limits_{i} g_{r,i}\; n_{i},\quad
y=\sum\limits_{i} g_{a,i}\;n_{i}\;. $$ with $b_1$ and $b_2$
being free parameters. The $\varphi_2^3$ term is introduced to
obtain a slower-than-linear increase of attraction with density at
high compression, as happens in the relativistic mean--field
models. Having in mind that the hadronic EoS will be compared
with that of the quark--gluon plasma, it is convenient to rewrite
(\ref{eq22}) in terms of the number of constituent quarks and
antiquarks $\nu_j$~:
\begin{eqnarray}
\rho_j=\nu_j n_j \equiv \nu_j n_j(T,\mu_j-U_j)~.
\label{rhoj}
\end{eqnarray}
In the original paper \cite{Zim}, the hadronic phase was modelled
as a mixture of nucleons and $\Delta$'s (i.e. $j=N,\Delta$).
Following~\cite{TNFNR03}, we generalize this approach by including
all hadrons and resonances with the mass up to 1.6 GeV. One also
assumes that all coupling constants scale with the number of
constituent quarks~\cite{TNFNR03}~:
\begin{eqnarray}
U_{j}^{(h)} = \nu_j\,\Bigl(
[\widetilde\varphi_1(\rho^{(h)})]^\alpha +
\widetilde\varphi_2(\rho^{(h)})\,\Bigr)\;, \label{eq23}
\end{eqnarray}
where $\widetilde\varphi_1$ and $\widetilde\varphi_2$ satisfy Eq.
(\ref{eq22}) in the following form
\begin{eqnarray}
c_1 \widetilde\varphi_1^{\alpha} = \rho^{(h)}, \quad
-c_2 \widetilde\varphi_2 - c_3 \widetilde\varphi_2^3 =
\rho^{(h)}\; \label{eq24}
\end{eqnarray}
with $\rho^{(h)}=\sum\nolimits_{j}\nu_j n_{j}=3\sum_B
n_j+2\sum_M n_j$. As compared to Eq. (\ref{eq22}) a free
parameter $\alpha$ is also introduced in Eq. (\ref{eq23}). This
parameter is used to control the strength of the repulsive
interactions at high density ~\cite{NST98,TNS98}. The parameters
in Eq. (\ref{eq24}) are expressed as \cite{NST98}:
$$ c_1
= \frac{b_1}{(g_{r,j}/\nu_j)^2}, \quad c_2 =
\frac{b_1}{(g_{a,j}/\nu_j)^2}, \quad c_3 = \frac{b_1
b_2}{(g_{a,j}/\nu_j)^4}\; $$ and are fixed by requiring that the
properties of the ground state ($T=0$ and
$n_B=n_0\approx0.15\;{\rm fm^{-3}})$ of the nuclear matter are
reproduced: zero pressure, binding energy per nucleon of -16 MeV
and incompressibility of 210 MeV.
Solving the cubic equation (\ref{eq24}), one gets the
interaction potential as
\begin{eqnarray}
U_{j}^{(h)}\;= \nu_j \left[ \frac{1}{c_1} \cdot
(\rho^{(h)})^\alpha - F(\rho^{(h)} ) \right]
\label{eqZ1}
\end{eqnarray}
where the function $F$
depends on the density of { quarks bounded inside hadrons} as
follows
\begin{eqnarray}
\label{Ff} F(t) = \frac{12^{1/3}}{6} \eta - 2 \beta \eta^{-1} \
\mbox{with} \ \eta = \left( \frac{t}{a} + \sqrt{\beta^3 +
\frac{t^2}{a^2}} \right)^{\frac{1}{3}}.
\end{eqnarray}
Here $a, \beta$ are proportional to the coefficients of the
equation (\ref{eq24}): $a =c_3/9$ and $\beta
=c_2/12^{1/3}$.
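As a quick cross-check, $F$ is nothing but the real root of the cubic $c_2 F + c_3 F^3 = \rho^{(h)}$ implied by Eq. (\ref{eq24}) with $\widetilde\varphi_2=-F$; this can be verified numerically, e.g. in Python (arbitrary test coefficients, illustrative only):
\begin{verbatim}
import numpy as np

def F_root(t, c2, c3):
    # real root of c3 F^3 + c2 F - t = 0 (unique for c2, c3 > 0)
    roots = np.roots([c3, 0.0, c2, -t])
    return roots[np.isreal(roots)].real[0]

c2, c3, t = 2.0, 0.5, 1.3
F = F_root(t, c2, c3)
assert abs(c2 * F + c3 * F**3 - t) < 1e-10
\end{verbatim}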
In this representation we obtain for the hadronic
pressure
\begin{eqnarray}
p^{(H)} (T,\mu_j-U_j^{(h)} ) &=& \sum_{j\in h} \frac{d_j}{6\pi^2} \int_0^{\infty}
\frac{k^2}{ \sqrt{k^2 + m_j^2} }\, f_j(k, T, \mu_j-U_j^{(h)}) \ k^2 dk \nonumber \\
&& +\, C(\rho^{(h)})
\label{EoS:eqp}
\end{eqnarray}
and for the energy density
\begin{eqnarray}
\varepsilon^{(H)}(T,\mu_j-U_j^{(h)} ) &=& \sum_{j\in h} \frac{ d_j}{2\pi^2}\int_0^{\infty} \left( \sqrt{k^2 + m_j^2} + U^{(h)}_{j} \right) \nonumber \\
&& \times\, f_j(k, T, \mu_j-U_j^{(h)})\, k^2 dk - C(\rho^{(h)})~,
\label{EoS:eqeps}
\end{eqnarray}
where the function $C$ is obtained from
\begin{eqnarray}
C(\rho^{(h)}) &=& \frac{1}{c_1} \frac{\alpha}{\alpha +1 } \
(\rho^{(h)})^{\alpha+1} - \rho^{(h)} F(\rho^{(h)}) \nonumber \\
&& + \int_0^{\rho^{(h)}} F(t) \ dt. \label{EoS:B_analitic}
\end{eqnarray}
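Numerically, each species contributes a one-dimensional momentum integral to Eq. (\ref{EoS:eqp}); the following Python sketch evaluates it by quadrature (illustrative only, natural units with all quantities in MeV; the self-consistent $U_j$ and $C(\rho^{(h)})$ must be supplied by the solution of the mean--field equations above):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def pressure_species(T, mu_eff, m, d, stat):
    # One-species kinetic term of Eq. (EoS:eqp); stat = +1 (fermions),
    # -1 (bosons); mu_eff = mu_j - U_j; all quantities in MeV.
    def integrand(k):
        E = np.sqrt(k * k + m * m)
        boltz = np.exp(-(E - mu_eff) / T)   # overflow-safe form of f_j
        return k**4 / E * boltz / (1.0 + stat * boltz)
    val, _ = quad(integrand, 0.0, np.inf)
    return d / (6.0 * np.pi**2) * val

p_N = pressure_species(150.0, 0.0, 939.0, 4, +1)  # nucleon-like example
\end{verbatim}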
\begin{figure}[thb]
\centerline{
\includegraphics[width=60mm,clip]{danmp.eps}}
\caption{ Pressure as a function of baryon density for $T=0$.
Grey and black solid lines are calculated for the modified Zimanyi
model and the ideal gas EoS, respectively. The shaded region
corresponds to the Danielewicz {\it et al.} constraint~\cite{Dan02}.
}
\label{dan}
\end{figure}
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{pi_rwmp.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{e_rwmp.eps}}
\caption{Temperature dependence of the reduced pressure and energy
density for an interacting pion gas ($\pi +\rho$ system). The
solid line is our result; the dash-dotted and dashed lines are the
interacting pion gas of~\cite{RW96} and the ideal pion gas, respectively. }
\label{pion_gas}
\end{figure*}
As shown in Fig. \ref{dan}, the above hadronic EoS satisfies the
constraint resulting from the nucleon flow analysis of heavy ion
collisions in the energy range $\lsim 10$ AGeV. The upper boundary
of the shaded area is consistent with the constraint coming from
the analysis of the neutron star properties~\cite{KB06}. In the
high temperature regime there is also a reasonable agreement of
our model with the thermodynamics of the interacting pion gas
from ~\cite{RW96} (see Fig. \ref{pion_gas}).
\subsection{The two--phase bag model}
In the MIT bag--like model, the deconfinement phase transition
is determined by matching the EoS of an ideal relativistic gas of
hadrons and resonances to that of an ideal gas of quarks and
gluons. In the following we consider the two--phase (2P) model
that accounts for interactions separately in the hadronic and
quark--gluon plasma phase. The hadronic phase is described within
the phenomenological mean--field model introduced in the previous
Section. Following Eq. (\ref{eqt1}), the total baryon density and
the strangeness density in the hadronic phase can be expressed as
\begin{eqnarray}
\label{eqt1b}
n_B^H &=& \sum_{j \in h} b_j \ n_j(T,\mu_j-U^{(h)}_j)~, \\
n_S^H &=& \sum_{j \in h} s_j \ n_j(T,\mu_j-U^{(h)}_j)~,
\label{eqt1c}
\end{eqnarray}
where the sum is taken over all hadrons and resonances. Similarly,
the pressure and energy density of the species $j$ are given by
Eqs. (\ref{EoS:eqp}) and (\ref{EoS:eqeps}).
In the quasi-particle approximation, the QGP phase is commonly
described as a gas of partons (non--interacting point-like quarks,
antiquarks and gluons) confined in a "bag". The non--perturbative
effects associated with confinement are presented by the constant
vacuum energy $B$. The recent LGT results show that such an
approach is not adequate as the EoS differs from the asymptotic
ideal gas values even at temperatures
as high as $100
\ T_c$~\cite{ABPS02}. The perturbative QCD results can be,
however, improved through the so-called Hard Thermal Loop (HTL)
expansion. According to the HTL perturbative expansion, the QCD
thermodynamics at high temperature is controlled by
quasi-particles with a temperature-dependent mass $m_q(T)$. For
$\mu_B=0$ one gets~\cite{ABPS02,HTL}:
\begin{eqnarray}
m_q^2(T) - m_{q0}^2 = \frac{N_g}{16 N_c} T^2 g^2~. \label{htl1}
\end{eqnarray}
To model the HTL results within the mean--field approach one
introduces the quark and gluon potentials to reproduce the
behavior of the HTL masses (\ref{htl1}) in the high temperature
limit. This in general results in an additional equation for the
unknown gluon density. To simplify the problem we modify the
potential so that it coincides only with the HTL expression for
quarks. In the high temperature limit and having in mind that
$\rho \sim T^{3}$, the simplest phenomenological choice of the
potential is
\begin{equation}
U^{(pl)} = {\cal B} \ (\rho^{(pl)})^{1/3}, \label{mod}
\end{equation}
where ${\cal B}$ is obtained by comparing the asymptotic expansion
of (\ref{mod}) with the HTL result
\begin{eqnarray}
{\cal B} = g \frac{\sqrt{\frac{N_g}{16 N_c}}}{ \left(
\frac{\zeta(3)}{2 \pi^2} \left( 2 d_g + 3 N_f d_q \right) \right)^
{1/3}~} \label{B}
\end{eqnarray}
with $d_q$ and $d_g$ being the degeneracy factors for quarks and
gluons, respectively. For $N_c = 3$ and $N_g = 8$ one gets
\begin{eqnarray}
{\cal B}(N_f=3) = 0.2351\ g ~, \\
{\cal B}(N_f=2) = 0.2542\ g~,
\label{Bnum}
\end{eqnarray}
where the strong interaction coupling constant $g$ is
treated as a free parameter.
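The numerical prefactors in Eqs. (\ref{Bnum}) follow directly from Eq. (\ref{B}); with the assumption $d_q=6$ (spin $\times$ color per quark flavor) and $d_g=16$, a short Python check reproduces them:
\begin{verbatim}
import numpy as np
from scipy.special import zeta

def calB_over_g(N_f, N_c=3, N_g=8, d_q=6, d_g=16):
    denom = (zeta(3) / (2.0 * np.pi**2)
             * (2.0 * d_g + 3.0 * N_f * d_q)) ** (1.0 / 3.0)
    return np.sqrt(N_g / (16.0 * N_c)) / denom

print(calB_over_g(3), calB_over_g(2))   # ~0.2351, ~0.2542
\end{verbatim}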
The thermodynamic self-consistency conditions require that the
mean--field contributions to the pressure and energy density in
equations like Eqs. (\ref{EoS:eqp}) and (\ref{EoS:eqeps}) are,
respectively,
\begin{eqnarray}
U_i^{(pl)} &=& {\cal B} \ \nu_i \ (\rho^{(pl)})^{1/3},\label{U2P}\\
C(\rho^{(pl)})&=& \frac{\cal B}{4} \ (\rho^{(pl)})^{4/3} +B~,
\label{C2P}
\end{eqnarray}
where the plasma particle density $\rho^{(pl)}=\sum_{j \in
g,q,\bar q} \rho_j$ and the bag constant $B$ is included in the
correcting function $C$.
With such mean--field potentials the pressure and energy density
in the plasma phase carried by $u,d$ and $ s$ quarks and
antiquarks is obtained as
\begin{eqnarray}
p^{Q}(T,\mu_j-U_j^{(pl)}) &=& \sum_{j \in g,q,\bar q}
p_j(T,\mu_j-U_j^{(pl)}) - C(\rho^{(pl)})~,
\label{eqt8} \\
\varepsilon^{Q}(T,\mu_j-U_j^{(pl)}) &=&
\sum_{j \in g,q,\bar q}
\varepsilon_j(T,\mu_j-U_j^{(pl)}) + C(\rho^{(pl)})~. \label{eqt10}
\end{eqnarray}
To quantify these observables we use the quark masses $m_u=m_d=65$
MeV and $m_s=135$ MeV, the gluon mass $m_g=700$ MeV and the bag
constant $B^{1/4}={\rm 207 \ MeV}$. Such parameters yield a
transition temperature $T_c \approx 170 \ \mbox{MeV}$, in agreement
with the recent lattice result obtained for vanishing net
baryon number~\cite{K02}.
For massless gluons the equation of state has a simple form
\begin{equation}
p_g(T) = \frac{d_g \pi^2}{90} T^4~, \ \ \ \varepsilon_g(T) = 3 p_g(T) =
\frac{d_g \pi^2}{30} T^4
\label{eqt9}
\end{equation}
with $d_g=16$.
The baryon number and strangeness density in the quark--gluon
plasma are obtained following Eqs. (\ref{eqt1b}) and
(\ref{eqt1c}) from
\begin{eqnarray}
\label{eqt11b}
n_B^Q &=& \sum_{j \in g,q,\bar q} b_j \ n_j(T,\mu_j-U_j^{(pl)} )~, \\
n_S^Q &=& \sum_{j \in g,q,\bar q} s_j \ n_j(T,\mu_j-U_j^{(pl)}
)~. \label{eqt11c}
\end{eqnarray}
The equilibrium between the plasma and the hadronic phase is
determined by the Gibbs conditions for the thermal ($T^Q=T^H$),
mechanical ($p^Q=p^H$) and chemical ($\mu_B^Q =\mu_B^H, \ \mu_S^Q
=\mu_S^H$) equilibrium.
At a given temperature $T$ and baryon chemical
potential $\mu_B$ the strange chemical potential $\mu_S$ is
obtained by requiring that the net strangeness of the total
system vanishes. Consequently, the phase equilibrium condition and
strangeness conservation imply that:
\begin{eqnarray}
\label{eqt12a}
&&p^H(T,\mu_j-U_j^{(h)}) = p^Q(T,\mu_j-U_j^{(pl)}), \\
\label{eqt12b} &&n_B = (1-
\lambda )n_B^H(T,\mu_j-U_j^{(h)} ) +\\ \nonumber && \quad \quad + \lambda n_B^Q(T,\mu_j-U_j^{(pl)} ), \\
&&0= (1-\lambda )
n_S^H(T,\mu_j-U_j^{(h)}) + \\ \nonumber && \quad \quad + \lambda n_S^Q(T,\mu_j-U_j^{(pl)} ) , \label{eqt12c}
\end{eqnarray}
where $\lambda = V_Q / V$ is the fraction of the volume occupied
by the plasma phase. The phase boundaries of the coexistence
region are found by putting $\lambda = 0$ for the hadron phase
boundary and $\lambda = 1$ for the plasma boundary. By
construction the 2P EoS results in the first-order phase
transition with discontinuous behavior of energy and baryon
densities.
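Schematically, the construction of Eqs. (\ref{eqt12a})--(\ref{eqt12c}) amounts to two nested root-finding problems, as in the following Python sketch (toy EoS handles standing in for the full model; illustrative only):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu_S_neutral(T, mu_B, lam, nS_H, nS_Q):
    # solve (1-lam) n_S^H + lam n_S^Q = 0 for mu_S (zero net strangeness)
    f = lambda mu_S: ((1 - lam) * nS_H(T, mu_B, mu_S)
                      + lam * nS_Q(T, mu_B, mu_S))
    return brentq(f, -500.0, 500.0)

def hadron_boundary_muB(T, p_H, p_Q, mu_S_of):
    # locate mu_B where p^H = p^Q at lam -> 0 (hadronic boundary)
    g = lambda mu_B: (p_H(T, mu_B, mu_S_of(T, mu_B))
                      - p_Q(T, mu_B, mu_S_of(T, mu_B)))
    return brentq(g, 1.0, 2000.0)
\end{verbatim}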
According to the Gibbs phase rule~\cite{LL}, the number of
thermodynamic degrees of freedom that may be varied without
destroying the equilibrium of a mixture of $r$ phases with $n_c$
conserved charges is ${ \cal N}=n_c+2-r$. For the considered
hadron--quark deconfinement transition $r=2$. If the baryon number
is the only conserved quantity then $n_c=1$ and ${\cal N}=1$.
Thus, the phase boundary is one--dimensional, i.e. a line. The
Maxwell construction for the first-order phase transition
corresponds to $r=2$ and $n_c=1$. When both the baryon number
and strangeness are conserved, that is when $n_c=2$, one has
${\cal N}=2$ and therefore the phase boundary is a surface. In
such a system, a standard Maxwell construction is no longer
possible~\cite{Glend92,DCG06}\footnote[1]{In~\cite{Glend92,DCG06} the baryonic and electric charge
conservation was considered in application to a nuclear liquid-gas
phase transition. As to strangeness conservation the emphasis was
made mainly on the strangeness distillation effect~\cite{GKS93}.
Phase boundaries for this case were studied in detail
in~\cite{LH93}. More complete list of appropriate references can
be found in~\cite{TNFNR03}. }.
When two phases coexist, the system is in general {\it not
homogeneous} because the phases occupy separate domains in
space. We do not, however, explicitly account for such a domain
structure or
a possible surface energy contribution to the equation of state.
The only consequence of the phase separation in the considered 2P
model is that the interactions between quasi-particles in the
plasma and hadronic phases are neglected. This is different from
the statistical mixed phase model that will be discussed in the
next subsection.
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{pd2Pmu.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{pd2P.eps}}
\caption{ The phase boundary calculated in the 2P model with the
physical values of parameters as explained in the next Section. }
\label{pr}
\end{figure*}
The resulting phase boundaries between the hadronic phase and the
quark--gluon plasma in the 2P model are shown in Fig.~\ref{pr}.
At $T=0$ the coexistence region appears at $n_B/n_0 \simeq 8$. This
density is a factor of two larger than that obtained in the conventional
MIT-bag like model~\cite{TNFNR03} (see also Ref.~\cite{IKKSTV}).
This is because in our calculations the quarks and gluons are
treated as massive quasi-particles. As will be shown in the
next Section, a finite quasi-particle mass is needed in the
quark--gluon plasma to obtain an EoS that is consistent with LGT
results.
\subsection{The mixed-phase model}
In the 2P model the interactions between quark, gluons and hadrons
are entirely neglected in the coexistence region. In the following
we introduce the MP model where such interactions are possible.
The underlying assumption of the MP model \cite{NST98,TNS98} is
that unbound quarks and gluons {\it may coexist} with hadrons
forming a {\it homogeneous} quark/gluon--hadron phase. Since the mean
distance between hadrons and quarks or gluons in this mixed phase
may be of the same order as that between hadrons, the interaction
between all these constituents
(unbound quarks, gluons and hadrons) plays an important
role as it defines the order of the phase transition.
Under a quite general requirement for the confinement
of color charges, the mean--field potential of quarks and gluons
in the plasma phase is approximated as
\begin{eqnarray}
U_q(\rho^{(pl)})=U_g(\rho^{(pl)}) &=& {{\cal
A}\over(\rho^{(pl)})^{\gamma}}+{\cal B} \ (\rho^{(pl)})^{1/3}~;
\quad \gamma >0~,
\label{eq6}
\end{eqnarray}
where $\rho^{(pl)}=n_q+n_{\bar q} +n_g$. The second term in
Eq. ({\ref{eq6}}) is introduced to account for the growth of the
quasi-particle mass with the density as that obtained in the HTL
approximation (see Eq. (\ref{U2P})). The first term in
Eq. (\ref{eq6}) reflects two important limits of the QCD
interactions. For $\rho^{(pl)} \to 0$, this potential term
approaches infinity, {\em i.e.} an infinite energy is
necessary to create an isolated quark or gluon, which corresponds to
the confinement of color objects. The other extreme limit of
infinite density is consistent with the asymptotic freedom.
The generalization of the mean--field potential from Eq.
(\ref{eq6}) to the case of the {\it mixed} quark--hadron phase is
obtained by replacing $\rho^{(pl)}$ in Eq. (\ref{eq6}) by the total
density of quarks and gluons $\rho^{(mp)}$ with
\begin{equation}
\rho^{(mp)}=\rho_q + \rho_{\bar q} +\rho_g +\eta
\sum\limits_{j}\;\nu_j n_{j} \equiv \rho^{(pl)}+\eta \
\rho^{(h)}~. \label{rhomp}
\end{equation}
The presence of the total number density $\rho^{(mp)}$
in Eq. (\ref{eq6}) implies interactions between all components of
the mixed phase. For $\eta=0$ there is no interaction between
hadrons and unbound quarks and gluons. This case corresponds to
such a strong binding of hadron constituents that the presence of
free color charges in their surrounding does not result in their
color polarization, i.e. hadrons remain color neutral and do not
see quarks and gluons outside the hadron. Thermodynamically, the
potential with $\eta=0$ implies the first order phase
transition. For $\eta=1$ there is a very strong color
polarization of hadrons. Consequently, there is no difference
between bound and unbound quarks and gluons. This approximation
was used in \cite{NST98,TNS98}. Here we consider $\eta$ as a free
parameter that is chosen in a way to reproduce the LGT results
for the QCD equation of state.
The hadronic potential in the Hamiltonian
(\ref{eqH}) was described by a non-linear mean--field model.
However, the presence of unbound quarks and gluons will modify
this hadronic interaction due to the polarization of color
charges. Thus, in general
\begin{eqnarray}
U_{j}^{(mp)}=U_{j}^{(h)}+U_{j}^{(h-pl)}\;. \label{eq43}
\end{eqnarray}
The constraints imposed by the thermodynamic consistency
conditions (\ref{eq3}) can be used to find the potential for
the interaction of unbound quarks/gluons with hadrons as
~\cite{NST98,TNS98}
\begin{eqnarray}
U_{j}^{(h-pl)}\;&=&\;\nu_j \eta \left( \frac{\cal
A}{(\rho^{(mp)})^{\gamma}} -\frac{\cal A}{(\eta
\rho^{(h)})^{\gamma}} + \right. \nonumber \\ &&+\left.{\cal B} \ [(\rho^{(mp)})^{1/3}-(\eta
\rho^{(h)})^{1/3}]\right)~. \label{eq17}
\end{eqnarray}
Consequently, the pressure and the energy density in the MP model
are obtained from
\begin{eqnarray}
&&p^{MP}(T,\mu_j-U_j^{(mp)}) = \sum_{j \in g,q,\bar q} \nonumber
p_j(T,\mu_j-U_j^{(mp)}) + \\ && +\sum_{j \in h} p_j(T,\mu_j-U_j^{(mp)})
-C(\rho^{(mp)})~,
\label{eqt8m} \\
&& \varepsilon^{MP}(T,\mu_j-U_j^{(mp)}) = \nonumber
\sum_{j \in g,q,\bar q} \varepsilon_j(T,\mu_j-U_j^{(mp)})+\\ && +\sum_{j
\in h} \varepsilon_j(T,\mu_j-U_j^{(mp)})+C(\rho^{(mp)})~,
\label{eqt10m}
\end{eqnarray}
where
\begin{eqnarray}
C(\rho^{(mp)})&=& \frac{1}{c_1}\frac{\alpha}{\alpha +1 } \ (\rho^{(h)})^{\alpha +1 } -\nonumber \\&-&\rho^{(h)} \ F(\rho^{(h)})
+ \int_0^{\rho^{(h)}} F(t)dt - \nonumber \\ &-&
\frac{\gamma {\cal A} }{1-\gamma} \left[ (\rho^{(mp)})^{1-\gamma}
- (\rho^{(h)})^{1-\gamma}\right]+\nonumber \\&+&\frac{\cal B}{4} \ \left[
(\rho^{(mp)})^{4/3}-(\eta \rho^{(h)})^{4/3}\right]~. \label{eqp}
\end{eqnarray}
The MP model described above exhibits a crossover deconfinement
phase transition. The transition temperature $T_c$ corresponds to
a maximum in the $T$-dependence of the heat capacity at the given
value of $\mu_B$ (see the next Section). The resulting phase
boundary is shown in Fig. \ref{boundmp}. At $T\lsim $ 50 MeV the
maximum of the heat capacity is not well defined. The calculation
in Fig. \ref{boundmp} was performed with the physical values of
the parameters as introduced in the next Section.
\begin{figure}[thb]
\centerline{
\includegraphics[width=60mm,clip]{phase.eps}}
\caption{ The phase boundary calculated in the MP model (solid
line). The dotted and dot-dashed lines correspond to states
where the fraction of unbound quarks is 0.5 and 0.6,
respectively. }
\label{boundmp}
\end{figure}
In the MP model hadrons survive at $T>T_c$. If the fraction
of unbound quarks is defined as
$\rho^{(pl)}/(\rho^{(pl)}+\rho^{(h)})$, then one can see from
Fig. \ref{boundmp} that at $\mu_B=0$ and temperatures as high
as $T-T_c\sim 100$ MeV, $40\%$ of the quarks are still
bound inside hadrons.
\section{The comparison with the lattice data}
The Lattice Gauge Theory is the only approach that allows one to
extract the physical EoS of the QCD medium. To further constrain the
phenomenological models for the EoS introduced in the last
Section, we will compare their predictions with the available LGT
results. We focus mainly on the recent LGT findings obtained in
(2+1)--flavor QCD at the finite temperature and chemical potential
~\cite{Fodor02,Fodor04}.
In order to use the mixed phase and 2P models for the further comparison
with lattice results one needs, however, to take into account
that lattice calculations are generally performed with quark
masses heavier than those realized in nature. Consequently,
the hadron mass spectrum generated on the lattice is modified.
In Refs.~\cite{Fodor02,Fodor04} the ratio of the pion mass $m_\pi$
to the mass of the $\rho$ meson is around 0.5-0.75, which is
roughly 3 times larger than its physical value. Thus, to compare
the model predictions with LGT results the hadron mass spectrum
used in the model calculations should be properly scaled. For this
we use a phenomenological parametrization of the quark mass
dependence of the hadron masses $m_j(x)$ that was shown in
Refs.~\cite{KRT-1,KRT-2} to be consistent with the MIT bag model
results as well as with LGT findings. For the non-strange hadrons
this parametrization reads \cite{KRT-1,KRT-2}:
\begin{eqnarray}
\label{non-str} \frac{m_j(x)}{\sqrt{\sigma}}\simeq \nu_{lj} a_1 x
+\frac{m_j / \sqrt{\sigma}}{1+a_2x+a_3x^2+a_4x^3+a_5x^4}~.
\end{eqnarray}
Here $x\equiv m_\pi / \sqrt{\sigma}$, $\nu_{lj}$ is the number of
light quarks inside the non-strange hadron (i.e. $\nu_{lj}=2$ for
mesons and $\nu_{lj}=3$ for baryons), and $\sigma=(0.42 \ {\rm GeV})^2$.
\begin{table}
\caption{\label{tab:table2} Parameters of the
interpolation formulae (\ref{non-str})}
\centerline{
\begin{tabular} {c c c c c }
\hline
 $a_1$ & $a_2$ & $a_3$ & $a_4$ & $a_5$ \\
\hline
0.51 & $\frac{a_1\nu_{lj}\sqrt{\sigma}}{m_j}$ & 0.115 &
 -0.0223 & 0.0028 \\
\hline
\end{tabular}}
\end{table}
For strange hadrons that carry strangeness $s_j=1$ and $s_j=2$
we have, respectively
\begin{eqnarray}
\label{str1}
\frac{m_j(x)}{ \sqrt{\sigma}}&=&0.55 \nu_{lj} x+\frac{1.7\cdot 0.42 \
\frac{m_j} {\sqrt{\sigma}}}{(1+0.068 x)}, \\
\frac{m_j(x)}{\sqrt{\sigma}}&=&0.5788 x +\frac{0.42 \frac{m_j}{\sqrt{\sigma}}}{(0.4758+0.0142
x)}.
\label{str2}
\end{eqnarray}
Simultaneously with the change of the hadron mass spectrum with
the pion mass one needs to account for the shift of the transition
temperature $T_c$ with $m_\pi$. We use the parametrization that
is extracted from LGT calculations ~\cite{KRT-1},
\begin{eqnarray}
\label{Tc}
\left(\frac{T_c}{\sqrt{\sigma}}\right)_{m_{\pi}/\sqrt{\sigma}}\simeq
0.4 +0.04(1) \ \left(\frac{m_{\pi}}{\sqrt{\sigma}}\right)~.
\end{eqnarray}
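As a worked example of Eqs. (\ref{non-str}) and (\ref{Tc}) (illustrative Python, with $\sqrt{\sigma}=420$ MeV): for the lattice pion mass $m_\pi\simeq 508$ MeV used below, one obtains a nucleon mass of about 1.26 GeV and $T_c\approx 188$ MeV.
\begin{verbatim}
import numpy as np

SQRT_SIGMA = 420.0                          # MeV, sigma = (0.42 GeV)^2
A1, A3, A4, A5 = 0.51, 0.115, -0.0223, 0.0028

def scaled_mass(m_phys, nu_l, m_pi=508.0):
    x = m_pi / SQRT_SIGMA
    a2 = A1 * nu_l * SQRT_SIGMA / m_phys    # a_2 from the parameter table
    poly = 1.0 + a2 * x + A3 * x**2 + A4 * x**3 + A5 * x**4
    return SQRT_SIGMA * (nu_l * A1 * x + (m_phys / SQRT_SIGMA) / poly)

def T_c(m_pi=508.0):
    return SQRT_SIGMA * (0.4 + 0.04 * m_pi / SQRT_SIGMA)

print(scaled_mass(939.0, 3))   # nucleon: ~1256 MeV
print(T_c())                   # ~188 MeV
\end{verbatim}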
To compare our phenomenological model EoS with that obtained on
the lattice in Refs.~\cite{Fodor02,Fodor04} we use the modified
hadron mass spectrum from Eqs. (\ref{non-str})--(\ref{str2})
corresponding to the pion mass $m_\pi\simeq 508$ MeV, as fixed in
these LGT calculations. In the deconfined phase the current quark
masses and gluon mass are in general also free parameters. In
the present calculations we fixed $m_u=m_d=65$ MeV, $m_s=2.08 \
m_u$ and $m_g\simeq 700$ MeV, as follows from the successful
description of the quark sector of the above LGT data in terms of
the quasi-particle model~\cite{Szabo03}.
We look for a phase transition
at the appropriate temperature $T_c$ defined by Eq. (\ref{Tc}) by
varying mainly the bag constant $B$ in the 2P model or strength
parameter $\cal A$ in the mixed phase model. The further fine
tuning is carried out by means of remaining parameters (one
parameter in the 2P model and three ones for the mixed phase
model) to get the best description of LGT findings on temperature
dependence of different thermodynamical quantities.
In the 2P
model, where the two phases do not interact with each other, the
critical temperature $T_c$ is governed mainly by the value of the
bag constant $B$ and the parameter $\alpha$ that characterizes
the hardness of the EoS. Choosing $B^{1/4}=\rm 223 \ MeV$ and
$\alpha=2.1$ one gets $T_c \approx 176$ MeV and
$\varepsilon/T^4~|_{T_c}=7.84$ to be consistent with the LGT
results. We have to stress, however, that
for some values of the parameters, e.g. for too heavy masses, the set of
Eqs. (\ref{eqt12a})-- (\ref{eqt12c}) may have no solution.
In the MP model the critical temperature is defined by the
position of the maximum of the heat capacity
$$ c_V= \partial\varepsilon /\partial T|_{V=const}~. $$
The value of $T_c$ depends mainly on parameters that quantify the
quark/gluon interactions. With ${\cal A}^{1/(3\gamma+1)}= 270$
MeV and $\gamma= 0.3$, the resulting critical
temperature is seen in Fig. \ref{cV} to be 188 MeV. As was noted
in Ref. \cite{TNFNR03}, $\gamma=1/3$ corresponds to a string-like
quark interaction.
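In practice $T_c$ is read off from a tabulated $\varepsilon(T)$ at fixed $\mu_B$, e.g. via a finite difference (illustrative Python sketch):
\begin{verbatim}
import numpy as np

def critical_temperature(T_grid, eps_grid):
    # c_V = d(epsilon)/dT at fixed mu_B; T_c is at the maximum of c_V
    c_V = np.gradient(eps_grid, T_grid)
    return T_grid[np.argmax(c_V)]
\end{verbatim}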
\begin{figure}[thb]
\centerline{
\includegraphics[width=60mm,clip]{cV_h.eps}}
\caption{ Temperature dependence of the reduced heat capacity at
$\mu_B=0$. }
\label{cV}
\end{figure}
In Figs. \ref{p0} and \ref{e0} we show the comparison of the MP
and the 2P model predictions with LGT data obtained for the
thermodynamic pressure $p/T^4$ and the energy density $\varepsilon
/T^4$ at the finite $T$ but for $\mu_B=0$.
The lattice calculations in Refs.~\cite{Fodor02,Fodor04} were done
on lattices with temporal extent $N_t=4$. To account for
the finite size effects, the LGT results have to be extrapolated
to the continuum limit corresponding to $N_t \to \infty$. In
general, such a procedure requires detailed LGT calculations
on lattices with different $N_t$. In
Refs.~\cite{Fodor02,Fodor04}, to account approximately for the
finite size effects, the $N_t=4$ data for the basic thermodynamic
quantities were corrected by multiplying them by the constant
factors $c_0=0.518$ and $c_\mu=0.446$ for $\mu_B=0$ and
$\mu_B\neq 0$, respectively. These factors were determined from
the ratios of the Stefan-Boltzmann ideal-gas limit for the
thermodynamic pressure to its corresponding values calculated on
the lattice with $N_t=4$.
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{p0_2p.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{p0.eps}}
\caption{ The reduced pressure at $\mu_B=0$ in 2P (the left panel)
and MP (the right panel) models. Circles are the lattice data for
the (2+1)--flavor QCD system~\cite{Fodor02,Fodor04} multiplied by
$c_0$, squares are the Bielefeld group data for the same case
\cite{KLP01} (see also results cited in~\cite{KRT-1}).}
\label{p0}
\end{figure*}
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{e0_2p.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{e0.eps}}
\caption{ The reduced energy density at $\mu_B=0$ in 2P (left
panel) and MP (right panel) models. Notation is the same as in
Fig. \ref{p0}}
\label{e0}
\end{figure*}
As seen in Fig. \ref{p0}, the smooth $T$--dependence of the
pressure in the deconfined phase is quite well reproduced within
both the MP and 2P models. However, in the hadronic phase, that
is for $T/T_c<1$, the models overestimate the LGT results from
Refs.~\cite{Fodor02,Fodor04}. Also shown in Fig. \ref{p0} are
LGT results for $(2+1)$--flavor QCD at $\mu_B=0$ from the
Bielefeld group~\cite{KLP01}. Improved gauge and staggered fermion
actions were used there on the lattices with temporal extent of
$N_t=4$ and $N_t=6$. These data were also extrapolated to the
chiral limit \cite{KLP01}. As seen in Fig. \ref{p0}, the
Bielefeld data exhibit a smaller limiting pressure as compared
to~\cite{Fodor02,Fodor04} and essentially higher pressure in the
hadronic sector, though the pion mass is $m_\pi=770$ MeV in the
latter calculations. Our models are seen in Fig. \ref{p0} to
coincide with the Bielefeld results in the confined phase. The
strongly suppressed pressure at $T\leq T_c$ found in
Refs.~\cite{Fodor02,Fodor04} is non-physical and could be partly
related to the oversimplified procedure of extrapolating LGT results
to the continuum limit by applying the same constant scaling
factor at all temperatures.
The energy density shown in Fig. \ref{e0} behaves differently in the
MP and 2P
models. As expected in the 2P model, which exhibits a
first-order phase transition, $\varepsilon /T^4$ suffers a
jump at the critical temperature. This jump corresponds to an
energy density change of $\Delta \varepsilon \sim 0.9 \ {\rm
GeV/fm^3}$. The LGT results on the temperature dependence of
$\varepsilon/T^4$ are seen in Fig. \ref{e0} to be
noticeably better reproduced within the MP than with the 2P model.
This is because the MP model exhibits a crossover type transition
as also found in the above LGT calculations. The difference
between LGT results obtained with an improved and standard action
is also seen on the level of energy density.
Having established the model parameters at $\mu_B=0$ we can
further study the model comparisons with LGT results at the finite
baryon density. The temperature dependence of pressure and energy
density for finite values of $\mu_B$ is shown in Figs. \ref{dpmu}
and \ref{demu} in terms of the ``net baryonic pressure'' $\Delta p
/T^4= (p(T,\mu_B)-p(T,\mu_B=0))/T^4$ and the ``interaction measure''
$\Delta /T^4= (\varepsilon -3p)/T^4$.
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{dp.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{dpmp.eps}}
\caption{ Temperature dependence of the reduced pressure
$(p(\mu_B)-p(0))/T^4$ at the baryon chemical potential $\mu_B=$
210, 330, 410 and 530 MeV (from the bottom) within 2P ( the left
panel) and MP (the right panel) models. Points are lattice data
for the (2+1)--flavor system~\cite{Fodor02,Fodor04} multiplied by
$c_\mu$. }
\label{dpmu}
\end{figure*}
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{diff.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{diffmp.eps}}
\caption{ Temperature dependence of the interaction measure
$(\varepsilon-3p)/T^4$ at the baryon chemical potential $\mu_B=$
210, 330, 410 and 530 MeV (from the bottom) within 2P (the left
panel) and MP (the right panel) models. Points are lattice data
for the (2+1)--flavor QCD system~\cite{Fodor02,Fodor04}
multiplied by $c_\mu$. }
\label{demu}
\end{figure*}
The $T$ dependence of $\Delta p /T^4$ for different values of
$\mu_B$ is quite well reproduced by both the MP and 2P models. The
fall of $\Delta p /T^4$ for $T\geq T_c$ is entirely determined
by the value of the coupling $g$ that describes the strength of
the interactions of quasi-particles and their effective mass. The
observed fall does not require any artificial reduction of the
number of quark--gluon degrees of freedom. It turns out that in
both models a similar value of $g=0.5$ is necessary to reproduce
LGT results.
The interaction measure $\Delta /T^4$ exhibits a rather sharp
maximum slightly above $T_c$, with a shape of the $T$-dependence that
changes only weakly with $\mu_B$. In general, both models
reproduce the above properties of the interaction measure.
However, quantitatively the $\Delta /T^4$ is overestimated in the
2P model and underestimated in the MP model near the maximum.
The interaction measure characterizes the strength of
interactions in a system. It is equal to zero for the EoS of the
ultrarelativistic ideal gas of massless particles where
$\varepsilon=3p$. In the considered 2P model and at $T>T_c$ we
are dealing with a gas of massive quarks and gluons interacting
via the HTL-like potential (\ref{U2P}). In contrast, in the MP
model at $T>T_c$, there are interacting unbound quarks, gluons
and bound quarks within hadrons. The fraction of bound quarks
amounts to about 85 $\%$ at $T\sim T_c$ (which allows one to
describe this region in terms of the resonance gas
model~\cite{KRT-1,KRT-2}) and almost vanishes at $3T_c$ ($\sim
5\%$). In this context the quark--gluon plasma may be
considered as a strongly interacting correlated system
~\cite{SZ03}. In the confined phase there is an admixture of
quarks at $T<T_c$ until about $0.9~T_c$. This property of the
model is very essential for a possible explanation of the "horn"
structure in the $K^+/\pi^+$ excitation function~\cite{mg:04} due
to manifestation of the strangeness distillation effect near the
critical end point~\cite{TP05}.
The model comparison with LGT results for the baryon density is
shown in Fig. \ref{nb}. It is clear from this figure that in the
hadronic sector the baryon density $n_B/T^3$ obtained on the
lattice is smaller than that predicted by the models. However,
above $T_c$ there is quite good agreement of the models with the LGT
results. This is particularly the case for the MP model, which shows
a better description of the LGT data near the phase transition.
In our models the absolute values of $\Delta p /T^4$, $\Delta
/T^4$ and $n_B /T^3$ are strongly affected by the parameter
$\eta$ appearing in Eq. (\ref{rhomp}). In the actual calculations
$\eta=0.025$, which is essentially smaller than the $\eta=1$
found in our earlier parametrization based only on the
LGT data obtained for $\mu_B=0$~\cite{TNFNR03}. If $\eta=1$
were substituted in our actual calculations, all the above
quantities would increase by a factor of two.
The properties and the behavior of the LGT thermodynamics at
finite $T$ and $\mu_B$ have been recently discussed in the context
of the Polyakov-loop-extended Nambu and Jona-Lasinio (PNJL)
model~\cite{Weise05}. This PNJL model represents a minimal
synthesis of spontaneous chiral symmetry breaking and
confinement. The model correctly describes the pion properties but
obviously is not applicable near the nuclear ground state. It
also contains neither the resonance contributions to the QCD
thermodynamics nor the hadronic correlations below and above $T_c$
that are essential near the phase transition. Nevertheless, the
PNJL model reproduces remarkably well the LGT data \cite{Allton}
obtained in 2-flavor QCD on the pressure difference and the quark
number density at various temperatures and chemical potentials
\cite{Weise05}. However, in the PNJL model the interaction measure
$\Delta / T^4$ was found to be underestimated by $\sim 25\%$,
similarly to what is seen in Fig.~\ref{demu} from the comparison of
our MP model with the (2+1)--flavor QCD results obtained in LGT.
\begin{figure*}[thb]
\centerline{
\includegraphics[width=60mm,clip]{nb.eps} \hspace*{3mm}
\includegraphics[width=60mm,clip]{nbmp.eps}}
\caption{ Temperature dependence of the baryon density at the
baryon chemical potential $\mu_B=$~ 210, 330, 410 and 530 MeV
(from the bottom) in 2P (the left panel) and MP (the right panel)
models. Points are lattice data for the (2+1)--flavor QCD
system~\cite{Fodor02,Fodor04} multiplied by $c_\mu$. }
\label{nb}
\end{figure*}
The phenomenological models considered here describe the EoS of
the QCD matter in a broad range of thermal parameters that
includes the hadronic and quark--gluon plasma phases. These models
are also applicable in cold nuclear matter as they satisfy
essential phenomenological constraints expected near nuclear
saturation. In heavy ion collisions, dense QCD matter created in
the initial stage is expected to thermalize and expand without
further generation of the entropy $S$. In a realistic expansion
scenario some particles may be created and/or absorbed, implying
changes in the total entropy of the system. In general, it is
more convenient to consider the EoS at fixed entropy per baryon
($S/N_B$). This thermodynamic quantity should be strictly
conserved in an equilibrium case and is also less affected by any
possible particle losses or creation during the expansion stage.
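Explicitly (a standard relation, recorded here for clarity), in
terms of the entropy and baryon number densities $s=\partial
p/\partial T$ and $n_B=\partial p/\partial \mu_B$, the isentropic
trajectories $T(\mu_B)$ are determined implicitly by the condition
\[
\frac{s(T,\mu_B)}{n_B(T,\mu_B)}=\frac{S}{N_B}={\rm const} .
\]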
The predictions of our models for the evolution paths in the
$(T,\mu_B)$-plane, as obtained from the condition of fixed $S/N_B$,
are shown in Fig.~\ref{lat_tr}. There are still no such
isentropic lattice data for the (2+1)--flavor system. Recently, the
isentropic EoS was obtained on the lattice for 2-flavor QCD at
finite $\mu_B$ \cite{EKLS05}, however still for a non-physical
mass spectrum that corresponds to the pion mass $m_\pi\simeq 770$
MeV. These data are plotted in Fig.~\ref{lat_tr} together with our
model results obtained with the EoS parameters fixed for
$m_\pi\simeq 508$ MeV and for the (2+1)--flavor system.
\begin{figure}[thb]
\centerline{
\includegraphics[width=70mm,clip]{phase_isoS.eps} \hspace*{3mm}}
\caption{ Phase trajectories in the $T-\mu_B$ representation.
Circles, squares and triangles are the lattice 2-flavor QCD
results~\cite{EKLS05} for $S/N_B=$ 30, 45 and 300, respectively.
The (2+1)--flavor model predictions are plotted by solid (2P EoS),
dashed (MP EoS) and dot-dashed
(Hadronic EoS) lines for every value of the reduced entropy. The dotted
line parameterizes the freeze-out curve~\cite{CR98}.
}
\label{lat_tr}
\end{figure}
In general, in the high-temperature deconfined phase, one should
not expect a large difference between 2 and (2+1)--flavor
thermodynamics. The value of the quark mass in the quark--gluon
plasma is also not relevant thermodynamically if $m_q/T<1$. In the
hadronic phase the number of quark flavors is likewise not
essential and leads only to a moderate change of the global
thermodynamics. However, here the value of the quark/pion mass
is of particular importance as it influences the hadronic mass
spectrum. Due to the non-physical and still large pion mass used
in the actual LGT calculations it is not straightforward to
associate the values of the reduced entropy with the specific
bombarding energy. In particular, as noted in Ref.~\cite{EKLS05},
the correspondence of $S/N_B=$~ 30, 45 and 300 to the AGS, SPS and
RHIC energies, respectively, is only a rough approximation. The
QGSM transport model results~\cite{ST06} for central Pb+Pb
collisions at the top SPS energy show that the isentropic regime
is reached after about 1 fm/c with $S/N_B\approx$ 25. Also
calculations performed in terms of 3--fluid relativistic
hydrodynamic model show that the isentropic expansion of central
Pb+Pb collisions at the bombarding energy 158 and 30 AGeV results
in $S/N_B\approx$ 30 and 15~\cite{IRT05}, respectively. Thus, the
above dynamical models imply noticeably lower values of $S/N_B$
than those obtained within the actual LGT calculations \cite{EKLS05}.
The main origin of the above differences is the still too
large quark mass used on the lattice.
As seen in Fig.~\ref{lat_tr}, the MP and 2P models reproduce the
general trend of the lattice trajectories. The lattice evolution
paths lie just between the 2P and MP model predictions. With
increasing $S/N_B$ these differences become noticeably smaller. The
hadronic EoS predicts higher initial temperatures; however, all
three phenomenological models give similar results for the
freeze-out temperature. It is interesting to see in Fig.
\ref{lat_tr} that the irregularity appearing near the turning point
of the lattice trajectory correlates with the flattening of the
$T$-dependence in the Gibbs mixed phase arising in the 2P model.
The phenomenological model results, discussed so far, were
obtained assuming the hadronic mass spectrum that corresponds to
the pion mass $m_\pi=508$ MeV. The extrapolation of the EoS to
the physical limit is quite straightforward. It amounts to
replacing the $m_j(m_\pi)$ masses by their physical values. The
quark and gluon masses are kept the same as those extracted
from the LGT data. This approximation is justified since the
change of $m_q$ in the interval $5<m_q<70$ MeV does not
significantly influence the thermodynamics in the plasma
phase~\cite{Szabo03}. Clearly,
taking the physical limit in the EoS also requires accounting for
the shift in $T_c$. In the 2P model the critical temperature is
recalculated according to Eq. (\ref{Tc}) and fitted in the model
by the bag constant $B$ and the coupling $g$ to satisfy also the
condition $\varepsilon_c /T_c^4 \simeq 6 \pm 2$~ for the critical
energy density, as found in LGT \cite{KRT-1}. Within the 2P model
the physical limit is achieved by choosing $B^{1/4}=\rm 207 \ MeV$
and $g=0.7$, which results in $T_c= 173.3$ MeV and
$\varepsilon_c/T_c^4=7.83$.
The extrapolation of the MP model EoS to the physical limit is
less transparent due to a rather strong
nonlinear relation between the hadronic and plasma phases. In
this model the physical limit is approximately accounted for by
replacing the LGT mass spectrum by its physical form. All
further parameters that are required in the MP model to quantify
the EoS are kept the same as those found in the comparison of the
model predictions with the LGT results. With the above chosen
model parameters the crossover deconfinement transition appears
at $T_c=183$ MeV. Note that the phase boundaries in the
preceding Section were calculated for these physical parameters of
the EoS.
\section{Summary}
We have
formulated two different phenomenological models for the
equation of state of QCD matter within the quasi-particle
approximation: the two-phase (2P) model with a first-order
deconfinement transition and the mixed phase (MP) model, in which
the transition from the hadronic phase to the quark--gluon plasma
is of the crossover type.
In our approach both the hadronic and the quark--gluon plasma
phases are considered to be non-ideal systems. The interactions
between constituents are included within the mean-field
approximation. The modified mean-field Zimanyi model is applied
to describe the interacting resonance gas component. In this
approach, the saturation properties of symmetric nuclear matter
in the ground state are reproduced correctly and the Danielewicz
constraints resulting from heavy-ion collision data at
intermediate energies are well fulfilled.
The quark--gluon phase in the 2P model is constructed as a
massive quasi-particle system supplemented by a
density-dependent potential term which simulates the HTL
interactions. The first-order phase transition from the hadronic
phase to the deconfined quark--gluon plasma is constructed within
the 2P model by means of the Gibbs phase equilibrium conditions.
In the MP model the coexistence and correlations between
quarks/gluons and hadrons are assumed near deconfinement. In
addition to the HTL-like interaction term, a string-like
interaction is introduced between both unbound quarks/gluons and
quarks that are confined within hadrons. In this model we are
dealing with strongly interacting QCD matter which exhibits a
crossover-type deconfinement phase transition.
The models are constructed in such a way as to be thermodynamically
consistent and to reproduce the properties of the EoS as
calculated on the lattice. The limited set of model parameters is
defined from the constraints imposed by the recent lattice data on
the temperature and chemical potential dependence of the basic
thermodynamical observables. The comparison of the model
predictions with LGT data was performed within the same set of
approximations as used on the lattice. Of particular importance is
a correct treatment of the hadronic mass spectrum which in the LGT
calculations is non-physical due to the still too large value of
the quark mass.
Keeping in mind the principal difference between first-order
and crossover-type phase transitions, both the 2P and MP models were
shown to provide a quite satisfactory description of the LGT
thermodynamics for (2+1)--flavor QCD. Both models reproduce the
$T$ and $\mu_B$ dependence of the main thermodynamic quantities in
a broad range of thermal parameters. The observed deviations of
the model predictions from the lattice results near $T_c$ and
in the hadronic sector for the (2+1)--flavor case may be, to a large
extent, attributed to uncertainties in the LGT data due to the
finite-size effect. The predicted isentropic trajectories in the
phase diagram were shown to be consistent with those recently
calculated on the lattice within 2-flavor QCD.
The phenomenological equations of state constructed here satisfy
all physically relevant constraints expected in cold and
excited nuclear matter. These EoS can be applied in a broad
parameter range that covers the region of the deconfinement
transition in QCD. Thus, both the MP and 2P EoS could be used as
input in dynamical models that describe the space-time dynamics
and evolution of the medium created in heavy-ion collisions. Within
hydrodynamic models our EoS can be important for studying the role
and influence of deconfinement and of the order of the phase
transition on physical observables. Such studies are in
progress.
\vspace*{5mm} {\bf Acknowledgements} \vspace*{5mm}
We are grateful to D. B.~Blaschke, Yu. B.~Ivanov, V. N.~Russkikh,
S.A.~Sorin and D. N.~Voskresensky for interesting discussions and
comments. This work was supported in part by the Deutsche
Forschungsgemeinschaft (DFG project 436 RUS 113/558/0-3), the
Russian Foundation for Basic Research (RFBR grants 06-02-04001 and
05-02-17695) by the special program of the Ministry of Education
and Science of the Russian Federation (grant RNP.2.1.1.5409) and
by the Polish Committee for Scientific Research (KBN-2P03B 03018).
\section{Introduction}
In \cite{O}, Owen proves the higher-order Rellich inequality
\begin{equation}
\int_\Omega u(x)(-\Delta)^m u(x) dx \geq A(m) \int_\Omega \frac{u^2(x)}{d^{2m}(x)}dx, \qquad u\in C^\infty_c(\Omega),
\label{rellich}
\end{equation}
for the polyharmonic operator $(-\Delta)^m$, where $\Omega \subseteq \mathbb{R}^n$ is a convex open set, $d:\Omega \rightarrow \mathbb{R}_+$ is the Euclidean distance from the boundary of $\Omega$ and $A(m)$ is the best constant given explicitly by
$$
A(m) = \frac{(2m-1)^2(2m-3)^2 \cdots 1^2}{4^m}.
$$
This inequality has been subsequently extended and improved in various directions. In \cite{A} and for the case $2m=4$ a simple sufficient condition was given for non-convex domains so that the Rellich inequality
is valid with the sharp constant 9/16; in \cite{BT,B} sharp improvements to (\ref{rellich}) were obtained. We refer to the recent book \cite{BEL} for additional information.
While the literature for Rellich inequalities for the polyharmonic operator $(-\Delta)^m$ is substantial, there are hardly any results on Rellich inequalities involving the distance to the boundary for more general higher-order elliptic operators. This is partly due to the lack of invariance under rotations and to the (related) fact that neither the Euclidean metric nor indeed any other Riemannian metric is suitable for the study of such operators.
Anisotropic Hardy inequalities with distance to the boundary have recently been obtained in \cite{DBG,MST}. Concerning anisotropic (non-Riemannian) Rellich inequalities, there is a growing literature on inequalities with distance to a point, see e.g. \cite{KY,KR,RSS,RS} and references therein, but we are not aware of any results involving the distance to the boundary. To our knowledge, the best Rellich constant for $\int |\Delta u|^pdx$ is not known even in the case of a half-space.
The objective of this note is to investigate inequalities of the form
\begin{equation}
\int_\Omega u(x)Hu(x)dx \geq \kappa \int_\Omega \frac{u^2(x)}{d_H^{2m}(x)} dx
\label{kappa}
\end{equation}
where $H$ is a homogeneous elliptic differential operator of order $2m$ with real constant coefficients and
$d_H$ is a suitable Finsler distance to the boundary of $\Omega$ associated to $H$.
In particular, we will prove the following result for half-spaces which is shown to be optimal in an important class of cases.
\begin{theorem}
Let $H$ be a homogeneous elliptic operator of order $2m$ with real constant coefficients and let $\mathbf{H} \subseteq \mathbb{R}^n$ be a half-space. Then the inequality $$\int_{\mathbf{H}} u(x)Hu(x)dx \geq A(m) \int_{\mathbf{H}} \frac{u^2(x)}{d_H^{2m}(x)}dx$$ holds for all $u\in C^\infty_c(\mathbf{H})$.
\label{thm0}
\end{theorem}
Note that since the operator $H$ is not rotationally invariant, proving the inequality for the
commonly used half-space $\mathbb{R}^n_+ = \{ x\in \mathbb{R}^n : x_n > 0 \}$ does not imply the validity of the inequality for half-spaces in other directions.
In the second part of this note we investigate the case where $\Omega \subseteq \mathbb{R}^n$ is an arbitrary convex domain, and we provide a uniform (independent of the domain) lower bound for the best constant which, although most likely non-optimal, is nonetheless better than what can be achieved by simply comparing with $(-\Delta)^m$.
\section{Preliminaries}
Let $H$ be a homogeneous elliptic differential operator of order $2m$
with real constant coefficients, acting on real-valued functions on $\mathbb{R}^n$. So $H$ has the form
$$
H = (-1)^m \sum_{|\alpha| = 2m} a_\alpha D^\alpha,
$$ where $a_\alpha$ is a constant for each multi-index $\alpha$ and $D^{\alpha}=\partial_{x_1}^{\alpha_1}\ldots \partial_{x_n}^{\alpha_n}$.
The symbol of the operator $H$ is the polynomial $H:\mathbb{R}^n \rightarrow \mathbb{R}$ given by
$$
H(\xi) = \sum_{|\alpha| = 2m} a_\alpha \xi^\alpha.
$$
Setting $F_H(\xi) = H^{1/{2m}}(\xi)$ (which is positively homogeneous of order one in $\xi$), we define the associated Finsler norm $F_H^*:\mathbb{R}^n \rightarrow \mathbb{R}$ by
\begin{equation}
F_H^*(\omega) = \sup_{\xi \neq 0} \frac{\omega \cdot \xi}{F_H(\xi)}= \max_{|\xi|=1} \frac{\omega \cdot \xi}{F_H(\xi)}.
\label{eq:finsler}
\end{equation}
The Finsler distance of two points $x,x'\in \mathbb{R}^n$ is then defined as $F_H^*(x-x')$.
It is well known, see e.g. \cite{EP}, that this is the distance suitable to use when studying properties of $H$, especially so when one seeks sharp constants.
From now on we will suppress the index $H$ when there is no ambiguity and simply write $F$ for $F_H$ and $F^*$ for $F_H^*$. It is clear from the definition that
for any $\omega,\xi \in \mathbb{R}^n$ we have the inequality
\begin{equation}
\label{eq1}
H(\xi) \, F^*(\omega)^{2m}\geq ( \omega \cdot \xi)^{2m} .
\end{equation}
Now let $\Omega \subseteq \mathbb{R}^n$ be open with non-empty boundary and let
$d(x)$ denote the Euclidean distance of $x\in\Omega$ to $\partial \Omega$.
The Euclidean distance of a point $x\in \Omega$ to $\partial\Omega$ along the direction
$\omega \in S^{n-1}$ is given by
$$
d_\omega(x) = \inf \{ |s|: x+s\omega \notin \Omega \},
$$
and we have
$$
d(x)= \min_{\omega \in S^{n-1}} d_\omega(x).
$$
In the context of Finsler geometry, distances are scaled by the Finsler norm (\ref{eq:finsler})
along each direction, so the Finsler distance of $x$ from the boundary of $\Omega$ along the direction $\omega$ is given by
$$
d_{H,\omega}(x) = F^*(\omega) d_\omega(x).
$$
Denoting by
\begin{equation}
d_H(x) =\min\{ F^*(x-y) : y\in\partial\Omega\} \; , \quad x\in\Omega ,
\label{eq:y}
\end{equation}
the Finsler distance to the boundary we then have
$$
d_H(x) = \min_{\omega \in S^{n-1}} d_{H,\omega}(x) = \min_{\omega \in S^{n-1}} \big(F^*(\omega)d_\omega(x) \big).
$$
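As a simple consistency check, note that for the polyharmonic operator
$H=(-\Delta)^m$ one has $H(\xi)=|\xi|^{2m}$, hence $F(\xi)=|\xi|$ and, by
the Cauchy--Schwarz inequality, $F^*(\omega)=|\omega|$; in this case
$d_H=d$ and (\ref{kappa}) reduces to the classical inequality
(\ref{rellich}).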
\section{Finsler-Rellich inequality for half-spaces}
Let $\nu \in S^{n-1}$ and $\mathbf{H}^n_\nu = \{ x\in \mathbb{R}^n : \nu \cdot x > 0 \}$.
The Euclidean distance of $x\in \mathbf{H}^n_\nu$ from $\partial \mathbf{H}^n_\nu$
in the direction of $\omega \in S^{n-1}$ is given by
$$
d_\omega(x) = \frac{\nu \cdot x}{|\nu \cdot \omega|},
$$
and so the Finsler distance to $\partial \mathbf{H}^n_\nu$ is given by
$$
d_H(x) = \min_{\omega \in S^{n-1}} \big(F^*(\omega)d_\omega(x)\big) = \min_{\omega \in S^{n-1}} \bigg( \frac{F^*(\omega)}{|\nu \cdot \omega|} \bigg) \nu \cdot x \, ;
$$
so the minimum is achieved independently of $x$. Letting $\theta \in S^{n-1}$ be a unit vector that achieves the minimum we arrive at
\begin{equation}
d_H(x) = F^*(\theta)d_\theta(x) = \frac{ \nu\cdot x}{F^{**}(\nu)} =\frac{ d(x)}{F^{**}(\nu)} .
\label{eq:6}
\end{equation}
We are now ready to prove Theorem \ref{thm0}. We restate it as follows.
\begin{theorem}
\label{thm1}
Let $H$ be a homogeneous elliptic operator of order $2m$ with real constant coefficients. Then the inequality
\begin{equation}
\int_{\mathbf{H}^n_\nu} u(x)Hu(x)dx \geq A(m) \int_{\mathbf{H}^n_\nu} \frac{u^2(x)}{d_H^{2m}(x)}dx
\label{102}
\end{equation}
holds for any $\nu \in S^{n-1}$ and all $u\in C^{\infty}_c(\mathbf{H}^n_{\nu})$. Moreover, the constant $A(m)$ is optimal in the case where $F_H$ is a convex function.
\end{theorem}
\begin{proof}
Let $\hat{u}(\xi)$, $\xi\in \mathbb{R}^n$, denote the Fourier transform of $u$. Recalling (\ref{eq1}),
applying Plancherel's theorem and using the one-dimensional Rellich inequality we obtain
\begin{eqnarray*}
\int_{\mathbf{H}^n_\nu} u(x)Hu(x)dx &=& \int_{\mathbb{R}^n} H(\xi) |\hat{u}(\xi)|^2 d\xi \\
&\geq& \frac{1}{F^*(\theta)^{2m}} \int_{\mathbb{R}^n} (\theta \cdot \xi)^{2m} |\hat{u}(\xi)|^2 d\xi \\
&=& \frac{1}{F^*(\theta)^{2m}} \int_{\mathbf{H}^n_\nu} (\partial_\theta^{m} u(x))^2 dx \\
& \geq& \frac{A(m)}{F^*(\theta)^{2m}} \int_{\mathbf{H}^n_\nu} \frac{u^2(x)}{d_\theta^{2m}(x)} dx \\
& =& A(m) \int_{\mathbf{H}^n_\nu} \frac{u^2(x)}{d_H^{2m}(x)}dx.
\end{eqnarray*}
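Here, in the second inequality, we applied along each line
$\{x+s\theta : s\in\mathbb{R}\}$ the sharp one-dimensional Rellich
inequality
\[
\int_0^{\infty} (g^{(m)}(t))^2\, dt \geq A(m) \int_0^{\infty}
\frac{g^2(t)}{t^{2m}}\, dt\, , \qquad g\in C^{\infty}_c(0,\infty) ,
\]
whose best constant is precisely $A(m)$; cf. (\ref{eq:A}) below.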
To prove the optimality, we proceed as follows. For $\epsilon>0$ we consider the functions $g_{\epsilon}(t) =t^{ \frac{2m-1}{2} +\epsilon}$, $t>0$. These form a minimizing family for the one-dimensional
Rellich inequality of order $m$, that is,
\begin{equation}
\label{eq:A}
\frac{ \displaystyle\int_0^1 (g_{\epsilon}^{(m)}(t))^2 dt}{ \displaystyle\int_0^1 \frac{g_{\epsilon}^2}{t^{2m}} dt} \longrightarrow A(m) \, , \qquad \mbox{ as }\epsilon\to 0+ \, .
\end{equation}
Let $v_{\epsilon}(x)=g_{\epsilon}( x\cdot \nu)$. For any multi-index $\alpha$ with $|\alpha|=2m$ we then have
$D^{\alpha}v_{\epsilon}(x)=\nu^{\alpha} g_{\epsilon}^{(2m)}( x\cdot \nu)$ and therefore
\begin{equation}
Hv_{\epsilon}(x)=(-1)^m H(\nu) g_{\epsilon}^{(2m)}( x\cdot \nu) .
\label{eq:5}
\end{equation}
We next localize $v_{\epsilon}$. We consider a function $\psi\in C^{\infty}_c(\mathbb{R})$ such that $0\leq \psi\leq 1$, $\psi(t)=1$, if $|t|\leq 1/2$, $\psi(t)=0$, if $|t|\geq 1$.
Let $\pi_\nu: \mathbf{H}^n_\nu \rightarrow \partial \mathbf{H}^n_\nu $ denote the orthogonal projection from the half-space to its boundary. We define
\[
\phi(x) =\psi(\nu\cdot x)\,\psi (|\pi_\nu(x)|) \; , \qquad u_{\epsilon}(x) =\phi(x)v_{\epsilon}( x).
\]
Then $u_{\epsilon}\in H^m_0(\mathbf{H}^n_\nu)$ and $\|u_{\epsilon}\|_{H^m_0(\mathbf{H}^n_\nu)} \to +\infty$ as $\epsilon\to 0+$. We shall estimate
$\int_{\mathbf{H}^n_\nu} u_{\epsilon} \, Hu_{\epsilon} dx$ and for this we note that when we use Leibniz rule to expand $Hu_{\epsilon} =H(\phi v_{\epsilon} )$ any term containing at least one derivative of
$\phi$ stays bounded as $\epsilon\to 0$. Setting $k=\int_{\mathbb{R}^{n-1}} \psi(|y|)^2dy$ and applying (\ref{eq:5}) we thus have
\begin{eqnarray}
\int_{\mathbf{H}^n_\nu} u_{\epsilon} \, Hu_{\epsilon} dx &=& \int_{\mathbf{H}^n_\nu} \phi^2 v_{\epsilon} \, Hv_{\epsilon}dx +O(1) \nonumber \\
&=& k(-1)^m H(\nu) \int_0^1 \psi^2 g_{\epsilon} \, g^{(2m)}_{\epsilon}dt +O(1) \nonumber \\
&=& kH(\nu) \int_0^1 (g^{(m)}_{\epsilon})^2dt +O(1) . \label{100}
\end{eqnarray}
On the other hand, recalling also (\ref{eq:6}) we similarly have
\begin{eqnarray}
\int_{\mathbf{H}^n_\nu} \frac{u_{\epsilon}^2(x)}{ d_H^{2m}(x)}dx &=& F^{**}(\nu)^{2m} \int_{\mathbf{H}^n_\nu } \frac{\phi^2 v_\epsilon^2}{d^{2m}} dx \nonumber \\
&=& k F^{**}(\nu)^{2m} \int_0^1 \frac{g_\epsilon^2}{t^{2m}} dt +O(1) . \label{101}
\end{eqnarray}
From (\ref{100}), (\ref{101}) and (\ref{eq:A}) we conclude that
\begin{eqnarray*}
\frac{\displaystyle\int_{\mathbf{H}^n_\nu} u_{\epsilon}(x)Hu_{\epsilon}(x)dx }{\displaystyle\int_{\mathbf{H}^n_\nu} \frac{u_{\epsilon}^2(x)}{ d_H^{2m}(x)}dx}
&=& \bigg(\frac{F(\nu)}{F^{**} (\nu) } \bigg)^{2m} \frac{\displaystyle\int_0^1 (g^{(m)}_{\epsilon}(t))^2dt +O(1) }{\displaystyle\int_0^1 \frac{g_\epsilon^2(t)}{t^{2m}} dt +O(1) } \\
&\to & \bigg(\frac{F(\nu)}{F^{**} (\nu) } \bigg)^{2m} A(m) , \qquad \mbox{ as }\epsilon\to 0+.
\end{eqnarray*}
Since $F$ is convex, $F=F^{**}$, and optimality follows.
\end{proof}
\noindent
{\bf Remark.} It is known \cite[Section 1.6]{S} that the set $\{ \xi\in\mathbb{R}^n : F^{**}(\xi) \leq 1\}$ is the convex hull of the set $\{ \xi\in\mathbb{R}^n : F(\xi) \leq 1\}$. This shows that
$ F^{**}(\xi)\leq F(\xi)$ for all $\xi\in\mathbb{R}^n$ and also that there exists
a direction $\nu\in S^{n-1}$ such that $ F^{**}(\nu)= F(\nu)$. It follows
in particular that if $F$ is not convex the constant $A(m)$ is still the best possible constant for which (\ref{102}) is valid for all
$\nu\in S^{n-1}$ and all $u\in C^\infty_c(\mathbf{H}^n_{\nu})$.
\section{Convex domains}
If the symbol $H(\xi)$ of the operator $H$ satisfies
\[
\lambda |\xi|^{2m} \leq H(\xi) \leq \Lambda |\xi|^{2m}, \qquad \xi\in\mathbb{R}^n,
\]
then applying the polyharmonic Rellich inequality (\ref{rellich}) we obtain that
for any convex domain $\Omega\subset\mathbb{R}^n$ there holds
\begin{equation}
\int_{\Omega} u(x)Hu(x)dx \geq A(m) \frac{\lambda}{\Lambda} \int_{\Omega} \frac{u^2(x)}{d_H^{2m}(x)}dx \, , \quad u\in C^{\infty}_c(\Omega).
\label{stn}
\end{equation}
In this section we adapt Davies' well known mean distance function method \cite{D}
to establish an alternative lower bound for the best Rellich constant of
(\ref{stn}).
While we have not obtained the actual constant $A(m)$, we nevertheless provide a constant which depends only on the symbol and which can be easily computed numerically in any particular case. This has been carried out at the end of the section for two one-parameter families of operators and it turns out that the constants obtained are better than those given by (\ref{stn}).
To state our result, we need some additional definitions related to the operator in question. Assuming that $H$ is an elliptic differential operator of order $2m$ as above and denoting
by $d\sigma(\omega)$ the normalized surface measure on $S^{n-1}$, we define the positive constants $\mu_H$ and $M_H$ as the best constants for the inequalities
\[
\mu_H \, F^{**}_H(\xi)^{2m} \leq \int_{S^{n-1}} \frac{ (\xi\cdot \omega)^{2m}}{ F^*(\omega)^{2m}}d\sigma(\omega) \leq M_H \, H(\xi) \, , \qquad \xi\in\mathbb{R}^n .
\]
With this settled, we have the following.
\begin{theorem}
\label{thm2}
Let $H$ be a homogeneous elliptic operator of order $2m$ with real constant coefficients.
Then for any open convex set $\Omega \subseteq \mathbb{R}^n$ the inequality
\begin{equation}
\int_{\Omega} u(x)Hu(x)dx \geq A(m) \frac{\mu_H}{M_H} \int_{\Omega} \frac{u^2(x)}{d_H^{2m}(x)}dx
\label{rel10}
\end{equation}
holds for all $u\in C^\infty_c(\Omega)$.
\end{theorem}
\begin{proof}
We have
\begin{eqnarray*}
\int_\Omega u(x)Hu(x)dx &=& \int_{\mathbb{R}^n} H(\xi) |\hat{u}(\xi)|^2 d\xi \\
&\geq & \frac{1}{M_H} \int_{S^{n-1}} \frac{1}{F^*(\omega)^{2m}} \int_{\mathbb{R}^n} (\omega \cdot \xi)^{2m} |\hat{u}(\xi)|^2 d\xi \, d\sigma(\omega) \\
&=& \frac{1}{M_H} \int_{S^{n-1}} \frac{1}{F^*(\omega)^{2m}} \int_\Omega (\partial^m_\omega u(x))^2 dx \, d\sigma(\omega).
\end{eqnarray*}
We now apply the one-dimensional Rellich inequality in the direction $\omega$ to get
\begin{equation}
\int_\Omega u(x)Hu(x)dx \geq
A(m)\frac{1}{M_H} \int_\Omega u^2(x) \int_{S^{n-1}} \frac{1}{(F^*(\omega)d_\omega(x))^{2m}} d\sigma(\omega) dx.
\label{eq:est}
\end{equation}
To estimate the last integral we consider a point $x\in\Omega$ and a point $y = y(x) \in \partial \Omega$ that realizes the infimum in (\ref{eq:y}).
Let $\Pi_x$ be a supporting hyperplane at $y(x)$ and let $N=N(x)$ be the outward normal unit vector to $\Pi_x$.
We denote by $z(\omega)=z(\omega,x)$ the intersection of $\Pi_x$
with the line $\{x+t\omega: t\in \mathbb{R}\}$. From the previous discussion, it follows that $|z(\omega)-x| \geq d_\omega(x)$ and therefore
$F^*(z(\omega)-x) \geq F^*(\omega)d_\omega(x)$ for all $x\in\Omega$ and $\omega\in S^{n-1}$.
Let $s\in\mathbb{R}$ be such that $z(\omega)=x+s\omega$. Since $z(\omega)$ and $y$
both belong to $\Pi_x$, $z(\omega)-y$ is perpendicular to $N$, that is
$$
(x+s\omega-y) \cdot N = 0.
$$
It follows that
$$
s=\frac{(y-x)\cdot N}{\omega \cdot N},
$$
and so
$$
z(\omega)=x+\frac{(y-x)\cdot N}{\omega \cdot N} \omega.
$$
Returning to (\ref{eq:est}), we now have
\begin{eqnarray*}
\int_{S^{n-1}} \frac{1}{(F^*(\omega)d_\omega(x))^{2m}} d\sigma(\omega) &\geq &\int_{S^{n-1}} \frac{1}{F^*(z(\omega)-x)^{2m}} d\sigma(\omega) \\
&=&\frac{1}{((y-x)\cdot N)^{2m}} \int_{S^{n-1}} \bigg( \frac{\omega \cdot N}{F^*(\omega)} \bigg)^{2m} d\sigma(\omega) \\
&\geq & \mu_H \bigg( \frac{F^{**}(N)}{(y-x)\cdot N} \bigg)^{2m} \\
&\geq& \frac{\mu_H}{F^*(y-x)^{2m}} = \frac{\mu_H}{d_H^{2m}(x)},
\end{eqnarray*}
and the proof is complete.
\end{proof}
We think of estimate (\ref{rel10}) as an explicit estimate in the sense that $\mu_H$ and $M_H$ can be computed numerically in any specific case.
In the next two examples we illustrate the estimate of Theorem \ref{thm2} and in particular
show that inequality (\ref{rel10}) is better than the one obtained from (\ref{stn}).
\medskip
\noindent
{\bf Example 1.} Let $\beta>-1$ (for ellipticity) and
\[
H_{\beta}(\xi) =\xi_1^4 +2\beta \xi_1^2\xi_2^2 + \xi_2^4 , \qquad \xi\in\mathbb{R}^2.
\]
We have
\[
\left\{
\begin{array}{ll}
\frac{\beta +1}{2} |\xi|^4 \leq H_{\beta}(\xi) \leq |\xi|^4 , & \mbox{ if } -1<\beta\leq 1, \\[0.2cm]
|\xi|^4 \leq H_{\beta}(\xi) \leq \frac{\beta +1}{2} |\xi|^4 , & \mbox{ if } \beta\geq 1,
\end{array}
\right.
\]
hence (\ref{stn}) gives
\[
\int_{\Omega} u(x)H_{\beta}u(x)dx \geq \frac{9}{16}c(\beta) \int_{\Omega} \frac{u^2(x)}{d_{H_{\beta}}^{2m}(x)}dx \; , \qquad u\in C^{\infty}_c(\Omega),
\]
where
\[
c(\beta)=\left\{
\begin{array}{ll}
\frac{\beta +1}{2} , & \mbox{ if } -1<\beta\leq 1, \\[0.1cm]
\frac{2}{\beta +1} , & \mbox{ if } \beta\geq 1 \, .
\end{array}
\right.
\]
In Figure 1 below we have plotted the function $s(\beta) =\mu_{H_{\beta}} / M_{H_{\beta}} $ (blue line) against $c(\beta)$ (red line) and it is seen that the estimate of
Theorem \ref{thm2} is better than the one obtained from (\ref{stn}).
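Since, by homogeneity, all the quantities involved may be evaluated on
the unit circle, $s(\beta)$ is straightforward to approximate by direct
discretization. The following short Python/NumPy sketch (our own
illustration; the figures themselves were produced with Matlab, cf. the
Acknowledgement, and the grid size and the value $\beta=0.5$ are
arbitrary choices) indicates one way to do this for Example 1.
\begin{verbatim}
import numpy as np

beta, m = 0.5, 2                 # H is homogeneous of degree 2m = 4
th = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
xi = np.stack([np.cos(th), np.sin(th)])      # unit vectors on S^1

def H(v):                        # symbol of Example 1
    return v[0]**4 + 2.0 * beta * v[0]**2 * v[1]**2 + v[1]**4

F = H(xi) ** (1.0 / (2 * m))     # F(xi) on the unit circle
dots = xi.T @ xi                 # matrix of inner products
Fstar = np.max(dots / F[None, :], axis=1)    # F*(omega)
Fss = np.max(dots / Fstar[None, :], axis=1)  # F**(nu)

# I(nu): normalized surface integral of (nu.omega)^{2m}/F*(omega)^{2m}
I = np.mean(dots ** (2 * m) / Fstar[None, :] ** (2 * m), axis=1)

mu_H = np.min(I / Fss ** (2 * m))    # best constants on the circle
M_H = np.max(I / H(xi))
print("s(beta) =", mu_H / M_H)
\end{verbatim}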
\medskip
\noindent
{\bf Example 2.} Let
\[
\hat{H}_{\beta}(\xi) =\xi_1^6 +\beta \xi_1^4\xi_2^2+ \beta \xi_1^2\xi_2^4 + \xi_2^6 , \qquad \xi\in\mathbb{R}^2.
\]
We have $\hat{H}_{\beta}(\xi) =(\xi_1^2+\xi_2^2) [ (\xi_1^2-\xi_2^2)^2 +(\beta+1)\xi_1^2\xi_2^2]$, so we assume $\beta>-1$ for ellipticity. We now have
\[
\left\{
\begin{array}{ll}
\frac{\beta +1}{4} |\xi|^6 \leq \hat{H}_{\beta}(\xi) \leq |\xi|^6 , & \mbox{ if } -1<\beta\leq 3, \\[0.2cm]
|\xi|^6 \leq \hat{H}_{\beta}(\xi) \leq \frac{\beta +1}{4}|\xi|^6 , & \mbox{ if } \beta\geq 3,
\end{array}
\right.
\]
hence (\ref{stn}) gives (here $2m=6$, so $A(3)=\frac{225}{64}$)
\[
\int_{\Omega} u(x)\hat{H}_{\beta}u(x)dx \geq \frac{225}{64}\hat{c}(\beta) \int_{\Omega} \frac{u^2(x)}{d_{\hat{H}_{\beta}}^{2m}(x)}dx \; , \qquad u\in C^{\infty}_c(\Omega),
\]
where
\[
\hat{c}(\beta)=\left\{
\begin{array}{ll}
\frac{\beta +1}{4} , & \mbox{ if } -1<\beta\leq 3, \\[0.1cm]
\frac{4}{\beta +1} , & \mbox{ if } \beta\geq 3 \, .
\end{array}
\right.
\]
In Figure 2 below we have plotted the function $\hat{s}(\beta) =\mu_{\hat{H}_{\beta}} / M_{\hat{H}_{\beta}} $ (blue line) against $\hat{c}(\beta)$ (red line).
\begin{figure*}[ht]
\centering
\captionsetup{justification=centering,margin=0.2cm}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{beta1.pdf}
\caption{\small Plots of $s(\beta)$ and $c(\beta)$}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1\linewidth]{beta2.pdf}
\caption{ \small Plots of $\hat{s}(\beta)$ and $\hat{c}(\beta)$}
\end{minipage}
\end{figure*}
\noindent
{\bf Remark.} Considering Example 1, it is not difficult to prove that for any $\beta>-1$ there holds
\[
\frac{1}{4} \max\{ 1 , \frac{2}{\beta +1} \} |\xi|^4 \leq F^*( \xi)^4 \leq
\max\{ 1 , \frac{2}{\beta +1} \} |\xi|^4 \; , \quad \xi \in\mathbb{R}^2 \, .
\]
It then follows easily that $s(\beta)\geq 3/32$ for all $\beta>-1$. Hence not only is $s(\beta)$ larger than $c(\beta)$ but we also have that $s(\beta)/c(\beta) \to +\infty$
as $\beta\to -1$ or $\beta\to +\infty$. Similarly, in Example 2 one has
\[
\frac{1}{8} \max\{ 1 , \frac{2}{\beta +1} \} |\xi|^6 \leq F^*( \xi)^6 \leq
\max\{ 1 , \frac{2}{\beta +1} \} |\xi|^6 \; , \quad \xi \in\mathbb{R}^2 \, .
\]
It follows that $\hat{s}(\beta)\geq 5/128$ and therefore $\hat{s}(\beta)/\hat{c}(\beta) \to +\infty$
as $\beta\to -1$ or $\beta\to +\infty$.
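For the reader's convenience we indicate the elementary argument behind
these estimates. By (\ref{eq1}), $\int_{S^{n-1}} (\xi\cdot\omega)^{2m}
F^*(\omega)^{-2m} d\sigma(\omega) \leq H(\xi)$, so that $M_H\leq 1$. On
the other hand, writing the displayed bounds as
$c_1 |\xi|^{2m} \leq F^*(\xi)^{2m} \leq c_2 |\xi|^{2m}$, one obtains
$F^{**}(\nu)^{2m} \leq |\nu|^{2m}/c_1$ as well as
\[
\int_{S^{1}} \frac{(\nu\cdot\omega)^{2m}}{F^*(\omega)^{2m}}\, d\sigma(\omega)
\geq \frac{|\nu|^{2m}}{c_2} \int_{S^{1}} \cos^{2m}\vartheta \, d\sigma ,
\]
so that $\mu_H \geq (c_1/c_2) \int_{S^1}\cos^{2m}\vartheta\, d\sigma$.
Since $c_1/c_2=1/4$ and $\int_{S^1}\cos^4\vartheta\, d\sigma=3/8$ in
Example 1 (respectively $c_1/c_2=1/8$ and
$\int_{S^1}\cos^6\vartheta\, d\sigma=5/16$ in Example 2), this yields
$s(\beta)=\mu_H/M_H\geq \mu_H\geq 3/32$ and $\hat{s}(\beta)\geq 5/128$.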
\medskip
\noindent
{\bf Acknowledgement.} We thank G. Kounadis for his help with the Matlab diagrams. The research of MP was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI PhD Fellowship grant (Fellowship Number 1250).
\bibliographystyle{amsplain}
\section{Methodology: usage of classification trees to compare 2 data sets}
The scientific return of the LAT instrument depends on how
accurately we monitor its performance, and how promptly we can fix problems.
Such data monitoring is a very complex task, since the LAT contains more
than 850000 channels in the trackers, 1536 CsI crystals
and 97 ACD plastic scintillator tiles and ribbons.
The standard way is to monitor the parameter values, correlations and time evolution
(by means of histograms and charts), and to check their consistency with a well-known reference.
This methodology is explained elsewhere \cite{StandardMonitoring}.
A different (and complementary) approach is to try to find differences between the reference
data set and the newly taken data set, with both data sets represented in an N-dimensional space
of N selected parameters. Classification trees can provide an efficient way of finding
potential differences between data sets in an automated fashion. Here we used the Random Forest method
\cite{RFBreiman}, and a custom interface described in \cite{RRando}.
In this approach, we use the classification error
to quantify the magnitude of the differences between the two data sets; and we use the
Z-score value to pinpoint the parameters where the differences lie.
Both the classification error and the Z-scores are estimated during the growing of the forest,
using the so-called Out-Of-Bag (OOB) events, which, for each tree, are the events
left out of the bootstrap sample used to grow that tree.
The classification error \texttt{OOB Err} is the percentage of OOB events that were incorrectly predicted by
the forest. In case of equal data sets (no separation possible): OOB Err $\sim$ 50\%.
If the two event classes can be separated (they are different in some way), then OOB Err $<$ 50\%.
The Z-score is a statistical measure, relying on the OOB Err, that estimates
the importance of a given variable for distinguishing between the two data sets.
The Z-score quantifies the statistical significance ($\sim$ number of sigmas)
of the differences between the two event classes in a given parameter.
Each of the N parameters used to grow the forest has its own Z-score value.
High Z-score (e.g. $>$ 5) implies large (statistically significant) differences between the
two event classes in that variable.
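As an illustration of this procedure, a minimal sketch in Python is
given below; here scikit-learn's Random Forest is used as a stand-in
for the custom interface of \cite{RRando}, and its Gini-based feature
importances replace the permutation Z-scores of \cite{RFBreiman}, so
the numbers it returns are only indicative.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_compare(ref, test, n_trees=1000, vars_per_node=4):
    # Two-sample test: label the reference data set 0 and the data
    # set under study 1, then try to tell the two classes apart.
    X = np.vstack([ref, test])
    y = np.r_[np.zeros(len(ref)), np.ones(len(test))]
    rf = RandomForestClassifier(n_estimators=n_trees,
                                max_features=vars_per_node,
                                oob_score=True).fit(X, y)
    oob_err = 1.0 - rf.oob_score_   # ~0.5: the data sets look alike
    return oob_err, rf.feature_importances_
\end{verbatim}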
\section{Illustration of working principle: quick detection of anomalous data sets}
\vspace{-0.3cm}
In order to test the working principle of this novel technique we chose several
data sets (event classes) taken during the pre-launch tests in the fall of 2006 at the Naval Research Lab.
These data (Cosmic Rays, mostly muons) were processed with the standard LAT event reconstruction software.
We defined the event \texttt{class A} as data
taken when the LAT was supposedly working correctly; this is our reference data set.
\texttt{Class B} will be the data that needs to be evaluated. The event \texttt{class B1} contains data
taken when LAT was supposedly working correctly, while \texttt{class B2} is
data taken when the LAT was NOT working correctly. In the B2-type data, the information from half
layer 0 of tracker tower 10 was not properly read;
thus there is missing information in some events. Therefore, in this test, we expect to have
compatibility between A and B1, while we expect differences between A and B2.
Two forests of trees were grown: one using A- and B1-type data (A-B1), and the other using
A- and B2-type data (A-B2). For this test, we used only high-level data
(quantities derived, by means of the reconstruction software, from basic detector outputs)
and we only considered non-empty events which triggered tower 10.
The random forests were grown using 10000 events, 1000 trees, 80 variables and 4 variables/node.
The time required to grow each of these forests was less than half an hour on a
dual 1.8 GHz Opteron CPU machine.
The classification error vs the number of trees is shown in the left-hand plot of Fig. \ref{Fig1}.
While there is no effective separation between event types A and B1 (Err $\sim$ 50\%),
the separation between event types A and B2 is clearly possible,
which implies differences between these event classes. Note also that
100 trees are enough for a good separation (in this example), which would allow
us to grow the forest 10 times faster.
The highest Z-score in the A-B2 Random Forest was for the parameter
that denotes the number of clusters (hit planes) in the main track, Tkr1Hits.
The Z-score for this parameter was 41, which implies large differences in this
variable. A charged particle passing through all the 19 layers (36 planes) of the LAT
tracker (all towers) will have Tkr1Hits $\sim$ 36.
The right-hand plot of Fig.~\ref{Fig1} shows the distribution of Tkr1Hits for the event classes
A, B1 and B2. Class B2 has a larger fraction of events with an odd number of hit planes.
This is due to the missing information from plane 0 for some of the events.
\begin{figure}
\includegraphics[height=.21\textheight]{fig1.eps}
\includegraphics[height=.21\textheight]{fig2.eps}
\caption{\texttt{Left-hand}; Classification error for A-B1 (blue) and A-B2 (red) event class comparison.
\texttt{Right-hand}; Distribution of Tkr1Hits for the event classes A (black), B1 (blue) and B2 (red, filled histogram).}
\label{Fig1}
\end{figure}
\vspace{-0.75cm}
\section{Conclusions}
\vspace{-0.25cm}
Random Forest can be a useful tool to monitor the performance of
the LAT during on-orbit operations. A test with pre-launch data suggests
that the method is fast and efficient. Application of this method to
low-level data would increase the potential for discovering hardware problems,
at the expense of more computing power.
Note that the application of this method to monitor LAT data during on-orbit
operations is not straightforward. The success depends on: \texttt{a)} the correct selection of the
reference data set; and \texttt{b)} the selection of the variables (high/low level) and filters
to be used. These selections will be tuned prior to launch;
yet this learning will probably continue during the first months of space operation.
\vspace{-0.75 cm}
\section{Introduction}
The model of Starobinsky is formulated as a string-inspired effective action, with~higher-order curvature terms arising from corrections due to quantized conformally covariant matter fields \citep{starobinsky1980,starobinsky1983}. The~relevant action for FLRW cosmology is given by the simplest version of the model, with~only quadratic scalar curvature \citep{kaneda}, $L_S=\sqrt{-g}(R+\frac{\alpha}{6}R^2)$. The~$R^2$ term is able to drive inflation in the large curvature regime $R\gg M^2=\alpha^{-1}$. Nowadays, it continues to be a viable inflationary model. The~predicted spectral index and tensor-to-scalar ratio, $n_s\approx 0.96$, $r\approx 0.004$, respectively, with~$M$ of the order of $10^{13}$ GeV, are in good agreement with current observational data \citep{mukhanov,defelice2010,ketovn}.
The model with quadratic scalar curvature is also the simplest of the $f(R)$ theories of modified gravity. These theories have drawn a lot of attention since they might account for, for example,~inflation as~well as the late-time accelerated cosmic expansion \citep{nojiri,defelice2010, sotiriou}. This is due to a higher derivative scalar degree of freedom, so-called scalaron, which is, therefore, of~a purely gravitational nature, at~least from a macroscopic point of view \citep{alexandre}. Although~the Starobinsky theory is higher derivative, it has the relevant feature that it clearly does not have ghosts and is stable \citep{woodard,chen}. This property can be seen from its relation through a Weyl rescaling, with~Einstein's theory with a minimally coupled scalar model \citep{whitt, cecotti}. The~potential of the scalar in this model is consistent with large-field~inflation.
Supersymmetric theories \citep{wessbagger} have been the subject of intense study for many years, creating wide expectations for present LHC energies. As~is well known, supersymmetry could not be confirmed there. Nevertheless, it continues to be considered, in~particular in view of recent results \citep{lhc}. On~the other hand, there is no reason why supersymmetry should break at LHC scales; the breaking scale can be much higher \citep{ketovn}. Inflation can be considered from the Planck energy scale \citep{planck}. In~fact, the~physically relevant inflation, taking place after horizon exit of the observable universe, occurs typically several orders of magnitude below $M_P$ \citep{lyth}, but~still at a sufficiently high scale to be considered in the context of supergravity, as~a possible effective theory, or~even as an ultraviolet completion of quantum gravity \citep{ellis1982,ellis2013}.
Regarding the model of Starobinsky as an $f(R)$ action, one might expect $f(\mathcal{R})$, with~$\mathcal{R}$ the four-dimensional chiral curvature superfield, to~provide an adequate supersymmetrization \citep{ketov2013-1,diamandis}. The~connection between $f(\mathcal{R})$ and $f(R)$ is, however, not straightforward, due to auxiliary fields satisfying rather involved algebraic equations of motion \citep{ketov2011}. More general actions depending not only on $\mathcal{R}$, but~also on its supersymmetric covariant derivatives, have been used to properly embed the model of Starobinsky into N=1, 2 4D supergravity~\citep{ketov2013-2,ketov2014,terada,ketovn}.
On the side of scalar--tensor theories, the~main obstacle for the embedding of inflation into supergravity is finding a suitable scalar potential (it has to be sufficiently flat, at~least in a certain field direction in multi-field models) that generates the proper amount of inflation in a way consistent with the observations as~well as with the predictions of the standard model of cosmology on the early universe \citep{mcallister,stewart}. The~generic scalar potential in N=1 4D supergravity is too steep for slow-roll inflation \citep{ketov2011}. Some methods have been put forward to, for instance, make the potential independent of some field such that it is flat along its direction \citep{kawasaki, kallosh2011pr}. The~behavior of fermionic fields is also relevant for precise predictions; in~this regard, a~new kind of inflationary model with a simplified fermionic sector, i.e.,~with no inflatino, is considered by~means of nilpotent superfields \citep{carrasco2015,terada2021}. Flat directions of the scalar potential can also be achieved with non-minimal coupling as in NMSSM models (Higgs inflation) \citep{kallosh2010, kallosh2011}. With~respect to the specific scalar potential of Starobinsky in its dual scalar-tensor form, its supersymmetrization is addressed in super-conformal theory \citep{kallosh2013} and no-scale supergravity \citep{ellis2013, tamvakis}.
As supersymmetry involves fermions, its study requires quantum theory, unless~only the scalar potential is considered, as~is usually done in supergravity and superstring cosmology \citep{stewart}. On~the other hand, the~homogeneity at the beginning of inflation has led to the consideration of homogeneous supergravity formulations \citep{obregon0}; see also \citep{moniz}. Due to the dimensional reduction to one dimension, these formulations lack Lorentz constraints, which were implemented by hand in \citep{qc-sugra2,obregon1}; otherwise the theory has a quite complicated constraint algebra \citep{damour}. In~view of that, in~\citep{tkach} a supersymmetric extension of the FLRW model was proposed, considering, as~usual, the~rescaled scale factor with length dimension. As~in 4D supergravity, the~basic superfield in the Lagrangian does not follow from geometric considerations, but~is ad hoc \citep{wessbagger,tkach}.
In this work, we construct supersymmetric generalizations for the FLRW model of Starobinsky in its modified gravity form, using the `new' superspace formulation \citep{wessbagger,ramirez} for homogeneous supergravity \citep{garcia}, i.e.,~depending only on time. As~FLRW involves gauge fixing, the~supersymmetric extension must keep it. This formulation is fully geometric, and~can be seen as a minimal dimensional reduction from 4D that keeps only the minimum elements for supersymmetry. In~fact, under a dimensional reduction to one dimension, each fermionic component of the supersymmetry charge gives rise to one supersymmetry. In~this view, we consider N=1 and N=2 one-dimensional supergravity (as a complex representation, N=2 can be taken as N=1). Thus, an~extension back to spatial dimensions is straightforward, for~instance, to include perturbations. Actions based on four-dimensional supergravity and more fundamental theories may contain a vast number of additional dynamical and auxiliary fields, which usually renders the sole identification of the scalar potential a nontrivial task~\cite{mcallister}. In~our case, the~small number of degrees of freedom involved allows us to write both bosonic and fermionic sectors of the Lagrangians and Hamiltonians in full detail, not just the leading terms of a certain Taylor~expansion.
In Section~\ref{s2}, we discuss the effective 1D FLRW model of Starobinsky. In~order to substantiate the following sections, we discuss the Ostrogradsky formulation, and~show the canonical transformations that relate its Hamiltonian to~the two Hamiltonians of the scalar--tensor formulations usually related to the Starobinsky model. One of these scalar--tensor formulations is the one in standard form, which is manifestly stable, and~has a large-field inflationary potential. The~other scalar--tensor form, of~the BF-type, is the one we extend by supersymmetry. In~Section~\ref{n1}, we present a formulation based on the simplest possible superspace compatible with time-dependent supersymmetry transformations \citep{ramirez2008}. It has local coordinates $(t, \Theta)$, where $t$ is the time coordinate and $\Theta$ is a real Grassmann number \citep{henneaux}. Superfields are very simple: they only contain one real boson scalar and one real fermion scalar, and~there are no auxiliary fields. Despite the simplicity of this superspace, we can write a supersymmetric Lagrangian whose bosonic part contains exactly $R+\frac{\alpha}{6}R^2$. In~this construction, the Grassmann parity of the curvature superfield is odd. In~Section~\ref{n2}, we make use of a complex superspace having local coordinates $(t, \Theta, \bar{\Theta})$. In~this case, superfields contain twice as many components: two real scalar bosons and one complex scalar fermion. Usually, one of the bosons is an auxiliary supersymmetric field that is used to ensure the off-shell closure of the supersymmetry algebra \citep{belluci,wessbagger}. As~we will show, in~higher-derivative models it becomes a dynamical field on its own. In~this case, the curvature superfield is of even parity and the action can, in~principle, be generalized to the superspace of 4D supergravity. We propose an action depending on the curvature superfield and its supersymmetric covariant derivatives in a similar fashion to the proper embeddings of Starobinsky into 4D supergravity \citep{ketov2013-2,terada,cecotti}. By~choosing a specific superpotential, we obtain a model whose bosonic part contains Starobinsky and a massive scalar field. We present numerical solutions to the pure bosonic equations of motion, which exhibit inflation driven by the quadratic curvature term, whereas the scalar field is pushed to the minimum energy state (see \citep{ketovn} for another example of a two-field model, where the effect on inflation of both fields is detailed). In~Section~\ref{cf}, we present the Hamiltonian formulation, including bosons and fermions, of~the equivalent scalar--`tensor' actions for the two models constructed. Unlike ordinary supersymmetric theories, these actions contain quadratic terms in the fermionic velocities; nonetheless, the~ordinary Hamiltonian formulation can be carried out as normal. Section~\ref{concl} is dedicated to the conclusions and final remarks. In~Appendix \ref{app}, we summarize the basic results about the superspace formalism following references \citep{ramirez,garcia, wessbagger}, and~in Appendix \ref{app2}, we provide some lengthy~equations.
\section{FLRW Model of~Starobinsky}\label{s2}
The FLRW geometry corresponds to a spacetime with spatial 3-surfaces of uniform curvature $^3R=6ka^{-2}$, where $k$ takes the values $-$1, 0 or 1. The~line element, in~comoving coordinates ($t,r,\theta,\phi$), reads $ds^2=-N^2(t)dt^2+a^2(t)\left((1-kr^2)^{-1}dr^2+r^2 d\Omega_2^2\right)$. The~scale factor $a(t)$ is the only degree of freedom. Actually, it is the Hubble factor $H=\frac{\dot{a}}{a}$, which is directly observable. The~field equations for a spatially homogeneous and isotropic $f(R)$ universe, which generalize the usual Friedmann and acceleration (Raychaudhuri) equations, can be directly obtained from the $f(R)$ action evaluated at the FLRW geometry, after~integration of the spatial coordinates. With~the FLRW metric, the~scalar curvature $R$~becomes the following:
\begin{equation}\label{curvature}
R=6 \left(-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2}+\frac{k}{a^2}\right).
\end{equation}
Substituting these expressions into $\frac{1}{2\kappa^2} \int d^4 x \sqrt{-g} R$, and~integrating the spatial coordinates over a suitable region, yields the FLRW action.
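For reference (a standard computation, with the comoving spatial volume normalized to unity and a total time derivative discarded), the purely Einstein--Hilbert part gives
\begin{equation*}
L_{EH}=\frac{3}{\kappa^2}\left(-\frac{a \dot{a}^2}{N}+k N a\right) .
\end{equation*}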
In this work, we are mainly interested in the model of Starobinsky $L_S=(2\kappa^2)^{-1} Na^3 [R+(\alpha/6) R^2]$, with~$k=0$, that is, the following:
\begin{equation}\label{starobinsky}
L_S=\frac{3}{\kappa^2} N a^3 \left[-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2}+\alpha \left(-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2}\right)^2\right].
\end{equation}
This higher derivative action is our starting point for the supergravity version; hence, it is important to verify its consistency. In~order to do so, we derive the Ostrogradsky--Hamiltonian formulation, and~show that it is related through canonical transformations to two well-known scalar--tensor formulations: one of the BF type, with~the scalar playing the role of the scalar curvature, and~the other the standard Hamiltonian obtained by means of a Weyl~transformation.
\subsection{Ostrogradsky--Hamiltonian~Formulation}
The canonical formulation of the FLRW model of Starobinsky can be obtained directly from the higher derivative action (\ref{starobinsky}) following the method of Ostrogradsky (see, for example,~\cite{woodard} for a review). For~simplicity, we integrate by parts the second derivative on the linear curvature term in (\ref{starobinsky}). In~this formalism, there are eight canonical variables, namely the following:
\begin{subequations}
\begin{eqnarray}
&&a, \\
&&p_a \equiv \frac{\partial L_S}{\partial \dot{a}}-\frac{d}{dt} \frac{\partial L_S}{\partial \ddot{a}}=-\frac{6}{\kappa^2} \left[\frac{a \dot{a}}{N}+\frac{\alpha a^2}{N} \frac{d}{dt} \left(\frac{\ddot{a}}{N^2 a}-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\dot{a}^2}{N^2 a^2}\right)\right], \\
&&A\equiv \dot{a},\\
&&p_A\equiv \frac{\partial L_S}{\partial \ddot{a}}=\frac{6\alpha}{\kappa^2} \frac{a^2}{N} \left(\frac{\ddot{a}}{N^2a}-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\dot{a}^{2}}{N^2 a^2}\right),\\
&&N,\\
&&p_N\equiv \frac{\partial L_S}{\partial \dot{N}}-\frac{d}{dt} \frac{\partial L_S}{\partial \ddot{N}}=-\frac{6\alpha}{\kappa^2} \frac{a^2 \dot{a}}{N^2} \left(\frac{\ddot{a}}{N^2a}-\frac{\dot{N} \dot{a}}{N^3 a}+\frac{\dot{a}^{2}}{N^2 a^2}\right), \label{pNos} \\
&&n \equiv \dot{N}, \\
&&p_n \equiv \frac{\partial L_S}{\partial \ddot{N}}=0. \label{pn}
\end{eqnarray}
\end{subequations}
Since (\ref{starobinsky}) does not depend on $\ddot{N}$, we obtain a primary constraint (\ref{pn}). It can already be seen that $p_N$ can be written in terms of $A$ and $p_A$, which would yield another~constraint.
The Hamiltonian of Ostrogradsky is defined by the Legendre~transformation as follows:
\begin{subequations}
\begin{eqnarray}
&&H_{\text{Ost}} \equiv A p_a+\dot{A} p_A+n p_N+\dot{n} p_n-L_S \label{ostro1}\\
&&\ \ \ \ \ \ \ \ \ =A p_a+\frac{3}{\kappa^2} \frac{a A^2}{N}+\frac{\kappa^2}{12 \alpha} \frac{N^{3} p_A^2}{a}-\frac{A^{2} p_A}{a}+n\left(\frac{A p_A}{N}+p_N\right)+\dot{n} p_n. \label{ostro2}
\end{eqnarray}
\end{subequations}
At first sight, $H_{\text{Ost}}$ is not bounded due to terms being linear in the momenta $p_a$ and $p_N$ \mbox{(\ref{ostro1})}. However, the~model of Starobinsky is known to be stable. This is due to the system (\ref{starobinsky}) being degenerate in the sense that the matrix $(\partial^2 L_S/\partial \ddot{q}_i \partial \ddot{q}_j)$ is not invertible. Otherwise, it would have fourth-order equations of motion for both $a$ and $N$, but~only $a$ obtains a higher-derivative~equation.
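Explicitly, since $\ddot{N}$ does not appear in (\ref{starobinsky}) at all, the only non-vanishing entry of this matrix is
\begin{equation*}
\frac{\partial^2 L_S}{\partial \ddot{a}^2}=\frac{6\alpha}{\kappa^2} \frac{a}{N^3} ,
\end{equation*}
so its determinant vanishes identically.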
As we obtain a primary constraint, we proceed following Dirac's standard procedure (see~\cite{henneaux}). The~result is that there are three first-class~constraints as follows:
\begin{subequations}
\begin{eqnarray}
&& 0 \approx C \equiv p_n \label{c01}, \\
&& 0\approx D\equiv \frac{A p_A}{N}+p_N, \label{c1}\\
&& 0\approx H_0\equiv \frac{A p_a}{N}+\frac{3}{\kappa^2} \frac{a A^2}{N^2}+\frac{\kappa^2}{12 \alpha} \frac{N^{2} p_A^2}{a}-\frac{A^{2} p_A}{N a} \label{h0ostr}.
\end{eqnarray}
\end{subequations}
(\ref{c1}) follows from the conservation in time of (\ref{c01}), whereas (\ref{h0ostr}) follows from the conservation of (\ref{c1}). The~time derivative of (\ref{h0ostr}) vanishes identically and no more constraints arise. Therefore, the~total Hamiltonian vanishes as a constraint $0 \approx H=NH_0+n D+\nu C$, where $N$, $n$ and $\nu$ remain arbitrary. In~order to understand the Hamiltonian constraint $H_0$, taking into account that $p_A$ is proportional to $R$, we make a canonical transformation $p_\phi=-A$, and~$\phi=-p_A$. Thus, the following holds:
\begin{equation}
H_0=\frac{1}{N}\left(\frac{3a}{\kappa^2 N} +\frac{\phi}{a}\right) p_\phi^2-\frac{1}{N} p_\phi p_a+\frac{\kappa^2}{12 \alpha} \frac{N^2\phi^2}{a},
\end{equation}
which requires further analysis to see if it contains ghosts. As~we show in the following, there is a further canonical transformation that puts it in a more familiar form. The~other two constraints cannot affect~stability.
\subsection{Scalar-Tensor~Formulation}\label{bef}
There are other well-known ways to reduce the order of a Lagrangian by means of additional fields that keep track of the higher derivative terms, for~instance, by using Lagrange multipliers~\cite{vilenkin}, or~by actions of the BF type as follows:
\begin{equation}\label{firststar}
L_S^\phi =\frac{N {a}^{3}}{2\kappa^2} \left[(1+2 \alpha \phi) R-6\alpha {\phi}^{2} \right].
\end{equation}
Further, substituting $R$, and~integrating by parts yields the following:
\begin{eqnarray}\label{lagphi}
L_S^\phi=\frac{3}{\kappa^2} Na^3 \left[-\frac{\dot{a}^{2}}{a^2 N^2} \left(1+2\alpha \phi \right)-2\alpha \frac{\dot{a}\dot{\phi}}{aN^2}-\alpha {\phi}^{2}\right] .
\end{eqnarray}
Its Hamiltonian is $H^\phi=N H_0+\mu p_N$, where the Hamiltonian constraint is the following:
\begin{equation}\label{hamilphi}
H_0^\phi=-\frac{\kappa^2}{6\alpha} \frac{p_{a} p_\phi}{a^2}+\frac{\kappa^2}{12 \alpha^2} (1+2\alpha \phi) \frac{p_\phi^{2}}{a^3}+\frac{3\alpha}{\kappa^2} {a}^{3} {\phi}^{2}.
\end{equation}
As can be expected, this approach and that of Ostrogradsky give equivalent Hamiltonian constraints (\ref{h0ostr}) and (\ref{hamilphi}), as~they are related by the canonical~transformation as follows:
\begin{subequations}\label{canonical1}
\begin{align}
&a=a, && p_a^{\text{Ost}}=p_a-2\frac{\phi p_\phi}{a}, \\
&A=-\frac{\kappa^2}{6 \alpha} \frac{N p_\phi}{a^2}, && p_A=-\frac{6\alpha}{\kappa^2} \frac{a^2}{N} \phi, \\
&N=N, && p^{\text{Ost}}_N=p_N+\frac{\phi p_\phi}{N}.
\end{align}
\end{subequations}
In fact, the other constraints also transform into each other under this transformation, namely $p_N=N^{-1} A p_A+p_N^{\text{Ost}}=D$. Here, $p_a^{\text{Ost}}$ and $p_N^{\text{Ost}}$ denote the corresponding momenta in the Ostrogradsky~formalism.
Both versions of the Hamiltonian constraint, (\ref{hamilphi}) or (\ref{h0ostr}), yield the same nontrivial relation that, expressed in configuration space variables, can be recognized as the generalized Friedmann equation for the model of Starobinsky, which in the flat case $k=0$ can be written as a second-order differential equation for the Hubble factor. In~the gauge $N=1$, it~reads as follows:
\begin{equation}
\ddot{H}-\frac{\dot{H}^{2}}{2H}+3H \dot{H}+\frac{1}{2} M^2 H=0 \label{friedmannstarobinsky}.
\end{equation}
Stable inflationary dynamics, for~which the model of Starobinsky is greatly appreciated in cosmology, can be obtained from (\ref{friedmannstarobinsky}) \cite{defelice2010}. See also~\cite{ketovn} for a recent review of Starobinsky~inflation.
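For orientation, we note the standard slow-roll estimate that follows from (\ref{friedmannstarobinsky}): as long as $H\gg M$, the terms $\ddot{H}$ and $\dot{H}^2/2H$ are negligible, so that
\begin{equation*}
3H\dot{H}\simeq -\frac{1}{2} M^2 H \, , \qquad H(t)\simeq H_0-\frac{M^2}{6}\, t ,
\end{equation*}
which is the quasi-de Sitter inflationary stage of \cite{starobinsky1980}.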
\subsection{Standard~Formulation}
It is well known that by means of a Weyl rescaling of the gravitational field plus a redefinition of the scalaron field, say $\phi$, any $f(R)$ action can be written as standard Einstein gravity with a minimally coupled scalar (for the model of Starobinsky, with~a large-field inflationary potential); see, e.g., \cite{nojiri}. It is interesting to note that the corresponding canonical formulation in the FLRW case can be obtained from (\ref{hamilphi}) by the canonical~transformation as follows: \vspace{6pt}
\begin{subequations}\label{conformal}
\begin{align}
&\phi=(2 \alpha)^{-1} (e^{c \varphi}-1), && p_\phi=\alpha e^{-c \varphi} (\tilde{a} p_{\tilde{a}}+\tilde{N} p_{\tilde{N}})+\frac{2\alpha}{c} e^{- c \varphi} p_\varphi, \\
&a=e^{-\frac{1}{2} c\varphi} \tilde{a}, &&p_a=e^{\frac{c}{2} \varphi} p_{\tilde{a}}, \\
&N=e^{-\frac{1}{2} c\varphi} \tilde{N}, && p_N=e^{\frac{c}{2} \varphi} p_{\tilde{N}}
\end{align}
\end{subequations}
\noindent with $c^2=\frac{2}{3}\kappa^2$. Indeed, applying (\ref{conformal}) transforms (\ref{hamilphi}) into the following:
\begin{equation}\label{frwstar}
0\approx H_0^\varphi=-\frac{{\kappa}^{2}}{12} \frac{p_{\tilde{a}}^{2}}{\tilde{a}}+\frac{p_\varphi^{2}}{2 \tilde{a}^3}+\frac{3 M^2}{4 \kappa^2} \tilde{a}^3 (1-e^{-c \varphi})^2.
\end{equation}
In lagrangian terms, (\ref{frwstar}) corresponds to an ordinary Friedmann equation, which is complemented by the second-order equation of motion for $\varphi$.
Therefore, by~combining the transformations (\ref{canonical1}) and (\ref{conformal}), one can pass directly from the Ostrogradsky Hamiltonian (\ref{h0ostr}) to (\ref{frwstar}).
Now, the~Weyl rescaling on which (\ref{conformal}) is based takes us to the Einstein frame~\cite{nojiri}. We regard the frame of Starobinsky's modified gravity (generically called Jordan) as physical, such that inflation has a gravitational origin. Additionally, since $c \varphi=\ln (1+2\alpha \phi)$, the~transformation (\ref{conformal}) becomes singular at a sufficiently negative curvature (recalling $\phi=R/6$). Thus, for~example, it might be convenient to perform quantization of the supersymmetric models constructed in the following sections (see Section~\ref{cf}), while staying in the Jordan frame~\cite{hawkingluttrell}.
Finally, although~the Hamiltonian constraint (\ref{hamilphi}) is already suitable for quantization~\cite{vilenkin}, one might prefer to have it in canonical form. This can be done by the canonical~transformation as follows:
\begin{subequations}
\begin{align}
&a=b+\varphi, && p_a=\frac{b p_b+\varphi p_\varphi}{b+\varphi}, \\
&\phi=\frac{-1}{\alpha} \frac{\varphi}{b + \varphi}, && p_\phi=\alpha (b+\varphi) (p_b-p_\varphi),
\end{align}
\end{subequations}
\noindent which diagonalizes the kinetic part of (\ref{hamilphi}) such that it reads as follows:
\begin{equation}
H_0^\phi= \frac{\kappa^2}{12} (-p_b^2+p_\varphi^2)+\frac{3}{\kappa^2 \alpha} \varphi^2 (b+\varphi)^2.
\end{equation}
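This diagonal form can be cross-checked symbolically. The short SymPy sketch below (our own code and variable names, not part of the original derivation) verifies that the brackets of the transformation are canonical and that the substitution carries (\ref{hamilphi}) into the expression above, up to an overall factor of $(b+\varphi)$, which is immaterial for a constraint that vanishes on shell:
\begin{verbatim}
# SymPy sketch (ours): check the canonical transformation above and the
# resulting diagonal form of the Hamiltonian constraint (hamilphi).
import sympy as sp

kappa, alpha = sp.symbols('kappa alpha', positive=True)
b, vphi, pb, pvphi = sp.symbols('b varphi p_b p_varphi', real=True)

# old variables (a, p_a, phi, p_phi) expressed through the new chart
a    = b + vphi
pa   = (b*pb + vphi*pvphi)/(b + vphi)
phi  = -vphi/(alpha*(b + vphi))
pphi = alpha*(b + vphi)*(pb - pvphi)

def pbrack(f, g):  # Poisson bracket in the new chart
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in [(b, pb), (vphi, pvphi)])

assert sp.simplify(pbrack(a, pa) - 1) == 0
assert sp.simplify(pbrack(phi, pphi) - 1) == 0
assert sp.simplify(pbrack(a, pphi)) == 0
assert sp.simplify(pbrack(phi, pa)) == 0

H_old = (-kappa**2/(6*alpha)*pa*pphi/a**2
         + kappa**2/(12*alpha**2)*(1 + 2*alpha*phi)*pphi**2/a**3
         + 3*alpha/kappa**2*a**3*phi**2)
H_new = (kappa**2/12*(-pb**2 + pvphi**2)
         + 3/(kappa**2*alpha)*vphi**2*(b + vphi)**2)

# equality up to the factor (b + varphi), harmless for a vanishing constraint
assert sp.simplify((b + vphi)*H_old - H_new) == 0
\end{verbatim}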
\section{N=1 Locally Supersymmetric~Action}\label{n1}
In this section, we construct the first example of a supersymmetric extension of the FLRW model of Starobinsky (\ref{starobinsky}). For~this, we use the simplest possible superspace compatible with time-dependent supersymmetry transformations~\cite{ramirez2008}. Its basic features are summarized in Appendix A; we recall here that it has local coordinates ($t, \Theta$), where $\Theta$ is a real odd parity Grassmann number. Superfields have only two components (\ref{realsuper}): one real boson scalar and one real fermion~scalar.
We define the real scale factor superfield (complex conjugation reverses the order, therefore, the product of two real odd parity Grassmann numbers is imaginary) as follows:
\begin{equation}\label{A}
\mathcal{A}(t,\Theta)=a(t) [1+i\Theta \lambda(t)],
\end{equation}
where, for~convenience, the~component expansion differs from the standard one \mbox{(\ref{realsuper})}. However, the~superfield transformation of (\ref{A}) is the usual one (see~\citep{garcia}), and~the transformation of its components can be obtained from (\ref{realtrans}). We~obtain the following:
\begin{align}\label{realscalet}
\delta_\zeta a=-i \zeta a \lambda, && \delta_\zeta \lambda=\zeta\left(\frac{\dot{a}}{aN}-i \psi \lambda\right).
\end{align}
We define the $k=0$ curvature superfield (non-vanishing spatial curvature can be introduced via an interaction with a Goldstino field $\beta(t)$. Specifically, we add to the lagrangian density in (\ref{linear}) a term $-k\mathcal{EA} \mathcal{B}$, where $\mathcal{B}=\beta+\Theta \left(-1+i\beta \psi+i \beta N^{-1} \dot \beta\right)$ is a Goldstino superfield~\cite{ramirez2008}) as follows:
\begin{equation}\label{superreal}
\mathcal R=i\mathcal{A}^{-1}\nabla_\tau \nabla_\theta \mathcal{A}+i \mathcal{A}^{-2} \nabla_\tau \mathcal{A} \nabla_\theta \mathcal{A},
\end{equation}
where covariant derivatives are given in (\ref{realcov}). Thus, we have the following:
\begin{equation}\label{superreal2}
\mathcal{R}=-\frac{2\dot{a}}{Na} \lambda-\frac{\dot{\lambda}}{N}-\frac{\dot{a}}{Na} \psi+\Theta \left(-\frac{\dot{N}\dot{a}}{N^3a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2}+i \frac{\lambda \dot{\psi}}{N}-\frac{6i \dot{a}}{Na} \psi \lambda-2i \frac{\psi \dot{\lambda}}{N}+2i \frac{\lambda \dot{\lambda}}{N}\right)
\end{equation}
and for the ordinary (linear) action we have the following:
\begin{equation}\label{linear}
L_1=\frac{3}{\kappa^2} \int d\Theta \mathcal{E} \mathcal{A}^3 \mathcal{R} \doteq \frac{3{a}^{3}}{\kappa^2} \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}-i \lambda \dot{\lambda}\right),
\end{equation}
where $\mathcal{E}$ is the density superfield given in (\ref{realdensity}).
Since $\mathcal{R}$ has odd parity ($\mathcal{R}^2=0$), we cannot construct higher-order polynomials of it. However, it is possible to construct an exact supersymmetric model of Starobinsky by considering the product of $\mathcal{R}$ and its supersymmetric covariant derivative.
We write the supersymmetric Starobinsky Lagrangian in the following form:
$L_S=L_1+\alpha L_2$, where $L_1$ is given by (\ref{linear}), and~\begin{equation}\label{streal}
L_2=\frac{3}{\kappa^2} \int d\Theta \mathcal{E} \mathcal{A}^3 \mathcal{R} \nabla_\theta \mathcal{R}.
\end{equation}
Integrating over the $\Theta$ variable, we~obtain the following:
\begin{eqnarray}\label{realsta}
L_S=\frac{3}{\kappa^2} N a^3 \left[-\frac{\dot{N}\dot{a}}{N^3a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2}+\alpha \left(-\frac{\dot{N}\dot{a}}{N^3a}+\frac{\ddot{a}}{N^2 a}+\frac{\dot{a}^2}{N^2 a^2} \right)^2+i \frac{\lambda \dot{\psi}}{N}-\frac{i \dot{a}}{Na} \psi \lambda-i \frac{\psi \dot{\lambda}}{N}-i \frac{\lambda \dot{\lambda}}{N} \right. \nonumber \\
+\alpha \left(i \frac{\dot{\lambda} \ddot{\lambda}}{N^3}+\frac{9i \dot{N} {\dot{a}}^{2}}{N^4 a^2} \psi \lambda-\frac{8i \dot{a} \ddot{a}}{N^3 a^2} \psi \lambda-\frac{9i {\dot{a}}^{3}}{N^3 a^3} \psi \lambda+\frac{7i {\dot{a}}^{2}}{N^3 a^2} \lambda \dot{\lambda}+\frac{2i \dot{a}}{N^3 a} \lambda \ddot{\lambda}-\frac{i \dot{N} \dot{a}}{N^4a} \lambda \dot{\lambda}-\frac{i \ddot{a} \dot{a}}{N^3 a^2} \psi \lambda \right. \nonumber \\
+4i \frac{\dot{N} \dot{a}}{N^4 a} \psi \dot{\lambda}-\frac{4i \ddot{a}}{N^3 a} \psi \dot{\lambda}-\frac{i {\dot{a}}^{2}}{N^3 a^2} \psi \dot{\lambda}+4 \psi \lambda \frac{\dot{\psi} \dot{\lambda}}{N^2}- \frac{i \dot{a}}{N^3 a} \dot{\psi} \dot{\lambda}-\frac{i \ddot{a}}{N^3 a}\psi \dot{\lambda}+\frac{4i {\dot{a}}^{2}}{N^3 a^2} \lambda \dot{\psi}-\frac{i \ddot{a}}{N^3 a} \lambda \dot{\lambda} \nonumber \\
\left. \left. +\frac{i \dot{a}}{N^3 a} \psi \ddot{\lambda}+\frac{i {\dot{a}}^{2}}{N^3 a^2} \psi \dot{\psi}-2i \frac{\dot{N} \dot{a}}{N^4 a} \lambda \dot{\psi}+\frac{2i \ddot{a}}{N^3a} \lambda \dot{\psi} \right) \right].
\end{eqnarray}
The scale factor satisfies the expected fourth-order equation of motion, now with fermionic contributions. On~the other hand, the~term $\dot{\lambda} \ddot{\lambda}$ yields a third-order equation of motion. There is a tripling of fermionic degrees of freedom, compared to the ordinary case, as~we require not only the initial value of $\lambda$, but~also those of its first two time derivatives. (The classical equations of motion are merely formal. Although~Grassmann algebras can be represented ``classically'' by matrices, the~proper treatment of a theory with fermions requires the algebra of quantum operators.)
The purely bosonic fourth-order equation of motion can be written in terms of the Hubble parameter, and~the third-order equation as follows:
\begin{equation}
0=\dddot{H}+6H \ddot{H}+\frac{9}{2}{\dot{H}}^{2}+9{H}^{2} \dot{H}+M^2 \left( \frac{3}{2}{H}^{2}+\dot{H}\right). \label{third}\\
\end{equation}
Figure~\ref{fig:Mesh2} shows a numerical solution to (\ref{third}) displaying inflation during the large curvature regime. Initial values are chosen to satisfy the slow-roll conditions, and we set $\kappa=1$. The~scale factor increases from the order of $10^0$ up to $10^{32}$, which corresponds to, roughly, $73$ e-folds. From~that point, we obtain an oscillating amplitude superimposed on an overall expansion but~without inflation (this corresponds to the flattened part of the curve in Figure~\ref{fig:Mesh2}a). At~the final time displayed, we have in total 76 e-folds, approximately.
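For reference, this integration can be reproduced with a few lines of Python; the sketch below is our own (the final time and tolerances are our choices), using the initial data quoted in the caption of Figure~\ref{fig:Mesh2}:
\begin{verbatim}
# Sketch (ours): integrate the third-order equation (third) with the
# initial data a=1, H=5M, dH/dt=-M^2/6, d^2H/dt^2=0 and M=0.2.
from scipy.integrate import solve_ivp

M = 0.2

def rhs(t, y):
    lna, H, Hd, Hdd = y            # y = (ln a, H, dH/dt, d^2H/dt^2)
    Hddd = -(6*H*Hdd + 4.5*Hd**2 + 9*H**2*Hd + M**2*(1.5*H**2 + Hd))
    return [H, Hd, Hdd, Hddd]

y0  = [0.0, 5*M, -M**2/6, 0.0]
sol = solve_ivp(rhs, (0.0, 220.0), y0, rtol=1e-10, atol=1e-12)
print("e-folds:", sol.y[0, -1])    # roughly 76 by the end of the run
\end{verbatim}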
\begin{figure}
\includegraphics[width=0.7\textwidth]{Figure1.png}
\caption{Numerical solution to (\ref{third}) with initial values \protect $a=1$, \protect $H=5M$, \protect $\dot{H}=-\frac{1}{6} M^2$, \protect $\ddot{H}=0$ and \protect $M=0.2$. (\textbf{a}) Logarithm of \protect $a(t)$; (\textbf{b}) comoving Hubble length (scale horizon).}
\label{fig:Mesh2}
\end{figure}
\subsection{Scalar-Tensor~Formulation}\label{scalart}
As for the bosonic action (\ref{starobinsky}), a~Hamiltonian formulation can be obtained in different ways, and~the simplest is the one of type BF; Equation~(\ref{firststar}).
In our case, we can rewrite the Lagrangian density of (\ref{streal}) in~terms of $\mathcal{R}$ and another odd parity superfield $\Phi=\eta+\Theta \phi$ (with $\eta \eta=0$), as~\begin{equation}\label{last}
\mathcal{L}_2^\Phi=\mathcal E\mathcal A^3 (2\mathcal R-\Phi)\nabla_\theta \Phi+\mathcal E\Phi \mathcal R \nabla_\theta \mathcal A^3.
\end{equation}
We can check the equivalence at the superfield level. Using (\ref{realeq2}), we write the superfield equation of motion as $2 \mathcal{A}^3 \nabla_\theta (\mathcal{R}-\Phi)+(\nabla_\theta \mathcal{A}^3) (\mathcal{R}-\Phi)=0$. Therefore, $\Phi=\mathcal{R}$, and, substituting this back into (\ref{last}), we recover the integrand in the r.h.s. of (\ref{streal}). Note that the last term on the r.h.s. of (\ref{last}) does not contribute when we replace $\Phi$ by $\mathcal{R}$, as~$\mathcal{R}$ is~an odd parity superfield.
Therefore, the~total equivalent scalar--tensor action has a Lagrangian density as follows:
\begin{equation}
\mathcal{L}_S^\Phi=\frac{3}{\kappa^2} \mathcal{E} \mathcal{A}^3 \left[ \mathcal{R}+\alpha \left( (2\mathcal R-\Phi)\nabla_\theta \Phi+3 \mathcal{A}^{-1} \Phi \mathcal{R} \nabla_\theta \mathcal A\right) \right],
\end{equation}
\noindent which~yields, after integrating by parts, the following:
\begin{eqnarray}\label{lagrangianreal}
L^\phi=\frac{3Na^3}{\kappa^2} \left[-\frac{\dot{a}^{2}}{a^2 N^2} \left(1+2\alpha \phi \right)-2\alpha \frac{\dot{a}\dot{\phi}}{aN^2}-\alpha {\phi}^{2} +2i \frac{\dot{a}}{a N} \psi \lambda -i\frac{\lambda \dot{\lambda}}{N}+\alpha \left(-3i \frac{\dot{a}\phi}{aN} \psi \lambda+7i \frac{\dot{a}\dot{\eta} \lambda}{a N^2}-2i \frac{\phi \psi \dot{\lambda}}{N} \right. \right. \nonumber \\
\left. \left. +2i \frac{\dot{\eta} \dot{\lambda}}{N^2}-2i \frac{\dot{a} \psi \dot{\eta}}{aN^2}+2i \frac{\dot{\phi}\psi \lambda}{N}+i \frac{\phi \lambda \dot{\lambda}}{N}-i \frac{\eta \dot{\eta}}{N}+3i \phi\eta \lambda+9i \frac{\dot{a}^{2} \eta \lambda}{a^2 N^2}-6\frac{\psi \eta \lambda \dot{\lambda}}{N}+6i \frac{\dot{a} \eta \dot{\lambda}}{N^2}-3i \frac{\dot{a}^{2}\psi \eta}{a^2 N^2} \right) \right].
\end{eqnarray}
This Lagrangian contains two boson scalars, $a$ and $\phi$, and~two (real) fermion scalars, $\lambda$ and $\eta$.
One can verify that the equations of motion for $\phi$ and $\eta$ are solved by $\eta\doteq -2 \frac{\dot{a}}{a} \lambda-\dot{\lambda}$ and $\phi \doteq \left(\frac{1}{6}R+2i \lambda \dot{\lambda} \right)$, as~expected from the superfield~solution.
The absence of auxiliary fields greatly simplifies the expressions and allows us to construct a model whose bosonic part reproduces exactly the flat FLRW model of Starobinsky. Since the curvature superfield (\ref{superreal}) has odd parity, it probably does not correspond to the four-dimensional curvature superfield, and it could be related to its covariant derivatives. In~Section~\ref{cf}, we perform the Hamiltonian~formulation.
\section{N=2 Locally Supersymmetric~Action}\label{n2}
Now, we increase the dimension of the superspace by promoting $\Theta$ to a complex Grassmann variable ($\bar{\Theta}\equiv \Theta^*$, and~$\Theta \bar{\Theta}+\bar{\Theta} \Theta=0$, $\Theta \Theta=0=\bar{\Theta} \bar{\Theta}$). The basic results for this superspace are summarized in Appendix \ref{sec:A2}. Superfields have four components; one of them is usually an auxiliary field. For~convenience, we define the real scale factor superfield (the imaginary unit appearing in (\ref{a}) is a matter of convention, and we can dispense with it by redefining the $\lambda$s, e.g., $i(\Theta \bar{\lambda}+\bar{\Theta} \lambda) \to \Theta \lambda-\bar{\Theta} \bar{\lambda}$) as follows:
\begin{equation}\label{a}
\mathcal{A}(t,\Theta,\bar{\Theta})\equiv a(t) [1+i\Theta \bar{\lambda}(t)+i\bar{\Theta} \lambda(t)-\Theta \bar{\Theta} (s(t)-\lambda(t) \bar{\lambda}(t))],
\end{equation}
where $\bar{\lambda}=\lambda^*$. We write the scale factor superfield this way, which differs from the usual $\Theta$-expansion (\ref{auxsuper}) such that the lowest component of the curvature superfield $\mathcal{R}$ (defined below) is simply given by $s$. The~supersymmetry transformation of the supermultiplet $(a, \lambda, s)$ is non-linear and~can be obtained from (\ref{complextrans}).
The $k=0$ curvature superfield (one can include positive spatial curvature by adding $\sqrt{k} \mathcal{A}^{-1}$ to (\ref{supercurvature})) is defined as the following:
\begin{equation}\label{supercurvature}
\mathcal{R}=\frac{1}{2}\mathcal{A}^{-1} [\nabla_{\bar{\theta}},\nabla_\theta]\mathcal{A}+\mathcal{A}^{-2} \nabla_{\bar{\theta}} \mathcal{A} \nabla_\theta \mathcal{A}.
\end{equation}
Thus, we~have the following:
\begin{eqnarray}\label{supercurvature2}
\mathcal{R}=s+\Theta \left(\frac{2 \dot{a}}{Na}\bar{\lambda}+\frac{\dot{\bar{\lambda}}}{N}-\bar{\psi} \frac{\dot{a}}{Na}-i \psi \bar{\psi} \bar{\lambda}-i s \bar{\psi}-2 i s \bar{\lambda}\right)-\bar{\Theta} \left(\frac{2 \dot{a}}{Na}\lambda+\frac{\dot{\lambda}}{N}-\psi \frac{\dot{a}}{Na}+i \psi \bar{\psi} \lambda+i \psi s+2i s \lambda \right) \nonumber \\
+\Theta \bar{\Theta} \left(-\frac{\dot{N} \dot{a}}{N^3a}+\frac{\ddot{a}}{N^2a}+\frac{\dot{a}^2}{N^2a^2}+2{s}^{2}-i \frac{\dot{\psi} \bar{\lambda}+\dot{\bar{\psi}} \lambda}{N}-\frac{6i \dot{a}}{Na} (\psi \bar{\lambda}+\bar{\psi} \lambda)-2i \frac{\psi \dot{\bar{\lambda}}+\bar{\psi} \dot{\lambda}}{N}-2\psi \bar{\psi} s-4\psi \bar{\psi} \lambda \bar{\lambda} \right. \nonumber \\
\left. -2s (\psi \bar{\lambda}-\bar{\psi} \lambda)-2i \frac{\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda}}{N}-8\lambda \bar{\lambda} s\right).
\end{eqnarray}
To better appreciate the content of large supersymmetric expressions, for~simplicity, we use the flat superspace gauge
$N=1, \psi=0$ (this gauge corresponds to global supersymmetry; to check invariance under local supersymmetry (\ref{supergravity})--(\ref{complextrans}), actions must be written without gauge fixing, as~we do in Appendix \ref{app2}).
In~the following, we indicate this gauge by equality with a dot, $\doteq$. Thus, the~Lagrangian for pure FLRW-supersymmetric cosmology is the following:
\begin{equation}\label{L1}
L_1=\frac{3}{\kappa^2} \int d\Theta d\bar{\Theta} \mathcal{EA}^3 \mathcal{R} \doteq \frac{3a^3}{\kappa^2} \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}-s^{2}+i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+s \lambda \bar{\lambda}\right),
\end{equation}
with the scalar density $\mathcal{E}$ given in (\ref{density}). Note that the superfield form of this action does not coincide with that of previous works, e.g.,~\citep{garcia}.
In analogy with the $f(R)$ actions, we can write a superfield Lagrangian proportional to some function $F(\mathcal{R})$ (see~\cite{ketov2011} for the corresponding analysis in four-dimensional supergravity). From~(\ref{supercurvature2}) we have the following:
\begin{eqnarray}\label{fdr}
L_F=\frac{3}{\kappa^2} \int d\Theta d\bar{\Theta} \mathcal{EA}^3 F(\mathcal{R}) \doteq \frac{3 a^3}{\kappa^2} \left[ F'(s) \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+2{s}^{2}+i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda}) +4s\lambda \bar{\lambda}\right)-3 F(s) (s+\lambda \bar{\lambda}) \right. \nonumber \\
\left. -F''(s) \left(\dot{\lambda} \dot{\bar{\lambda}}+4 {s}^{2} \lambda \bar{\lambda}+2 \frac{\dot{a}}{a} (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})+4 \frac{\dot{a}^2}{a^2} \lambda \bar{\lambda}+2 i s (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})\right) \right].
\end{eqnarray}
The equation of motion of the auxiliary field $s$ is at least of second order in $s$, and~its solution leads to actions whose bosonic sectors are nonpolynomial in $R$. We have considered several examples of this type of action, and~it seems that they contain ghosts or~fail to produce large-curvature inflation. Hence, we will not discuss this sort of action~here.
The experience with the N=1 case of Section~\ref{n1} suggests defining an action depending on the covariant derivatives of the curvature superfield. Taking into account that $\nabla_\theta \mathcal{R}$ is an odd parity complex superfield, and~the Lagrangian density must be real and of even parity, the~natural candidate is $\nabla_\theta \mathcal{R} \nabla_{\bar{\theta}} \mathcal{R}$, which is the superfield kinetic term that is used to introduce scalar supersymmetric matter~\cite{tkach2}. As~in the case with real fermions, we write the following:
\begin{equation}\label{complexst}
L=L_1+\alpha L_2,
\end{equation}
with $L_1$ given in (\ref{L1}) and the following:
\begin{equation}\label{higherder}
L_2=\frac{3}{\kappa^2} \int d\Theta d\bar{\Theta}\ \mathcal{EA}^3 \nabla_{\bar \theta} \mathcal{R} \nabla_{\theta} \mathcal{R}.
\end{equation}
Performing the integration over $\Theta$-variables and summing, we~obtain the following:
\begin{eqnarray}\label{L2}
L \doteq \frac{3}{\kappa^2} a^3 \left[\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\alpha \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}\right)^2+\alpha \dot{s}^{2}-s^{2}+4\alpha \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}\right) s^{2} +4\alpha {s}^{4}+i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+s \lambda \bar{\lambda}+\alpha \left(-7 s \dot{\lambda} \dot{\bar{\lambda}} \right. \right. \nonumber \\
+2 s (\lambda \ddot{\bar{\lambda}}-\bar{\lambda} \ddot{\lambda})-i (\dot{\lambda} \ddot{\bar{\lambda}}+\dot{\bar{\lambda}} \ddot{\lambda})+i \frac{\ddot{a}}{a} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})-2 i \frac{\dot{a}}{a} (\lambda \ddot{\bar{\lambda}}+\bar{\lambda} \ddot{\lambda})-6 \frac{\dot{a}}{a} s (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})-7i \frac{\dot{a}^2}{a^2} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda}) \nonumber \\
\left. \left. +\lambda \bar{\lambda} \dot{\lambda} \dot{\bar{\lambda}}+\dot{s} (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})-24 \frac{\dot{a}^2}{a^2} s \lambda \bar{\lambda}-20 {s}^{3} \lambda \bar{\lambda}+4 \frac{\ddot{a}}{a} s \lambda \bar{\lambda}+4 \frac{\dot{a}}{a} \dot{s} \lambda \bar{\lambda}-12i {s}^{2} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})\right) \right].
\end{eqnarray}
Thus, with~the choice (\ref{higherder}), the total Lagrangian (\ref{complexst}) contains the FLRW model of Starobinsky as~well as the terms associated with the scalar field $s$. Further, the~kinetic term $\dot{s}^2$ in (\ref{L2}) tells us that the former auxiliary $s$ is promoted to a dynamical field. It is directly coupled to the curvature by the term $\frac{2}{3}\alpha Rs^2$ and has potential $V(s)=s^2-4 \alpha s^4$. The~equations of motion are of the fourth order for $a$ and second order for $s$. In~the fermionic sector, we find $\dot{\lambda} \ddot{\bar{\lambda}}+\dot{\bar{\lambda}} \ddot{\lambda}$ yielding third-order equations of~motion.
The scalar potential is unbounded for large $s$ but has a local minimum around $s=0$. However, since the effective quadratic mass is $M^2-\frac{2}{3}R$ (of the canonically normalized field $\tilde{s}=\frac{\sqrt{6}}{\kappa M} s$), we require not only $s$, but~also $R$ to be sufficiently small in order to obtain stable dynamics. On~the other hand, if~we set the initial conditions to $s=0=\dot{s}$, we can obtain inflation as in Section~\ref{n1}. However, this is not a stable solution in the sense that nonvanishing initial values of $s$ or $\dot{s}$, however small they are, eventually cause the field amplitude of $s$ to blow up while $a$ goes to~zero.
Up to now, we have partially succeeded in the construction of a supersymmetric model of Starobinsky: the bosonic sector of (\ref{L2}) already contains $R+\frac{\alpha}{6} R^2$, and we have no restriction on the value of $R$. However, we have a scalar field with a generally unstable potential. This is certainly not an unlikely situation; inflationary models derived from supergravity, and~more fundamental theories, e.g.,~string theory, contain several scalar fields. It is required that all of them but one sit in a stable vacuum state during inflation. This is, of~course, not the generic situation; it may happen that some of the additional fields develop instabilities, pushing the overall dynamics away from the inflationary solution. This is the case with theories of extra dimensions; after compactification, one needs a mechanism to stabilize the moduli fields~\cite{mcallister}.
To avoid fine tuning of the initial conditions, we add to the Lagrangian a term $\mathcal{R}^3$ to cancel the coupling $Rs^2$, which cancels also the negative sign fourth-power potential in \mbox{(\ref{L2})}. Thus, we propose the following superfield Lagrangian:
\begin{equation}\label{lagrangian}
\mathcal{L}_S=\frac{3}{\kappa^2} \mathcal{EA}^3 \left[\mathcal{R}+\alpha \left(\nabla_{\bar{\theta}} \mathcal{R} \nabla_{\theta} \mathcal{R}-\frac{4}{3} \mathcal{R}^3\right) \right],
\end{equation}
which yields the following:
\begin{eqnarray}\label{lagrangianboson}
L_S \doteq \frac{3a^3}{\kappa^2} \left[\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}+\alpha \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}\right)^2+\alpha \dot{s}^{2}-s^{2}+i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+s \lambda \bar{\lambda}+\alpha \left(\dot{s} (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda}) \right. \right. \nonumber \\
+s \dot{\lambda} \dot{\bar{\lambda}}+\lambda \bar{\lambda} \dot{\lambda} \dot{\bar{\lambda}}-i (\dot{\lambda} \ddot{\bar{\lambda}}+\dot{\bar{\lambda}} \ddot{\lambda})-2 i \frac{\dot{a}}{a} (\lambda \ddot{\bar{\lambda}}+\bar{\lambda} \ddot{\lambda})+i \frac{\ddot{a}}{a} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+2 s (\lambda \ddot{\bar{\lambda}}-\bar{\lambda} \ddot{\lambda}) \nonumber \\
\left. \left. -7i \frac{\dot{a}^2}{a^2} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+10 \frac{\dot{a}}{a} s (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})+4 \frac{\dot{a}}{a} \dot{s} \lambda \bar{\lambda}+4 \frac{\ddot{a}}{a} s \lambda \bar{\lambda}+8 \frac{\dot{a}^2}{a^2} s \lambda \bar{\lambda} \right) \right].
\end{eqnarray}
See Appendix \ref{app2}, Equation~(\ref{lagrangianbosoncom}), for~the complete~expression.
Thus, we obtain, in~the bosonic part, Starobinsky plus a massive scalar field. (With $M$ constrained to be about $10^{-6} M_P$, $s$ is a rather heavy field. Considered in a field-theory context, heavy fields are more benevolent with the picture provided by the standard model of cosmology than lighter fields~\cite{mcallister}.)
The equations of motion of the bosonic sector~read as follows:
\begin{subequations}\label{scalareq}
\begin{eqnarray}
&&0=\frac{3}{2}{H}^{2}M^2+9{H}^{2} \dot{H}+\dot{H}M^2+\frac{9}{2} \dot{H}^2+6H \ddot{H}+\dddot{H}+\frac{\kappa^2 M^2}{4} (\dot{\tilde{s}}^{2}-M^2 \tilde{s}^2) \label{eq6}, \\
&&0=\ddot{\tilde{s}}+3H\dot{\tilde{s}}+M^2 \tilde{s}, \label{esq}
\end{eqnarray}
\end{subequations}
where $\tilde{s}$ is the canonically normalized field, $\tilde{s}=\sqrt{6\alpha/\kappa^2}s$. Numerical solutions to Equations~(\ref{scalareq}) are shown in Figure~\ref{fig2} for different initial values of $s$ and $\dot{s}$. As~inflation takes place, the~field is driven to the minimum of the potential. Kinetic energy $M^2 \dot{\tilde{s}}^2$ is quickly dissipated by the friction term $H\dot{\tilde{s}}$ on the left-hand side of (\ref{esq}). In~contrast, this same friction term sustains a high value of the field for longer (acting as a cosmological constant), so we get more inflation (measured in e-folds) when most of the initial energy of $s$ is of the potential~type.
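The integration behind Figure~\ref{fig2} only requires enlarging the state of the previous sketch by $(\tilde{s},\dot{\tilde{s}})$; schematically (again our own code, with $\kappa=1$ and illustrative initial data for the scalar):
\begin{verbatim}
# Sketch (ours): the coupled system (scalareq), reusing M and solve_ivp
# from the previous snippet; st denotes the canonically normalized field.
def rhs2(t, y):
    lna, H, Hd, Hdd, st, std = y
    Hddd = -(6*H*Hdd + 4.5*Hd**2 + 9*H**2*Hd
             + M**2*(1.5*H**2 + Hd) + 0.25*M**2*(std**2 - M**2*st**2))
    return [H, Hd, Hdd, Hddd, std, -(3*H*std + M**2*st)]

y0  = [0.0, 5*M, -M**2/6, 0.0, 0.1, 0.0]   # illustrative st(0), dst(0)
sol = solve_ivp(rhs2, (0.0, 220.0), y0, rtol=1e-10, atol=1e-12)
\end{verbatim}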
\begin{figure}
\includegraphics[width=0.7\textwidth]{Figure2.png}
\caption{Numerical solutions to Equations~(\ref{scalareq}). The~initial conditions for the scale factor are the same as in Figure~\ref{fig:Mesh2}. Here, we have an additional scalar field. (\textbf{a}) Scale factor. (\textbf{b}) Comoving Hubble length for pure kinetic initial energy (blue color) and pure potential initial energy (red color) of the field \protect $s$. The~dotted line is the same as in the pure Starobinky dynamics of Figure~\ref{fig:Mesh2}.}\label{fig2}
\end{figure}
\subsection*{Scalar--Tensor~Formulation}
Here, we write an equivalent Lagrangian following Sections~\ref{bef} and~\ref{scalart} such that it contains at most first-order time derivatives. We write in terms of $\mathcal{R}$ and a real scalar superfield $\Phi=\phi+i\Theta \bar{\eta}+i \bar{\Theta} \eta+\Theta \bar{\Theta} G$ the following:
\begin{equation}\label{firstorderreal}
\mathcal{L}_S^\Phi=\frac{3}{\kappa^2} \mathcal{EA}^3 \left[\mathcal{R}+\alpha \left(\nabla_{\bar{\theta}} \Phi \nabla_{\theta} \mathcal{R}-\nabla_{\theta} \Phi \nabla_{\bar{\theta}} \mathcal{R}-\nabla_{\bar{\theta}} \Phi \nabla_{\theta} \Phi-\frac{4}{3} \mathcal{R}^3\right) \right].
\end{equation}
The equivalence can be verified using the superfield equation of motion for $\Phi$. From~\mbox{(\ref{realeq})}, we obtain $\nabla_{\bar{\theta}} \mathcal{A}^3 \nabla_\theta (\mathcal{R}-\Phi)-\nabla_\theta \mathcal{A}^3 \nabla_{\bar{\theta}} (\mathcal{R}-\Phi)-\mathcal{A}^3 [\nabla_{\theta},\nabla_{\bar{\theta}}] (\mathcal{R}-\Phi)=0$. Therefore, the~solution is given by $\Phi=\mathcal{R}+c$, where $c$ is a constant. Replacing $\Phi$ by $\mathcal{R}+c$ into (\ref{firstorderreal}) returns (\ref{higherder}). Performing the fermionic integration in the action yields, for the $\Phi$-dependent part, the following:
\begin{eqnarray}\label{kinetic}
L^{\Phi} \doteq \frac{3a^3}{\kappa^2} \left[2 \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^2}{a^2}\right) G-{G}^{2}+4G {s}^{2}+2 \dot{\phi} \dot{s}- {\dot{\phi}}^{2}+i (\eta \dot{\bar{\eta}}+\bar{\eta} \dot{\eta})+2 (\dot{\lambda} \dot{\bar{\eta}}-\dot{\bar{\lambda}} \dot{\eta})+3 s \eta \bar{\eta} \right. \nonumber \\
+3i \lambda \bar{\lambda} (\dot{\lambda} \bar{\eta}+\dot{\bar{\lambda}} \eta)-G i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})-3 i s (\dot{\lambda} \bar{\eta}+\dot{\bar{\lambda}} \eta)+4 \frac{\dot{a}}{a} (\lambda \dot{\bar{\eta}}-\bar{\lambda} \dot{\eta})+3 \frac{\dot{a}}{a} (\dot{\lambda} \bar{\eta}-\dot{\bar{\lambda}} \eta) \nonumber \\
+3 \dot{\phi} (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})+4i s (\lambda \dot{\bar{\eta}}+\bar{\lambda} \dot{\eta})+3G (\lambda \bar{\eta}-\bar{\lambda} \eta)+3 \frac{\dot{a}^2}{a^2} (\lambda \bar{\eta}-\bar{\lambda} \eta)+3i \dot{s} (\lambda \bar{\eta}+\bar{\lambda} \eta) \nonumber \\
\left. +12 \frac{\dot{a}}{a} \dot{\phi} \lambda \bar{\lambda}-3 \frac{\ddot{a}}{a} (\lambda \bar{\eta}-\bar{\lambda} \eta)-3i \dot{\phi} (\lambda \bar{\eta}+\bar{\lambda} \eta)+3\lambda \bar{\lambda} \eta \bar{\eta}-4Gs \lambda \bar{\lambda}\right].
\end{eqnarray}
As expected from the superfield expression (\ref{firstorderreal}) and (\ref{complextrans}), this Lagrangian depends on $\phi$ only through $\dot\phi$; hence $\phi$ can be eliminated. On~the other hand, it can be seen that $G$ plays the role of the scalaron, and~we eliminate the coupling $Gs^2$ by the shift $G \to G'=G-2s^2$.
Thus, renaming $G'=G$, the~Lagrangian reads as follows:
\begin{eqnarray}\label{lagrangiancomplex}
L_S^G\doteq \frac{3a^3}{\kappa^2} \left[ \left(\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^2}\right) (1+2\alpha G)-\alpha G^{2} +\alpha {\dot{s}}^{2}-{s}^{2}+i (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+s \lambda \bar{\lambda}+\alpha \left(8 s \dot{\lambda} \dot{\bar{\lambda}} \right. \right. \nonumber \\
+ i (\eta \dot{\bar{\eta}}+\bar{\eta} \dot{\eta})+2 (\dot{\lambda} \dot{\bar{\eta}}-\dot{\bar{\lambda}} \dot{\eta})-3i s (\dot{\lambda} \bar{\eta}+\dot{\bar{\lambda}} \eta)+4 i s (\lambda \dot{\bar{\eta}}+\bar{\lambda} \dot{\eta})+16 \frac{\dot{a}}{a} s (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda}) \nonumber \\
+10 i {s}^{2} (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+3 \dot{s} (\lambda \dot{\bar{\lambda}}-\bar{\lambda} \dot{\lambda})+7 \frac{\dot{a}}{a} (\lambda \dot{\bar{\eta}}-\bar{\lambda} \dot{\eta})+12 \frac{\dot{a}}{a} \dot{s} \lambda \bar{\lambda}+6 \frac{\dot{a}}{a} (\dot{\lambda} \bar{\eta}-\dot{\bar{\lambda}} \eta) \nonumber \\
-\frac{9}{2} \lambda \bar{\lambda} \dot{\lambda} \dot{\bar{\lambda}}+9 \frac{\dot{a}^{2}}{a^2} (\lambda \bar{\eta}-\bar{\lambda} \eta)+\frac{15}{2} i \lambda \bar{\lambda} (\dot{\lambda} \bar{\eta}+\dot{\bar{\lambda}} \eta)-\frac{3}{2}\lambda \bar{\lambda} \eta \bar{\eta}+6 {s}^{2} (\lambda \bar{\eta}-\bar{\lambda} \eta) \nonumber \\
\left. \left. -Gi (\lambda \dot{\bar{\lambda}}+\bar{\lambda} \dot{\lambda})+3s \eta \bar{\eta}+3G (\lambda \bar{\eta}-\bar{\lambda} \eta)-4Gs \lambda \bar{\lambda}+12{s}^{3} \lambda \bar{\lambda} +32 \frac{\dot{a}^{2}}{a^2} s \lambda \bar{\lambda}\right) \right].
\end{eqnarray}
In Appendix \ref{app2}, Equation~(\ref{finalcomplex}), we show the full~expression.
\section{Canonical~Formulation}\label{cf}
For the main actions worked out in this article, namely, (\ref{realsta}) and (\ref{lagrangianboson}), we have classically equivalent actions, (\ref{lagrangianreal}) and (\ref{lagrangiancomplex}), which still contain terms that are quadratic in the fermionic velocities: $\dot{\eta} \dot{\lambda}$ in (\ref{lagrangianreal}) and $\dot{\lambda} \dot{\bar{\eta}}-\dot{\bar{\lambda}} \dot{\eta}$ in (\ref{lagrangiancomplex}). They are, however, well suited to the usual Hamiltonian formulation. Typical fermionic Lagrangians contain, at most, linear terms in the velocities, which ultimately lead to second-class constraints of the form $\pi_\lambda=\partial L/\partial \dot{\lambda}=\bar{\lambda}$ \cite{ramirez2016}. For~the present case, quadratic velocity terms allow us to solve for all the (physical) velocities, bosonic and fermionic, in~terms of coordinates and momenta. Some constraints still arise, but~all of them are first class and~form a closed algebra under the Poisson bracket extended to fermionic variables (as defined in~\cite{henneaux}).
For the case with real fermions, with~Lagrangian (\ref{realsta}), there are two primary (first-class) constraints $p_N=0$, $\pi_\psi=0$. The~rest of the conjugate momenta can be solved for the velocities. Thus, for~the Hamiltonian $H_S^\phi=\dot{N}p_N+\dot{\psi} \pi_\psi+\dot{a} p_a+\dot{\phi} p_\phi+\dot{\lambda} \pi_\lambda+\dot{\eta} \pi_\eta-L$, we obtain the following:
\begin{equation}\label{totalr}
H_S^\phi=N H_0+\frac{1}{2} \Psi S,
\end{equation}
where $\Psi=2N \psi$ (see Appendix A), and the Hamiltonian and supersymmetric constraints~are the following:
\begin{subequations}
\fontsize{9}{9}\selectfont
\begin{eqnarray}
&&H_0=-\frac{\kappa^2}{6\alpha} \frac{p_{a} p_\phi}{a^2}+\frac{\kappa^2}{12 \alpha^2} (1+2\alpha \phi) \frac{p_\phi^{2}}{a^3}+\frac{3\alpha}{\kappa^2} {a}^{3} {\phi}^{2}+i \left( \frac{\kappa^2}{\alpha} \frac{p_\phi^{2}}{a^3}+\frac{3 {a}^{3}}{2\kappa^2} (1-7\alpha \phi)\right) \eta \lambda \nonumber \\
&&\ \ \ \ \ \ \ \ \ \ \ +\frac{7 \kappa^2}{12 \alpha} \frac{p_\phi}{a^3} \lambda \pi_{\lambda}+\frac{1-\alpha \phi}{2\alpha} \lambda \pi_{\eta}+\frac{\kappa^2}{2\alpha} \frac{p_\phi}{a^3} \eta \pi_{\eta}+\frac{i \kappa^2}{6\alpha} \frac{\pi_{\lambda} \pi_{\eta}}{a^3}-\frac{\eta \pi_{\lambda}}{2},\label{hreal}\\
&&S=i \left(a p_{a}-\frac{1-\alpha \phi}{2\alpha } p_\phi \right) \lambda-i \left(\frac{\kappa^2}{4\alpha} \frac{p_\phi^{2}}{a^3}+\frac{3\alpha}{\kappa^2} {a}^{3} \phi\right) \eta+\frac{\kappa^2}{6\alpha} \frac{p_\phi}{a^3} \pi_{\lambda}+\phi \pi_{\eta}. \label{sreal}
\end{eqnarray}
\end{subequations}
Note that $S$ is imaginary and $H$ is real. The~algebra of constraints, all of which are first class, closes under Poisson brackets. In~particular, the~usual relation $\{S,S\}=\frac{i}{2} H_0$~holds.
For the case of complex fermions, the~full Lagrangian corresponding to (\ref{lagrangiancomplex}) is given in the Appendix \ref{app2}. With~this Lagrangian, the bosonic momenta are defined in the usual way, although~we use a slightly different definition for fermions (mostly to keep a consistent notation, and the over-bar is still equivalent to complex conjugation): $\pi_\lambda=\partial{L}/\partial \dot{\bar{\lambda}}$, $\pi_{\bar{\lambda}}=-\partial L/\partial \dot{\lambda}$, and~so on. The~only primary constraints come from the momenta associated to the gauge fields: $p_N=0$, $\pi_\psi=0$, $\pi_{\bar{\psi}}=0$. Solving for the rest of velocities and computing $H=\dot{N}p_N-\dot{\psi} p_{\bar{\psi}}+\dot{\bar{\psi}} \pi_\psi+\dot{a} p_a+\dot{G} p_G+\dot{s} p_s-\dot{\lambda} \pi_{\bar{\lambda}}+\dot{\bar{\lambda}} \pi_\lambda-\dot{\eta} \pi_{\bar{\eta}}+\dot{\bar{\eta}} \pi_\eta-L$, we obtain the following:
\begin{equation}
H=N H_0+\frac{1}{2} (\Psi \bar{S}-\bar{\Psi} S),
\end{equation}
where
\begin{subequations}
\begin{eqnarray}
H_0=\frac{\kappa^2}{12 \alpha^2} (1+2\alpha G) \frac{p_G^{2}}{a^3}-\frac{\kappa^2}{6\alpha} \frac{p_a p_G}{a^2}+\frac{\kappa^2}{12\alpha} \frac{p_s^{2}}{a^3}+\frac{3{a}^{3}}{\kappa^2} \left( \alpha{G}^{2}+{s}^{2}\right)-\frac{18\alpha}{\kappa^2} {a}^{3} \lambda \bar{\lambda} \eta \bar{\eta}-\frac{3 \kappa^2}{4\alpha} \frac{p_G}{a^3} \left(p_s+2 p_G s\right) \lambda \bar{\lambda} \nonumber \\
+\frac{\kappa^2}{4\alpha a^3} \left(p_s+4s p_G \right) (\lambda \pi_{\bar{\eta}}-\bar{\lambda} \pi_\eta)-\frac{is}{2} (\eta \pi_{\bar{\eta}}+\bar{\eta} \pi_\eta)-\frac{7\kappa^2}{12}\frac{p_G}{a^3} (\lambda \pi_{\bar{\lambda}}-\bar{\lambda} \pi_\lambda)-\frac{3i}{4} (p_s+5p_G s) (\lambda \bar{\eta}+\bar{\lambda} \eta) \nonumber\\
+\frac{3 a^3}{2 \kappa^2} \left(1-7\alpha G-6 \alpha{s}^{2} \right) (\lambda \bar{\eta}-\bar{\lambda} \eta)-\frac{\kappa^2}{2\alpha} \frac{p_G}{a^3} (\eta \pi_{\bar{\eta}}-\bar{\eta} \pi_\eta)+2is (\lambda \pi_{\bar{\lambda}}+\bar{\lambda} \pi_\lambda)+\frac{3}{\kappa^2} {a}^{3}s \left(3-4\alpha {s}^{2}\right) \lambda \bar{\lambda} \nonumber \\
+\frac{\kappa^2}{\alpha} \frac{p_G^{2}}{a^3} (\lambda \bar{\eta}-\bar{\lambda} \eta)+\frac{\kappa^2}{6\alpha a^3} (\pi_\lambda \pi_{\bar{\eta}}-\pi_{\bar{\lambda}} \pi_\eta)-\frac{i}{2} \left(G+6 {s}^{2}\right) (\lambda \pi_{\bar{\eta}}+\bar{\lambda} \pi_\eta)-\frac{2\kappa^2}{3\alpha} \frac{s}{a^3}\pi_\eta \pi_{\bar{\eta}}-\frac{6\alpha}{\kappa^2} {a}^{3} s \eta \bar{\eta} \nonumber \\
+\frac{i}{2} (\eta \pi_{\bar{\lambda}}+\bar{\eta} \pi_\lambda)-\frac{15i}{4} \lambda \bar{\lambda} (\eta \pi_{\bar{\eta}}+\bar{\eta} \pi_\eta)+\frac{i}{2\alpha} (\lambda \pi_{\bar{\eta}}+\bar{\lambda} \pi_\eta),\label{hcomplex}\\
S=\left(i a p_a- \frac{3}{\kappa^2} {a}^{3} s- \frac{\kappa^2}{4\alpha} \frac{p_G p_s}{a^3}+\frac{i}{2}G p_G-\frac{ip_G}{2\alpha}(1-6\alpha s^2)+\frac{12\alpha}{\kappa^2} {a}^{3} s (G+{s}^{2}) \right) \lambda+3\left(\frac{5}{4}i p_G +\frac{6\alpha}{\kappa^2} {a}^{3} s\right) \lambda \bar{\lambda} \eta \nonumber\\
+\frac{\kappa^2}{6\alpha} \frac{p_s}{a^3} \pi_\eta-2i \lambda \bar{\lambda} \pi_\lambda+\left(i s-\frac{\kappa^2}{6\alpha} \frac{p_G}{a^3}\right) \pi_\lambda+\left(\frac{i}{2} \left(p_s-3 p_G s\right)+\frac{\kappa^2}{4\alpha} \frac{p_G^{2}}{a^3}+\frac{3\alpha}{\kappa^2} {a}^{3} (G+2{s}^{2})\right) \eta+\frac{9\alpha}{2\kappa^2} {a}^{3} \lambda \eta \bar{\eta} \nonumber \\
-\frac{3i}{2} (\lambda \bar{\eta}+\bar{\lambda} \eta) \pi_\eta -i (G+2 {s}^{2}) \pi_\eta. \label{scomplex}
\end{eqnarray}
\end{subequations}
$\bar{S}$ is the complex conjugate of (\ref{scomplex}).
As in the previous case, we have the Poisson bracket relation $\{S,\bar{S}\}=\frac{i}{2} H_0$.
Both Hamiltonians, (\ref{hreal}) and (\ref{hcomplex}), have, after~diagonalization, a~negative kinetic term that corresponds to the usual gravitational instability. There are also terms linear in the fermionic momenta. However, fermions do not have classical counterparts and~require a quantum analysis, which will be detailed in future work.
Quantization is accomplished by promoting classical variables to operators satisfying (anti-)commutation relations. The~wave function must satisfy the Wheeler--DeWitt equation $H_0\Psi=0$, as~well as the supersymmetric constraint equations $S\Psi=0=\bar{S}\Psi$ \cite{ramirez2016}. By~virtue of the anticommutation relation $\{S,\bar{S}\}=\frac{i}{2}H_0$, a~solution to $S\Psi=0=\bar{S}\Psi$ automatically satisfies $H_0\Psi=0$. Usually, the~supersymmetric constraint amounts to a partial differential equation of a lower order than the Wheeler--DeWitt equation, which represents an enormous simplification when looking for analytic solutions~\cite{moniz}. That is not quite the case here since there are terms that are quadratic in the momenta of the scalars in (\ref{sreal}) and (\ref{scomplex}), although~the equations are significantly~simpler.
\section{Conclusions}\label{concl}
We presented two supersymmetric extensions of the FLRW model of Starobinsky, with~real and complex fermions, using a superfield formalism for 1D supergravity. It was shown that the bosonic sectors of these actions allow for the large-curvature inflationary~solution.
In the case of N=1 supersymmetry, Section~\ref{n1}, computations are very simple because the supermultiplets only contain two components and no auxiliary fields. Despite the small number of degrees of freedom, it is possible to construct a Lagrangian whose bosonic sector contains exactly $R+\frac{\alpha}{6}R^2$. It is a property of 1D supermultiplets that all the components can be physical; in more dimensions, auxiliary fields are, in~general, inevitable. Then, we considered an N=2 complex supermultiplet containing two scalar bosons and one complex scalar fermion. In~theories without higher derivatives, one of the bosons is an auxiliary field, but~in our case with higher-derivative terms, it becomes~dynamical.
In comparison to N=1, the~N=2 model required more maneuvering; the superfield kinetic term generates $R^2$, and~promotes the lowest component of the curvature superfield to a dynamical field. However, this new field comes equipped with a negative quartic potential, preventing the inflationary solution. We fixed this by adding a superpotential term of the form $F(\mathcal{R})$. With~the choice $F(\mathcal{R})=-8\mathcal{R}^3$, the~scalar potential of this extra field was significantly improved. The~final Lagrangian contains, besides~FLRW Starobinsky, a~minimally coupled massive scalar field. The~classical dynamics of the bosonic part of the action, obtained numerically, shows $R^2$-driven inflation, while the scalar field $s$ remains in a low-energy~state.
We wrote equivalent actions for the two models, by~including additional bosonic and fermionic fields. First, we obtained formal superfield expressions, which were verified and further developed at the component level. Let us recall here that these alternative formulations are at an intermediate level, in~the sense that they are not yet in the form of standard supersymmetric theories. Nonetheless, they are already suitable for the Hamiltonian formulation. The~Hamiltonian expressions contain the purely bosonic model of Starobinsky and, in~the N=2 case, contain an additional massive scalar~field.
From our discussion above, the~full tensor--scalar duals of our models would require two additional superfields to accommodate the three fermionic degrees of freedom. The~manifestly supersymmetric way to arrive at that formulation should involve the superfield generalization of the Weyl rescaling, and~redefinition of the scalaron field. However, as~we pointed out, in~our models, it is the highest component of the extra superfield that plays the role of the scalaron. Hence, the~proper superfield transformation should involve not $\Phi$, but~its covariant derivatives. An~adequate derivation of the transformation rules, including fermions, will be addressed in subsequent work. Nonetheless, we can anticipate that their bosonic sectors will be of the form (\ref{frwstar}). For~the N=2 model (\ref{hcomplex}), since the field $s$ is canonically normalized in the frame of Starobinsky, in~the Einstein frame, its contribution would appear in the form of a non-linear sigma model (cf. the two-field model in \citep{ketovn}).
Finally, the~actions proposed in this work belong to supersymmetric, also called ``pseudo'', classical mechanics because the dynamical variables are elements of Grassmann algebras. These actions find application as quantum theories, that is, quantum supersymmetric cosmology \citep{moniz}. The~quantization of these models, along the lines of Refs.~\citep{ramirez2016,garcia}, and~its comparison with the pure bosonic case~\cite{hawkingluttrell,vazquez,ramirez2018}, will be the topic of upcoming work. Among~other interesting aspects to be investigated is the derivation of the modified Friedmann equation, reflecting the effect of fermions, by~means of a semi-classical approach \citep{escamilla}.
\section{Introduction}
There is increasing interest in replacing the fundamental Gaussian laser beam used in all current gravitational-wave detectors~\cite{aLIGO,AdVirgo} with beams of more uniform intensity distribution, such as higher-order Hermite-Gauss (HG) modes. This would reduce the thermal noise of the test-mass optics~\cite{ Mours_2006, Vinet_2007}, which limits detector sensitivity at signal frequencies around 100 Hz. It has been shown that higher-order HG modes such as the $\mathrm{HG_{3,3}}$ mode are nearly as robust against mirror surface deformations as the fundamental $\mathrm{HG_{0,0}}$ mode when vertical astigmatism is deliberately added to the test-mass optics~\cite{PhysRevD.102.122002, PhysRevD.103.042008}.
However, A. Jones \emph{et~al.}~\cite{Jones_2020} have shown, using the computational algebra system SymPy~\cite{sympy}, that the mode-mismatch-induced power losses increase monotonically with mode index when, for example,
coupling into optical cavities. This paper takes an analytical approach, and extends their work to include the case of misalignment.
Sec.~\ref{sec:2} derives the mode content and resulting misalignment and mode-mismatch induced power coupling coefficients and losses for arbitrary higher-order HG modes by Taylor expanding the beam spatial profile functions up to second order in the perturbation under consideration. Sec.~\ref{sec:3} then uses a numerical approach, representing the original and perturbed beams as discrete matrices. This shows good agreement with the
analytical results. We report our conclusions and discussions in Sec.~\ref{sec:4}.
\section{Analytical calculations}
\label{sec:2}
A beam perturbed from a state considered to be an eigenmode of a basis, such as the HG mode basis, can be described as a mixture of the original mode, $\Psi_{0}$, and other eigenmodes into which power is `scattered'.
The lowest order perturbations of importance are misalignment and mode mismatch.
This scattering effect is characterized by
the overlap between the perturbed beam, $\Psi^{\prime}$, and the original mode, known as the \textit{mode coupling coefficient}~\cite{Bayer-Helms:84}:
\begin{equation}
\rho \equiv \iint_{-\infty}^{\infty} \Psi^{\prime} \Psi_{0}^{\ast}\,dxdy \,.
\end{equation}
In general $\rho$ is complex; in our case it is more useful to consider
the scattering effect in terms of the real-valued \textit{power coupling coefficient}:
\begin{equation}
\eta \equiv \rho \cdot \rho^{\ast} \,,
\label{equ:genpowcoeff}
\end{equation}
which we will calculate up to the second order in its Taylor series expansion in this manuscript.
We also define the induced \textit{relative power coupling loss},
which for convenient comparison is
normalized by the result for the fundamental Gaussian mode:
\begin{equation}
\Gamma \equiv \frac{1 - \eta}{1 - \eta_{0}} \,.
\label{equ:powerloss}
\end{equation}
A larger value of $\Gamma$ indicates a system with a lower tolerance for a given beam perturbation, such as misalignment or mode mismatch, normalized by the tolerance for the fundamental mode.
In this section we analytically derive
the misalignment and mode-mismatch induced power coupling coefficients and losses for arbitrary higher-order $\mathrm{HG}_{\mathrm{n,m}}$ modes propagating along the $z$ axis. The general expression for a Hermite-Gauss mode is
\cite{Bond2017}
\begin{equation}
\mathcal{U}_{\mathrm{nm}}(x, y, z)=\mathcal{U}_{\mathrm{n}}(x, z) \mathcal{U}_{\mathrm{m}}(y, z)
\label{equ:HGnm}
\end{equation}
with
\begin{equation}
\begin{aligned}
\mathcal{U}_{\mathrm{n}}(x, z)&=\left(\frac{2}{\pi}\right)^{1 / 4}\left(\frac{\exp (\mathrm{i}(2 n+1) \Psi(z))}{2^{n} n ! w(z)}\right)^{1 / 2} \\
& \times H_{n}\left(\frac{\sqrt{2} x}{w(z)}\right) \exp \left(-\mathrm{i} \frac{k x^{2}}{2 R_{c}(z)}-\frac{x^{2}}{w^{2}(z)}\right) \,,
\end{aligned}
\label{equ:HGn0}
\end{equation}
where $\Psi(z) = \arctan \left(\frac{z-z_{0}}{z_{R}}\right)$ is the Gouy phase with $z_{R} = \frac{\pi w_{0}^2}{\lambda}$ being the Rayleigh range. $k$ is the wavenumber, $\lambda$ is the wavelength, $w(z)$ is the beam radius and $R_{c}(z)$ is the wavefront radius of curvature.
In the following we use two properties of the Hermite polynomials $H_{n}(\frac{\sqrt{2} x}{w_{0}})$:
\begin{alignat}{2}
2 \frac{\sqrt{2} x}{w_{0}} H_{n} &= H_{n+1} + 2n H_{n-1} \label{equ:hermite11}\\
H^{\prime}_{n} &= 2n H_{n-1} \,.
\label{equ:hermite12}
\end{alignat}
The function argument $\frac{\sqrt{2}x}{w_{0}}$ is implied throughout this manuscript and the derivative is applied with respect to this argument. Applying Eq.~\ref{equ:hermite11} twice, we can write:
\begin{equation}
\begin{aligned}
\frac{x^{2}}{w_{0}^{2}}H_{n} = \frac{1}{8} \Big( H_{n+2} + 2(2n+1)H_{n} + 4n(n-1)H_{n-2}\Big)\,,
\end{aligned}
\label{equ:hermite2}
\end{equation}
and applying four times:
\begin{equation}
\begin{aligned}
&\frac{x^{4}}{w_{0}^{4}}H_{n} = \frac{1}{64}\Big(H_{n+4} + 4(2n+3)H_{n+2} + 12(2n^2+2n+1)H_{n} \\
&+ 16n(n-1)(2n-1)H_{n-2} + 16n(n-1)(n-2)(n-3) H_{n-4}\Big)\,.
\end{aligned}
\label{equ:hermite3}
\end{equation}
The relation shown in Eq.~\ref{equ:hermite12} can also be used twice to write
\begin{equation}
H^{\prime \prime}_{n} = 4n(n-1) H_{n-2}\,.
\end{equation}
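These identities are standard; for completeness, they can be checked mechanically, e.g., with SymPy (a sketch of ours):
\begin{verbatim}
# Quick symbolic sanity check of the Hermite identities above.
import sympy as sp
u = sp.symbols('u')   # stands for sqrt(2) x / w0
for n in range(2, 8):
    Hn = lambda k: sp.hermite(k, u)
    assert sp.expand(2*u*Hn(n) - (Hn(n+1) + 2*n*Hn(n-1))) == 0
    assert sp.expand(sp.diff(Hn(n), u) - 2*n*Hn(n-1)) == 0
    assert sp.expand(sp.diff(Hn(n), u, 2) - 4*n*(n-1)*Hn(n-2)) == 0
\end{verbatim}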
\subsection{Misalignment}
\label{sec:misalign}
HG modes are separable in $x$ and $y$, so for misalignment we can consider the single-axis behaviour without loss of generality. We therefore consider a $\mathrm{HG}_{\mathrm{n},0}$ mode propagating along the $z$ axis and explore the effect of misalignment in the $x$-$z$ plane.
\subsubsection{Misalignment: tilt}
Any small misalignment can be resolved into a combination of a lateral displacement and a tilt at the beam waist. First we consider a tilt about the waist in the $x$-$z$ plane, $\alpha$, between the perturbed beam axis and the unperturbed optical axis.
The tilted beam can be described in the original basis as having an additional transverse phase term.
For small angles ($\alpha \ll \Theta$), this can be Taylor-expanded to second order as:
\begin{equation}
\exp \left(\mathrm{i}\frac{2\pi \alpha}{\lambda} x\right) = \exp \left(\mathrm{i}\frac{2 \alpha}{\Theta} \frac{x}{w_{0}}\right) \approx 1+\mathrm{i} \frac{2 \alpha}{\Theta} \frac{x}{w_{0}} - \frac{2 \alpha^{2}}{\Theta^{2}} \frac{x^2}{w_{0}^2} \,,
\end{equation}
where $\Theta = \frac{\lambda}{\pi w_{0}}$ is the far-field divergence angle, and $w_{0}$ is the beam waist size. At the beam waist $w(z)=w_{0}$ and $R_{c}(z)=\infty$, so the tilted input beam (Eq.~\ref{equ:HGn0}) in this approximation becomes
\begin{equation}
\begin{aligned}
\mathcal{U}_{n}^\mathrm{tilt}(x, z)&=\left(\frac{2}{\pi}\right)^{1 / 4}\left(\frac{\exp (\mathrm{i}(2 n+1) \Psi)}{2^{n} n ! w_{0}}\right)^{1 / 2} H_{n}\left(\frac{\sqrt{2} x}{w_{0}}\right) \\
&\times \exp \left(-\frac{x^{2}}{w_{0}^{2}}\right) \left(1+\mathrm{i} \frac{2 \alpha}{\Theta} \frac{x}{w_{0}} - \frac{2\alpha^{2}}{\Theta^{2}} \frac{x^2}{w_{0}^{2}}\right)\,.
\end{aligned}
\end{equation}
Then using Eqs.~\ref{equ:hermite11} and~\ref{equ:hermite2} we find
\begin{equation}
\begin{aligned}
\mathcal{U}_{n}^\mathrm{tilt}(x, z)&= \mathcal{U}_{\mathrm{n}}(x)
+ \mathrm{i} \frac{\alpha}{\Theta} \Big(\sqrt{n+1} \mathcal{U}_{\mathrm{n}+1}e^{- \mathrm{i}\Psi} + \sqrt{n} \mathcal{U}_{\mathrm{n}-1} e^{\mathrm{i}\Psi}\Big) \\
&- \frac{\alpha^2}{2\Theta^{2}} \big( \sqrt{(n+1)(n+2)}\mathcal{U}_{n+2}e^{-2\mathrm{i}\Psi} + (2n+1) \mathcal{U}_{n} \\
&+ \sqrt{n(n-1)}\mathcal{U}_{n-2}e^{ 2\mathrm{i}\Psi}\big)\,.
\end{aligned}
\end{equation}
To first order, we see that tilt scatters $\mathrm{HG_{n, 0}}$ into $\mathrm{HG_{n\pm1, 0}}$.
The mode coupling coefficient is
\begin{equation}
\rho = \int_{-\infty}^{\infty} dx \cdot \mathcal{U}_{n}^\mathrm{tilt} \cdot \mathcal{U}_{\mathrm{n}}^{\ast} \approx 1 - \frac{\alpha^2}{2\Theta^{2}} \left(2n+1\right)\,,
\end{equation}
so the power coupling coefficient (Eq.~\ref{equ:genpowcoeff}) for $\mathrm{HG_{n,0}}$ due to tilt $\alpha$ is
\begin{equation}
\eta^\mathrm{tilt} \approx 1 - \frac{\alpha^2}{\Theta^{2}} \left(2n+1\right)
\label{equ:10}
\end{equation}
to second order. The relative power coupling loss (Eq.~\ref{equ:powerloss}) $\Gamma_{n}^\mathrm{tilt}$ becomes
\begin{equation}
\Gamma_{n}^\mathrm{tilt} = 2n+1\,,
\label{equ:tiltloss}
\end{equation}
where $n$ is the mode index of the beam. We thus see that the relative power coupling loss for $\mathrm{HG_{n,0}}$, as a result of tilt between the beam axis and the unperturbed optical axis, scales linearly with mode order. Simple propagation of the beam does not scatter power between modes, so this result must be valid for all $z$-axis positions, not just the waist location.
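Equation~\ref{equ:tiltloss} can also be checked directly by projecting the expanded tilt phase onto $\mathcal{U}_{\mathrm{n}}$ symbolically; the SymPy sketch below is our own (with $w_{0}=1$ for brevity), and the lateral-offset case of the next subsection can be verified in the same way:
\begin{verbatim}
# Sketch (ours): symbolic check of rho = 1 - (alpha/Theta)^2 (2n+1)/2 at
# the waist, with w0 = 1 and q standing for alpha/Theta.
import sympy as sp
x, q = sp.symbols('x q', real=True)
for n in range(5):
    norm = (2/sp.pi)**sp.Rational(1, 4)/sp.sqrt(2**n*sp.factorial(n))
    un   = norm*sp.hermite(n, sp.sqrt(2)*x)*sp.exp(-x**2)
    tilt = 1 + 2*sp.I*q*x - 2*q**2*x**2    # exp(2 i q x) to O(q^2)
    rho  = sp.integrate(un**2*tilt, (x, -sp.oo, sp.oo))
    assert sp.simplify(rho - (1 - q**2*(2*n + 1)/2)) == 0
\end{verbatim}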
\subsubsection{Misalignment: lateral offset}
For a small lateral displacement $\delta x_{0}\ll w_{0}$ along the $x$ direction, the displaced beam (Eq.~\ref{equ:HGn0}) can be Taylor-expanded at the waist to second order:
\begin{equation}
\begin{aligned}
\mathcal{U}&^\mathrm{offset}(x, z)=\left(\frac{2}{\pi}\right)^{1 / 4}\left(\frac{e^{\mathrm{i}(2 n+1) \Psi}}{2^{n} n ! w_{0}}\right)^{1 / 2} H_{n}\left(\frac{\sqrt{2} (x-\delta x_{0})}{w_{0}}\right) e^{-\frac{(x-\delta x_{0})^{2}}{w_{0}^{2}}} \\
& \approx \left(\frac{2}{\pi}\right)^{1 / 4}\left(\frac{e^{\mathrm{i}(2 n+1) \Psi}}{2^{n} n ! w_{0}}\right)^{1 / 2} \Big(H_{n} - \frac{\sqrt{2} \delta x_{0}}{w_{0}} 2n H_{n-1} + \frac{\delta x_{0}^{2}}{w_{0}^{2}} \\
&\times 4n(n-1) H_{n-2}\Big) e^{-\frac{x^{2}}{w_{0}^{2}}}
\Big(1+2\frac{\delta x_{0}}{w_{0}^{2}} x - \frac{ \delta x_{0}^{2}}{w_{0}^2} + \frac{2 \delta x_{0}^{2} x^{2}}{w_{0}^{4}}\Big)\,.
\end{aligned}
\end{equation}
Using identities~\ref{equ:hermite11} and~\ref{equ:hermite2} and simplifying yields
\begin{equation}
\begin{aligned}
\mathcal{U}^\mathrm{offset}(x, z) &\approx \mathcal{U}_{n} + \frac{\delta x_{0}}{w_{0}} \left( \sqrt{n+1}\mathcal{U}_{n+1}e^{-\mathrm{i}\Psi} - \sqrt{n}\mathcal{U}_{n-1}e^{\mathrm{i}\Psi}\right) + \frac{ \delta x_{0}^{2}}{2w_{0}^2}\\
&\times \Big(-(2n+1) \mathcal{U}_{n} + \sqrt{(n+1)(n+2)}\mathcal{U}_{n+2}e^{-2\mathrm{i}\Psi} \\
&+\sqrt{n(n-1)}\mathcal{U}_{n-2}e^{2\mathrm{i}\Psi} \Big)\,.
\end{aligned}
\end{equation}
Collecting the coefficients of $\mathcal{U}_{n}$, the mode coupling coefficient is
\begin{equation}
\begin{aligned}
\rho = \int_{-\infty}^{\infty} dx \cdot \mathcal{U}^\mathrm{offset} \cdot \mathcal{U}_{\mathrm{n}}^{\ast} \approx 1 - \frac{ \delta x_{0}^{2}}{2 w_{0}^{2}} \left(2n+1\right)
\end{aligned}
\end{equation}
and the power coupling coefficient, to second order, is therefore
\begin{equation}
\eta^\mathrm{offset} \approx 1 - \left(2n+1\right)\frac{ \delta x_{0}^{2}}{w_{0}^{2}}\,.
\end{equation}
In this case the relative power coupling loss also scales linearly with respect to the mode order:
\begin{equation}
\Gamma_{n}^\mathrm{offset} = 2n+1 \,,
\label{equ:offsetloss}
\end{equation}
which equals $\Gamma_{n}^\mathrm{tilt}$ (Eq.~\ref{equ:tiltloss}).
\subsection{Mode mismatch}
Mode mismatches cannot be reduced to a single-axis treatment. Therefore we
consider a generic $\mathrm{HG_{n,m}}$
beam represented by the transverse function $\mathcal{U}_{\mathrm{n},\mathrm{m}}(x, y, z)$
as defined in Eq.~\ref{equ:HGnm}. As in section~\ref{sec:misalign}, we consider the effect of perturbations at the cavity waist.
\subsubsection{Mode mismatch: waist position mismatch}
For a beam waist displacement $\delta z_{0}$ along the $z$-direction, the wavefront radius of curvature $R_{c}$ of the input beam at the cavity waist is no longer infinite.
Assuming a small displacement such that $\frac{\lambda \delta z_{0}}{\pi w_{0}^{2}} \ll 1$, this can be approximated as~\cite{Morrison1:94}
\begin{equation}
\begin{aligned}
\frac{1}{R_{c}}\approx -\left(\frac{\lambda}{\pi w_{0}^{2}}\right)^{2} \cdot \delta z_{0} \,.
\end{aligned}
\label{equ:curvaturemismatch}
\end{equation}
As a result, $\mathcal{U}_{\mathrm{n}}(x,z)$ becomes
\begin{equation}
\begin{aligned}
\mathcal{U}_{n}^{WP}(x, z) \approx & \left(\frac{2}{\pi}\right)^{1 / 4}\left(\frac{\exp (\mathrm{i}(2 n+1) \Psi(z))}{2^{n} n ! w(z)}\right)^{1 / 2} H_{n}\left(\frac{\sqrt{2} x}{w(z)}\right) \\
&\times
e^{-\frac{x^{2}}{w^{2}(z)}}
\Big(1+\mathrm{i} \frac{ \lambda \delta z_{0}}{\pi w_{0}^{2}} \frac{x^{2}}{w_{0}^{2}} - \frac{ \lambda^2 \delta z_{0}^2}{2 \pi^2 w_{0}^{4}} \frac{x^{4}}{w_{0}^{4}} \Big) \,.
\label{equ:WPmm1}
\end{aligned}
\end{equation}
Applying Eqs.~\ref{equ:hermite11} and~\ref{equ:hermite2}, and writing $\gamma= \frac{kw_{0}^2}{R_{c}} \approx -\frac{2\lambda}{\pi w_{0}^2} \delta z_{0}$ for convenience, Eq.~\ref{equ:WPmm1} becomes
\begin{equation}
\begin{aligned}
\mathcal{U}&_{n}^{WP}(x, z) \approx \mathcal{U}_{n} -\mathrm{i} \frac{\gamma}{8} \Big(\sqrt{(n+1)(n+2)}\cdot \mathcal{U}_{n+2}e^{ -2\mathrm{i}\Psi} + (2n+1)\cdot \mathcal{U}_{n} \\
&+ \sqrt{n(n-1)}\cdot \mathcal{U}_{\mathrm{n-2}} e^{ 2\mathrm{i}\Psi} \Big) -\frac{\gamma^2}{128} \bigg(\sqrt{(n+1)(n+2)(n+3)(n+4)}\\
&\times \mathcal{U}_{\mathrm{n+4}} e^{ -4\mathrm{i}\Psi} + 2(2n+3)\sqrt{(n+1)(n+2)}\mathcal{U}_{\mathrm{n+2}} e^{ -2\mathrm{i}\Psi} \\
&+ 3(2n^2+2n+1)\mathcal{U}_{\mathrm{n}} + 2(2n-1)\sqrt{n(n-1)} \mathcal{U}_{\mathrm{n-2}}e^{ 2\mathrm{i}\Psi} \\
&+ \sqrt{n(n-1)(n-2)(n-3)}\mathcal{U}_{\mathrm{n-4}}e^{ 4\mathrm{i}\Psi}\bigg)\,.
\end{aligned}
\end{equation}
To first order, we see that waist position mismatch scatters $\mathrm{HG_{n, 0}}$ into $\mathrm{HG_{n\pm2, 0}}$.
The mode coupling coefficient in the $x$-direction is
\begin{equation}
\begin{aligned}
\rho_{x} &= \int_{-\infty}^{\infty} dx \cdot \mathcal{U}_{n}^{WP} \cdot \mathcal{U}_{\mathrm{n}}^{\ast} \\
&\approx 1- \mathrm{i}\frac{2 n+1}{8}\cdot \gamma -\frac{3\left(2n^2+2n+1\right)}{128} \cdot \gamma^2 \,.
\end{aligned}
\label{equ:linear}
\end{equation}
We will have a similar result for the coupling coefficient in $y$, $\rho_{y}$, so the full mode coupling coefficient due to waist position mismatch is
\begin{equation}
\begin{aligned}
\rho = \rho_x \cdot \rho_y &\approx 1 - \mathrm{i}\frac{ n+m+1}{4} \cdot \gamma \\
&- \frac{ 3\left(n^2+m^2\right)+5\left(n+m\right)+4nm+4}{64} \cdot \gamma^2 \,.
\end{aligned}
\end{equation}
The power coupling coefficient (Eq.~\ref{equ:genpowcoeff}) to second order is then
\begin{equation}
\eta^{WP} \approx 1-\frac{\left(n^2+n+1\right)+\left(m^2+m+1\right)}{32}\cdot \gamma^2.
\end{equation}
Note that the linear term in $\eta^{WP}$ cancels out since the first-order coefficient in $\rho$ is purely imaginary. The relative power coupling loss (Eq.~\ref{equ:powerloss}) due to waist position mismatch therefore scales quadratically with mode indices $n,m$:
\begin{equation}
\Gamma_{n,m}^{WP} = \frac{n^2+n+m^2+m+2}{2} \,.
\label{equ:WPm}
\end{equation}
\subsubsection{Mode mismatch: waist size mismatch}
In terms of the relative waist size mismatch parameter $\epsilon \equiv \frac{w}{w_{0}}-1$, $\mathcal{U}_{\mathrm{n}}$ can be written as
\begin{equation}
\begin{aligned}
&\mathcal{U}_{\mathrm{n}}^{WS}(x, z) = \left(\frac{2}{\pi}\right)^{\frac{1}{4}}\left(\frac{e^{\mathrm{i}(2 n+1) \Psi(z)}}{2^{n} n ! w_{0}(1+\epsilon)}\right)^{\frac{1}{2}} H_{n}\left(\frac{\sqrt{2} x}{w_{0}(1+\epsilon)}\right) e^{-\frac{x^{2}}{(w_{0}(1+\epsilon))^{2}}} \\
& \approx \left(\frac{2}{\pi}\right)^{\frac{1}{4}}\left(\frac{\exp (\mathrm{i}(2 n+1) \Psi(z))}{2^{n} n ! w_{0}}\right)^{\frac{1}{2}} \left(1 - \frac{\epsilon}{2} + \frac{3}{8}\epsilon^2\right) \Bigg(H_{n}(\frac{\sqrt{2} x}{w_{0}}) \\
&+ \frac{\sqrt{2} x}{w_{0}} (\epsilon^2-\epsilon) 2 n H_{n-1}(\frac{\sqrt{2} x}{w_{0}}) + \frac{1}{2}\frac{2x^2}{w_{0}^2} (\epsilon^2-\epsilon)^2 4n(n-1)\\
&\times H_{n-2}(\frac{\sqrt{2} x}{w_{0}})\Bigg) e^{-\frac{x^2}{w_{0}^2}} \Big(1 + (2\epsilon - 3 \epsilon^2)\frac{x^2}{w_{0}^2} + \frac{2 x^4}{w_{0}^4} \epsilon^2 \Big) \,,
\end{aligned}
\end{equation}
where we assume $\epsilon \ll 1$ and Taylor-expand to second order. Applying identities ~\ref{equ:hermite11} and~\ref{equ:hermite2} gives
\begin{equation}
\begin{aligned}
&\mathcal{U}_{\mathrm{n}}^{WS}(x, z) \approx \mathcal{U}_{\mathrm{n}} + \frac{\epsilon}{2}\Big(\sqrt{(n+1)(n+2)} \mathcal{U}_{\mathrm{n+2}} e^{-2\mathrm{i}\Psi} - \sqrt{n(n-1)} \\
&\times \mathcal{U}_{\mathrm{n-2}} e^{2\mathrm{i}\Psi}\Big) + \frac{\epsilon^2}{8} \bigg(
\sqrt{n(n-1)(n-2)(n-3)}\mathcal{U}_{\mathrm{n-4}}e^{4\mathrm{i}\Psi} \\
&+ 2\sqrt{n(n-1)}\mathcal{U}_{\mathrm{n-2}}e^{2\mathrm{i}\Psi} -2(n^2+n+1)\mathcal{U}_{\mathrm{n}} - 2\sqrt{(n+1)(n+2)}\\
&\times \mathcal{U}_{\mathrm{n+2}}e^{-2\mathrm{i}\Psi} + \sqrt{(n+1)(n+2)(n+3)(n+4)}\mathcal{U}_{\mathrm{n+4}}e^{-4\mathrm{i}\Psi}
\bigg)\,.
\end{aligned}
\end{equation}
The mode coupling coefficient in the $x$-direction is then
\begin{equation}
\begin{aligned}
\rho_{x} &= \int_{-\infty}^{\infty} dx \cdot \mathcal{U}_{n}^{WS} \cdot \mathcal{U}_{\mathrm{n}}^{\ast} \approx 1 - \frac{\epsilon^2}{4}\left(n^2+n+1\right)\,.
\end{aligned}
\end{equation}
Note that, unlike Eq.~\ref{equ:linear}, there is no linear term in this case. We will have a similar result for $\rho_{y}$; the full coupling coefficient for $\mathrm{HG_{n,m}}$ due to waist size mismatch is therefore
\begin{equation}
\rho \approx 1 - \frac{\left(n^2+n+1\right)+\left(m^2+m+1\right)}{4} \cdot \epsilon^2 \,.
\end{equation}
The power coupling coefficient (Eq.~\ref{equ:genpowcoeff}) is thus
\begin{equation}
\eta^{WS} \approx 1 - \frac{\left(n^2+n+1\right)+\left(m^2+m+1\right)}{2} \cdot \epsilon^2\,,
\label{equ:powercoeff}
\end{equation}
and again we find a quadratic relationship for the relative power coupling loss (Eq.~\ref{equ:powerloss}) in the case of a waist size mismatch:
\begin{equation}
\Gamma_{n,m}^{WS} = \frac{n^2+n+m^2+m+2}{2}\,.
\label{equ:WSm}
\end{equation}
This matches the result for waist position mismatch, Eq.~\ref{equ:WPm}. Evaluation of Eqs.~\ref{equ:WPm} and~\ref{equ:WSm} exactly reproduces the coefficients found by Jones et al.~\cite{Jones_2020}.
\section{Numerical comparison}
\label{sec:3}
The power coupling coefficients are calculated numerically by evaluating the overlap integrals (i.e. $\rho$) of discretized perturbed and original beams. Each beam is modeled as a 2-dimensional matrix of field amplitudes in the $x$-$y$ plane at the cavity waist; the integrals are evaluated using element-wise matrix multiplication. This is repeated for a range of perturbation amplitudes. The result for tilting $\mathrm{HG}_{\mathrm{n},0}$ modes is shown on the left of Fig.~\ref{fig:powercouplings}.
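For concreteness, the following Python sketch evaluates the discretized overlap integral for the tilt case. The wavelength, waist size, grid extent, and tilt angles below are assumed values, not necessarily those used for Fig.~\ref{fig:powercouplings}; since a tilt purely in $x$ leaves the $y$-integral equal to unity, a 1-D overlap suffices, with the tilt entering as a linear phase factor $e^{\mathrm{i}k\alpha x}$ at the waist:
\begin{verbatim}
import numpy as np
from math import factorial

def hermite(n, x):
    # physicists' Hermite polynomial H_n(x) via the three-term recurrence
    h_prev, h = np.ones_like(x), 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg(n, x, w0):
    # normalised 1-D Hermite-Gauss amplitude at the waist
    norm = (2.0 / np.pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n) * w0)
    return norm * hermite(n, np.sqrt(2.0) * x / w0) * np.exp(-(x / w0) ** 2)

lam, w0 = 1.064e-6, 1.0e-3            # assumed wavelength and waist [m]
k = 2.0 * np.pi / lam
x = np.linspace(-6.0 * w0, 6.0 * w0, 4001)

for n in range(4):
    u = hg(n, x, w0)                  # the mode is real at the waist
    for alpha in (0.5e-6, 1.0e-6):    # tilt angles [rad]
        rho = np.trapz(u * np.exp(1j * k * alpha * x) * u, x)
        loss_ppm = (1.0 - abs(rho) ** 2) * 1e6
        print(n, alpha, round(loss_ppm, 2), "ppm")
\end{verbatim}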
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/powercouplings.pdf}
\caption{
Left: Numerical power coupling loss in ppm as a function of tilt angle for $\mathrm{HG}_{\mathrm{n},0}$ modes; Right: Relative power coupling loss as a function of mode order.
}
\label{fig:powercouplings}
\end{figure}
The relative power coupling loss (Eq.~\ref{equ:powerloss}) can be obtained numerically by taking the discretized second derivative of the overlap integral at zero perturbation. The normalised result for the case of tilted input is shown on the right of Fig.~\ref{fig:powercouplings}. The numerical result (yellow) agrees well with the analytical result (red) from Eq.~\ref{equ:tiltloss}. A similar result can be obtained for offsets.
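A minimal sketch of this second-derivative evaluation, here for the waist size mismatch case of Eq.~\ref{equ:WSm} (the grid extent and the perturbation step $h$ are assumed values), is:
\begin{verbatim}
import numpy as np
from math import factorial

def hermite(n, x):
    h_prev, h = np.ones_like(x), 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg(n, x, w):
    norm = (2.0 / np.pi) ** 0.25 / np.sqrt(2.0 ** n * factorial(n) * w)
    return norm * hermite(n, np.sqrt(2.0) * x / w) * np.exp(-(x / w) ** 2)

x = np.linspace(-12.0, 12.0, 8001)    # in units of the nominal waist w0 = 1

def eta(n, m, eps):
    # power coupling of HG_{n,m} with waist w0*(1+eps) into the nominal mode
    rn = np.trapz(hg(n, x, 1.0 + eps) * hg(n, x, 1.0), x)
    rm = np.trapz(hg(m, x, 1.0 + eps) * hg(m, x, 1.0), x)
    return (rn * rm) ** 2

def rel_loss(n, m, h=1e-3):
    # central second difference of the loss 1 - eta at eps = 0 (loss(0) = 0);
    # the 1/h^2 factor cancels in the ratio to the fundamental mode
    d2 = 2.0 - eta(n, m, h) - eta(n, m, -h)
    d2_00 = 2.0 - eta(0, 0, h) - eta(0, 0, -h)
    return d2 / d2_00

for n in range(3):
    for m in range(3):
        print(n, m, round(rel_loss(n, m), 3))  # cf. (n^2+n+m^2+m+2)/2
\end{verbatim}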
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/powercouplinglossres.pdf}
\caption{Left: numerical relative power coupling loss for $\mathrm{HG}_{\mathrm{n},\mathrm{m}}$ modes due to waist size mismatch;
Right: residuals when compared to Eq.~\ref{equ:WSm}.
}
\label{fig:powercouplinglossWS}
\end{figure}
The same method is used to calculate the power coupling coefficients and losses for mode mismatched $\mathrm{HG}_{\mathrm{n},\mathrm{m}}$ input beams. In this case the relative power coupling loss scales with both $n$ and $m$, as shown on the left of Fig.~\ref{fig:powercouplinglossWS} for waist size mismatches, becoming larger as we move up and right. The right panel of Fig.~\ref{fig:powercouplinglossWS} compares the numerical results to Eq.~\ref{equ:WSm}, in terms of the magnitude of the difference in the results $|\Delta\Gamma_\mathrm{n,m}^\mathrm{WS}|$. This residual is small, but increases with mode index because higher-order modes, which contain more high-spatial-frequency content, need a finer grid resolution to match the accuracy achieved for lower-order modes. A similar result can be obtained for waist position mismatches.
\section{Conclusion and Discussion}
\label{sec:4}
Through analytical and numerical methods we find that misalignment and mode-mismatch tolerances for higher-order HG mode beams are tighter than for $\mathrm{HG}_{0,0}$, as the induced relative power coupling losses scale linearly and quadratically with the mode order, respectively.
The maximum allowable mode mismatch for higher-order modes is smaller than for the fundamental mode, given the same mode-mismatch-induced power loss requirement. Specifically, since we are considering the second order expansion, using e.g. Eq.~\ref{equ:WSm} we see that in general the ratio of the maximum allowable mode mismatch for the $\mathrm{HG}_{\mathrm{n},\mathrm{m}}$ mode compared against the fundamental mode, given the same power loss requirement, is $\sqrt{\frac{2}{n^2+n+m^2+m+2}}$. For $\mathrm{HG}_{3,3}$ this is around 0.28.
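A quick numerical check of this ratio (the helper function below is purely illustrative; the values follow directly from Eq.~\ref{equ:WSm}):
\begin{verbatim}
from math import sqrt

def tolerance_ratio(n, m):
    # sqrt(2 / (n^2 + n + m^2 + m + 2))
    return sqrt(2.0 / (n * n + n + m * m + m + 2))

print(tolerance_ratio(3, 3))  # ~0.277 for HG_33, i.e. "around 0.28"
print(tolerance_ratio(0, 0))  # 1.0 for the fundamental, by construction
\end{verbatim}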
Future work will investigate alignment and mode-matching sensing and control for arbitrary higher-order HG modes in various sensing schemes. This paper has shown that higher-order HG modes lead to tighter tolerances; if the same principles also lead to higher signal-to-noise ratios in sensing schemes, as has been shown for Laguerre-Gauss modes~\cite{PhysRevD.79.122002}, this will help mitigate the challenges discussed here.
\section{Funding}
This work was supported by National Science Foundation grants PHY-1806461 and PHY-2012021.
\bibliographystyle{unsrt}
\section{Introduction}
Recently, Reinforcement Learning (RL) has made remarkable success in many tasks, such as Go \cite{silver2017mastering},
Atari games \cite{mnih2013playing} and robot control \cite{abbeel2004apprenticeship,andrychowicz2020learning}. In an RL system,
in addition to the agent and the environment, there are
four key elements: policy, reward signal, value function, and model \cite{sutton1998introduction}, among which
the reward signal plays a key role in directly determining the overall performance of the RL system.
More specifically, the reward signal defines the goal of the RL task, and guides the learning agent towards the desirable direction.
However, in some real-world problems, the reward signal is difficult to receive or to measure explicitly.
In such cases, alternative approaches such as Imitation Learning (IL) are required.
In IL, the learning agent receives demonstrations from an expert,
and the goal is to recover a desired policy using the expert demonstrations through interactions with the environment.
In general, IL methods can be categorized as Behavioral Cloning (BC) \cite{pomerleau1991efficient,ross2010efficient,bagnell2007boosting}
and Inverse Reinforcement Learning (IRL) \cite{russell1998learning,ng2000algorithms}.
In BC, the agent tries to mimic the expert's action in each state, with the aim to match the expert as closely as possible.
By contrast, IRL methods try to recover a reward function based on the demonstrations of the expert,
and then use RL algorithms to train an agent based on the recovered reward function. In practice, BC methods are usually used to derive an initial policy for RL algorithms
\cite{nagabandi2018neural,rajeswaran2017learning},
while IRL methods employ an iterative process alternating between reward recovery and RL.
However, BC methods have two major flaws.
First, prediction errors may accumulate, and BC agents cannot recover the true action once a mistake occurs.
Second, BC methods are supervised learning methods and require a large amount of data for training.
Furthermore, supervised learning methods assume that
the demonstrations are \textit{i.i.d}, which is unlikely to be the case in real-world tasks.
Most previous IRL methods are model-based and need to call a Markov decision process (MDP) solver multiple times during learning.
Meanwhile, IRL methods are essentially cost function learning methods \cite{ng2000algorithms,abbeel2004apprenticeship,ziebart2008maximum}, aiming to minimize
the distance between the expert and the learning agent. For example,
Adversarial IL \cite{ho2016generative} uses the KL divergence to estimate the cost function, while Primal Wasserstein Imitation Learning (PWIL) \cite{dadashi2020primal} uses the upper bound of the Wasserstein distance to estimate the cost,
and Random Expert Distillation (RED) \cite{wang2019random} uses the expert policy support estimation to estimate the cost.
In this paper, we focus on how to recover the reward function in IRL. The main contribution is
that we propose a concise method to recover the reward function using expert demonstrations
based on probability density estimation.
Instead of being restricted by the concept of cost, we propose to recover the reward function directly, and the form of the reward function is also
simpler and easier to calculate than in previous IRL methods. We prove that, in theory, the expert policy can be recovered when our reward function is adopted.
We also propose a novel IRL algorithm named
Probability Density Estimation based Imitation Learning (PDEIL) to demonstrate the
efficacy of the proposed reward function.
Experimental studies in both discrete and continuous action spaces confirm that
the desired policy can be recovered requiring fewer interactions with the environment and
less computing resources compared with existing IL methods.
\section{Background and Notations}
An environment in RL can be modeled as an MDP \cite{sutton1998introduction},
with a tuple $ (S, A, T, \gamma, D, R)$, where $S$ is the state space; $A$ is
the action space; $T$ is the state transition probability model;
$\gamma \in (0, 1)$ is the discount factor; $D$ is the initial state distribution from which $s_0$ is drawn;
$R: S \times A \rightarrow \mathbb{R}$ is the reward signal. A policy is
the rule of the agent's behavior that can be generally denoted as $\pi$.
The goal of RL is to obtain a policy that maximizes the cumulative discounted reward from the initial state,
denoted by the performance objective $J (\pi)$, which can be written as an expectation \cite{silver2014deterministic}:
\begin{equation}
\begin{aligned}
J\left (\pi\right) &=\mathbb{E}_{s \sim \rho^{\pi}, a \sim \pi}[r (s, a)] \\
&=\int_{S} \rho^{\pi} (s) \int_{A} \pi (a|s) r (s, a) \mathrm{d} a \mathrm{d} s
\end{aligned}
\label{optimal_objective}
\end{equation}
where $\rho^{\pi} (s) = \sum_{t = 0}^{\infty}\gamma^{t}p (s_t=s|\pi)$ is the discounted state
distribution of policy $\pi$. In the following, we
introduce a new discounted distribution $\rho^{\pi} (s,a) = \rho^{\pi} (s) \pi (a|s)$, which is called discounted
state-action joint distribution of policy $\pi$. To differentiate between the two discounted distributions,
we denote the discounted state distribution as $\rho^{\pi}_{s} (\cdot)$, and the discounted state-action joint
distribution as $\rho^{\pi}_{s,a} (\cdot)$.
Since there is no reward signal in the environment of IL, we use MDP$\backslash$R to denote an MDP without reward signal \cite{abbeel2004apprenticeship},
in the form of a tuple $ (S, A, T, \gamma, D)$.
Meanwhile, we use $\pi_{e}$ to represent the expert policy from which the demonstrations are generated.
\section{Related Works}
IRL was first formally defined by Ng \cite{ng2000algorithms}, with the aim of optimizing the reward function given expert demonstrations and access to
the environment. IRL features an inherent ambiguity, as the same optimal policy can potentially be derived from different reward functions.
Some IRL methods add an extra constraint to deal with the ambiguity issue, such as the maximum entropy principle \cite{ziebart2008maximum}.
Traditional IRL methods often employ a linear combination of features as the reward function and the task of recovering the reward function is to
optimize a set of weights \cite{abbeel2004apprenticeship}.
Adversarial IL is an IRL method that has recently attracted significant interests,
which uses Generative Adversarial Network (GAN) \cite{goodfellow2014generative} to simulate the IL process.
In Adversarial IL, the generator in GAN represents a policy, while the discriminator in GAN corresponds to the reward function,
and the recovering of the reward function and the learning of policy in IL are transformed into the training of the discriminator and generator.
Expert support estimation is another direction in IRL.
The key idea is to encourage the agent to stay within the state-action support of the expert
and several reward functions have been proposed, such as Soft Q Imitation Learning (SQIL)
\cite{reddy2019sqil}, RED, Disagreement-Regularized Imitation Learning (DRIL) \cite{brantley2019disagreement}, and PWIL.
SQIL features a binary reward function, according to which the agent receives a reward of $+1$ when it selects an action in a state endorsed by the expert,
and a reward of $0$ otherwise. RED uses a neural network to estimate the state-action support of the expert.
DRIL relies on the variance among an ensemble of BC models to estimate the state-action support, and constructs a reward function based on the distance to the
support and the KL divergence among the BC models.
PWIL employs a ``pop-outs'' trick to force the agent to stay within the expert's support.
In some sense, our method is similar to these methods as it employs a probability model to estimate the expert support as a component of the reward function.
\section{Methodology}
The objective of our work is to design a reward function that can make the resulting optimal policy equal to the expert policy.
In this section, we first present the structure of our reward function,
and then investigate why our reward function can make the optimal policy equal to the expert policy.
Furthermore, we propose a revised version of the original reward function that can overcome its potential issue.
Finally, the PDEIL algorithm that employs the reward function is introduced.
\subsection{Reward Based on Probability Density Estimation}
In traditional RL,
the goal of the agent is to seek a policy that maximizes $J (\pi)$ in Equation \eqref{optimal_objective}.
However, in an MDP$\backslash$R environment,
the agent cannot receive reward signals from the interactions with the environment.
To accommodate this lack of reward signals,
we need to design a reward function based on expert demonstrations.
If the reward function can guarantee in theory that the corresponding optimal policy is equal to the expert policy,
we can expect to recover the expert policy using an RL algorithm.
\begin{theorem}\label{as1}
Assume $\pi_e$ is a deterministic policy, then:
\begin{equation}
\forall \pi, \langle \pi_e (a | s), \pi_e (a | s) \rangle \geq \langle \pi (a | s), \pi (a | s) \rangle,
\label{ase1}
\end{equation}
\end{theorem}
\noindent where $\langle \cdot, \cdot \rangle$ is the inner product; $\langle \pi (a | s), \pi (a | s) \rangle = \int_a\pi^2 (a|s)da$ for continuous action spaces and
$\langle \pi (a | s), \pi (a | s) \rangle = \sum_a\pi^2 (a|s)$ for discrete action spaces.
Moreover, if a policy $\pi$ becomes more deterministic, the value of $\langle \pi (a | s), \pi (a | s) \rangle$ will also increase, and
$\langle \pi (a | s), \pi (a | s) \rangle$ can be used to measure the stochasticity of a policy
(i.e., similar to the entropy of a policy \cite{haarnoja2018soft}).
\begin{proof}
When the action space is discrete, and $\pi_e$ is a deterministic policy,
the expert agent only selects
an action with probability 1 for each state, then:
$$\langle \pi_e, \pi_e \rangle = 1,$$
for all policies:
\begin{equation*}
\langle \pi, \pi \rangle = \sum_{a \in A}\pi^{2} (a|s),
\end{equation*}
for all actions:
$$0 \leq \pi (a | s) \leq 1,$$
then:
$$\langle \pi, \pi \rangle \leq \sum_{a \in A}\pi (a|s) = 1,$$
we have:
$$\forall \pi, \langle \pi_e, \pi_e \rangle \geq \langle \pi, \pi \rangle$$
When the action space is continuous, the probability density function of a deterministic policy is a shifted Dirac function ($\delta (a - a_0)$) \cite{lillicrap2015continuous}. Note that $\int_a\pi_e^2 (a|s)da$ is not
integrable when $\pi_e (a | s) = \delta (a - a_0)$, and $ \langle \pi_e, \pi_e \rangle$ goes to infinity, so intuitively, $\langle \pi_e, \pi_e \rangle \geq \langle \pi, \pi \rangle$ .
\end{proof}
Table \ref{t1} is an example of Theorem \ref{as1} when the action space is discrete. In Table~\ref{t1}, $\langle \pi_1 (a | s), \pi_1 (a | s) \rangle = 1$ and $\langle \pi_2 (a | s), \pi_2 (a | s) \rangle = \frac{1}{3}$. Since $\pi_1$ is a
deterministic policy (it only selects action $a_3$), and $\pi_2$ is a uniform random policy,
therefore $\langle \pi_1 (a | s), \pi_1 (a | s) \rangle \geq \langle \pi_2 (a | s), \pi_2 (a | s) \rangle$.
\begin{table}[tbh]
\centering
\caption{\small{An example of two different policies on the same state with a discrete action space.}}
\begin{tabular}{llll}
\toprule
& $a_1$ & $a_2$ & $a_3$ \\ \midrule
$\pi_1 (a|s)$ & $0$ & $0$ & $1$ \\
$\pi_2 (a|s)$ & $\frac{1}{3}$ & $\frac{1}{3}$ & $\frac{1}{3}$ \\ \bottomrule
\end{tabular}
\label{t1}
\end{table}
Figure \ref{kk} gives an example of Theorem \ref{as1} when the action space is continuous. It shows two alternatives to approach the shifted Dirac function:
uniform distributions as Figure \ref{un}, and triangle distributions as Figure \ref{tri}.
When the height of the rectangle or triangle goes to infinity, the probability density function approaches $\delta (a - a_0)$.
Meanwhile, the higher the rectangle or triangle, the more deterministic the policy is. If we calculate the inner product of the policies in Figure~\ref{kk},
it is clear that more deterministic policies have greater inner product values.
\begin{figure}[tbh]
\centering
\subfigure[Uniform policies]{
\resizebox{0.2\textwidth}{!}{\includesvg{save1.svg}}
\label{un}
}
\subfigure[Triangle policies]{
\resizebox{0.2\textwidth}{!}{\includesvg{save2.svg}}
\label{tri}
}
\caption{An example of different policies on the same state with a continuous action space. In uniform policies, $\langle \pi_1, \pi_1 \rangle = 0.5$, $\langle \pi_2, \pi_2 \rangle = 1$, $\langle \pi_3, \pi_3 \rangle = 2$,
$\langle \pi_1, \pi_1 \rangle < \langle \pi_2, \pi_2 \rangle < \langle \pi_3, \pi_3 \rangle$; In triangle policies $\langle \pi_1, \pi_1 \rangle = 0.5$,
$\langle \pi_4, \pi_4 \rangle = \frac{2}{3}, \langle \pi_5, \pi_5 \rangle = \frac{4}{3}$,
$\langle \pi_1, \pi_1 \rangle < \langle \pi_4, \pi_4 \rangle < \langle \pi_5, \pi_5 \rangle$.}
\label{kk}
\end{figure}
\begin{theorem}\label{pro}
Assume:
\begin{equation}
\begin{aligned}
r (s, a) &= \frac{\rho_{s}^{\pi_e} (s)}{\rho^{\pi}_{s} (s)}\pi_{e} (a | s) \\
&= \frac{\rho_{s, a}^{\pi_e} (s,a)}{\rho_{s}^{\pi} (s)}
\end{aligned},
\label{origin_reward}
\end{equation}
when the expert policy is a deterministic policy (i.e., Equation \eqref{ase1} is satisfied), the optimal policy $\pi_{*}$ is identical to the expert policy
under the optimal objective of Equation \eqref{optimal_objective}.
\end{theorem}
\begin{proof}
Substituting the reward of Equation \eqref{origin_reward} into Equation \eqref{optimal_objective} gives:
\begin{equation*}
\begin{aligned}
J (\pi) &= \int_{S} \rho^{\pi}_{s} (s) \int_{A} \pi (a|s) r (s, a) dads \\
&= \int_{S}\rho^{\pi}_{s} (s) \int_{A} \pi (a|s)\frac{\rho_{s}^{\pi_e} (s)}{\rho^{\pi}_{s} (s)}\pi_{e} (a | s)dads \\
&= \int_{S}\rho^{\pi_e}_{s} (s) \int_{A} \pi (a|s)\pi_{e} (a | s)dads
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
J (\pi_e) &= \int_{S} \rho^{\pi_e}_{s} (s) \int_{A} \pi_e (a|s) r (s, a) dads \\
&= \int_{S}\rho^{\pi_e}_{s} (s) \int_{A} \pi_e (a|s)\frac{\rho_{s}^{\pi_e} (s)}{\rho^{\pi_e}_{s} (s)}\pi_{e} (a | s)dads \\
&= \int_{S}\rho^{\pi_e}_{s} (s) \int_{A} \pi_e (a|s)\pi_{e} (a | s)dads \\
\end{aligned}
\end{equation*}
Recall the Cauchy--Schwarz inequality:
\begin{equation*}
(\int f (x) g (x) dx)^2 \leq \int f^2 (x)dx \int g^2 (x)dx
\end{equation*}
then:
\begin{equation*}
\left(\int_{A} \pi (a|s)\pi_{e} (a | s)da\right)^2 \leq \int_{A}\pi^2 (a|s)da \int_{A} \pi_e^2 (a|s)da
\end{equation*}
Applying Equation \eqref{ase1}, we obtain:
\begin{equation*}
\begin{aligned}
\left(\int_{A} \pi (a|s)\pi_{e} (a | s)da\right)^2 &\leq \int_{A}\pi^2 (a|s)da \int_{A} \pi_e^2 (a|s)da \\
&\leq (\int_{A} \pi_e^2 (a|s)da)^2
\end{aligned}
\end{equation*}
then:
\begin{equation*}
J (\pi) \leq J (\pi_e)\,, \quad \text{and hence} \quad \pi_* = \pi_{e}\,.
\end{equation*}
\end{proof}
According to Theorem \ref{pro}, we can construct the reward function
as Equation \eqref{origin_reward},
and once the optimal policy based on this reward function is found,
we can recover the expert policy.
Since the underlying distributions are not known \textit{a priori}, $\rho^{\pi_e}_{s,a} (s,a)$ and $\rho^{\pi}_{s} (s)$ cannot be computed directly.
The most common and intuitive solution is to estimate
the two probability densities from corresponding samples. Consequently, the practical reward function can be written as:
\begin{equation}
r (s, a) = \frac{\widehat{\rho^{\pi_e}_{s, a}} (s,a)}{\widehat{\rho^{\pi}_{s}} (s)},
\label{mis_lead}
\end{equation}
where $\widehat{\rho^{\pi_e}_{s, a}} (s,a)$ can be estimated using the demonstrations of the expert,
and $\widehat{\rho^{\pi}_{s}} (s)$ can be estimated through the agent's interactions with the environment.
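As a concrete (hypothetical) instantiation, both estimators can be realised with Gaussian kernel density estimates. In the following sketch, \texttt{expert\_sa} and \texttt{agent\_s} are placeholder arrays of expert state-action samples and agent state samples, and the small constant in the denominator is only a numerical guard:
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

# expert_sa: expert (state, action) samples, shape (N, dim_s + dim_a)
# agent_s:   states visited by the current policy, shape (M, dim_s)
# (both names are placeholders for collected sample arrays)
def make_reward(expert_sa, agent_s):
    rho_sa_e = gaussian_kde(expert_sa.T)   # estimates rho^{pi_e}_{s,a}
    rho_s_pi = gaussian_kde(agent_s.T)     # estimates rho^{pi}_{s}
    def reward(s, a):
        sa = np.concatenate([s, a])[:, None]
        # the small constant only guards against division by zero; the
        # systematic fix for near-zero densities is the revised reward
        # discussed in the next subsection
        return float(rho_sa_e(sa) / (rho_s_pi(s[:, None]) + 1e-12))
    return reward
\end{verbatim}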
However, the reward function in the original form of Equation \eqref{mis_lead} has a major defect, which we will discuss in the following section.
\subsection{Misleading Reward}\label{mr}
\begin{figure}[tb]
\centering
\subfigure[Change in ratio]{
\resizebox{0.45\textwidth}{!}{\includesvg{adjust.svg}}
}
\subfigure[Reward variance]{
\resizebox{0.45\textwidth}{!}{\includesvg{haha.svg}}
\label{haha}
}
\caption{Comparison between two reward functions.
The original ratio $ \frac{\rho_{s}^{\pi_e} (s)}{\rho^{\pi}_{s} (s)}$ may be extremely high when $\rho^{\pi}_{s} (s)$ is close to 0
, while the revised ratio $\frac{2\rho_{s}^{\pi_e} (s)}{\rho_{s}^{\pi_e} (s) + \rho^{\pi}_{s} (s)}$ is close to the original ratio when the original ratio is low,
and has an upper bound of 2.
The reward in the form of Equation \eqref{mis_lead} has a higher variance than the revised reward in the form of Equation \eqref{adjust_error}.}
\label{vbt}
\end{figure}
The reward function in the form of Equation \eqref{mis_lead} has a potential issue, referred to as the misleading reward problem.
It means that the agent relying on this reward function may get extremely wrong rewards in some cases.
This problem happens when the agent reaches a state that it never encountered before.
When the agent explores such states, where the values of $\widehat{\rho^{\pi}_{s}} (s)$ are close to zero,
the estimated rewards in these states are likely to have a large
variance among different actions. Consequently, the agent will receive very high reward signals in these states compared with other ordinary states.
These wrong rewards can inevitably mislead the agent, and make the RL algorithm fail to reach its intended target.
The cause of this misleading reward problem is the estimation error of $\rho^{\pi}_{s}$.
When we use certain states as the samples to estimate the probability density,
the probability density of other states may be estimated to be close to 0.
Although this problem can be partially alleviated by increasing the number of states used to estimate $\rho^{\pi}_{s}$,
more interactions will also be required, which is not sample-efficient.
To solve this problem, we make a trade-off between bias and variance in estimating the reward:
\begin{equation}
r (s, a) = \frac{\widehat{\rho^{\pi_e}_{s,a}} (s,a)}{\alpha \widehat{\rho^{\pi_e}_{s}} (s)+ \beta \widehat{\rho^{\pi}_{s}} (s)},
\label{origin_adjust_error}
\end{equation}
where $\alpha + \beta = 1$ and $0 \leq \alpha \leq 1$. The coefficient $\alpha$ plays the role of a variance controller: when $\alpha$ is close to $0$,
the estimator has high variance and low bias; when $\alpha$ is close to 1, the estimator has high bias and low variance.
Intuitively, $\alpha = 0.5$ indicates a reasonable balance between bias and variance, and a revised reward function is:
\begin{equation}
r (s, a) = \frac{2 \widehat{\rho^{\pi_e}_{s,a}} (s,a)}{\widehat{\rho^{\pi_e}_{s}} (s) + \widehat{\rho^{\pi}_{s}} (s)}
\label{adjust_error}
\end{equation}
Figure \ref{vbt} gives an illustration of the comparison between the reward functions in the form of Equation \eqref{mis_lead} and Equation \eqref{adjust_error}.
In Figure \ref{haha}, we select two states where $\widehat{\pi_e(a | s_1)} = \widehat{\pi_e(a | s_2)} = \widehat{\pi_e(a | s)}$ (the blue dotted line), $\widehat{\rho^{\pi_e}(s_1)} = \widehat{\rho^{\pi_e}(s_2)} = 1$,
$\widehat{\rho^{\pi}(s_1)} = 0.1, \widehat{\rho^{\pi}(s_2)} = 0.05$.
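A sketch of the revised estimator follows (with the same placeholder sample arrays as in the previous sketch); with $\alpha=0.5$ it coincides with Equation \eqref{adjust_error}:
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def make_revised_reward(expert_sa, expert_s, agent_s, alpha=0.5):
    rho_sa_e = gaussian_kde(expert_sa.T)   # rho^{pi_e}_{s,a}
    rho_s_e  = gaussian_kde(expert_s.T)    # rho^{pi_e}_{s}
    rho_s_pi = gaussian_kde(agent_s.T)     # rho^{pi}_{s}
    def reward(s, a):
        sa = np.concatenate([s, a])[:, None]
        denom = (alpha * rho_s_e(s[:, None])
                 + (1.0 - alpha) * rho_s_pi(s[:, None]))
        # alpha = 0.5 reproduces Eq. (adjust_error) exactly
        return float(rho_sa_e(sa) / denom)
    return reward
\end{verbatim}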
\subsection{Algorithm}
\begin{algorithm}[tb]
\caption{PDEIL: Probability Density Estimation based Imitation Learning}
\label{algo}
\begin{tabular}{ll}
\textbf{Input:}
& Expert demonstrations $D = \{s_i^e, a_i^e\}_{i \in [1:D]}$ \\
& An agent's policy model $\pi$ \\
& Probability Density Estimator $\widehat{\rho_{s,a}^{\pi_e}}$ \\
& Probability Density Estimator $\widehat{\rho_{s}^{\pi_e}}$ \\
& Probability Density Estimator $\widehat{\rho_{s}^{\pi}}$ \\
& A state’s buffer $R$ \\
\textbf{Parameter}: &Number of epochs $N$ \\
&Number of trying steps $T$ \\
&Number of learning steps $L$ \\
&Trade-off parameter $\alpha$ \\
\textbf{Output}: & An agent with a desired policy \\
\end{tabular}
\
\begin{algorithmic}[1]
\STATE Watching: train $\widehat{\rho_{s,a}^{\pi_e}}$ using the expert demonstrations $D$; train $\widehat{\rho_{s}^{\pi_e}}$ by using $\{s_i^e\}_{i \in [1:D]}$\\
\FOR{\texttt{$i$ in $1:N$}}
\STATE Trying: agent with model $\pi$ interacts with the environment for $T$ steps and save $s_1, s_2 \dots s_T$ into $R$
\STATE Train $\widehat{\rho_{s}^{\pi}}$ using $R$ and update $\widehat{\rho_{s}^{\pi}}$
\STATE Clear $R$
\STATE Update reward function using Equation \eqref{origin_adjust_error}
\STATE Learning: agent performs learning for $L$ steps using an RL algorithm and updates its model $\pi$
\ENDFOR
\end{algorithmic}
\end{algorithm}
We present a novel IL algorithm based on the reward function in Equation \eqref{origin_adjust_error} and the
pseudo code of the algorithm is shown in Algorithm \ref{algo}.
The framework of PDEIL consists of three major components, watching, trying and learning. In the watching part,
the agent watches expert demonstrations, and uses these demonstrations to train
$\widehat{\rho_{s,a}^{\pi_e}}$ and $\widehat{\rho_{s}^{\pi_e}}$. In the trying part,
the agent interacts with the environment for some steps to gain a better understanding of the environment as well as to
train and update $\widehat{\rho_{s}^{\pi}}$. In the learning part, the agent uses an RL algorithm to improve itself by further approximating the expert.
In a complete training process, our agent watches once and iteratively performs the trying and learning operations
several times as necessary, similar to the framework in \cite{zhou2019watch}.
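For illustration, a Python skeleton of this loop is given below; \texttt{env}, \texttt{agent}, and \texttt{demos} are placeholders (the environment is assumed to follow the classic Gym step signature, with the reward entry unused in MDP$\backslash$R), and \texttt{make\_revised\_reward} is the constructor sketched in the previous subsection. For brevity the expert estimators are refitted each epoch here, whereas Algorithm \ref{algo} fits them once:
\begin{verbatim}
import numpy as np

def pdeil(env, agent, demos, N=100, T=1000, L=10, alpha=0.5):
    # watching happens inside make_revised_reward, which fits the
    # expert density estimators from the demonstrations
    for _ in range(N):
        buffer, s = [], env.reset()
        for _ in range(T):                  # trying: collect agent states
            a = agent.act(s)
            s, _, done, _ = env.step(a)     # reward entry unused (MDP\R)
            buffer.append(s)
            if done:
                s = env.reset()
        reward = make_revised_reward(demos.state_actions, demos.states,
                                     np.asarray(buffer), alpha)
        agent.learn(reward, steps=L)        # learning: PPO / SAC update
    return agent
\end{verbatim}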
Probability density estimation is a classical problem that has been extensively investigated in statistics.
For a discrete random variable, the most common way to estimate its probability is using a frequency table representing
its distribution. For a continuous random variable, there are two different ways to estimate its probability density: the parametric density estimation
and the nonparametric density estimation.
Parametric density estimation methods model the overall distribution as a certain distribution family,
such as Gaussian distributions, and the key is to determine the parameters of the distribution.
Nonparametric density estimation methods are model-free and
one of the most commonly used methods is the kernel density estimation.
For a mixed random variable, such as $(s, a)$ where $s$ is continuous and $a$ is discrete, given the fact that $p (s,a) = p (a|s)p (s)$,
this joint distribution estimation problem can be transformed into a continuous distribution
estimation problem and a conditional probability estimation problem. For the conditional probability estimation,
we can regard it as a classification task and,
for instance, use an SVM \cite{noble2006support} classifier or other classification methods to estimate $p (a|s)$.
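A sketch of this factorisation (scikit-learn and scipy assumed available; all names are illustrative):
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import SVC

def fit_joint_density(states, actions):
    # p(s, a) = p(a | s) p(s): a KDE for the continuous state marginal
    # and a probabilistic classifier for the discrete conditional
    p_s = gaussian_kde(states.T)
    clf = SVC(probability=True).fit(states, actions)
    def p_sa(s, a):
        cond = clf.predict_proba(s.reshape(1, -1))[0]
        idx = int(np.where(clf.classes_ == a)[0][0])
        return float(p_s(s[:, None])) * cond[idx]
    return p_sa
\end{verbatim}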
\section{Experiments}
In the experimental studies, we aim to answer the following questions:
\begin{enumerate}
\item How does PDEIL perform in different environments?
\item Is PDEIL more efficient than other IL algorithms?
\item Is the recovered reward by PDEIL close to the ground truth reward?
\item Does the misleading reward problem really exist?
\end{enumerate}
Our experiments were conducted in two environments in Gym, which is an open source platform for studying reinforcement learning algorithms.
We chose two classical control environments,
CartPole and Pendulum (see Figure \ref{environments}), where CartPole is a discrete action space environment and Pendulum is a continuous action space
environment. To evaluate PDEIL, we hid the original reward signals of these two environments during training.
For CartPole, two Gaussian models were used to estimate $\rho^{\pi_e}_{s}$ and $\rho^{\pi}_{s}$ and an SVM model was used to estimate $\pi_e (a | s)$.
PPO \cite{schulman2017proximal} was used
to update the policy in the learning steps.
For Pendulum, we used three Gaussian models to estimate $\rho^{\pi_e}_{s, a}$, $\rho^{\pi_e}_{s}$ and $\rho^{\pi}_{s}$, while
SAC \cite{haarnoja2018soft} was
used to update the policy in the learning steps. Furthermore,
we set $N = 100$, $T = 1000$ in both environments with $L = 10$ and $5000$ on CartPole and
Pendulum, respectively, according to some preliminary trials.
\begin{figure}[tbh]
\centering
\subfigure[\small{CartPole}]{
\includegraphics[width=.225\textwidth]{car_env.pdf}
}
\subfigure[\small{Pendulum}]{
\includegraphics[width=.225\textwidth]{pend_env.pdf}
}
\caption{The two experimental environments.}
\label{environments}
\end{figure}
To answer Question 1,
we applied PDEIL with varying amounts of expert demonstrations, fixing $\alpha = 0.5$.
The performance of PDEIL in the two environments is shown in Figure \ref{af}.
The results clearly indicate that PDEIL can recover desired policies
that are reasonably close to the expert policies with a small amount of expert demonstrations in both discrete and continuous action spaces.
We also find that PDEIL may occasionally experience some slight stability issues. For example, the performance of PDEIL with
5 episodes of expert demonstrations on Pendulum (the blue line in Figure \ref{pe}) fluctuated somewhat. We argue that there are two possible reasons:
\begin{enumerate*}[label=\roman*)]
\item the estimated reward function is biased when the agent is learning;
\item the optimization process for the neural network has some inherent instability.
\end{enumerate*}
\begin{figure}[tbh]
\centering
\subfigure[\small{CartPole-v1}]{
\resizebox{0.225\textwidth}{!}{\includesvg{CartPole.svg}}
}
\subfigure[\small{Pendulum-v0}]{
\resizebox{0.225\textwidth}{!}{\includesvg{Pendulum.svg}}
\label{pe}
}
\caption{The performance of PDEIL with different episodes.}
\label{af}
\end{figure}
To answer Question 2, we conducted extensive comparison among PDEIL, GAIL and BC algorithms with the same expert demonstrations.
The trade-off parameter $\alpha$ was also fixed to $0.5$ while the number of episodes of expert demonstrations was varied among 1, 2 and 5.
In Figure \ref{epo}, it is obvious that PDEIL is much more efficient than GAIL and BC.
Although it seems that GAIL and PDEIL have a similar efficiency on CartPole,
PDEIL is more stable and GAIL requires many more interactions with the environment in each learning step.
Furthermore, PDEIL uses the Gaussian model for reward estimation while GAIL uses a neural network,
which is more complicated and expensive in the training steps.
\begin{figure*}[tbh]
\centering
\subfigure[\scriptsize{Performance on CartPole using 1 episode expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_CartPole-v1_epoch_1.svg}}
}
\subfigure[\scriptsize{Performance on CartPole using 2 episodes expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_CartPole-v1_epoch_2.svg}}
}
\subfigure[\scriptsize{Performance on CartPole using 5 episodes expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_CartPole-v1_epoch_5.svg}}
}
\subfigure[\scriptsize{Performance on Pendulum using 1 episode expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_Pendulum-v0_epoch_1.svg}}
}
\subfigure[\scriptsize{Performance on Pendulum using 2 episodes expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_Pendulum-v0_epoch_2.svg}}
}
\subfigure[\scriptsize{Performance on Pendulum using 5 episodes expert demonstrations}]{
\resizebox{0.32\textwidth}{!}{\includesvg{gail_Pendulum-v0_epoch_5.svg}}
}
\caption{PDEIL vs GAIL and BC on CartPole and Pendulum environments.}
\label{epo}
\end{figure*}
To answer Question 3, we collected the recovered rewards and the ground truth rewards from the original environment in the trying steps in PDEIL.
We chose the Pendulum environment as the example for the sake of visual demonstration, as its ground truth reward is continuous.
Each round of the experiment had 100 epochs from which the 13th to 16th epochs were picked as examples for illustration in Figure \ref{epo2}.
It is clear that the correlation between the recovered reward and the ground truth reward gets increasingly stronger,
which means that our proposed reward function can be expected to guide the RL algorithm to learn a competent policy.
\begin{figure}[tb]
\centering
\subfigure[\scriptsize{Rewards in the 13th epoch}]{
\resizebox{0.2\textwidth}{!}{\includesvg{epoch13.svg}}
}
\subfigure[\scriptsize{Rewards in the 14th epoch}]{
\resizebox{0.2\textwidth}{!}{\includesvg{epoch14.svg}}
}
\subfigure[\scriptsize{Rewards in the 15th epoch}]{
\resizebox{0.2\textwidth}{!}{\includesvg{epoch15.svg}}
}
\subfigure[\scriptsize{Rewards in the 16th epoch}]{
\resizebox{0.2\textwidth}{!}{\includesvg{epoch16.svg}}
}
\caption{The correlation coefficients between the two rewards are 0.19 in the 13th learning epoch,
0.37 in the 14th learning epoch, 0.70 in the 15th learning epoch and 0.76 in the 16th learning epoch.}
\label{epo2}
\end{figure}
\begin{figure}[tbh]
\centering
\subfigure[\small{CartPole-v1}]{
\resizebox{0.225\textwidth}{!}{\includesvg{CartPole_multalpha.svg}}
}
\subfigure[\small{Pendulum-v0}]{
\resizebox{0.225\textwidth}{!}{\includesvg{Pendulum_multalpha.svg}}
\label{ee}
}
\caption{The average performance of PDEIL with various $\alpha$ values.}
\label{sessad}
\end{figure}
To answer Question 4, in addition to the theoretical analysis in Section \ref{mr}, we conducted further empirical study as shown in Figure \ref{sessad}.
The introduction of the trade-off parameter $\alpha$ is to control the variance of estimated reward.
When $\alpha = 0$, we use the original reward function in Equation \ref{mis_lead}, and the misleading reward problem may occur. For example,
the agent using the original reward function in the Pendulum environment (red line in Figure \ref{ee}) had a poor performance,
which implied that the misleading reward problem occurred.
\section{Conclusion and Discussion}
In this work, we proposed a brand-new reward function in the scenario of IRL, which has a concise and indicative form.
We also proposed an algorithm called PDEIL based on this reward function, featuring a "watch-try-learn" style.
In PDEIL, to recover the reward function, the agent watches the expert demonstrations
and performs interactions with the environment, and uses RL algorithms to update the policy.
It is expected that our work may reveal a new perspective for IRL
by transforming the original IRL problem into a density estimation problem.
We can prove that, with a perfect probability density estimator, the corresponding optimal policy is identical to the expert policy
as long as the expert policy is deterministic.
However, constructing a good probability density estimator can be challenging in some cases,
for example, when the state is of high dimensionality (e.g., an image).
Consequently, further enhancing the efficacy of PDEIL with
more competent probability density estimators will be a key direction for our future work.
\bibliographystyle{named}
\section{Introduction} \label{intro}
White Dwarf stars (WDs) are the final stage of evolution of stars whose progenitor masses are below 8-10.5 $M_{\sun}$, depending on metallicity \citep{2015MNRAS.446.2599D}, and they contain important information needed to understand stellar formation and evolution. Most WDs, especially single ones (i.e. not in binary systems), will exist for a very long time due to their slow cooling processes, which means the cool ones may record information from extremely early times. Therefore, research on such WDs is also helpful for understanding certain aspects of the history of the Milky Way \citep[e.g.][]{2014ApJ...791...92T}.
The number of spectroscopically identified WDs is increasing thanks to SDSS \citep{2000AJ....120.1579Y,2011AJ....142...72E,2017AJ....154...28B} and most of them are actually DA types. \citet{2004ApJ...607..426K} reported 1888 DAs out of 2551 WDs based on the first Data Release \citep[DR1;][]{2003AJ....126.2081A}. Using SDSS DR4 \citep{2006ApJS..162...38A}, \citet{2006ApJS..167...40E} presented 8000 DAs out of 9316 WDs. Based on SDSS DR7 \citep{2009ApJS..182..543A}, \citet{2013ApJS..204....5K} reported 19713 WDs, 12831 of which were DAs. Furthermore, 6887 DAs out of 8441 WDs were presented by \citet{2015MNRAS.446.4078K} using SDSS DR10 \citep{2014ApJS..211...17A}. Based on SDSS DR12 \citep{2015ApJS..219...12A}, \citet{2016MNRAS.455.3413K} reported 2883 WDs, 1964 of which were DAs. 15716 DAs out of 20088 WDs were presented by \citet{2019MNRAS.486.2169K} in SDSS DR14 \citep{2018ApJS..235...42A}. The atmospheric parameters of these WDs were determined from theoretical spectral fitting. Stellar spectra contain much information about a star, and it is exactly this point that allows us to explore the relation between the spectra and parameters of a star in different ways.
Artificial neural network (ANN) has long been applied to the determination of astrophysical parameters. \citet{1997MNRAS.292..157B} trained an ANN on a set of synthetic optical stellar spectra to get \Teff, \logg\ and [M/H]. Nowadays, due to the development of computing power and availability of large data sets, ANN especially deep learning method has been widely used in astronomy to predict physical parameters of stars. \citet{2018MNRAS.475.2978F} applied Convolutional Neural Networks (CNN) to the data set from the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{2017AJ....154...94M}. \citet{2019ApJ...887..193L} and \citet{2019PASP..131i4202Z} trained deep neural networks respectively to estimate parameters and abundances of main-sequence stars using spectra from APOGEE and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope \citep[LAMOST;][]{2006ChJAA...6..265Z,2012RAA....12..723Z}, which both yielded excellent performance. \citet{2020MNRAS.497.2688C} presented a generative fitting pipeline that interpolates theoretical spectra with ANN to determine atmospheric labels of WDs. However, there are still few works using ANN to estimate parameters of WDs so far.
In this work, a deep learning network is constructed based on an architecture called Residual Network (ResNet) \citep{2015arXiv151203385H} and is directly trained on the spectra of DAs identified in SDSS DR7, DR10 and DR12 to estimate atmospheric parameters. In section \ref{dataset}, we describe the data reduction. In section \ref{method}, our method and the details of training and testing are described. Finally, the conclusion of this work is given in section \ref{conclusion}.
\section{Data Set} \label{dataset}
We collect all spectra of DAs spectroscopically confirmed in SDSS DR7, DR10 and DR12. Corresponding atmospheric parameters are obtained from the catalogs published in \citet{2013ApJS..204....5K} and \citet{2015MNRAS.446.4078K, 2016MNRAS.455.3413K}. To begin with, the data are filtered with following criteria: (a) single DAs, (b) signal to noise rate in g-band $S/N_g\geq 10$, and (c) relative errors of \Teff\ and \logg\ $\leq 10\%$, which yield 9159 objects. Then we apply 3D correction (ML2/$\alpha$=0.8) to \Teff\ and \logg\ according to the calibration from \citet{2013A26A...559A.104T}.
\begin{figure}[htbp]
\plotone{all_hist.png}
\caption{The distributions of \Teff\ and \logg\ of 9159 filtered DAs. \label{fig:all_dist}}
\end{figure}
Figure \ref{fig:all_dist} shows the distributions of \Teff\ and \logg\ of filtered data. Based on this, we further select the objects that satisfy 5000 K $\leq$ \Teff\ $\leq$ 40000 K and 7.0 dex $\leq$ \logg\ $\leq$ 9.0 dex and finally get 8490 DAs. Next we preprocess them in two ways.
\subsection{Preprocessing on Original Spectra: Data Set 1} \label{pre1}
We normalize original spectra of DAs with the script provided by \citet{2014A26A...569A.111B}. Firstly, a spectrum is re-sampled with an equivalent wavelength step. Next, a median and maximum filter with different window sizes are applied to find the continuum points. The median filter (here we set wavelength step = 30 in the script) replaces the flux value of the central pixel with the median value in the running window, and then the maximum filter (we set wavelength step = 120) will replace it with the maximum value. The former smooths out noise and the latter ignores deeper fluxes that belong to absorption lines (the continuum will be placed at slightly higher or lower locations depending on the values of those parameters). Once the spectrum is filtered, the continuum model will be fitted with a group of splines (we set splines number = 8 and degree = 2) and finally the spectrum is normalized by dividing all the fluxes by the model. One example (Object ``spSpec-54612-2922-626.fit'' in DR7) is shown in figure \ref{fig:original_wave}.
\begin{figure}[htbp]
\plotone{original_wave.png}
\caption{Observed and normalized spectrum of ``spSpec-54612-2922-626.fit'' in DR7. \label{fig:original_wave}}
\end{figure}
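A schematic Python version of this procedure is given below; the window sizes are interpreted here as pixel counts, whereas the actual script of \citet{2014A26A...569A.111B} works in wavelength units, so this is an illustrative approximation rather than the exact pipeline:
\begin{verbatim}
import numpy as np
from scipy.ndimage import median_filter, maximum_filter1d
from scipy.interpolate import LSQUnivariateSpline

def normalize(wave, flux, med_win=30, max_win=120, n_splines=8, degree=2):
    # running median suppresses noise; running maximum steps over
    # absorption lines so that mainly continuum points remain
    smoothed = median_filter(flux, size=med_win)
    envelope = maximum_filter1d(smoothed, size=max_win)
    # fit the continuum with a small set of low-order splines, then divide
    knots = np.linspace(wave[0], wave[-1], n_splines + 1)[1:-1]
    continuum = LSQUnivariateSpline(wave, envelope, knots, k=degree)(wave)
    return flux / continuum
\end{verbatim}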
\subsection{Preprocessing on Spectra with Degraded Resolution: Data Set 2} \label{pre2}
The Chinese Space Station Optical Survey (CSS-OS) is a planned full sky survey operated by the Chinese Space Station Telescope (CSST) \citep{2019ApJ...883..203G}. CSST is a 2-meter space telescope in the same orbit as the China Manned Space Station, which is planned to be launched at the end of 2022. There are three spectroscopic bands covering 255-1000 nm, and the resolutions of these bands are all about 200 \citep{2019ApJ...883..203G}.
To figure out whether our deep learning method is practicable in this situation, we test it with synthetic spectra. Firstly, we interpolate the original spectra with an equal wavelength interval ($\Delta \lambda =1$ \AA). Then we degrade the resolution of all interpolated spectra to 200, and Gaussian noise ($\sigma$ = flux/30) is added to simulate real spectra observed by CSST. Finally, we normalize them in the same way as mentioned above. Figure \ref{fig:wave_200} shows an example (object ``spSpec-52943-1584-513.fit'' in DR7) of the data reduction.
\begin{figure}[htbp]
\plotone{wave_200.png}
\caption{Observed and synthetic spectrum of ``spSpec-52943-1584-513.fit'' in DR7. \label{fig:wave_200}}
\end{figure}
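The degradation step can be sketched as follows (a single Gaussian kernel width evaluated at the band centre is assumed, although strictly the kernel width varies with wavelength):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

def degrade(wave, flux, R=200, snr=30.0):
    step = np.median(np.diff(wave))     # ~1 Angstrom after interpolation
    fwhm = np.median(wave) / R          # kernel FWHM at the band centre
    sigma_pix = fwhm / (2.3548 * step)  # FWHM = 2 sqrt(2 ln 2) sigma
    low = gaussian_filter1d(flux, sigma_pix)
    return low + np.random.normal(0.0, np.abs(low) / snr)  # sigma = flux/30
\end{verbatim}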
For all spectra, there are few absorption-line features and much noise beyond 7500 \AA, so we ignore this portion. Considering the actual conditions of all samples, we decide to use fluxes in the wavelength range from 3834.4 \AA\ to 7500.7 \AA\ (log$\lambda$(\AA) $\in$ [3.5837, 3.8751]), a common region for all spectra, as input of the network.
\section{Method} \label{method}
\subsection{Network} \label{network}
ANN has been widely applied to data analysis in many fields thanks to the vast development of computing power and availability of large data sets. A neural network is a series of algorithms that aims to map the relationships between input data and output labels that we care about. The most important feature of neural networks is that they are adaptive which means they can change or adapt the parameters like weights and biases (see below) within themselves as they learn from continuous training. A deep learning neural network is ``stacked neural networks'' composed of many layers.
\citet{2015arXiv151203385H} introduced Residual Network (ResNet) to solve the problems caused by vanishing or exploding gradient when training very deep neural network. The structure of ResNet block is illustrated in figure \ref{fig:resnet_block}. Output of each block is related to the results of previous hidden layers, as well as the initial input. We adopt this as the basic structure of our network because of its good performance after experiments.
\begin{figure}[htbp]
\epsscale{0.4}
\plotone{resnet_block.png}
\caption{The outline of ResNet block. \label{fig:resnet_block}}
\end{figure}
The frame of our network is displayed in figure \ref{fig:network}. First of all, each flux value of a continuum-normalized spectrum is allocated to a node (black point) of the Input layer. All of the nodes are fully connected to neurons (blue points) in the next Dense layer, so a neuron will receive all fluxes delivered by the nodes. The output of the \textit{n}th neuron, $y^{(n)}$, is determined by equation \ref{eq:1}, where $x$ denotes the received fluxes, $m$ is the number of flux pixels (nodes), and the parameters $w^{(n)}$ and $b^{(n)}$ stand for the weights and biases of this neuron, which are adjusted during training. These outputs are sequentially passed to the ResNet blocks.
\begin{equation}
y^{(n)} = \sum \limits ^m_{i=1} w^{(n)}_ix_i +b^{(n)} \label{eq:1}
\end{equation}
The Batch Normalization (BN) layer \citep{2015arXiv150203167I} applies a transformation that maintains the mean close to 0 and the standard deviation close to 1 for each neuron channel over a training mini-batch (the data are usually divided into several batches, and the network is successively trained on one batch at a time rather than the whole data set due to limited computer memory). Use of the BN layer dramatically accelerates the training process and improves performance \citep{2015arXiv150203167I}. The Activation layer is used to impart non-linearity to the inputs from the last layer to improve the performance and stability of the network. The output of each node (gray point) in Activation will be passed to neurons in the next Dense, and then the neurons will sequentially transmit their outputs, calculated with the same mechanism as described in equation \ref{eq:1}, to the second sub-block. It is worth noting that the eventual output of one ResNet block is obtained by adding the output (F(X)) computed from the whole block and the initial input (X), as mentioned above.
\begin{figure}[htbp]
\plotone{network.png}
\plotone{my_resnet_block.png}
\caption{Upper panel: the frame of our network. Lower panel: the details of each ResNet block. \label{fig:network}}
\end{figure}
After the processing by the three ResNet blocks, the ``data flow'' is passed on to the remaining layers. The final layer is still a Dense with only two neurons, representing the outputs for \Teff\ and \logg. We instantiate two networks based on the two data sets with the frame as shown in figure \ref{fig:network}. For data set 1, the number of nodes in Input is 2915, which is determined by the number of flux pixels. We place 280 neurons in each Dense except the last Dense, which has two. For data set 2, there are 3666 nodes in Input since we interpolate the observed spectra (whose wavelength interval is 0.0001 in logarithmic form) with the wavelength step 1~\AA, and the number of neurons in Dense is 340. For both cases, we use the ``ReLU'' function, which returns the maximum between the input and 0, in Activation. Other configurations are fixed automatically.
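The description above can be summarised in the following \texttt{tf.keras} sketch (the paper does not state its implementation framework, so the library choice and training settings here are assumptions):
\begin{verbatim}
from tensorflow.keras import layers, models

def build_model(n_pixels=2915, width=280, n_blocks=3):
    inp = layers.Input(shape=(n_pixels,))
    x = layers.Dense(width)(inp)
    for _ in range(n_blocks):           # ResNet block: output = X + F(X)
        y = layers.Dense(width)(x)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        y = layers.Dense(width)(y)
        y = layers.BatchNormalization()(y)
        y = layers.Activation("relu")(y)
        x = layers.Add()([x, y])
    out = layers.Dense(2)(x)            # normalised Teff and logg
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
\end{verbatim}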
\subsection{Training and test} \label{train_test}
We obtain \Teff\ and \logg\ of DAs from the WDs catalogs reported by \citet{2013ApJS..204....5K} and \citet{2015MNRAS.446.4078K, 2016MNRAS.455.3413K} and match these parameters with the corresponding spectra. The parameters were determined through theoretical fitting and are able to generally describe the actual states of the DAs. We adopt these parameters as reference values to construct and verify our networks. After 3D correction, we filter the data with certain criteria and get 8490 samples. Then we preprocess these spectra in two ways, yielding two data sets: normalized observed spectra and synthetic spectra with R=200. We split them randomly into two parts: 70\% for training (5943) and 30\% for test (2547). Finally, we use continuum-normalized fluxes as input and normalized \Teff, \logg\ (i.e. minimum is 0 and maximum is 1) as output of the networks. The results of each part of every data set are shown in figure \ref{fig:da} and figure \ref{fig:da_200}. We adopt root mean square error (RMSE) and mean absolute error (MAE) (see equation \ref{eq:2}, where $z_i$ is the reference value and $\hat{z}_i$ is the parameter estimated by the networks) as metrics of performance of the networks.
\begin{equation}
\centering
RMSE = \sqrt{\frac{1}{N}\sum\limits^N_{i=1}(z_i-\hat{z}_i)^2} \ ,\quad
MAE = \frac{1}{N}\sum\limits^N_{i=1}|z_i-\hat{z}_i|
\label{eq:2}
\end{equation}
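Equation \ref{eq:2} translates directly into code, e.g.:
\begin{verbatim}
import numpy as np

def rmse(z, z_hat):
    return np.sqrt(np.mean((np.asarray(z) - np.asarray(z_hat)) ** 2))

def mae(z, z_hat):
    return np.mean(np.abs(np.asarray(z) - np.asarray(z_hat)))
\end{verbatim}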
For the training part of data set 1, the RMSE and MAE of \Teff\ are 133.7 K and 100.9 K, respectively. As for \logg, they are 0.01 dex and 0.01 dex, which means that the network has fully learned the properties of the training data.
The test part is used to provide an unbiased evaluation of the final model. The general trend of the results for \Teff\ is good, although there is a little dispersion at \Teff\ $\sim 11000$ K owing to the spectral degeneracy of DAs \citep{2006ApJS..167...40E}, leading to RMSE = 906.4 K and MAE = 508.8 K. The situation for \logg\ is more complicated. Since the distribution of \logg\ of our samples is mainly located between 7.0 - 9.0 dex, especially $\sim 8.0$ dex (see figure \ref{fig:all_dist}), the performance is better when \logg\ $\in$ [7.75, 8.25] dex, where more data are concentrated. For the side with smaller \logg, predicted values are a little higher than reference ones, and for the region with higher \logg, the former are mostly smaller than the latter, which results in RMSE = 0.11 dex and MAE = 0.08 dex. The performance on the test part shows that there is no serious overfitting in the network.
\begin{figure}[htbp]
\plotone{da_1.png}
\plotone{da_2.png}
\caption{Results of data set 1. Red points: predicted parameters by the network versus reference parameters from WDs catalogs in published papers. Solid line means there is no error. \label{fig:da}}
\end{figure}
In terms of data set 2, we instantiate the second network with the same frame but a different configuration, which also yields similar results (see figure \ref{fig:da_200}). The performances of the two networks trained on data sets with different spectral resolutions are likely analogous because DA spectra are dominated by strong and wide hydrogen lines rather than closely spaced absorption lines, so the change in resolution has little effect.
\begin{figure}[htbp]
\plotone{da_200_1.png}
\plotone{da_200_2.png}
\caption{Same as figure \ref{fig:da} but for data set 2. \label{fig:da_200}}
\end{figure}
For both data sets, we also calculate the residual ($\Delta = \hat{z} - z$) of the parameters for each part. Furthermore, we infer the mean and standard deviation (Std) of the residual for the test part, as shown in figure \ref{fig:residual}.
\begin{figure}[htbp]
\plotone{residual.png}
\plotone{residual_200.png}
\caption{Upper panel: residual of parameters of each part for data set 1. Blue points represent training part and red points represent test part. Lower panel: same as the upper but for data set 2. \label{fig:residual} }
\end{figure}
\section{Conclusion} \label{conclusion}
In this work, we construct a deep learning network based on the ResNet structure. The network is directly trained and tested on the continuum-normalized spectral pixels of DAs spectroscopically confirmed in SDSS DR7, DR10 and DR12 to map the spectra onto atmospheric parameters. The RMSE between estimated and reference parameters reaches 900 K for \Teff\ and 0.1 dex for \logg. Furthermore, we show that the method is also applicable to spectra with resolution degraded to $\sim 200$.
Compared with the traditional methodology for parameter determination of DAs, our network does not depend on complex theoretical models because it directly uses observed normalized fluxes as input to find the matching parameters. The method is feasible for DAs with \Teff\ from 5000 K to 40000 K and \logg\ from 7.0 dex to 9.0 dex.
An existing limitation is that we do not infer the error of each estimated parameter, owing to the running mode of the network. In addition, the method currently works only for DA types, since it is not easy to recognise and grasp common features from the spectra of other white dwarf types by merely training neural networks.
In the future, we will continue to collect data on WDs and improve our method so that it determines the physical parameters of these stars more accurately.
\acknowledgments
We thank Qixun Wang for helpful comments and suggestions. This study is supported by the National Natural Science Foundation of China under grant No. 11988101, 11973048, 11927804, 11890694 and National Key R\&D Program of China No. 2019YFA0405502.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \url{http://www.sdss.org/}.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \break \url{http://www.sdss3.org/}.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section{Introduction}\label{sec:intro}
Broadcast and superposition are two fundamental properties of the
wireless medium. Due to the broadcast nature, wireless transmission
can be received by multiple receivers with possibly different signal
strengths. Due to the superposition property, a receiver observes a
signal that is a superposition of multiple simultaneous
transmissions. From the \textit{secure communication} point of view,
both features pose a number of security issues. In particular, the
broadcast nature makes wireless transmission susceptible
to \textit{eavesdropping}, because anyone (including adversarial
users) within the communication range can listen and possibly
extract the confidential information. The superposition property
makes wireless communication susceptible to \textit{jamming}
attacks, where adversarial users can superpose destructive signals
(interference) onto useful signals to block the intended
transmission.
A helper can pit one property of the wireless medium against the
security issues caused by the other. An example in which broadcast is
employed to counteract the effects of superposition is the case of a
helper that functions as a relay to facilitate the transmission
from a source terminal to a severely jammed destination terminal. In
this paper, we consider the case in which a helper functions as an
\textit{interferer} to improve the secrecy level of a communication
session which is compromised by a passive eavesdropper. This is an
example where superposition is employed to counteract the security
threat due to the broadcast nature of the wireless medium.
We study the problem in which a transmitter sends confidential
messages to an intended receiver with the help of an interferer, in
the presence of a passive eavesdropper. We call this model the
\textit{wiretap channel with a helping interferer} (WT-HI for
brevity). In this system, it is desirable to minimize the leakage of
information to the eavesdropper. The interferer tries to help by
transmitting a signal without knowledge of the actual
confidential message. The level of ignorance of the eavesdropper
with respect to the confidential messages is measured by the
equivocation rate. This information-theoretic approach was
introduced by Wyner for the \textit{wiretap channel}
\cite{Wyner:BSTJ:75}, in which a single source-destination
communication is eavesdropped upon via a degraded channel. Wyner's
formulation was generalized by Csisz{\'{a}}r and K{\"{o}}rner who
determined the capacity region of the broadcast channel with
confidential messages \cite{Csiszar:IT:78}. The Gaussian wiretap
channel was considered in \cite{Leung-Yan-Cheong:IT:78}. More
recently, there has been a resurgence of interest in
\textit{information-theoretic security} for multi-user channel
models. Related prior work includes the multiple access channel (MAC) with
confidential
messages~\cite{Liang:IT:06,Liu:ISIT:06,Tekin:IT:06,Tang:ITW:07,Tekin:IT:07},
the interference channel with confidential messages \cite{Liu:IT:07,
Liang:Allerton:07}, and the relay-eavesdropper channel
\cite{Lai:IT:06,Yusel:CISS:07}.
In this paper, an achievable secrecy rate for the WT-HI under the
requirement of \textit{perfect secrecy} is given. That is, the
eavesdropper is kept in total ignorance with respect to the message
for the intended receiver. A geometrical interpretation of the achievable
secrecy rate is given based on the MAC achievable rate regions from the
transmitter and the interferer to the intended receiver and to the
eavesdropper, respectively. For a symmetric Gaussian WT-HI, both the
achievable secrecy rate and a power control scheme are given. The
results show that the interferer can increase the secrecy level,
and that a positive secrecy rate can be achieved even when the
source-destination channel is worse than the source-eavesdropper
channel. An important example of the Gaussian case is that in which
the interferer has a better channel to the intended receiver than to
the eavesdropper. Here, the interferer can send a (random) codeword
at a rate that ensures that it can be decoded and subtracted from
the received signal by the intended receiver, but cannot be decoded by the
eavesdropper. Hence, only the eavesdropper is interfered with and
the secrecy level of the confidential message can be increased. Our
scheme can be considered a generalization of the schemes
in [8], [9], and [11]. In the cooperative jamming [8] (artificial noise [9]) scheme, the helper generates independent Gaussian noise; this
scheme does not employ any structure in the transmitted signal. The
noise forwarding scheme in [11] requires that the interferer's
codewords can always be decoded by the intended receiver, which is
not necessary in our scheme.
The remainder of the paper is organized as follows.
Section~\ref{sec:model} describes the system model for the WT-HI.
Section \ref{sec:result} states an achievable secrecy rate followed
by its geometrical interpretations in Section \ref{sec:GI}. Section
\ref{sec:Gaussian} gives the achievable secrecy rate and a power
control scheme for a symmetric Gaussian WT-HI. Section
\ref{sec:numerical} illustrates the results through some numerical
examples. Conclusions are given in Section~\ref{sec:conclusions}.
\section{System Model}\label{sec:model}
We consider a communication system including a transmitter ($X_1$),
an intended receiver ($Y_1$), a helping interferer ($X_2$), and a
passive eavesdropper ($Y_2$). The transmitter sends a confidential
message $W$ to the intended receiver with the help from an
\textit{independent} interferer, in the presence of a passive but
\textit{intelligent} eavesdropper. We assume that the helper does
not know the confidential message $W$ and the eavesdropper knows
codebooks of the transmitter and helper. As noted above, we refer to
this channel as the wiretap channel with a helping interferer (WT-HI).
The channel can be defined by the alphabets $\mathcal X_1$, $\mathcal X_2$,
$\mathcal Y_1$, $\mathcal Y_2$, and channel transition probability
$p(y_1,y_2|x_1,x_2)$ where $x_t\in\mathcal X_t$ and $y_t\in\mathcal Y_t$, $t=1,2$.
The transmitter uses encoder 1 to encode a confidential message $w
\in \mathcal W = \{1,\dots, M\}$ into $x_1^n$ and sends it to the intended
receiver in $n$ channel uses. A stochastic encoder
\cite{Csiszar:IT:78} $f$ is specified by a matrix of conditional
probabilities $f(x_{1,k}|w)$, where $x_{1,k} \in \mathcal X_1$, $w \in
$\mathcal W$, $\sum_{x_{1,k}}f(x_{1,k}|w)=1$ for all $k=1,\dots, n$, and
$f(x_{1,k}|w)$ is the probability that encoder 1 outputs $x_{1,k}$
when message $w$ is being sent. The helper generates its output
$x_{2,k}$ randomly and can be considered as using another stochastic
encoder $f_2$, which is specified by a matrix of probabilities
$f_{2}(x_{2,k})$ with $x_{2,k} \in \mathcal X_{2}$ and
$\sum_{x_{2,k}}f_{2}(x_{2,k})=1.$ Since randomization can increase
secrecy, encoder 1 uses stochastic encoding to introduce
\textit{randomness}. Additional randomization is provided by the
helper and the secrecy can be increased further.
The decoder uses the output sequence $y_1^n$ to compute its estimate
$\hat{w}$ of $w$. The decoding function is specified by a
(deterministic) mapping $g: \mathcal Y_1^n \rightarrow \mathcal W$.
The average probability of error is
\begin{equation}\label{pe}
P_e=\frac{1}{M}\sum_{w}\mathrm{Pr}\left\{g(Y_1^n) \neq w | w
~\mbox{sent}\right\}.
\end{equation}
The secrecy level (level of ignorance of the eavesdropper with
respect to the confidential message $w$) is measured by the
equivocation rate $(1/n)H(W|Y_2^n)$.
A secrecy rate $R_s$ is achievable for the WT-HI if, for any
$\epsilon>0$, there exists an ($M,n,P_e$) code so that
\begin{equation}\label{ach_def1}
M \geq 2^{nR_s}, ~ P_e \leq \epsilon
\end{equation}
\begin{equation}\label{ach_def2}
\text{and} \qquad R_s - \frac{1}{n}H(W|Y_2^n) \leq \epsilon \quad
\qquad ~
\end{equation}
for all sufficiently large $n$. The secrecy capacity is the maximal
achievable secrecy rate.
\section{Achievable Secrecy Rate}\label{sec:result}
\begin{theorem} \label{thm:WT-HI}
Let $\mathcal R_1^{[\rm MAC]}$ denote the achievable rate region of the MAC $(\mathcal X_1,\mathcal X_2) \rightarrow \mathcal Y_1$:
\begin{align}
\mathcal R_1^{[\rm MAC]}=\left\{(R_1,R_2)\left|
\begin{array}{l}
R_1\ge 0,~R_2\ge 0,\\
R_1\le I(X_1;Y_1|X_2), \\
R_2\le I(X_2;Y_1|X_1),\\
R_1+R_2 \le I(X_1,X_2; Y_1)
\end{array}
\right.\right\}
\end{align}
and $\mathcal R_2^{[\rm MAC]}$ denote the corresponding region of the MAC $(\mathcal X_1,\mathcal X_2) \rightarrow
\mathcal Y_2$:
\begin{align}
\mathcal R_2^{[\rm MAC]}=\left\{(R_1,R_2)\left|
\begin{array}{l}
R_1\ge 0,~R_2\ge 0,\\
R_1< I(X_1;Y_2|X_2), \\
R_2< I(X_2;Y_2|X_1),\\
R_1+R_2 < I(X_1,X_2; Y_2)
\end{array}
\right.\right\}.
\end{align}
We also define
\begin{align}
& & \mathcal R_1^{[\rm S]}&=\left\{(R_1,R_2)\left|~
\begin{array}{l}
R_1\ge 0,~R_2\ge 0,\\
R_1\le I(X_1;Y_1),\\
R_2 >I(X_2;Y_1|X_1)
\end{array}
\right.\right\}\\
&\text{and}& \mathcal R_2^{[\rm S]}&=\left\{(R_1,R_2)\left|~
\begin{array}{l}
R_1\ge 0,~R_2\ge 0,\\
R_1< I(X_1;Y_2),\\
R_2 >I(X_2;Y_2|X_1)
\end{array}
\right.\right\}.
\end{align}
The following secrecy rate is achievable for the WT-HI:
\begin{align}
R_s=\max_{\pi, R_1,R_2,R_{1,d}}\left\{R_{1,s}\left|
\begin{array}{l}
R_{1,s}+R_{1,d}=R_1,\\
(R_1,R_2)\in \left\{\mathcal R_1^{[\rm MAC]} \cup \mathcal R_1^{[\rm S]}\right\}, \\
(R_{1,d},R_2) \notin \left\{\mathcal R_2^{[\rm MAC]} \cup \mathcal R_2^{[\rm S]}\right\}
\end{array}
\right.\right\},
\end{align}
where $\pi$ is the class of distributions that factor as
\begin{equation} \label{eq:dis-IC}
p(x_1)p(x_2)p(y_1,y_2|x_1,x_2).
\end{equation}
\end{theorem}
\begin{proof}
We briefly outline the achievable coding scheme here and omit the
details of the proof, which can be found in \cite{Tang:Preprint:07}.
We consider two independent stochastic codebooks. Encoder 1 uses
codebook $\mathcal C_1(2^{nR_1},2^{nR_{1,s}}, n)$, where $n$ is the
codeword length, $2^{nR_1}$ is the size of the codebook, and
$2^{nR_{1,s}}$ is the number of confidential messages that $\mathcal C_1$
can convey ($R_{1,s}\leq R_1$). In addition, encoder 2 uses codebook
$\mathcal C_2(2^{nR_2},n)$, where $2^{nR_2}$ is the codebook size. The
$2^{nR_1}$ codewords in codebook $\mathcal C_1$ are randomly grouped into
$2^{nR_{1,s}}$ bins each with $M=2^{n(R_1-R_{1,s})}$ codewords.
During the encoding, to send message $w \in [1,\dots,2^{nR_{1,s}}]$,
encoder 1 randomly selects a codeword from bin $w$ and sends it over the
channel, while encoder 2 randomly selects a codeword from codebook
$\mathcal C_2$ to transmit.
\end{proof}
\begin{remark}
The rate $R_1$ is split as $R_1 = R_{1,s} + R_{1,d}$, where
$R_{1,s}$ denotes the secrecy information rate intended for receiver 1
and $R_{1,d}$ represents a redundancy rate sacrificed in order to
confuse the eavesdropper. The interferer helps receiver~1
confuse the eavesdropper by transmitting dummy information at rate
$R_2$.
\end{remark}
\section{Geometric Interpretations}\label{sec:GI}
When the intended receiver needs to decode both codewords from
$\mathcal C_1$ and $\mathcal C_2$, we essentially have a compound MAC.
\begin{figure}[htbp]
\centerline{\hbox{ \hspace{0.01in}
\epsfig{file=mac_o1.eps, angle=0, width=0.2\textwidth}
\hspace{0.01in}
\epsfig{file=mac_o2.eps, angle=0, width=0.2\textwidth}}
}
\hbox{\footnotesize \hspace{0.5in} (a) intended receiver
\hspace{0.8in} (b) eavesdropper} \caption{Code rate $R_1$ versus
$R_2$ for the intended receiver and eavesdropper.} \label{fig:macs}
\vspace{-0.2cm}
\end{figure}
However, the receiver cares about only $\mathcal C_1$ and does not need to
decode $\mathcal C_2$. Hence, as shown in Fig.~\ref{fig:macs}, the
``achievable'' rate region in the $R_1$-$R_2$ plane at the receiver
is the union of $\mathcal R^{[\rm MAC]}_1$ and $\mathcal R^{[\rm S]}_1$. Here
$\mathcal R^{[\rm MAC]}_1$ is the capacity region of the MAC $(\mathcal X_1,\mathcal X_2)
\rightarrow \mathcal Y_1$, in which the intended receiver can decode both
$\mathcal C_1$ and $\mathcal C_2$, while $\mathcal R^{[\rm S]}_1$ is the region in which
the receiver treats codewords from $X_2$ as noise and decodes
$\mathcal C_1$ only.
Similar analysis applies for the eavesdropper as shown in
Fig.~\ref{fig:macs}.b.
We note that a
proper choice of the redundancy rate $R_2$ can put the eavesdropper
in an unfavorable condition, which can increase secrecy. In the
following, we consider three typical cases: very strong
interference, strong interference, and weak interference. The
analysis for general cases can be found in \cite{Tang:Preprint:07}.
\subsection{Very Strong Interference}
Fig.~\ref{fig:IC-VS} illustrates the interference channel with very
strong interference. In this case, since
\begin{align}
I(X_1;Y_2)\ge I(X_1;Y_1|X_2),
\end{align}
we cannot obtain any positive secrecy rate.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.4\linewidth,draft=false]{ic_vs.eps}}
\caption{Very strong interference channel}
\label{fig:IC-VS}
\vspace{-0.4cm}
\end{figure}
\subsection{Strong Interference}
We consider strong interference, i.e.,
\begin{align}
& & I(X_1;Y_1|X_2)&\le I(X_1;Y_2|X_2) &\notag\\
&\text{and}& I(X_2;Y_2|X_1)& \le I(X_2;Y_1|X_1) &\label{eq:IC-S}
\end{align}
for all product distributions on the input $X_1$ and $X_2$. This
condition implies that, without the interferer, channel
$\mathcal X_1\rightarrow \mathcal Y_2$ is more capable than channel
$\mathcal X_1\rightarrow \mathcal Y_1$ and, hence, the achievable secrecy rate may
be $0$.
\begin{figure}[htbp]
\centerline{\hbox{ \hspace{0.01in}
\epsfig{file=ic_s_1.eps, angle=0, width=0.18\textwidth}
\epsfig{file=ic_s_2.eps, angle=0, width=0.18\textwidth}
}
}
\hbox{\footnotesize \hspace{0.1in} (a) $I(X_2;Y_1)\le I(X_2;Y_2|X_1)$ \hspace{0.2in} (b) $I(X_2;Y_1)> I(X_2;Y_2|X_1)$}
\caption{Strong interference channel and $I(X_1,X_2;Y_1)>I(X_1,X_2;Y_2)$}
\label{fig:IC-S}
\vspace{-0.2cm}
\end{figure}
However, as shown in Fig.~\ref{fig:IC-S}, we may achieve a positive
secrecy rate with the help of the interferer. Here we choose the
rate pair $(R_1,R_2)\in \mathcal R_1^{[\rm MAC]}$ so that the intended
receiver can first decode $\mathcal C_2$ and then $\mathcal C_1$. Moreover, the
dummy rate pair satisfies
$$(R_{1,d},R_2)\notin \left\{\mathcal R_2^{[\rm MAC]}\cup \mathcal R_2^{[\rm S]}\right\},$$
i.e., we provide enough randomness to confuse the eavesdropper. Hence, for
strong interference, the achievable secrecy rate can be simplified as
\begin{align*}
R_s = \max_{\pi}\left\{ \min \left[
\begin{array}{l}
I(X_1,X_2;Y_1)-I(X_1,X_2;Y_2),\\
I(X_1;Y_1|X_2)-I(X_1;Y_2)
\end{array}
\right]\right\}^{+}.
\end{align*}
\subsection{Weak Interference}
Weak interference implies that
\begin{align}
& & I(X_1;Y_1|X_2)&\ge I(X_1;Y_2|X_2)&\notag\\
&\text{and}& I(X_2;Y_2|X_1)&\ge I(X_2;Y_1|X_1) & \label{eq:IC-W}
\end{align}
for all product distributions on the input $X_1$ and $X_2$. Let
\begin{align}
& & \Delta_1&=I(X_1;Y_1|X_2)-I(X_1;Y_2|X_2)&\\
&\text{and}& \Delta_2&=I(X_1;Y_1)-I(X_1;Y_2). &
\end{align}
As shown in Fig.~\ref{fig:IC-W}.a, the achievable secrecy can be
increased by the help from the interferer when
$\Delta_1\le\Delta_2$.
\begin{figure}[htbp]
\centerline{\hbox{ \hspace{0.01in}
\epsfig{file=ic_w_2.eps, angle=0, width=0.18\textwidth}
\epsfig{file=ic_w_1.eps, angle=0, width=0.18\textwidth}
}
}
\hbox{\footnotesize \hspace{0.7in} (a) $\Delta_1 \le \Delta_2$ \hspace{0.8in} (b) $\Delta_1 > \Delta_2$}
\caption{Weak interference channel}
\label{fig:IC-W}
\vspace{-0.2cm}
\end{figure}
In this case, the interferer generates an ``artificial noise'' with
the dummy rate $R_2>I(X_2;Y_2|X_1)$ so that neither the receiver nor
the eavesdropper can decode $\mathcal C_2$. On the other hand, when
$\Delta_1>\Delta_2$, the interferer ``facilitates'' the transmitter
by properly choosing the signal $X_2$ to maximize $\Delta_1$. In the
case of weak interference, the achievable secrecy rate can be
summarized as
\begin{align*}
R_s = \max_{\pi}\left\{ \max \left(\Delta_1,\Delta_2\right)\right\}.
\end{align*}
\section{Symmetric Gaussian Channels}\label{sec:Gaussian}
In this section, we consider the Gaussian wiretap channel with a
helping interferer (GWT-HI). In order to introduce the results in
the simplest possible setting, in this paper we focus on a symmetric
Gaussian channel as illustrated in Fig.~\ref{channel}, where the
source-eavesdropper and interferer-receiver channels have the same
channel condition. The results for the GWT-HI with general parameter
settings can be found in \cite{Tang:Preprint:07}.
\begin{figure} [hbt]
\centering
\includegraphics[width=2.6in]{channel.eps}\\
\caption{A symmetric Gaussian wiretap channel with a helping interferer.}\label{channel}
\vspace{-0.2cm}
\end{figure}
The channel outputs at the intended receiver and the eavesdropper
can be written as
\begin{eqnarray}\label{signal}
Y_{1,k} &=& X_{1,k} +\sqrt{a}X_{2,k} + N_{1,k}, \nonumber\\
Y_{2,k} &=& \sqrt{a}X_{1,k} + X_{2,k} + N_{2,k},
\end{eqnarray}
for $k=1, \dots, n$, where ${N_{1,k}}$ and ${N_{2,k}}$ are sequences
of independent and identically distributed zero-mean Gaussian noise
variables with unit variances. The channel inputs $X_{1,k}$ and
$X_{2,k}$ satisfy average block power constraints of the form
\begin{equation}\label{power}
\frac{1}{n}\sum_{k=1}^{n}E[X_{1,k}^2] \leq \bar{P_1}, \quad \frac{1}{n}\sum_{k=1}^{n}E[X_{2,k}^2] \leq \bar{P_2}.
\end{equation}
\subsection{Achievable Secrecy Rate}
We give an achievable secrecy rate by assuming that both encoders
use Gaussian codebooks. In this subsection, we assume that the
codewords in $\mathcal C_1$ and $\mathcal C_2$ have average block powers $P_1$ and
$P_2$, respectively. The optimal $P_1$ and $P_2$ satisfying the
requirements of $P_1 \leq \bar{P_1}$ and $P_2 \leq \bar{P_2}$ are
found in Subsection \ref{sec:power}.
\begin{theorem}
For the symmetric Gaussian wiretap channel with a helping interferer
given by (\ref{signal}),
i) if $a \geq 1+P_2$, the achievable secrecy rate is $R_s=0$;
ii) if $1 \leq a < 1+P_2$, the achievable secrecy rate is
\begin{eqnarray}\label{rate1}
\lefteqn{R_s(P_1, P_2) = } \nonumber\\
&\left\{ \begin{array}{ll}
{\mathrm g}(P_1) - {\mathrm g}(\frac{aP_1}{1+P_2}) &\mbox{if $P_1<P_2$, $a > 1 +P_1 $,}\\
{\mathrm g}(P_1+aP_2) - {\mathrm g}(aP_1+P_2) &\mbox{if $P_1 < P_2$, $a \leq 1 + P_1$,}\\
0 &\mbox{otherwise;}
\end{array} \right.\nonumber
\end{eqnarray}
iii) if $a<1$, the achievable secrecy rate is
\begin{eqnarray}\label{rate2}
R_s(P_1, P_2) = \left\{ \begin{array}{ll}
{\mathrm g}(\frac{P_1}{1+aP_2}) - {\mathrm g}(\frac{aP_1}{1+P_2}) &\mbox{if $P_1 > P_2$,}\\
{\mathrm g}(P_1) - {\mathrm g}(aP_1) &\mbox{otherwise,}
\end{array} \right.\nonumber
\end{eqnarray}
where ${\mathrm g}(x)=(1/2)\log_2(1+x)$.
\end{theorem}
\begin{proof}
We use the achievability scheme in Theorem~1 with Gaussian input
distributions.
\end{proof}
\begin{remark}
For comparison, we recall that the secrecy capacity of the Gaussian
wiretap channel \cite{Leung-Yan-Cheong:IT:78} (the case without an interferer in the GWT-HI model)
is
\begin{eqnarray}\label{gwiretap}
R_s^{\mathrm{WT}}(P_1)=\left\{ \begin{array}{ll}
{\mathrm g}(P_1)-{\mathrm g}(aP_1) &\mbox{if $a<1$,}\\
0 &\mbox{if $a \geq 1$.}
\end{array} \right.
\end{eqnarray}
That is, a positive secrecy rate can be achieved for the wiretap
channel only when $a<1$. According to Theorem~2, a positive
secrecy rate can be achieved for the symmetric GWT-HI when
$a<1+P_2$. If the interferer has sufficiently large power, a
positive secrecy rate can be achieved for any $a>0$.
\end{remark}
\begin{remark}
$a \geq 1+P_2$, $1 \leq a < 1+P_2$, and $a<1$ fall into the cases of
very strong interference, strong interference and weak interference,
respectively.
\end{remark}
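The piecewise rate of Theorem~2 is simple to evaluate numerically. The following Python sketch (an illustration of the formulas above, not code from the original analysis) returns $R_s(P_1,P_2)$ for a given cross-channel gain $a$:
\begin{verbatim}
import numpy as np

def g(x):
    # Gaussian capacity function g(x) = (1/2) log2(1 + x)
    return 0.5 * np.log2(1.0 + x)

def secrecy_rate(P1, P2, a):
    if a >= 1.0 + P2:                      # very strong interference
        return 0.0
    if a >= 1.0:                           # strong interference
        if P1 < P2 and a > 1.0 + P1:
            return g(P1) - g(a * P1 / (1.0 + P2))
        if P1 < P2:
            return g(P1 + a * P2) - g(a * P1 + P2)
        return 0.0
    if P1 > P2:                            # weak interference
        return g(P1 / (1.0 + a * P2)) - g(a * P1 / (1.0 + P2))
    return g(P1) - g(a * P1)
\end{verbatim}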
\subsection{Power Control}\label{sec:power}
Power control is essential to interference management for
accommodating multi-user communications. As for the GWT-HI, power
control also plays a critical role. In this subsection, we consider
the optimal power control strategy for increasing the secrecy rate
given in Theorem~2.
\begin{theorem}
When $a \geq 1$, the power control scheme for maximizing the secrecy
rate is given by
\begin{eqnarray}\label{power1}
(P_1, P_2) = \left\{ \begin{array}{ll}
\left(\min\{\bar{P_1},P_1^{\ast}\}, \bar{P_2}\right) &\mbox{if $\bar{P_2} > a-1 $,}\\
(0,0) &\mbox{otherwise,}
\end{array} \right.
\end{eqnarray}
where $P_1^{\ast}=a-1$.
When $a < 1$, the power control scheme for maximizing the secrecy
rate is given by
\begin{equation}\label{power2}
(P_1, P_2) = \left(\bar{P_1}, \min\{\bar{P_2}, P_2^{\ast}\}\right),
\end{equation}
where
\begin{equation}\label{past}
P_2^{\ast}=\frac{\sqrt{1+(1+a)\bar{P_1}}-1}{1+a}.
\end{equation}
\end{theorem}
\begin{proof}
The proof can be found in \cite{Tang:Preprint:07}.
\end{proof}
\begin{remark}
When $a<1$, the interferer controls its power so that it does not bring too much interference to the primary transmission. When $a \geq 1$, the benefits of power control at the transmitter are two-fold: First, less information is leaked to the eavesdropper; and furthermore, the intended receiver can successfully decode (and
cancel) the interference.
\end{remark}
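A direct transcription of this power allocation reads as follows (again an illustrative sketch, not the original analysis code):
\begin{verbatim}
import numpy as np

def power_control(P1_bar, P2_bar, a):
    if a >= 1.0:
        if P2_bar > a - 1.0:
            return min(P1_bar, a - 1.0), P2_bar   # P1* = a - 1
        return 0.0, 0.0
    P2_star = (np.sqrt(1.0 + (1.0 + a) * P1_bar) - 1.0) / (1.0 + a)
    return P1_bar, min(P2_bar, P2_star)
\end{verbatim}
For instance, with $\bar{P_1}=2$ and $a=2$ this holds the source power at $P_1^{\ast}=1$, while with $a=1/2$ it caps the interferer power at $P_2^{\ast}=2/3$, matching the numerical examples of Section~\ref{sec:numerical}.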
\subsection{Power-Unconstrained Secrecy Rate}\label{sec:pcon}
A fundamental parameter of wiretap-channel-based wireless secrecy systems is the achievable secrecy rate when the transmitter has unconstrained power. This secrecy rate is related only to the channel conditions, and is the maximal achievable secrecy rate no matter how large the transmit power is. For example, the power-unconstrained secrecy rate for a Gaussian wiretap channel (when there is
no interferer in the GWT-HI model) is given by
\begin{equation}\label{limit1}
\lim_{\bar{P_1}\rightarrow
\infty}R_s^{\mathrm{WT}}(\bar{P_1})=\lim_{\bar{P_1} \rightarrow
\infty}\left[{\mathrm g}(\bar{P_1})-{\mathrm g}(a\bar{P_1})\right]^{+}=\frac{1}{2}\left[\log_{2}\frac{1}{a}\right]^{+}.
\end{equation}
After some limiting analysis, we have the following result for the
symmetric GWT-HI model.
\begin{theorem}
The achievable power-unconstrained secrecy rate for the symmetric
GWT-HI is
\begin{eqnarray}\label{limit2}
\lim_{\bar{P_1},\bar{P_2} \rightarrow \infty}R_s = \left\{
\begin{array}{ll}
\frac{1}{2}\log_{2}a &\mbox{if $a \geq 1$,}\\
\log_{2}\frac{1}{a} &\mbox{if $a < 1$.}
\end{array} \right.
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof can be found in \cite{Tang:Preprint:07}.
\end{proof}
When the interference is strong ($a>1$), the power-unconstrained secrecy rate is $(1/2)\log_{2}a$. Note that $(1/2)\log_{2}a$ is the power-unconstrained secrecy rate if confidential messages are sent from the interferer to the intended receiver in the presence of the eavesdropper. This is particularly interesting because we do not even assume that there is a source-interferer channel (which enables the interferer to relay the transmission). When the interference is weak ($a<1$), the interferer assists the secret transmission by
doubling the achievable secrecy rate.
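Combining the two sketches above gives a quick numerical check of this limit; e.g., for $a=2$ the computed rate approaches $(1/2)\log_2 2 = 0.5$:
\begin{verbatim}
P = 1e6                                  # large power budget
P1, P2 = power_control(P, P, a=2.0)
print(secrecy_rate(P1, P2, a=2.0))       # ~ 0.5 bit per channel use
\end{verbatim}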
\section{Numerical Examples}\label{sec:numerical}
In Fig. \ref{intpower}, we present a numerical example to show the benefits of the power control scheme for the secrecy rate $R_s$. In this example, we assume that the source power constraint is $\bar{P_1}=2$, and the interferer power constraint $\bar{P_2}$ varies from $0$ to $8$. We can see that the power control scheme can increase the secrecy rate significantly. When $a=2$, the power control scheme uses the maximum interferer power and holds the source power at $P_1^{\ast}=1$, so that the intended receiver can decode the interference first. When $a=1/2$, the power control scheme uses the maximum source power and caps the interferer power at $P_2^{\ast}=2/3$, so that the interferer does not introduce too much interference to the intended receiver (which treats the
interference as noise in this case).
\begin{figure}
\centering
\includegraphics[width=2.1in]{rate_p2.eps}\\
\caption{Secrecy rate $R_s$ versus $\bar{P_2}$, where $\bar{P_1}=2.$}\label{intpower}
\vspace{-0.3cm}
\end{figure}
In Fig. \ref{chgain}, we present another example to show the achievable secrecy rate $R_s$ for different values of $a$. In this example, we assume that $\bar{P_1}=\bar{P_2}=2$, and $a$ varies from $0$ to $4$. Comparing the secrecy rates achievable for the GWT-HI and GWT, we find that an independent interferer increases $R_s$. For the GWT, $R_s$ decreases with $a$ and remains $0$ when $a \geq 1$. For the GWT-HI, $R_s$ first decreases with $a$ when $a<1$; when $1<a \leq 1.73$, $R_s$ increases with $a$ because the intended receiver can now decode and cancel the interference, while the eavesdropper can only treat the interference as noise; when $a>1.73$, $R_s$ decreases again with $a$ because the interference does not hurt the eavesdropper much when $a$ is large. In particular, when $a \geq 3\ (=1+\bar{P_2})$, the eavesdropper can fully decode the primary transmission by treating the interference as noise. Therefore,
$R_s=0$ when $a \geq 3$.
\begin{figure}
\centering
\includegraphics[width=2.1in]{rate_a.eps}\\
\caption{Secrecy rate $R_s$ versus $a$, where $\bar{P_1}=\bar{P_2}=2$.}\label{chgain}
\vspace{-0.3cm}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this paper, we have considered the use of the superposition property of the wireless medium to alleviate the eavesdropping issues caused by the broadcast nature of the medium. We have studied a wiretap channel with a helping interferer, in which the interferer assists the secret communication by injecting independent interference. We have given an achievable secrecy rate with its geometrical interpretation. The results show that interference can be exploited to benefit secret wireless communication.
\bibliographystyle{IEEEtran}
\section*{Introduction}
The impact of glaucoma, one of the leading causes of blindness, can be significantly reduced if diagnosed early. Automatic systems can improve the success of screening programs by reducing the workload of specialists. However, current state-of-the-art systems are usually not robust in real-world scenarios, producing over-confident predictions on out-of-distribution (OOD) cases.
With this in mind, we propose an uncertainty-aware deep network that predicts a Dirichlet distribution on the class probabilities. During inference, this type of approach makes it possible to obtain class-wise probabilities together with a sample-wise uncertainty $\in[0,1]$ for that same classification, and it has already proven successful for uncertainty estimation in other tasks~\citep{Sensoy2018}. Additionally, to fully automate OOD detection, we exploit the assumption that referable glaucoma detection is only possible if the region of the optic disc (OD) has sufficient quality for diagnosis, since the primary manifestations of glaucoma occur there. This introduces an additional challenge, as the network has to provide, without supervision, the location of the OD.
\paragraph{AIROGS challenge}
This paper describes our submission to the Artificial Intelligence for RObust Glaucoma Screening Challenge (AIROGS challenge)~\citep{Vente2021}. The main task was to develop an automatic method for \emph{referable glaucoma} detection in eye fundus images. Additionally, the system should provide both a soft score and a binary decision on whether each image is not diagnosable (\emph{ungradable}), i.e. automatically identify OOD samples and bad-quality images. No definition or example of an ungradable image was provided.
Furthermore, usage of external datasets and annotations was forbidden.
\section*{Glaucoma classification with uncertainty}
\subsection*{Dataset}
The AIROGS development data~\citep{Vente2021data} contains 101\,442 images, of which 3\,270 have referable glaucoma. For our experiments, we randomly split the data into training, validation and test sets with 80\%, 10\% and 10\% of the data, respectively. Thus, both the validation and the test set contained 10\,145 images, of which 327 were graded as referable glaucoma. All images were resized to the input size of the network.
The test dataset has approximately 11\,000 images. These images and their labels were hidden from the participants; instead, performance evaluation was performed by submitting the algorithm to the AIROGS web platform. However, prior to the final test phase on these images, the challenge organizers allowed participants to assess the performance of the algorithm on around 10\% of the test data. A total of 3 attempts were possible for this preliminary test phase.
\paragraph{Base architecture} The algorithm was developed using exclusively the AIROGS dataset~\citep{Vente2021data}.
The classification model is composed of the first two inception blocks from the Inception-V3~\citep{Szegedy2016} network pre-trained on ImageNet~\citep{Russakovsky2015}. Using only these blocks reduces the size of the receptive field which, as will be addressed later, makes it possible to identify in detail the relevant diagnosis regions and subsequently to propose a binary OOD decision.
\subsection*{Deep Dirichlet uncertainty estimation}
Our method is based on the direct modeling of the uncertainty following the evidential deep learning approach~\citep{Dempster2008}. In particular, we deal with the $K$ class probabilities as resulting from a Dirichlet distribution, i.e., a belief mass $b_k$ is attributed to each singleton (i.e., class label) $k$, $k\in \{1,...,K\}$, from a set of mutually exclusive singletons, and an overall uncertainty mass $u$ is provided, with $u \geq 0$, $b_k \geq 0$ and $u + \sum_{k=1}^K b_k = 1$.
Each $b_k$ is computed based on the evidence for that singleton $e_k$ via $b_k = {e_k}/{S}$, where $S$ is the total evidence.
The prediction uncertainty $u$ is:
\begin{equation}
u = \frac{K}{S} = \frac{K}{\sum_{k=1}^K(e_k + 1)}.
\label{eq:dir_unc}
\end{equation}
The uncertainty is thus inversely proportional to the total evidence, and in the extreme case of no evidence we have $b_k = 0, \forall k \implies u = 1$.
This evidence can be modeled by a Dirichlet distribution characterized by $K$ $\alpha_k$ parameters, with $\alpha_k = e_k + 1$.
The probability $\hat{p_k}$ of the class $k$ is given by the mean of the Dirichlet distribution parameters:
\begin{equation}
\hat{p_k} = \frac{\alpha_k}{S}
\label{eq:dir_prob}
\end{equation}
We utilize the uncertainty value $u$ to detect OOD cases.
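For concreteness, the mapping from per-class evidences to the quantities in (\ref{eq:dir_unc}) and (\ref{eq:dir_prob}) can be sketched as follows (a minimal PyTorch-style illustration, not the exact implementation used in the challenge):
\begin{verbatim}
import torch

def dirichlet_outputs(evidence):
    # evidence: non-negative tensor of shape (batch, K)
    alpha = evidence + 1.0                  # alpha_k = e_k + 1
    S = alpha.sum(dim=-1, keepdim=True)     # total Dirichlet strength
    p_hat = alpha / S                       # class probabilities
    u = evidence.shape[-1] / S.squeeze(-1)  # uncertainty u = K / S
    return p_hat, u
\end{verbatim}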
\subsection*{Network Training}
The network receives as input $224 \times 224$
pixel RGB images and outputs, for each sample, the glaucoma probability and the confidence of the prediction. For that, we first obtain $K=2$ logits,
which are clipped to $[-200, 200]$ and then converted to evidences ($e$) using a softplus activation. We train the model with two loss terms based on Kullback-Leibler (KL) divergence. The first term aims at increasing $e$
for the correct class by assessing the divergence between the predicted $\alpha$ and the theoretically maximum $\alpha_\mathrm{max}=201$:
\begin{equation}
L_{\mathrm{KL}_\mathrm{evid}} = \mathrm{KL}\left(D(p_i \rvert {\alpha_i})~||~D(p_i\rvert y_\mathrm{gt}\odot\langle \alpha_\mathrm{max},...,\alpha_\mathrm{max}\rangle)\right)
\end{equation}
where $y_\mathrm{gt}$ is the reference categorical label.
A second KL divergence term regularizes the distribution by penalizing the divergence from the uniform distribution in the uncertain cases:
\begin{equation}
L_{\mathrm{KL}_\mathrm{unif}} = \mathrm{KL}\left(D(p_i \rvert \hat{\alpha_i})~||~D(p_i\rvert \langle 1,...,1\rangle)\right)
\end{equation}
where $D(p_i\rvert \langle 1,...,1\rangle)$ is the uniform Dirichlet distribution and $\hat{\alpha_i}$ denotes the Dirichlet parameters after removing the non-misleading evidence from the $\alpha_i$ parameters for sample $i$: $\hat{\alpha_i} = y_i + (1-y_i)\odot \alpha_i$.
The final loss is then defined as:
\begin{equation}
L = L_{\mathrm{KL}_\mathrm{evid}} + a_t L_{\mathrm{KL}_\mathrm{unif}}
\end{equation}
with $a_t$ being the annealing coefficient that increases as the training progresses. In particular, $a_t = \min(1, t/s)$, where $t$ is the current training epoch and $s$ is the annealing step, gradually increasing the effect of the second term in the final loss, avoiding the premature convergence to the uniform distribution for misclassified images in the beginning of the training~\citep{Sensoy2018}.
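The KL divergence between two Dirichlet distributions has a closed form in terms of log-gamma and digamma functions; a sketch of the regularizer $L_{\mathrm{KL}_\mathrm{unif}}$ and of the annealing coefficient, written under the definitions above (illustrative, not the challenge code), is:
\begin{verbatim}
import torch

def kl_to_uniform(alpha, y):
    # alpha: (batch, K) Dirichlet parameters; y: (batch, K) one-hot labels
    a = y + (1.0 - y) * alpha               # remove non-misleading evidence
    a0 = a.sum(dim=-1)                      # total strength
    K = torch.tensor(float(a.shape[-1]))
    return (torch.lgamma(a0) - torch.lgamma(a).sum(dim=-1)
            - torch.lgamma(K)
            + ((a - 1.0) * (torch.digamma(a)
                            - torch.digamma(a0).unsqueeze(-1))).sum(dim=-1))

def anneal(epoch, step):
    return min(1.0, epoch / step)           # a_t = min(1, t / s)
\end{verbatim}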
Our model was trained with balanced batches and the data was randomly augmented with flips, translations, rotations, scales, Gaussian blur and brightness modifications.
\subsection*{Out-of-distribution binary decision}
The challenge required participants to indicate, both with a continuous score and a binary label, if an image is ungradable. Since no examples of ungradable images were provided, we made the assumption that diagnosis is only possible if the OD has enough image quality for diagnosis, as glaucoma's main structural manifestations occur in that region. Thus, we artificially created OOD images by zeroing the regions of the images where their Grad-CAM \citep{Selvaraju2017} is greater than 0.5. This allowed us to produce in-distribution (ID) and OOD samples in our validation set, with which we computed the threshold for the binary ungradability decision. In particular, we constructed a receiver operating characteristic (ROC) curve using $u_{\text{ID}}$ and $u_{\text{OOD}}$. The ROC curve was used for selecting two decision thresholds, one at 0.5 sensitivity ($u=0.35$) and the other at the optimal operating point ($u=0.13$). We tested both thresholds on the preliminary test phase, and we kept $u=0.35$ as it performed better on that data.
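The threshold selection itself can be reproduced with standard tools; in the following minimal sketch, the arrays \texttt{u\_id} and \texttt{u\_ood} are placeholders for the predicted uncertainties of the ID and artificial OOD validation images, and the optimal operating point is taken as the maximum of Youden's index (one common definition):
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve

scores = np.concatenate([u_id, u_ood])
labels = np.concatenate([np.zeros_like(u_id),    # 0: in-distribution
                         np.ones_like(u_ood)])   # 1: out-of-distribution
fpr, tpr, thresholds = roc_curve(labels, scores)
u_opt = thresholds[np.argmax(tpr - fpr)]         # optimal operating point
\end{verbatim}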
\section*{Evaluation and Results}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/histogram_v2.pdf}
\caption{Uncertainty histogram of the in-distribution (ID) and the artificial out-of-distribution (OOD) cases.}
\label{fig:uncertainty_histogram}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{figures/img_18_gt_1_pred_1_unc_0.08.png}
\caption{Representative example of (left-to-right) original image ($u=0.08$), Grad-CAM overlay, out-of-distribution by optic disc obscuring ($u=0.54$), and out-of-distribution by flipping the binarized Grad-CAM ($u=0.08$).}
\label{fig:example}
\end{figure}
The uncertainty histogram (Fig.~\ref{fig:uncertainty_histogram}) for ID and OOD shows that the predicted uncertainty $u$ is a viable metric to identify images where the OD is not visible. To ensure that this behaviour was due to the OD being obscured, we compared the AUC values for detecting our OOD cases with the values for detecting the cases where the corresponding Grad-CAM mask was flipped vertically (see Fig.~\ref{fig:example}). The achieved AUC values were 0.905 and 0.506, respectively, thus validating our hypothesis that the OD image quality is pivotal for this task.
The challenge participants were evaluated using four metrics: 1) the partial area under the ROC curve (90-100\% specificity) for referable glaucoma (pAUC), 2) sensitivity at 95\% specificity (TPR@95), 3) Cohen's kappa score between the reference and the decisions provided by the challenge participants on image ungradability ($\kappa$) and 4) the ungradability AUC (gAUC). Table~\ref{tab:performance} shows our results on our test set and on the preliminary test phase. As shown, besides a 10\% performance drop at TPR@95 and an over-optimistic estimation of $\kappa$, which were expected given the reduced number of glaucoma cases and complexity of the ungradability task, our approach shows a similar behaviour on both datasets. Importantly, the model shows high scores across all the metrics. In fact, at the time of the opening of the final test phase (Feb. 8$^\text{th}$, 2022), our method had the highest average score among all submissions.
\begin{table}[]
\centering
\caption{Performance of the proposed method using the challenge metrics on the preliminary test phase. \label{tab:performance}}
\begin{tabular}{|c|c|c|c|c|} \hline
Test data & pAUC & TPR@95 & $\kappa$ & gAUC \\ \hline\hline
Our test set & 0.9187 & 0.8990 & 0.6915 & 0.9049 \\ \hline
Pr. test phase & 0.8464 & 0.7813 & 0.4452 & 0.8691 \\ \hline
\end{tabular}
\end{table}
\section*{Conclusion}
In this paper, we presented our method for the AIROGS challenge which, being based on the Dirichlet distribution, makes it possible to obtain a probability of referable glaucoma and the corresponding prediction uncertainty. Even without explicit supervision, the model is capable of detecting OOD cases while maintaining high performance on the classification task.
\section*{Bibliography}
\input{main.bbl}
\end{document}
\section{Summary}\label{sec:summary}
The data set presented in this paper comprises the results of the Portuguese 2019 Parliamentary Elections over the 4 hours and 25 minutes from the moment results started to become public until the last parish was accounted for. The electoral process had the participation of 27 parties, over 21 areas (global results plus 20 area results). Overall, the data set contains 21643 records over 28 features (including the target variable).
\subsection{Motivation}
Portugal has made results available online, and continuously updated, for many elections to date. However, from the perspective of predictive modelling, the available data raises several issues. The motivation for developing this data set relies on the fact that there is no public information concerning the order in which such results arrive, since these are not published at the same time. Such detail adds a new level of information concerning the precedence of results and the forecasting ability that depends on it. The challenges and possibilities of such a level of information are the primary motivation of the data acquisition and curation process that led to this data set. The envisioned task for this process is numerical forecasting, attempting to assess the ability to anticipate final results significantly ahead of time with a very high degree of confidence.
\section{Data Acquisition}\label{sec:dataacquisition}
The \textit{Secretaria-Geral do Ministério da Administração Interna} (SGMAI) handles the results of the electoral processes in Portugal. Such results are published online continuously, as they become available. The overall process for the communication/publication of results is as follows:
\begin{enumerate}
\item Votes are counted when polls close;
\item Once polling sections finish their counting, results are gathered and communicated to the SGMAI;
\item Results are verified and published.
\end{enumerate}
Results of the 2019 Parliamentary Elections in Portugal are presented in the following website: \url{https://www.legislativas2019.mai.gov.pt/}. Additionally, we should stress that this website published what are considered provisional results. Final results are made available after a due process by the competent authorities. As such, this data set relates to such provisional results.
\subsection{Procedure}\label{subsec:methodology}
Information published on the official website by the SGMAI describes a nation-, district-, county- and parish-wide breakdown of results. Through the analysis of content and environment variables, it is possible to understand that results are stored in JSON files with a fixed-structure path as such:
\begin{quote}
\url{https://www.legislativas2019.mai.gov.pt/frontend/data/TerritoryResults?territoryKey=LOCAL-XXXXXX&electionId=AR&ts=}
\end{quote}
The local id (\textbf{XXXXXX}) must be replaced by the respective id of each location, which is also made available in the content and environment variables of the website.
For conciseness, we will not describe the schema of the information. Regardless, given its availability, it is relatively easy to grasp, and the accompanying code of this paper describes the information acquired.
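As an illustration, a single query can be performed as follows (a minimal sketch using the \texttt{requests} library, not the exact acquisition code; the national-level key \textit{LOCAL-500000} serves as an example):
\begin{verbatim}
import requests

BASE = ("https://www.legislativas2019.mai.gov.pt/frontend/data/"
        "TerritoryResults?territoryKey={key}&electionId=AR&ts=")

def fetch_results(territory_key):
    # download and parse the JSON results file for one territory
    response = requests.get(BASE.format(key=territory_key), timeout=30)
    response.raise_for_status()
    return response.json()

national = fetch_results("LOCAL-500000")
\end{verbatim}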
We should note that there were two processes of data acquisition: an online and an offline process. In other words, some information (district-level information) was acquired while the results were being published, and other information (parish-level information) was acquired after the process. Given the number of queries involved, the live acquisition of parish-level information (3092 queries) would have been too much of a burden on the results website.
\subsubsection{Online Acquisition}\label{subsubsec:onlineacquisition}
For the online process of data acquisition, while the results are published, the following methodology was applied.
\begin{enumerate}
\item the identifiers of each voting district were collected (20), as well as the identifier for the global (national) results (21 in total);
\item results start to appear a few minutes after 8 PM (local time) on the 6$^{th}$ of October 2019, at which point an automatic procedure to acquire the data was activated at 5-minute intervals;
\item this process includes obtaining the JSON file containing the electoral results of each district (21);
\item for each district, information is obtained concerning the statistics of the overall voting procedure, along with the individual level of voting for each party;
\item new information is used to update the files containing the acquired data.
\end{enumerate}
The automatic procedure to acquire data was active from 6:56 PM (local time) on the 6$^{th}$ of October 2019 until 00:35 AM on the 7$^{th}$ of October 2019.
The application of this methodology resulted in two separate files: \textit{i)} overall statistics on the voting procedure and \textit{ii)} voting results for each party and each district. The description of each attribute of these files follows in Tables~\ref{tbl:overallresults} and \ref{tbl:votes}.
\begin{table}[h]
\begin{center}
\scriptsize
\caption{Attributes collected in real-time concerning the overall district-level information of the electoral process.}\label{tbl:overallresults}
\begin{tabular}{l l p{8cm}}
\textbf{Variable} & \textbf{Type} & \textbf{Description} \\
\hline
\textbf{time} & \textit{timestamp} & Date and time of the data acquisition \\
\textbf{territoryFullName} & \textit{string} & Complete name of the location (district or nation-wide) \\
\textbf{territoryName} & \textit{string} & Short name of the location (district or nation-wide) \\
\textbf{territoryKey} & \textit{string} & Official identifying key of the location (e.g. \textit{LOCAL-500000}) \\
\textbf{totalMandates} & \textit{numeric} & MP's elected at the moment\\
\textbf{availableMandates} & \textit{numeric} & MP's left to elect at the moment\\
\textbf{numParishes} & \textit{numeric} & Total number of parishes in this location\\
\textbf{numParishesApproved} & \textit{numeric} & Number of parishes approved in this location\\
\textbf{blankVotes} & \textit{numeric} & Number of blank votes\\
\textbf{blankVotesPercentage} & \textit{numeric} & Percentage of blank votes\\
\textbf{nullVotes} & \textit{numeric} & Number of null votes\\
\textbf{nullVotesPercentage} & \textit{numeric} & Percentage of null votes\\
\textbf{votersPercentage} & \textit{numeric} & Percentage of voters\\
\textbf{subscribedVoters} & \textit{numeric} & Number of subscribed voters in the location\\
\textbf{totalVoters} & \textit{numeric} & Number of votes cast\\
\textbf{pre.totalMandates} & \textit{numeric} & MP's elected at the moment (previous election)\\
\textbf{pre.availableMandates} & \textit{numeric} & MP's left to elect at the moment (previous election)\\
\textbf{pre.blankVotes} & \textit{numeric} & Number of blank votes (previous election)\\
\textbf{pre.blankVotesPercentage} & \textit{numeric} & Percentage of blank votes (previous election)\\
\textbf{pre.nullVotes} & \textit{numeric} & Number of null votes (previous election)\\
\textbf{pre.nullVotesPercentage} & \textit{numeric} & Percentage of null votes (previous election)\\
\textbf{pre.votersPercentage} & \textit{numeric} & Percentage of voters (previous election)\\
\textbf{pre.subscribedVoters} & \textit{numeric} & Number of subscribed voters in the location (previous election)\\
\textbf{pre.totalVotes} & \textit{numeric} & Number of votes cast (previous election)\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\scriptsize
\caption{Attributes collected in real-time concerning political party results in the districts of the electoral process.}\label{tbl:votes}
\begin{tabular}{l l p{8cm}}
\textbf{Variable} & \textbf{Type} & \textbf{Description} \\
\hline
\textbf{time} & \textit{timestamp} & Date and time of the data acquisition \\
\textbf{District} & \textit{string} & Short name of the location (district or nation-wide) \\
\textbf{Party} & \textit{string} & Political Party \\
\textbf{Mandates} & \textit{numeric} & MP's elected at the moment for the party in a given district\\
\textbf{Percentage} & \textit{numeric} & Percentage of votes in a party\\
\textbf{validVotesPercentage} & \textit{numeric} & Percentage of valid votes in a party\\
\textbf{Votes} & \textit{numeric} & Number of party votes\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Offline Acquisition}
Although not included in the Real-time Election Results data set, data on the finest granularity available (parishes) of the results is also acquired. In order to accomplish such task, a methodology similar to the online acquisition process was carried out, with the following difference: we query the respective site for the JSON files of each parish.
The website does not provide the available local identifiers in bulk. As such, instead of an exhaustive search, we resorted to the information divulged by the Portuguese Government's Open Data initiative. This repository contains information concerning Portuguese parishes (see \url{https://dados.gov.pt/pt/datasets/freguesias-de-portugal/}), including their identification. Luckily, this is the same information used to identify locations in the election results website. As such, we were able to reduce the search from 490000 queries (from 010000, the lowest known identifier, to 500000, the national-level identifier) down to 3092, the exact number of parishes. This process was carried out on the 3$^{rd}$ of December 2019.
The attributes of each file (\textit{i)} the overall election information and \textit{ii)} the voting information) are the same as in the online acquisition of data, with the following differences (strike-through for removed attributes):
\begin{itemize}
\item To the overall election information:
\begin{itemize}
\item \textbf{Council} \textit{(string)}: The council of the parish;
\item \textbf{District} \textit{(string)}: The district of the parish;
\end{itemize}
\item To the voting information:
\begin{itemize}
\item \textbf{Council} \textit{(string)}: The council of the parish;
\item \textbf{District} \textit{(string)}: The district of the parish;
\item \sout{\textbf{time} \textit{(timestamp)}}: Removed;
\item \sout{\textbf{Mandates} \textit{(numeric)}}: Removed;
\end{itemize}
\end{itemize}
Parish-level data provides a deeper understanding of the overall results and allows the introduction of several dimensions of the phenomena surrounding and influencing such results. Additionally, although not carried out at this point, this data also allows the addition of results from previous parliamentary elections in Portugal, under the assumption that the sequence of incoming results from parishes is the same.
We should clarify, again, that this data is made available jointly with the data set. Still, it was not used (does not belong to the group of raw data, which is only district-level) in the development of the final data set.
\section{Data Curation and Description}\label{sec:cur_and_description}
In this section, we describe the curation process applied to the raw data described in Section~\ref{sec:dataacquisition} and the final data set.
\subsection{Curation}\label{subsec:curation}
The process of data curation consists of the analysis of the information w.r.t.\ consistency, data input/format errors, and related issues that may affect the future use of the data. In this case, we include the process of feature engineering under this concept.
An analysis of the overall and party voting data shows common issues concerning format errors and missing values. Concerning format errors, timestamps are formatted as such (previously as strings), and numerical variables were also cast accordingly. Concerning missing values, we observed that in the early stages of electoral results becoming available, individual districts did not have any information yet. As such, their attributes were marked as not available. Such cases are excluded from the analysis.
Concerning feature engineering, it was decided to add three features that could potentially help future users of the data set. First, the time elapsed (\textbf{TimeElapsed}) since the first data acquisition in minutes (\textit{numeric}); second, the application of the Hondt method (\textbf{Hondt}) to the results at each data acquisition step, at a district-level, resulting in their respective number of parliament members (\textit{numeric}); and third, the target variable (\textbf{FinalMandates}) with the final number of elected members of parliament for each district (\textit{numeric}).
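For reference, the Hondt (D'Hondt) highest-averages method used to derive the \textbf{Hondt} feature can be sketched in a few lines (an illustration of the allocation rule, not the exact code used to build the data set):
\begin{verbatim}
def hondt(votes, seats):
    # votes: dict party -> vote count; returns dict party -> mandates
    allocation = {party: 0 for party in votes}
    for _ in range(seats):
        # next seat goes to the largest quotient V / (s + 1)
        winner = max(votes, key=lambda p: votes[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

hondt({"A": 100000, "B": 80000, "C": 30000}, seats=8)
# -> {"A": 4, "B": 3, "C": 1}
\end{verbatim}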
We should note that some of the available attributes are excluded given their lack of relevance (at least apparently) to the possible predictive modelling tasks. The attributes removed are the following: \textbf{pre.availableMandates}, \textbf{pre.totalMandates}, \textbf{territoryFullName}, \textbf{territoryKey}. It should be reminded that at this point, we are focusing on the objective of predictive modelling. As such, repeated identifiers and redundant information (as these examples) should be removed.
Finally, the data concerning the overall election and the party voting information are joined w.r.t the timestamp of data acquisition and the respective district.
\subsection{Description}\label{subsec:description}
For the sake of consistency, the type of data in each attribute of the final data set on the election results for the Portuguese Parliament in 2019 is presented in Table~\ref{tbl:finalats}.
\begin{table}[h]
\begin{center}
\scriptsize
\caption{Attributes of the Real-time Election Results - 2019 Portuguese Parliament Election.}\label{tbl:finalats}
\begin{tabular}{l l p{8cm}}
\textbf{Variable} & \textbf{Type} & \textbf{Description} \\
\hline
\textbf{TimeElapsed} & \textit{Numeric} & Time (minutes) passed since the first data acquisition \\
\textbf{time} & \textit{timestamp} & Date and time of the data acquisition \\
\textbf{territoryName} & \textit{string} & Short name of the location (district or nation-wide) \\
\textbf{totalMandates} & \textit{numeric} & MP's elected at the moment\\
\textbf{availableMandates} & \textit{numeric} & MP's left to elect at the moment\\
\textbf{numParishes} & \textit{numeric} & Total number of parishes in this location\\
\textbf{numParishesApproved} & \textit{numeric} & Number of parishes approved in this location\\
\textbf{blankVotes} & \textit{numeric} & Number of blank votes\\
\textbf{blankVotesPercentage} & \textit{numeric} & Percentage of blank votes\\
\textbf{nullVotes} & \textit{numeric} & Number of null votes\\
\textbf{nullVotesPercentage} & \textit{numeric} & Percentage of null votes\\
\textbf{votersPercentage} & \textit{numeric} & Percentage of voters\\
\textbf{subscribedVoters} & \textit{numeric} & Number of subscribed voters in the location\\
\textbf{totalVoters} & \textit{numeric} & Number of votes cast\\
\textbf{pre.blankVotes} & \textit{numeric} & Number of blank votes (previous election)\\
\textbf{pre.blankVotesPercentage} & \textit{numeric} & Percentage of blank votes (previous election)\\
\textbf{pre.nullVotes} & \textit{numeric} & Number of null votes (previous election)\\
\textbf{pre.nullVotesPercentage} & \textit{numeric} & Percentage of null votes (previous election)\\
\textbf{pre.votersPercentage} & \textit{numeric} & Percentage of voters (previous election)\\
\textbf{pre.subscribedVoters} & \textit{numeric} & Number of subscribed voters in the location (previous election)\\
\textbf{pre.totalVoters} & \textit{numeric} & Number of votes cast (previous election)\\
\textbf{Party} & \textit{string} & Political Party \\
\textbf{Mandates} & \textit{numeric} & MP's elected at the moment for the party in a given district\\
\textbf{Percentage} & \textit{numeric} & Percentage of votes in a party\\
\textbf{validVotesPercentage} & \textit{numeric} & Percentage of valid votes in a party\\
\textbf{Votes} & \textit{numeric} & Number of party votes\\
\textbf{Hondt} & \textit{numeric} & Number of MP's according to the distribution of votes now \\
\textbf{FinalMandates} & \textit{numeric} & \textbf{Target}: final number of elected MP's in a district/national-level \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{User Notes}\label{sec:usernotes}
A script with basic operations in the programming language \textbf{R} is made available to facilitate the use of the data set. This file is located in the UCI data set archive.
\section*{Acknowledgments}\label{sec:ack}
National Funds finance this work through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia within project UID/EEA/50014/2019.
\end{document}
\section{BACKGROUND}
Phasons are degrees of freedom unique to quasicrystals~\cite{schechtman-originalQCpaper,phasonreview,bakphasons,steinhardtphononsohasons,Goldman-Widom_AnnualReview}.
The role of phasons in determining quasicrystal properties remains incompletely understood: open questions include the effects of electron-phason coupling, the nature of electronic transport, spectral statistics, topological properties, and even the shape of the electronic wavefunctions~\cite{hofstadter-fibonacci-butterfly-2007,Thiel-dubois-QCcommentary,QCinAg,Kraus_Zilberberg_Topological_PRB,ZilberbergQC,boundaryphenomena2,verbin-photonic-topological_PRL,brouwerpaper,ames-QCs,hofstadter-superlattice-coldatom-proposal}. These lacunae are in part due to the theoretical intractability of quasiperiodic matter, and in part due to the experimental difficulty of disentangling the effects of domain walls, crystalline impurities, and disorder from those due to phason modes, which arise from broken translation symmetry in the higher-dimensional space from which the quasiperiodic lattice is projected. The exquisite controllability of ultracold atoms in optical lattices makes them well-suited to the study of quasicrystal phenomena from structure to transport to self-similarity~\cite{Lye-Inguscio-interaction-localization_PRA,inguscio-andersonloc,bloch-mbl,Verkerk-Grynberg-5fold_PRL, Verkerk-5fold-diffusion_PRA,QC-corcovilos19,fibonacci-PRA,Gadway-schneble_PRL, Viebahn-Schneider-8fold_PRL,8fold-phase-transition}. Beyond the fundamental interest of such questions, they may point the way to technological applications of quasicrystals' anomalous electrical and thermal transport characteristics.
Here we report the realization of phasonic spectroscopy on a one-dimensional quasicrystal, using a quantum gas in a tunable bichromatic optical lattice. In addition to standard dipolar modulation, the experiment enables dynamic driving of a phasonic degree of freedom~\cite{Kromer-PhasonTrajectories_PRL,Widom_PhasonReview} via modulation of the relative spatial phase between the two sublattices. We observe that the quasicrystal responds very differently to dipolar and phasonic drives: most strikingly, phasonic modulation generates a broad non-perturbative plateau of high-order ``multi-photon'' transitions, in which multiple energy quanta (``photons'') with energy corresponding to the driving frequency are absorbed. To further elucidate the spectroscopic signatures of quasicrystallinity we measure excitation spectra while varying the strength of quasiperiodicity through a localization transition, observing the emergence of minibands, identifying spectral features arising from interatomic interactions and localization-induced diabaticity, and mapping a slice of the Hofstadter butterfly energy spectrum.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Fig0v11_crop.pdf}
\caption{Experimental schematic. (a) BEC (blue) in a bichromatic lattice (yellow). Photodiodes (PD), beam samplers (BS), and dichroic mirrors (DC) are indicated, as is the configuration for both dipolar and phasonic driving using a piezo-driven mirror (solid block). (b) Sample band-mapped data. Dotted lines indicate zone edges of the primary lattice.}
\label{fig:fig0}
\end{figure}
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{fig1v12_overlay.png}
\caption{Comparison of dipolar and phasonic spectroscopy; areas where no data were taken are marked in gray. \textbf{(a)} Excitation due to dipolar driving as a function of drive frequency $f_\mathrm{dip}$ and primary lattice depth $V_P$ with $\alpha_{\mathrm{dip}}=0.16\times V_S/V_P$ and
$V_S=1 E_{R,S}$. Green hatched (Blue horizontal) overlay shows calculated first (second) interband transition. \textbf{(b)} High-resolution dipolar spectrum at $V_P=20 E_{R,P}$. Line shows the calculated center of the first interband transition. \textbf{(c)} Excitation due to phasonic driving as a function of drive frequency $f_\mathrm{phason}$ and primary lattice depth $V_P$. $\alpha_{\mathrm{phason}}$ is set to $\approx$ 1. Green hatched (Blue horizontal) overlays show calculated first (second) interband transition, with multiphoton subharmonics also indicated for the first transition. \textbf{(d)} High-resolution phasonic spectrum at $V_P=20 E_{R,P}$. Lines show the calculated center of the first twelve multiphoton transitions corresponding to the lowest interband transition. \textbf{(e)} Data from (c) plotted versus drive period $1/f_\mathrm{phason}$, showing a broad low-frequency absorption feature. \textbf{(f)} Theoretical prediction for (e) (details in text).}
\label{fig:fig1}
\end{figure*}
The experiments (diagrammed in Fig.~\ref{fig:fig0}) use a 1D bichromatic potential which superposes a primary and secondary lattice formed by light with wavelengths $\lambda_P = $ 1064 nm and $\lambda_S = $ 915 nm. Neglecting interactions, the Hamiltonian of atoms in this potential is
\begin{equation}
\begin{aligned}
H ={} -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} & + \frac{V_P}{2} \cos(2k_P(x-\delta_P)) \\[1.2ex]
& +\frac{V_S}{2} \cos(2k_S(x-\delta_S)),
\end{aligned}
\label{eq:aamodel}
\end{equation}
where $k_{P(S)} = 2\pi/\lambda_{P(S)}$ and $V_P$ and $\delta_P$ ($V_S$ and $\delta_S$) are the amplitude and spatial phase of the primary (secondary) lattice. For $V_P\gg V_S$, in the tight-binding limit with respect to the primary lattice, this Hamiltonian is closely related to both the Aubry-Andr\'e model~\cite{aubryandre} and the Harper model~\cite{harpermodel}; for larger $V_S$, deviations from these models appear in the form of mobility edges~\cite{spme-boers_PRA07,spme-das-sarma_PRB,spme-bloch_PRL18,monika-mbl-spme}. Lattice depths are measured in the respective recoil energies, $E_{R,i} = \hbar^2 k_i^2/2m$, $i\!\in\!\{P,S\}$. The chosen value of the ratio $\nu = \lambda_S/\lambda_P$ is effectively irrational in the sense that it gives rise to a unit cell larger than our 30 $\upmu$m sample size; in other words, the potential is quasiperiodic to within experimental resolution.
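As a minimal numerical illustration of this tight-binding correspondence (valid for $V_P\gg V_S$), the following deliberately simplified sketch diagonalizes an Aubry-Andr\'e Hamiltonian with the modulation ratio of Eq.~\eqref{eq:aamodel}; the hopping $J$, modulation amplitude $\Delta$, and system size are illustrative assumptions, not values derived from the experiment.
\begin{verbatim}
# Simplified tight-binding sketch of the Aubry-Andre limit of Eq. (1);
# J (hopping) and Delta (quasiperiodic amplitude) are illustrative values.
import numpy as np

def aubry_andre_spectrum(L=233, J=1.0, Delta=1.0, phi=0.0):
    beta = 1064.0 / 915.0   # lambda_P/lambda_S: modulation at primary sites
    n = np.arange(L)
    H = np.diag(Delta * np.cos(2 * np.pi * beta * n + phi))
    H += -J * (np.eye(L, k=1) + np.eye(L, k=-1))  # nearest-neighbor hopping
    return np.linalg.eigvalsh(H)

E = aubry_andre_spectrum(Delta=1.5)
print(E.min(), E.max())  # minigaps open as Delta grows; AA localization at Delta = 2J
\end{verbatim}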
A key feature of the experiment is the ability to modulate the different degrees of freedom of the bichromatic lattice. Standard dipolar excitation, which drives the lowest-energy phononic mode of the lattice, is achieved by equal translation of both lattices:
\begin{equation}
\delta_S(t)\ =\ \delta_P(t)\ =\ A_\mathrm{dip} \sin(2\pi f_\mathrm{dip} t),
\label{phononeq}
\end{equation}
\noindent where $A_\mathrm{dip}$ and $f_\mathrm{dip}=\omega_\mathrm{dip}/2\pi$ are the amplitude and frequency of the dipolar drive. In the lattice frame the force applied to the atoms is $F(t) = F_0 \sin({2\pi f_\mathrm{dip} t)}$ for $F_0 = m (2\pi f_\mathrm{dip})^2 A_\mathrm{dip}$. Using the primary lattice constant $a=\lambda_P/2$, we define a dimensionless driving parameter
$\alpha_\mathrm{dip} = a F_0/\hbar \omega_\mathrm{dip} = a m \omega_\mathrm{dip} A_\mathrm{dip}/\hbar$ (which determines the modification of tunnelling matrix elements in the lowest band~\cite{Eckardt2017}).
To keep $\alpha_\mathrm{dip}$ fixed for different drive frequencies, we take $A_\mathrm{dip} \propto 1/f_\mathrm{dip}$; this normalization procedure for phase modulation has been used previously to study multiphoton excitations in a single-color lattice~\cite{interband_eckardt_sengstock_simonet, latticeheating-blochgroup}. Phasonic modulation is achieved by translating only the secondary lattice:
\begin{equation}
\delta_S(t) = A_\mathrm{phason} \sin(2\pi f_\mathrm{phason} t),\,\,\,\,\,\,\,\,
\delta_P(t) = 0,
\label{phasoneq}
\end{equation}
\noindent where $A_\mathrm{phason}$ and $f_\mathrm{phason}$ are the amplitude and frequency of the phasonic drive. As with the dipolar drive, we define a dimensionless amplitude $\alpha_\mathrm{phason}$, for which $A_\mathrm{phason} =C\alpha_\text{phason}/f_\text{phason}$, taking $C=1000 \text{ nm}\cdot\text{kHz}$.
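For concreteness, the following sketch evaluates the position-space amplitudes implied by these two normalizations at a few drive frequencies; the $\alpha$ values used are placeholders.
\begin{verbatim}
# Sketch of the two amplitude normalizations; alpha values are placeholders.
import numpy as np

hbar = 1.0545718e-34               # J s
m = 84 * 1.66053906660e-27         # mass of 84Sr (kg)
a = 1064e-9 / 2                    # primary lattice constant (m)
C = 1e-3                           # 1000 nm kHz expressed in m/s

def A_dip(alpha, f):               # from alpha_dip = a m (2 pi f) A / hbar
    return alpha * hbar / (a * m * 2 * np.pi * f)

def A_phason(alpha, f):            # from A_phason = C alpha / f
    return C * alpha / f

for f in (1e3, 1e4, 1e5):          # drive frequency (Hz)
    print(f"{f:8.0f} Hz  A_dip = {A_dip(0.1, f)*1e9:7.2f} nm   "
          f"A_phason = {A_phason(1.0, f)*1e9:7.1f} nm")
\end{verbatim}
Note that at low drive frequencies the phasonic amplitude exceeds the primary lattice constant, a point we return to below.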
We report spectroscopic measurements of the quasicrystal's response to phasonic and dipolar excitation with varying drive and lattice parameters. The experiments begin by adiabatically loading a Bose condensate of $^{84}$Sr into the bichromatic lattice. The amplitude of dipolar or phasonic modulation is linearly ramped to the final value over 4~ms, followed by constant-amplitude modulation for 16~ms. After modulation, both lattices are ramped down simultaneously at a rate which is adiabatic with respect to the energy gaps of the primary lattice, to achieve approximate band mapping onto free-space momentum states~\cite{demarco-bandmap}. This enables measurement of the primary spectroscopic observable: the fractional population of atoms in the ground band of the primary lattice after modulation.
As a first application of phasonic spectroscopy, we measure and plot in Fig.~\ref{fig:fig1} the difference between a quasicrystal's response to standard dipolar driving and its response to phasonic driving.
We fix the phasonic driving amplitude to $\alpha_{\mathrm{phason}}\approx 1$. To facilitate comparison, the dipolar drive is scaled with respect to the phasonic one by a factor proportional to the sublattice depth ratio: $\alpha_\mathrm{dip} = 0.16\times V_S/V_P$. $V_S$ is held at $1E_{R,S}$ for both drives.
Dipolar driving causes excitations to higher bands which are consistent with expected interband transitions of the primary lattice (Fig.~\ref{fig:fig1}(a) and \ref{fig:fig1}(b)). The second interband transition is visible but suppressed compared to the first transition, since the odd-parity dipolar force does not couple unperturbed Wannier states of the primary lattice of equal parity on the same site. No multiphoton transitions are apparent at this drive amplitude.
The response to phasonic driving is qualitatively different, although due to the chosen $\alpha$ scaling the main interband transition is driven at similar strength. Most strikingly, we observe strong multiphoton processes up to the twelfth order (Fig.~\ref{fig:fig1}(c) and \ref{fig:fig1}(d)). Phasonic excitation in this regime apparently gives rise to an efficient high-harmonic response, in which atoms can absorb energy at high multiples of the drive frequency. Additionally, comparison of Fig.~\ref{fig:fig1}(a) and \ref{fig:fig1}(c) indicates that phasonic driving appears to relax the suppression of even interband transitions, which we attribute to the fact that it is not parity (anti)symmetric on site. Finally, we observe a broad low-frequency absorption feature at large tunneling amplitudes in the phasonic spectrum. This feature, most easily seen in Fig.~\ref{fig:fig1}(e), is likely due to overlap of numerous high-order harmonics. The experimental results in Fig.~\ref{fig:fig1}(e) are reproduced well by the non-interacting exact time-evolution numerical simulations shown in Fig.~\ref{fig:fig1}(f)~\cite{SuppMat}. The experimental observation of these unique features of phasonic spectroscopy of a tunable quantum quasicrystal --- efficient high-harmonic response, relaxed selection rules, and broadband IR absorption --- constitutes the first main result of this report.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig2v9.png}
\caption{Amplitude dependence of multiphoton resonances. (a) Theoretical simulation of phasonic spectra for varying drive amplitude $\alpha_\mathrm{phason}$. (b) Experimentally measured phasonic spectra for $V_P=20 E_{R,P}$ and varying $\alpha_\mathrm{phason}$. Both experiment and theory show the onset of a non-perturbative regime near $\alpha_\mathrm{th} = 0.9$. (c) Line cuts of experimental phasonic (solid) and dipolar (dashed) spectra at various $\alpha$ values. Note the extreme power broadening in the dipolar spectrum.}
\label{fig:fig2}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=\columnwidth]{fig3v14.png}
\caption{Spectroscopy of an interacting quasicrystal. (a) Calculated energy spectrum vs. $\nu=\lambda_S/\lambda_P$. Dashed line shows the slice corresponding to the quasicrystal used in this experiment. (b) Post-expansion atomic density distribution at varying disorder strengths $V_S$, showing the effects of crossing the localization transition. (c) Experimentally measured dipolar excitation spectra for varying $V_S/V_P$ at $\alpha_{\mathrm{dip}}=0.022$, showing spectral minigaps. No data were taken for the gray areas in the upper-left and lower-right. (d) Calculated density of final states for a non-interacting system, starting from a BEC. (e) Calculated density of final states for an interacting BEC; a shift of the resonance line to lower frequencies from Fig.~\ref{fig:fig3}(d) is observed. (f) Calculated non-interacting transition density assuming all single-particle orbitals below 1.5 $E_{R,S}$ are initially populated.}
\label{fig:fig3}
\end{figure*}
The proliferation of multiphoton resonances is connected to the breakdown of the regime where the driving amplitude can be treated perturbatively \cite{Straeter2016}. In the phasonically excited quasicrystal, the threshold for entering the non-perturbative regime can be estimated by expanding the shaken secondary lattice potential as
$\cos(2k_S [x-A\sin(\omega t)])=\sum_{n=-\infty}^\infty J_n(2k_SA)[\cos(2 k_S x)\cos(n\omega t)+\sin(2 k_S x)\sin(n\omega t)]$, where the Bessel functions $J_n$ contain all powers (orders) of the scaled driving amplitude
$2k_S A = (4 \pi k_S C / \omega)\alpha_\mathrm{phason}$.
The individual terms can directly induce $n$-photon transitions with $\Delta E = |n|\hbar\omega$, but (as a property of the Bessel function) contribute only as long as $|n|\lesssim 2k_SA$, corresponding to the estimated threshold value
$\alpha_\text{th} = \Delta E / 4\pi \hbar k_S C$
for the dimensionless driving amplitude $\alpha_\text{phason}$. For transitions to the first excited band, we obtain
$\alpha_\text{th}$ close to unity~\cite{SuppMat},
in reasonable agreement with both the numerical simulations shown in Fig.~\ref{fig:fig2}(a) and the experimental data shown in Fig.~\ref{fig:fig2}(b).
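A back-of-the-envelope evaluation of this estimate, with $\Delta E$ replaced by a harmonic approximation of the first interband gap at $V_P=20\,E_{R,P}$ (an assumption, not a measured value), is sketched below.
\begin{verbatim}
# Order-of-magnitude check of alpha_th = Delta_E / (4 pi hbar k_S C);
# Delta_E is a harmonic estimate of the first gap, not a measurement.
import numpy as np

hbar = 1.0545718e-34
m = 84 * 1.66053906660e-27
k_P = 2 * np.pi / 1064e-9
k_S = 2 * np.pi / 915e-9
C = 1e-3                              # 1000 nm kHz in m/s
E_RP = hbar**2 * k_P**2 / (2 * m)     # primary-lattice recoil energy

Delta_E = 2 * np.sqrt(20) * E_RP      # harmonic gap estimate at V_P = 20 E_RP
alpha_th = Delta_E / (4 * np.pi * hbar * k_S * C)
print(f"alpha_th ~ {alpha_th:.1f}")   # of order unity
\end{verbatim}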
Note in particular that at low frequencies, in order to keep $\alpha_\mathrm{phason}$ constant and equivalent to that used for the dipolar drive, the position-space amplitude of the phasonic drive used for the data shown in Fig.~\ref{fig:fig1} increases to significantly more than one lattice constant. A noticeable difference between the dipolar and the phasonic drive is the significantly flatter distribution of transition strengths in the phasonic case. As an experimental indication of this effect, note the comparison of dipolar and phasonic spectra in Fig.~\ref{fig:fig2}(c): while the high-amplitude phasonic spectrum shows numerous narrow transition lines, a dipolar spectrum taken at an amplitude sufficient to weakly drive multiphoton transitions already exhibits extreme power-broadening of the first interband transition, in agreement with previous work on periodic lattices~\cite{interband_eckardt_sengstock_simonet,latticeheating-blochgroup}. As an additional point of interest, we note that connections between harmonic generation and quasiperiodicity have been made in other physical systems~\cite{physletta-phonon-limas05,science-hhgqc-zhu97}.
The spectroscopic probes of tunable cold-atom quasicrystals which we demonstrate can be deployed to study the rich variety of phenomena arising from quasiperiodicity and interactions. As a consequence of a mapping to the Harper model~\cite{harpermodel}, the non-interacting energy spectrum of a 1D quasicrystal constitutes a slice through the multifractal spectrum of two-dimensional electron gases in the integer quantum Hall regime known as the Hofstadter butterfly~\cite{hofstadter-theoriginal}, plotted in Fig.~\ref{fig:fig3}(a). Mapping this fascinating spectrum in an entirely different physical context than high-field 2D Fermi gases, and probing the interplay of interactions and quasiperiodicity are two natural applications for the spectroscopic techniques we describe.
With these goals in mind, we measured the evolution of the spectral response as the strength of quasiperiodicity was increased from zero by tuning $V_S/V_P$. While the flattened selection rules of phasonic driving are potentially appealing for such a measurement, phasonic driving is not available at $V_S/V_P=0$ and in any case the strong high-harmonic response complicates the interpretation of phasonic spectra. Therefore for this measurement we chose to use dipolar driving. Tuning of quasiperiodicity was achieved for a fixed $V_P$ by varying the relative strength of the weaker lattice $V_S/V_P$ between zero and one. We note that this range spans a localization transition of the generalized Aubry-Andr\'e type~\cite{aubryandre,inguscio-andersonloc} which has strong effects on transport: Fig.~\ref{fig:fig3}(b) shows atomic density distributions after 4 ms of expansion at various values of $V_S/V_P$, clearly indicating the effects of localization. Fig.~\ref{fig:fig3}(c) shows results of dipolar modulation spectroscopy on a 10~$E_{R,P}$ primary lattice at fixed $\alpha_\mathrm{dip} = 0.022$ and variable $V_S$, allowing direct measurement of the spectral effects of bichromaticity. We observe the formation of the ``minigaps'' which are hallmarks of the Hofstadter spectrum.
Comparison of the experimental data (Fig.~\ref{fig:fig3}(c)) to the theoretically computed density of states for the non-interacting quasiperiodic lattice (Fig.~\ref{fig:fig3}(d)) reveals a number of interesting features. For small $V_S$, we can clearly identify the three lowermost bands in Fig.~\ref{fig:fig3}(d). While the ground band is not reflected in the experimental data, since intraband excitations were not measured, the observed main resonance clearly corresponds to excitations to the first excited band. However, the blueshift (bending to the right) of this resonance with increasing $V_S$ is clearly lower in the experimental data than in the theory of Fig.~\ref{fig:fig3}(d). We attribute this effect to interactions; taking them into account on a mean-field level~\cite{SuppMat}, we obtain a reduced blueshift in agreement with experiment (Fig.~\ref{fig:fig3}(e)). Excitations to the second excited band are suppressed by weak coupling matrix elements, since for $V_S=0$ the dipolar drive couples on-site Wannier states of opposite parity. Nevertheless, the experiment shows a few narrow resonance lines, induced by switching on a finite $V_S$; we interpret these as a signature of the emergence of minibands. The experimental plot also features an additional resonance line, which merges with the main resonance near $V_S/E_{R,S}=4$. This feature can be reproduced by a theory which includes the initial presence of excitations as a result of localization-induced non-adiabatic loading (Fig.~\ref{fig:fig3}(f))~\cite{SuppMat}. This line is effectively a copy of the resonance immediately to its right, corresponding to transitions into these states from the now populated upper edge of the ground band where the density of states shows a pronounced peak (Fig.~\ref{fig:fig3}(d)). The observation of the emergence of minigaps in a slice of the Hofstadter butterfly spectrum and the identification of spectral shifts due to interactions in a quasicrystal together constitute the second main result of this report.
The techniques and results we present open up several exciting directions for future work. Most broadly, they enable exploration of numerous open questions concerning quantum quasicrystals. Continuous tuning of the period ratio of the bichromatic lattice would allow direct mapping of the 2D Hofstadter butterfly spectrum. Spectroscopy across the Aubry-Andr\'e transition may allow study of the effects of localization on heating processes, including in regions with band-dependent localization and single-particle mobility edges~\cite{spme-boers_PRA07,spme-das-sarma_PRB,spme-bloch_PRL18,monika-mbl-spme}. Monotonic increase of a phasonic degree of freedom should allow the realization of various topological pumps: a recent proposal suggests high-temperature topological quantized phasonic Thouless pumping of bulk states~\cite{Lindner-Rudner-thouless_PRX}, and the Hofstadter spectrum supports edge states which can be topologically pumped from one end of the system to the other in a single phasonic cycle~\cite{topological-quasicrystals-PRL,fibonacci-PRA}.
In conclusion, we have demonstrated phasonic spectroscopy of a tunable quantum quasicrystal, showed theoretically and experimentally that phasonic excitation efficiently drives non-perturbative high-order multiphoton processes and gives rise to a broad low-frequency absorption feature, mapped the spectral features of a transition from a crystal with extended states to a quasicrystal with localized states, measured the emergence of minigaps in a slice of the Hofstadter spectrum, and identified spectral shifts due to the presence of interactions in a quasicrystal.
\begin{acknowledgements}
The authors thank Zach Geiger, Cora Fujiwara, Kevin Singh, and Max Prichard for experimental assistance and G.~\v{Z}labys and U. Schneider for useful discussions. DW acknowledges support from the National Science Foundation (CAREER 1555313), the Office of Naval Research (N00014-16-1-2225), the Army Research Office (PECASE W911NF1410154 and MURI W911NF1710323), and the University of California's Multicampus Research Programs and Initiatives (MRP-19-601445). The work of MR and EA was supported by the European Social Fund under Grant No. 09.3.3-LMT-K-712-01-0051. AE acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG) via the Research Unit FOR 2414 under Project No.\ 277974659.
\end{acknowledgements}
\bibliographystyle{apsrev4-1}
\subsection{Motivation: A constrained Willmore problem}
The motivation for the present paper comes from a constrained Willmore
problem. More precisely, let us consider smooth hypersurfaces $M\subset\mathbb{R}^3$ of a fixed topology, with
constraints on the
amount of surface area where the surface is flat and non-flat
respectively. Here, by flat we mean that the second fundamental form vanishes. Within this class, we are
interested in the variational problem
\begin{equation}
\inf_M\int_M \left(H^2-2 K\right) \d \H^2=\inf_M\int_M(\kappa_1^2+\kappa_2^2)\d\H^2\,,\label{eq:28}
\end{equation}
where $\kappa_1$,
$\kappa_2$ denote the principal curvatures, $H=\kappa_1+\kappa_2$ the mean curvature, $K=\kappa_1\kappa_2$ the Gauss curvature, and $\H^2$ is the
two-dimensional Hausdorff measure.
\medskip
Here we are going to simplify this problem in two ways: First, we are not going
to consider arbitrary surfaces, but only graphs. Secondly, we are going to
replace the constraint of having a fixed amount of non-flat surface area by a
penalization of the non-flat part. This is the usual approach to capturing
constraints via the introduction of Lagrange multipliers. We will however not be able to prove a rigorous equivalence between the constrained variational
problem and the problem involving Lagrange multipliers.
The latter consists, for $\lambda>0$ and graphs $M$ with fixed
surface area, in the variational problem
\begin{equation}
\inf_M\int_M \left(H^2-2 K\right) \d \H^2+\lambda \H^2\left(\{x\in M:S_M\neq 0\}\right)\,,\label{eq:29}
\end{equation}
where $S_M$ denotes the second fundamental form of $M$. Additionally, boundary conditions or other constraints on $M$ may be imposed.
\medskip
Obviously, the shape of minimizers for such
a problem depends on the penalization parameter $\lambda$. One expects that the
concentration of curvature increases with $\lambda$, i.e., the area where the
surface is flat
becomes larger as $\lambda$ increases (for configurations of low energy). The main purpose of the
present paper is a rigorous investigation of the limit $\lambda\to \infty$ for
the variational problem \eqref{eq:29}.
\subsection{Statement of main result}
For any Borel set $U\subset\mathbb{R}^n$,
let
$\M(U)$ denote the set of signed Radon measures on $U$. We denote
by $\M(U;\mathbb{R}^p)$ the $\mathbb{R}^p$-valued Radon measures on $U$. Let $\mathbb{R}^{n\times n}_{\mathrm{sym}}$ denote the symmetric $n\times n$
matrices, and let
$\M(U;\mathbb{R}^{n\times n}_{\mathrm{sym}})$ denote the space of measures with values in the symmetric matrices, i.e., $\{\mu\in\M(U;\mathbb{R}^{n\times
n}):\mu_{ij}=\mu_{ji}\text{ for }i\neq j\}$.
For $\mu\in \M(U;\mathbb{R}^{p})$, let $|\mu|$ denote the total variation
measure.
For $\mu\in \M(U;\mathbb{R}^p)$, we have by the Radon-Nikodym differentiation theorem (see
Theorem \ref{thm:RN} below) that for $|\mu|$-almost every $x\in U$, the derivative
$\d\mu/\d|\mu|$ exists.
For any one-homogeneous function $h:\mathbb{R}^{p}\to\mathbb{R}$ and any $\mu\in
\M(U;\mathbb{R}^{p})$, we may hence define
\[
h(\mu)=h\left(\frac{\d\mu}{\d|\mu|}\right)\d|\mu|\,.
\]
This is a well defined Borel measure.
\medskip
For $\xi\in\mathbb{R}^{2\times 2}_{\mathrm{sym}}$, let $\tau_1(\xi), \tau_2(\xi)$ denote the eigenvalues
of $\xi$.
We set
\[
\rho^0(\xi):=\sum_{i=1}^2 |\tau_i(\xi)|\,.
\]
We will repeatedly use the following estimates:
\begin{equation}
|\xi|\leq \rho^0(\xi)\leq 2|\xi|\,.\label{eq:30}
\end{equation}
Note that $\rho^0:\mathbb{R}^{2\times 2}_{\mathrm{sym}}\to\mathbb{R}$ is sublinear and positively one-homogeneous.
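Both bounds in \eqref{eq:30} follow from elementary eigenvalue estimates: since
$|\xi|^2=\tau_1(\xi)^2+\tau_2(\xi)^2$ for $\xi\in\mathbb{R}^{2\times 2}_{\mathrm{sym}}$, we have
\[
|\xi|^2\leq \left(|\tau_1(\xi)|+|\tau_2(\xi)|\right)^2=\rho^0(\xi)^2\quad\text{ and }\quad
\rho^0(\xi)\leq \sqrt{2}\,|\xi|\leq 2|\xi|\,,
\]
the second inequality being Cauchy-Schwarz.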
\medskip
Let $\Omega\subset\mathbb{R}^2$ be an open bounded set
with smooth boundary,
and let $u\in BH(\Omega)$; that is, $BH(\Omega)$ is the space of $u\in
W^{1,1}(\Omega)$ such that $\nabla u\in BV(\Omega;\mathbb{R}^2)$, where $\nabla u$
denotes the approximate gradient of $u$.
We will use the usual notation for the $BV$ function $\nabla u$:
$J_{\nabla u}$ denotes the jump set of $\nabla u$. On $J_{\nabla u}$, there
exists a measurable function $\nu_{\nabla u}$ with values in $S^1$ such that $\nabla u$ has
well defined limits $\nabla u^\pm$ on both sides of the hyperplane defined by
$\nu_{\nabla u}$. The set $S_{\nabla u}$ is the singular set of $D\nabla u$, i.e., the set where
$D\nabla u$ is not absolutely continuous w.r.t. $\L^2$. (The operator ``$D$''
denotes the distributional derivative.) Furthermore, $C_{\nabla
u}:=S_{\nabla u}\setminus J_{\nabla u}$. We have the decomposition
\[
D\nabla u=\nabla^2 u\, \L^2+(\nabla u^+-\nabla u^-)\otimes\nu_{\nabla u}\, \H^1\ecke J_{\nabla u}+
D^s\nabla u \ecke C_{\nabla
u}\,.
\]
Let $C_0(\Omega)$ denote the completion of $C_c^0(\Omega)$ with respect to the $\sup$-norm. We say that a sequence $\mu_j\in\mathcal M(\Omega)$ converges weakly * in the sense of measures to $\mu\in \mathcal M(\Omega)$ if for any $f\in C_0(\Omega)$, we have
\[
\int_\Omega f\d\mu_j\to \int_{\Omega}f\d\mu\,.
\]
The convergence of vector-valued measures is defined analogously.
For a sequence $u_j\in BV(\Omega)$, we say that $u_j\to u$ weakly * in $BV$ if
$u_j\to u$ in $L^{1}$ and $Du_j\to Du$ weakly * in the sense of measures.
For $v\in\mathbb{R}^2$ and
$\xi\in\mathbb{R}^{2\times 2}$, we define
\[
\begin{split}
\mathbf{N}(v)&=\frac{1}{\sqrt{1+|v|^2}}\left(\begin{array}{c}v\\-1\end{array}\right)\,,\\
\mathbf{g}(v)&=\id_{2\times 2}+v\otimes v\,,\\
\boldsymbol{I\!I}(v,\xi)&=\frac{1}{\sqrt{1+|v|^2}}\xi\,,\\
S(v,\xi)&=\mathbf{g}(v)^{-1/2}\,\boldsymbol{I\!I}(v,\xi)\,\mathbf{g}(v)^{-1/2}\,.
\end{split}
\]
By this definition, $S(\nabla u(x),\nabla^2 u(x))\in\mathbb{R}^{2\times 2}_{\mathrm{sym}}$ is the second fundamental form (or
shape operator) of the graph of $u$ at the point $(x,u(x))$ in matrix form (supposing $u$ is
sufficiently smooth at $x$); its eigenvalues are the
principal curvatures of the graph.
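A minimal numerical sketch of this matrix formula, and of the elementary bound $|S(v,\xi)|^2\leq(1+|v|^2)^{-1}|\xi|^2$ proved in Lemma \ref{lem:basicSestimate} below, reads as follows; the function names are our own choice.
\begin{verbatim}
# Numerical sketch of S(v, xi) = g^{-1/2} II g^{-1/2}; names are ours.
import numpy as np

def shape_operator(v, xi):
    g = np.eye(2) + np.outer(v, v)           # metric g(v)
    w, Q = np.linalg.eigh(g)
    g_inv_sqrt = Q @ np.diag(w ** -0.5) @ Q.T
    II = xi / np.sqrt(1 + v @ v)             # second fundamental form II(v, xi)
    return g_inv_sqrt @ II @ g_inv_sqrt

rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.normal(size=2)
    xi = rng.normal(size=(2, 2)); xi = (xi + xi.T) / 2
    S = shape_operator(v, xi)
    # check |S|^2 <= |xi|^2 / (1 + |v|^2) (Lemma below)
    assert (S ** 2).sum() <= (xi ** 2).sum() / (1 + v @ v) + 1e-9
\end{verbatim}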
Let $F_\lambda:\mathbb{R}^{2\times 2}\to \mathbb{R}$ be defined by
\[
F_\lambda(\xi)=\begin{cases} 0 &\text{ if }\xi=0\\
|\xi|^2+\lambda &\text{ else.}\end{cases}
\]
We define $\mathcal F} \newcommand{\Zy}[1]{Z_{[#1]}_\lambda: W^{2,2}(\Omega)\to[0,+\infty)$ by
\begin{equation}
\mathcal F} \newcommand{\Zy}[1]{Z_{[#1]}_\lambda(u)=\lambda^{-1/2}\int_{\Omega}F_\lambda(S(\nabla u,\nabla^2
u))\sqrt{1+|\nabla u|^2}\d x\,.\label{eq:31}
\end{equation}
Note that up to a normalizing factor, for smooth functions $u$, the right hand side is precisely the functional introduced in the
previous subsection,
\[
\lambda^{1/2}\mathcal{F}_\lambda(u)=\int_{\mathrm{gr}(u)}\left(H^2-2 K\right)\d\H^2+\lambda
\H^2\left(\{x\in\mathrm{gr}(u):S_{\mathrm{gr}(u)}\neq 0\}\right)\,,
\]
where $\mathrm{gr}(u)$ denotes the graph of $u$.
For $u\in W^{2,2}(\Omega)$, the right hand side in \eqref{eq:31} is
finite, since the Willmore integrand
$\sqrt{1+|\nabla u|^2}|S|^2$ is bounded from above by $|\nabla^2 u|^2$ (see
Lemma \ref{lem:basicSestimate} below).
Let $\arccos:[-1,1]\to[0,\pi]$ be the inverse function of
$\cos:[0,\pi]\to[-1,1]$, and for $v=(v_1,v_2)^T\in \mathbb{R}} \newcommand{\Z}{\mathbb{Z}^2$, let $v^\bot=(-v_2,v_1)^T$. We define $\mathcal F} \newcommand{\Zy}[1]{Z_{[#1]}: BH(\Omega)\to[0,+\infty)$ by
\[
\begin{split}
\mathcal{F}(u)&=2\int_\Omega \rho^0(S(\nabla u,\nabla^2 u))\sqrt{1+|\nabla u|^2}\d x\\
&\quad+2\int_{C_{\nabla u}} \rho^0\left(S\left(\nabla
u,\frac{\d D\nabla u}{\d |D\nabla u|}\right)\right)\sqrt{1+|\nabla u|^2}\d
|D\nabla u|\ecke C_{\nabla u}\\
&\quad+2\int_{J_{\nabla u}} \arccos\left(\mathbf{N}(\nabla u^+)\cdot \mathbf{N}(\nabla
u^-)\right)\sqrt{1+|\nu_{\nabla u}^\bot\cdot\nabla u|^2}\d\H^1\,.
\end{split}
\]
Again, the right hand side always exists and is finite, since $\rho^0(S(v,\xi))\sqrt{1+|v|^2}\leq
2|\xi|$, and hence the three integrands can be estimated by the Lebesgue regular, Cantor
and jump parts of the measure $2\rho^0(D\nabla u)$ respectively. Geometrically, the jump
term integrates the dihedral angle $\arccos\left(\mathbf{N}(\nabla u^+)\cdot \mathbf{N}(\nabla u^-)\right)$
between the tangent planes on the two sides of the lifted jump curve against the length
element $\sqrt{1+|\nu_{\nabla u}^\bot\cdot\nabla u|^2}\,\d\H^1$ of that curve on the graph.
Finally, let us write $\mathcal{A}=BH(\Omega)\cap W^{1,\infty}(\Omega)$.
\medskip
Our main result is the following theorem, which establishes the
$\Gamma$-convergence $\mathcal{F}_\lambda\to \mathcal{F}$ in the weak-* topology of $BH(\Omega)$.
\begin{theorem}
\label{thm:main}
\begin{itemize}
\item[(i)] Let $u_\lambda$ be a sequence in $W^{2,2}(\Omega)$ with
$\limsup_{\lambda\to\infty}\mathcal{F}_\lambda(u_\lambda)<\infty$, $\int_\Omega
u_\lambda\d x=0$ and $\|\nabla u_\lambda\|_{L^\infty}<C$. Then there exists a
subsequence (no relabeling) and $u\in\mathcal{A}$ such that
\begin{equation}
u_\lambda\to u \text{ in }W^{1,1}(\Omega), \quad\nabla u_\lambda\to \nabla
u\text{ weakly * in }BV(\Omega;\mathbb{R}} \newcommand{\Z}{\mathbb{Z}^2)\,.\label{eq:22}
\end{equation}
\item[(ii)]
Let $u_\lambda$, $u$ be as in \eqref{eq:22}. Then we have
\[
\liminf_{\lambda\to\infty} \mathcal{F}_{\lambda}(u_{\lambda})\geq \mathcal{F}(u)\,.
\]
\item[(iii)]
Let $u\in \mathcal{A}$. Then there exists a sequence $u_\lambda$ such that \eqref{eq:22}
is fulfilled and
\[
\limsup_{\lambda\to\infty}\mathcal{F}_\lambda(u_\lambda)\leq \mathcal{F}(u)\,.
\]
\end{itemize}
\end{theorem}
\begin{remark}
\begin{itemize}
\item[(i)] For $u\in C^2(\Omega)$, the limit functional $\mathcal{F}$ can be written as
\begin{equation}
\label{eq:41}
\mathcal{F}(u)=\int_{\mathrm{gr}(u)}2\rho^0(S_{\mathrm{gr}(u)})\d\H^2\,,
\end{equation}
where $\mathrm{gr}(u)$ denotes the graph of $u$.
The formula for $\mathcal{F}$ from the statement of the theorem is a generalization
to surfaces whose second fundamental form is a measure. We note that graphs of
functions in $BH(\Omega)$ do not belong to the class of curvature varifolds as
defined in \cite{hutchinson1986second,mantegazza1998curvature}. The latter do
not allow for a Cantor part in the curvature measure.
\item[(ii)] For the ``geometrically linearized'' functionals
\[
\mathcal{G}_\lambda(u)=\lambda^{-1/2}\int_\Omega F_\lambda(\nabla^2 u)\d
x
\]
we have shown in \cite{olbermann2017michell} that the limit functional (again in
the sense of $\Gamma$-convergence) is given by
$\mathcal{G}(u)=2\int_\Omega\d\left(\rho^0(D\nabla u)\right)$. Here we merely replace the
second derivative $\nabla^2 u$ by the second fundamental form $S(\nabla
u,\nabla^2 u)$. However, the presence of lower order terms makes the analysis
more difficult for several reasons. There exist a few different techniques for the
proof of lower semicontinuity of integral functionals that depend on lower order
terms starting from the results without those terms, see
\cite{marcellini1985approximation,acerbi1984semicontinuity,fonseca1998analysis}. These
techniques do not work here since we consider the convergence $\nabla
u_\lambda\to \nabla u$ weakly * in $BV$ (and not in $W^{1,p}$ with $p>1$ as in
the quoted references). The lower semicontinuity in $BV$ for integral functionals that
depend on lower order terms has been treated in \cite{MR1218685}. Their
technique cannot be applied in a straightforward way here either, the reason
being that for fixed $\lambda$ the integrands of our functionals have 2-growth
at infinity. Our technique will be a modification of the one from
\cite{MR1218685}, choosing a cutoff that despite the 2-growth does not increase
the energy by too much.
Carrying on with the comparison of our result with the one in \cite{MR1218685},
we would like to point out that we are able to determine the form of the $\Gamma$-limit
on the jump part explicitly, which is not possible in the general situation
treated in \cite{MR1218685}. This requires the solution of a certain variational
problem that we obtain through some geometric considerations (see Section
\ref{sec:geometry}).
Concerning the upper bound, this is more difficult here than in
\cite{olbermann2017michell} again because of the presence of lower order
terms. In that reference, the upper bound follows directly from well known
properties of approximations of $BV$ functions by mollification. Here, we need
to keep track of the behavior of the lower order terms in this approximation
process, for which we need to use some results on the fine properties of $BV$ functions.
\item[(iii)]
The requirement $\|\nabla u_\lambda\|_{L^\infty}<C$ in the
compactness part of the theorem (statement (i)) may seem unnatural. Without such an
assumption however, we are not able to obtain control of the $BH$-norm from
the energy alone. This can be seen by considering graphs of functions with
almost vertical parts. The energy of these almost vertical parts can be made
arbitrarily small. In this way, we might obtain functions of arbitrarily
large $L^1$ norm with bounded energy. This can be considered as an artefact of
the restriction to graphs, and shows that a geometric description would be
more appropriate. This will be the topic of future work.
The requirement $\int_\Omega u_\lambda \d x=0$ is included in the statement (i) to enforce the
convergence $u_\lambda\to u$ in $L^1$. Without such an assumption, we would
still have the convergence $\nabla u_\lambda\to \nabla u$ weakly * in $BV$ (for
a subsequence).
\end{itemize}
\end{remark}
\subsection{Scientific context}
Vesicles of polyhedral shape play an important role in biology. Examples are
virus capsids \cite{caspar1962physical,lidmar2003virus}, carboxysomes \cite{yeates2008protein}, catanionic vesicles, and assembled
supramolecular structures \cite{macgillivray1999structural}.
In \cite{PNAS}, a model
for the formation of polyhedral structures
based on minimization of the free elastic energy of topologically spherical
shells has been suggested. In the model, the free energy
is a function of the deformation of the shell, and the material distribution of
the two elastic components that the shell is made of.
Elastic inhomogeneities are known to exist in many virus
capsids and for carboxysomes; in both of these cases, the vesicle shell is made
up of different protein types. In \cite{PhysRevE.85.050501},
it has been suggested that the inhomogeneities can act as the
driving force for faceting.
In this reference, it is assumed that the vesicle wall consists of two
components, with different elastic properties (``soft'' and ``hard''), and the amount of soft and hard material available for the formation of the vesicle is fixed.
The variational problems \eqref{eq:28} and \eqref{eq:29},
interpreted as minimization problems for the free elastic energy, are models for
such two-phase vesicles. Following this interpretation, we investigate here the
limit in which the contrast between soft and hard phase is large (the hard
phase does not bend at all), and there is
a very small amount of soft material.
\subsection{Comparison to the analogous one-dimensional problem}
Consider the following variational problem, which is a lower dimensional analogue for problem
\eqref{eq:29}, with the topology fixed to be that of a
sphere instead of a graph:
\[
\begin{split}
\inf \left\{\int_{M} \kappa^2 \d s+\lambda\H^1(\{x\in M:\kappa\neq 0\}):\text{ $M$ homeomorphic to $S^1$},\,
\H^1(M)=2\pi\right\}
\end{split}
\]
Pulling the penalization term into the integral, we obtain
\[
\begin{split}
\inf \left\{\int_{M} \tilde f_\lambda(\kappa) \d s :\text{ $M$ homeomorphic to $S^1$},\, \H^1(M)=2\pi\right\}\,,
\end{split}
\]
with
\[
\tilde f_\lambda(\kappa)=\begin{cases}\lambda+\kappa^2&\text{ if } \kappa\neq 0\\
0&\text{ else.}\end{cases}
\]
It is well known that such a problem requires relaxation to guarantee the
existence of minimizers. The relaxed problem is obtained by replacing the integrand
with its convex lower semicontinuous envelope,
\[
\tilde f_\lambda^{**}(\kappa)=\begin{cases}2\sqrt{\lambda}|\kappa|&\text{ if }|\kappa|\leq\sqrt{\lambda}\\
\lambda+\kappa^2&\text{ else.}
\end{cases}
\]
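The formula for $\tilde f_\lambda^{**}$ can be verified by a tangent-line construction:
a line $\kappa\mapsto c|\kappa|$ through the origin is tangent to the parabola
$\kappa\mapsto \lambda+\kappa^2$ at $\pm\kappa_0$ precisely when $c=2|\kappa_0|$ and
$c|\kappa_0|=\lambda+\kappa_0^2$, i.e., when $\kappa_0^2=\lambda$ and $c=2\sqrt{\lambda}$;
the convex envelope follows this tangent line for $|\kappa|\leq\sqrt{\lambda}$ and the
parabola beyond.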
We see immediately that the integrands $\lambda^{-1/2}\tilde f_\lambda^{**}$
are monotone decreasing, and converge to the function $\kappa\mapsto
2|\kappa|$. From this convergence, one deduces without difficulty the
$\Gamma$-convergence of the respective integral functionals, with respect to weak *
convergence of the curvatures. The limit
functional $\mathcal{F}^{(1)}:M\mapsto 2\int_M|\kappa|\d s$ is also defined for one-spheres whose
curvature is only a measure. Note that there is a large set of minimizers for
$\mathcal{F}^{(1)}$: Any one-sphere with non-negative curvature will be a
minimizer.
\medskip
The situation in dimension two is completely different: From Theorem
\ref{thm:main}, it is natural to conjecture that one may define a limit
functional in the sense of $\Gamma$ convergence that for smooth surfaces is
given by \eqref{eq:41}.
For surfaces
of convex bodies, this
functional is the same as the
total mean curvature. For sufficiently smooth surfaces, it is
known that the only minimizer of this functional within the class of topological
two-spheres is the round sphere, see
\cite{minkowski1989volumen,bonnesen1926quelques}.
\subsection{Some notation, plan of the paper}
The
symbol ``$C$'' is used as follows: A statement such as ``$f\leq Cg$'' is shorthand for
``there exists a constant $C>0$ such that $f\leq
Cg$''. The value of $C$ may change within the same line.
For $f\leq Cg$, we also write $f\lesssim g$.
For $\xi\in \mathbb{R}^{2\times 2}_{\mathrm{sym}}$,
let $\tau_i(\xi)$, $i=1,2$ denote the eigenvalues of $\xi$. We
denote the operator norm of $\xi$ by
\[
|\xi|_{\infty}=\max(|\tau_1(\xi)|,|\tau_2(\xi)|)\,.
\]
The two-dimensional Lebesgue measure is denoted by $\L^2$, the $d$-dimensional
Hausdorff measure by $\H^d$. For $x=(x_1,x_2)^T\in\mathbb{R}^2$, we write
$x^\bot=(-x_2,x_1)^T$.
The following objects will be defined in $\mathbb{R}^n$, but most often we are going to consider the case $n=2$. It will be obvious from the context when this special choice is made and we will not mention it explicitly. (The symbol $\Omega$ will always denote a two-dimensional domain.) Let $Q=[-1/2,1/2]^n$ and for $\nu\in
S^{n-1}=\{x\in\mathbb{R}^n:|x|=1\}$, let $Q_\nu$ be a closed cube of sidelength one in $\mathbb{R}^n$, centered in the origin, with one of its
sides parallel to $\nu$. (This defines $Q_\nu$ uniquely for $n=2$.) For a set $K\subset
\mathbb{R}^n$, $x_0\in\mathbb{R}^n$ and $\rho>0$, we write $K(x_0,\rho)= x_0+\rho K$.
By $O(t)$, we denote terms $f(t)$ that satisfy $\limsup_{t\to \infty} t^{-1}|f(t)|<\infty$. We fix a radially symmetric function $\eta\in C^\infty_c(\mathbb{R}^n)$ such that $\supp \eta\subset B(0,1)$ and $\int_{\mathbb{R}^n}\eta\d x=1$, and define $\eta_\varepsilon:=\varepsilon^{-n}\eta(\cdot/\varepsilon)$.
\bigskip
The plan of the paper is as follows: In Section \ref{sec:preliminaries}, we will collect a number of theorems from the literature that we will apply later on. In Section \ref{sec:some-auxil-lemm}, we prove some auxiliary lemmas, that will be used in Section \ref{sec:proof-main-theorem}, which contains the proof of the main theorem. In an attempt to increase readability, we have separated the part of the proof concerning the upper bound for points in the jump part from the rest into a section on its own, Section \ref{sec:proof-jump}.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Measures and BV functions}
Let $U\subset \mathbb{R}^n$ be open.
\begin{theorem}[Proposition 2.2 in \cite{ambrosio1992relaxation}]
\label{thm:RN}
Let $\lambda, \mu$ be Radon measures in $U$ with $\mu\geq 0$. Then there
exists a Borel set $E\subset U$ with $\mu(E)=0$ such that for any $x_0\in \supp \mu\setminus E$ we have
\[
\lim_{\rho\downarrow 0} \frac{\lambda(x_0+\rho K)}{\mu(x_0+\rho
K)}=\frac{\d\lambda}{\d\mu}(x_0)
\]
for any bounded convex set $K$ containing the origin. Here, the set $E$ is
independent of $K$.
\end{theorem}
\begin{theorem}[Theorem 2.3 in \cite{ambrosio1992relaxation}]
\label{thm:BVblow}
Let $u\in BV(U;\mathbb{R}^m)$, let $K$ be a bounded convex open set containing the
origin, and let $\xi$ be the density of $Du$ with respect to $|Du|$,
$\xi=\frac{\d(Du)}{\d(|Du|)}$.
For $x_0\in \supp (|Du|)$, assume that $\xi(x_0)=\eta\otimes \nu$ with $\eta\in\mathbb{R}^m$,
$\nu\in\mathbb{R}^n$, $|\eta|=|\nu|=1$, and for
$\rho>0$ let
\[
v^{(\rho)}(y)=\frac{\rho^{n-1}}{|Du|(x_0+\rho K)}\left(u(x_0+\rho
y)-\fint_{x_0+\rho K}u(x')\d x'\right)\,.
\]
Then for every $\sigma\in (0,1)$ there exists a sequence $\rho_j$ converging to
0 such that $v^{(\rho_j)}$ converges in $L^1(K;\mathbb{R}^m)$ to a function $v\in
BV(K;\mathbb{R}^m)$ which satisfies $|Dv|(\sigma \overline{K})\geq \sigma^n$ and can be
represented as
\[
v(y)=\psi(y\cdot \nu)\eta
\]
for a suitable non-decreasing function $\psi:(a,b)\to \mathbb{R}$, where $a=\inf\{y\cdot
\nu:y\in K\}$ and $b=\sup\{y\cdot
\nu:y\in K\}$.
\end{theorem}
\medskip
When considering the blow-up of measures, the following special case of Theorem
0.1 in \cite{delladio1991lower} will be useful:
\begin{theorem}
\label{thm:delladio}
Let $\{\mu_j\}_j,\mu\in \mathcal M(U;\mathbb{R}^p)$, such that
\[
\mu_j\to \mu \quad \text{ and }\quad
|\mu_j|\to|\mu|\quad\text{ weakly * in the sense of measures.}
\]
Furthermore, let $h:\mathbb{R}^p\to\mathbb{R}$ be positively one-homogeneous. Then
\[
h(\mu_j)\to h(\mu) \quad\text{ weakly * in the sense of measures.}
\]
\end{theorem}
We recall that $BH(U)$ denotes the set of functions $u\in W^{1,1}(U)$ such that
$\nabla u\in BV(U;\mathbb{R}^n)$.
The set
$BH(U)$
can be made into a normed space by setting
\[
\|u\|_{BH(U)}=\|u\|_{W^{1,1}(U)}+|D\nabla u|(U)\,.
\]
We say that a sequence $u_j\in BH(U)$ converges weakly * to $u\in
BH(U)$ if $u_j\to u$ in $W^{1,1}(U)$ and $D\nabla u_j\to D\nabla u$ weakly * in
$\M(U;\mathbb{R}^{n\times n})$.
\begin{theorem}[\cite{demengel1989compactness}]
\label{thm:BHcompact}
Let $u_j$ be a bounded
sequence in $BH(U)$. Then there exists a subsequence (no relabeling) and
$u\in BH(U)$ such that
\[
u_j\to u\quad\text{ weakly * in }BH(U)\,.
\]
\end{theorem}
Now let us assume that $U$ has smooth boundary. The trace operator
\[
\gamma_0:u\mapsto u|_{\partial U }
\]
is linear and surjective as a map $W^{1,1}( U )\to L^1(\partial U )$ and also as a map $BV( U )\to
L^1(\partial U )$. For the spaces
$W^{2,1}( U )$ and $BH( U )$, we may also consider the operator
\[
\gamma_1:u\mapsto \nabla u|_{\partial U }\cdot n\,,
\]
where $n$ denotes the unit outer normal of $\partial U $.
The following theorem combines statements from Chapter 2 and the appendix of \cite{demengel1984fonctions}.
\begin{theorem}
\label{thm:traceop}
\begin{itemize}
\item[(i)] The operator
$(\gamma_0,\gamma_1)$ is linear and surjective as a map
\[
BH( U )\to \gamma_0(W^{2,1}( U ))\times L^1(\partial U )\,.
\]
\item[(ii)] There exists a continuous right inverse
\[
\gamma_0(W^{2,1}( U ))\times L^1(\partial U )\to W^{2,1}( U )\,.
\]
\end{itemize}
\end{theorem}
For more on the space $BH$, see e.g.~\cite{savare1998superposition,fonseca2003lower}.
\subsection{Relaxation of integral functionals that depend on higher
derivatives}
A
function $f:\mathbb{R}^{m\times n^k}\to \mathbb{R}$ is called $k$-quasiconvex if
\begin{equation}
f(\xi)=\inf\left\{\int_{[-1/2,1/2]^n}f(\xi+\nabla^k \varphi)\d x:\varphi\in
W^{k,\infty}_0([-1/2,1/2]^n;\mathbb{R}^m)\right\}\,,\label{eq:23}
\end{equation}
see \cite{MR0188838}.
\medskip
The so-called $k$-quasiconvexification of $f:\mathbb{R}^{m\times n^k}\to \mathbb{R}$ is given by
the right hand side above,
\[
Q_kf(\xi)=\inf\left\{ \int_{[-1/2,1/2]^n}
f(\xi+\nabla^k\varphi)\d x:\,\varphi\in W^{k,\infty}_0([-1/2,1/2]^n;\mathbb{R}^m)\right\}\,.
\]
In the case $k=1$, one obtains the relaxation of integral
functionals $u\mapsto\int f(\nabla u)\d x$ by replacing $f$ by its
quasiconvex envelope $Q_1f$.
\subsection{Blow-up method}
The main tool in our proof will be the so-called blow-up method. In the context
of lower semicontinuity of integral functionals in $BV$, this has been developed
by Fonseca and M\"uller.
\begin{theorem}[Theorem 2.19 in \cite{MR1218685}]
\label{thm:MFsingularpart} Let $f:\mathbb{R}^m\times \mathbb{R}^{m\times n}\to \mathbb{R}$ be
quasiconvex and positively one-homogeneous\footnote{When comparing our
statement of the theorem with the one in \cite{MR1218685}, note that the assumption that $f$ is positively one-homogeneous implies that the
recession function for $f$ is identical to $f$.} in the second argument. Assume that
$v_j\to v$ weakly * in $BV(U;\mathbb{R}^m)$ and $f(v_j,\nabla
v_j)\L^n\to \mu$ weakly * in the sense of measures, and that
$\zeta_2,\zeta_3$ are defined as the Radon-Nikodym derivatives
\[
\zeta_2=\frac{\d\mu}{\d(|D^sv|\ecke
C_v)},\quad \zeta_3=\frac{\d \mu}{\d (\H^{n-1}\ecke J_v)}\,.
\]
Then
\[
\begin{split}
\zeta_2(x_0)&\geq f\left(v(x_0),\frac{\d Dv}{\d |Dv|}(x_0)\right)
\quad\text{ for
}|D^sv|\ecke C_{v}\text{ a.e. }x_0\in U\\
\zeta_3(x_0)&\geq K_f(v^+(x_0),v^-(x_0),\nu_v(x_0)) \quad\text{ for }|D^sv|\ecke
J_{v}\text{ a.e. }x_0\in U\,,
\end{split}
\]
where
\[
\begin{split}
K_f(a,b,\nu)=\inf\Big\{&\int_{Q_\nu}f(w,\nabla w)\d x: w\in W^{1,1}(Q_\nu), \\
& w(x)=a \text{ for } x\cdot \nu=+1/2, \, w(x)=b \text{ for } x\cdot
\nu=-1/2\Big\}\,.
\end{split}
\]
\end{theorem}
\section{Some auxiliary lemmas}
\label{sec:some-auxil-lemm}
\subsection{Relaxation and quasiconvexification}
We consider the following integrands, defined for $v\in \mathbb{R}^2$, $\xi\in\mathbb{R}^{2\times 2}_{\mathrm{sym}}$:
\[
f_\lambda(v,\xi)=\lambda^{-1/2} F_\lambda(S(v,\xi))\sqrt{1+|v|^2}\,.
\]
This choice implies $\mathcal{F}_\lambda(u)=\int_\Omega f_\lambda(\nabla u,\nabla^2 u)\d x$.
\medskip
In order to find the lower semicontinuous envelope of $\mathcal{F}_\lambda$, we will need to
determine the 2-quasiconvexification of $f_\lambda$. In
principle this is contained in \cite{MR820342,allaire1993optimal}, and the
appendix of \cite{olbermann2017michell} contains a detailed proof of the case
$v=0$. Hence we only point out the modifications that are
necessary with respect to the latter; these changes can be found in the appendix to the
present paper.
\begin{proposition}
\label{prop:Q2F}
Let $v\in \mathbb{R}^{2}$. The 2-quasiconvexification of $f_\lambda(v,\cdot)=\lambda^{-1/2}F_\lambda(S(v,\cdot))\sqrt{1+|v|^2}$ is given by
\begin{equation}
Q_2f_\lambda(v,\xi)=\sqrt{1+|v|^2}\begin{cases}2 \rho^0(S(v,\xi))-2\frac{|\det S(v,\xi)|}{\sqrt{\lambda}} & \text{ if }\rho^0(S(v,\xi))\leq
\sqrt{\lambda}\\
\frac{|S(v,\xi)|^2}{\sqrt{\lambda}}+\sqrt{\lambda} & \text{ else.}\end{cases}\label{eq:1}
\end{equation}
\end{proposition}
In the sequel we use the notation
\begin{equation}
\begin{split}
g_\lambda(\xi)&=\begin{cases} 2\left(\rho^0(\xi)-\frac{|\det \xi|}{\sqrt{\lambda}}\right)&\text{ if
}\rho^0(\xi)\leq \sqrt{\lambda}\\
\frac{|\xi|^2}{\sqrt{\lambda}}+\sqrt{\lambda}& \text{ else.}\end{cases}\\
h_\lambda(v,\xi)&=Q_2 f_\lambda(v,\xi)
=g_\lambda(S(v,\xi))\sqrt{1+|v|^2}\,.
\end{split}\label{eq:55}
\end{equation}
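The following numerical sketch of $g_\lambda$ (with function names of our own choosing) can be used to check Lemma \ref{lem:hladd}(iii) below, $g_\lambda(\xi)\geq 2|\xi|_\infty$, as well as the pointwise convergence $g_\lambda\to 2\rho^0$ as $\lambda\to\infty$.
\begin{verbatim}
# Minimal sketch of rho^0 and g_lambda from the display above; names are ours.
import numpy as np

def rho0(xi):
    return np.abs(np.linalg.eigvalsh(xi)).sum()   # |tau_1| + |tau_2|

def g_lam(xi, lam):
    r, s = rho0(xi), np.sqrt(lam)
    if r <= s:
        return 2.0 * (r - abs(np.linalg.det(xi)) / s)
    return (xi ** 2).sum() / s + s

rng = np.random.default_rng(0)
for _ in range(1000):
    xi = rng.normal(size=(2, 2)); xi = (xi + xi.T) / 2
    lam = rng.uniform(0.1, 100.0)
    op_norm = np.abs(np.linalg.eigvalsh(xi)).max()  # operator norm |xi|_inf
    assert g_lam(xi, lam) >= 2 * op_norm - 1e-9     # Lemma (iii)
    assert g_lam(xi, 1e12) <= 2 * rho0(xi) + 1e-4   # g_lambda -> 2 rho^0
\end{verbatim}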
\subsection{Properties of $h_\lambda$}
The following straightforward estimate will be used repeatedly:
\begin{lemma}
\label{lem:basicSestimate}
Let $v\in \mathbb{R}^2,\xi\in \mathbb{R}^{2\times 2}$. Then
\[
|S(v,\xi)|^2\leq (1+|v|^2)^{-1}|\xi|^2\,.
\]
\end{lemma}
\begin{proof}
This follows easily from the observation that $\mathbf{g}(v)^{-1}$ is a symmetric matrix
with eigenvalues $1$ and $(1+|v|^2)^{-1}$.
\end{proof}
\medskip
In the following lemma, we collect some properties of $g_\lambda$.
\begin{lemma}
\label{lem:hladd}
\begin{itemize}
\item[(i)] Let $M>1$. There exists a constant $C=C(M)$ such that whenever
$A,B\in \mathbb{R}^{2\times 2}_{\mathrm{sym}}$ with $|A|\leq M|B|$, we have
\[
g_\lambda(A)\leq C\, g_\lambda(B)\,.
\]
\item[(ii)] For $A,B\in \mathbb{R}^{2\times 2}_{\mathrm{sym}}$, we have
\[
|g_\lambda(A)-g_\lambda(B)|\leq C|A-B|\left(1+\frac{|A|+|B|}{\sqrt{\lambda}}\right)\,.
\]
\item[(iii)] For every $\lambda>0$, we have
\[
g_\lambda(\xi)\geq 2 |\xi|_{\infty}\,.
\]
\end{itemize}
\end{lemma}
\begin{proof}
We prove (i) by case distinction:
If $\sqrt{\lambda}\geq \rho^0(A)$, then we have
\[
g_\lambda (A)\leq
2\rho^0(A)\leq 4|A|\leq 4M|B|\leq 4M g_\lambda(B)\,.
\]
If $\rho^0(B)\leq \sqrt{\lambda}\leq \rho^0(A)$, then we have
\[
g_\lambda(A)= \sqrt{\lambda} +\frac{|A|^2}{\sqrt{\lambda}}\leq
2|A|+\frac{|A|^2}{M^{-1}|A|} \leq 3M^{2} |B|\leq 3M^2 g_\lambda(B)\,.
\]
If $\sqrt{\lambda}\leq \min(\rho^0(A),\rho^0(B))$, then
\[
g_\lambda(A)=\sqrt{\lambda} +\frac{|A|^2}{\sqrt{\lambda}}
\leq M^2 g_\lambda(B).
\]
This completes the proof of (i).
\medskip
To prove (ii) it suffices to observe that $g_\lambda$ is piecewise
differentiable. A direct computation yields
\[
|\nabla g_\lambda(A)|\leq C\left(1+\frac{|A|}{\sqrt{\lambda}}\right)
\]
almost everywhere, which immediately implies (ii).
\medskip
Finally we prove (iii). For $\xi=0$, the inequality is trivial. So let $\xi\neq
0$, and denote the eigenvalues of $\xi$ by $\tau_1,\tau_2$. For $\rho^0(\xi)\leq\sqrt{\lambda}$, we have
\[
\begin{split}
g_\lambda(\xi)&=2\left(\rho^0(\xi)-\frac{|\det \xi|}{\sqrt{\lambda}}\right)\\
&\geq 2\left(\rho^0(\xi)-\frac{|\det \xi|}{|\xi|_\infty}\right)\\
&\geq 2 \left(\rho^0(\xi)-\min(|\tau_1|,|\tau_2|)\right)\\
&= 2 |\xi|_\infty\,.
\end{split}
\]
For $\rho^0(\xi)\geq\sqrt{\lambda}$, we have by the arithmetic-geometric mean inequality,
\[
g_\lambda(\xi)=\sqrt{\lambda}+\frac{|\xi|^2}{\sqrt{\lambda}}
\geq 2\frac{|\xi|\sqrt{\lambda}}{\sqrt{\lambda}}
\geq 2|\xi|_\infty\,.
\]
This proves the lemma.
\end{proof}
For the following lemma, we introduce the pointwise limit of $h_\lambda$ as $\lambda\to\infty$,
\[
G(v,\xi)=2 \rho^0(S(v,\xi))\sqrt{1+|v|^2}\,.
\]
\begin{lemma}
\label{lem:hbounds}
We have that
\[
\begin{split}
\left|h_\lambda(v,\xi)-h_\lambda(\tilde v,\xi)\right|&\leq C|v-\tilde v|
\max
\left(h_\lambda(v,\xi),h_\lambda(\tilde v,\xi)\right) \\
\left|f_\lambda(v,\xi)-f_\lambda(\tilde v,\xi)\right|&\leq C|v-\tilde v|
\max \left(f_\lambda(v,\xi),f_\lambda(\tilde v,\xi)\right)\\
\left|G(v,\xi)-G(\tilde v,\xi)\right|&\leq C|v-\tilde v|
\max \left(G(v,\xi),G(\tilde v,\xi)\right)
\end{split}
\]
for all $v,\tilde v\in \mathbb{R}^2$, $\xi\in \mathbb{R}^{2\times 2}_{\mathrm{sym}}$, where the constants $C$ do not
depend on $\lambda$.
\end{lemma}
\begin{proof}
We recall that $S(v,\xi)$ is given explicitly by
\[
S(v,\xi)=(1+|v|^2)^{-1/2} \left(\id+v\otimes v\right)^{-1/2}\xi \left(\id+v\otimes
v\right)^{-1/2}\,.
\]
We claim that
\begin{equation}
\label{eq:21}
\left|\nabla_v S(v,\xi)\right|\lesssim \frac{|S(v,\xi)|}{\sqrt{1+|v|^2}}\,.
\end{equation}
Indeed,
noting that
\[
\left(\id+v\otimes v\right)^{-1/2}= \frac{1}{\sqrt{1+|v|^2}}\frac{v\otimes
v}{|v|^2}+\frac{v^\bot\otimes v^\bot}{|v|^2}\,,
\]
this follows from a direct calculation, which we omit here.
Now we may estimate the partial derivative of $h_\lambda(v,\xi)$ using the chain
rule and Lemma \ref{lem:hladd} (ii),
\[
\begin{split}
\left|\nabla_v h_\lambda(v,\xi)\right|&= \left|g_\lambda(S(v,\xi))\nabla_v
\sqrt{1+|v|^2}
+ \sqrt{1+|v|^2} \nabla g_\lambda(S(v,\xi))\nabla_v S(v,\xi)\right|\\
&\lesssim
g_\lambda(S(v,\xi))+\sqrt{1+|v|^2}\left(1+\frac{|S(v,\xi)|}{\sqrt{\lambda}}\right)\frac{|S(v,\xi)|}{\sqrt{1+|v|^2}}\\
&\lesssim g_\lambda(S(v,\xi))\\
&\leq h_\lambda(v,\xi)\,.
\end{split}
\]
The analogous claim for $f_\lambda$ is trivial for $\xi=0$, and follows from
\eqref{eq:21} and the chain rule for $\xi\neq 0$. The inequality for $G$ is obtained from the one for $h_\lambda$ by taking the limit $\lambda\to \infty$.
\end{proof}
The following lemma will provide the proof of the lower bound once the
additional complication of the lower
order terms has been treated.
\begin{lemma}
\label{lem:qcL1conv}
Let $\Omega\subset \mathbb{R}^2$ be open and bounded, $v_0\in\mathbb{R}^2$, $\xi_0\in \mathbb{R}^{2\times 2}$,
$w_\lambda\to 0$ in $L^1(\Omega)$ as $\lambda\to \infty$, and $\|\nabla w_\lambda\|_{L^1}<C$. Then
\[
\liminf_{\lambda\to\infty}\int_\Omega h_\lambda(v_0,\xi_0+\nabla w_\lambda)\d x\geq
2\L^2(\Omega)\rho^0(S(v_0,\xi_0))\sqrt{1+|v_0|^2}\,.
\]
\end{lemma}
\begin{proof}
Up to details, the proof is identical to the proof of Lemma 6.2 (i)
in \cite{olbermann2017michell}. There it is proved that
\[
\liminf_{\lambda\to\infty}\int_\Omega g_\lambda(\xi_0+\nabla w_\lambda)\d x\geq
2\L^2(\Omega)\rho^0(\xi_0)\,.
\]
In that proof, one only needs to replace $g_\lambda$ with
$g_\lambda(S(v_0,\cdot))$. Apart from the additional dependence of some of the
constants ``$C$'' on $v_0$ that appear in the proof, all arguments go through unchanged.
\end{proof}
\subsection{Blow-up of higher order gradients}
Theorem \ref{thm:MFsingularpart} describes the behavior of integrands depending on gradients under
the blow-up procedure. This will not be quite enough for our purposes: For the jump part, our
proof will take advantage of the fact that we consider the second fundamental form of
the graph, which in turn means that we need to consider integrands that depend on first and
second derivatives.
\begin{lemma}
\label{lem:MFvariant}
Let $\Omega\subset\mathbb{R}^2$ be open and bounded.
Assume that $f:\mathbb{R}^2\times \mathbb{R}^{2\times 2}\to \mathbb{R}$ fulfills the following
properties:
\begin{itemize}
\item[(i)] $f$ is quasiconvex and positively one-homogeneous in the second
argument with $f(v,\xi)\leq C|\xi|$
\item[(ii)] The functional $u\mapsto
\int_{\Omega} f(\nabla u,\nabla^2 u)\d x$ is continuous in $W^{2,2}(\Omega)$
\end{itemize}
Furthermore assume that $u_\lambda$ is a sequence in $W^{2,2}(\Omega)$,
$u_\lambda\to u$ weakly * in $BH(\Omega)$, $f(\nabla u_\lambda,\nabla^2
u_\lambda)\L^2\to \mu$ weakly * in the sense of measures, and that
$\zeta_3$ is defined as the Radon-Nikodym derivative
\[
\zeta_3=\frac{\d \mu}{\d (\H^1\ecke J_{\nabla u})}\,.
\]
Then
\[
\begin{split}
\zeta_3(x_0)&\geq \tilde K_f(\nabla u^+(x_0),\nabla u^-(x_0),\nu_{\nabla u}(x_0))
\quad\text{ for }|D^s\nabla u|\ecke
J_{\nabla u}\text{ a.e. }x_0\in \Omega \,,
\end{split}
\]
where
\[
\begin{split}
\tilde K_f(a,b,\nu)=\inf\left\{\int_{Q_\nu}f(\nabla w,\nabla^2 w)\d x: w\in
\mathcal A_{a,b,\nu}\right\}
\end{split}
\]
and
\[
\begin{split}
\mathcal A_{a,b,\nu}&=\Bigg\{ w\in
C^\infty(\overline{Q_\nu}):\\
&\qquad w(x)=a\cdot x \text{ in some neighborhood of
}\left\{x\in \partial Q_\nu:x\cdot\nu=\frac12\right\}\,,\\
&\qquad w(x)=b\cdot x \text{ in some neighborhood of
}\left\{x\in \partial Q_\nu:x\cdot\nu=-\frac12\right\}\,,\\
&\qquad \nabla^k w(x+\nu^\bot)=\nabla^k w(x) \text{ for } x\cdot
\nu^\bot=-\frac12 \text{ and } k=1,2,\dots\Bigg\}\,.
\end{split}
\]
\end{lemma}
\begin{proof}
We write $\nu\equiv \nu_{\nabla u}(x_0)$.
With
\[
u_\lambda^{(\rho)}(x)=\rho^{-1} \left(u_\lambda(x_0+\rho
x)-u_\lambda(x_0)\right)\,,\qquad U(x)= \begin{cases}\nabla u^+(x_0)\cdot x &\text{
if }x\cdot\nu\geq 0\\\nabla u^-(x_0)\cdot x&\text{ if } x\cdot \nu<
0\,,\end{cases}
\]
we have that for $|D\nabla u|\ecke J_{\nabla u}$ almost every $x_0$,
$\lim_{\rho\to 0}\lim_{\lambda\to\infty}u_\lambda^{(\rho)}= U$ in $W^{1,1}(Q_\nu)$, see Theorem 3.77
in
\cite{MR1857292}. Additionally,
\[
\zeta_3(x_0)=\lim_{\rho\to 0}\lim_{\lambda\to
\infty}\rho^{-1}\int_{Q_\nu(x_0,\rho)}f(\nabla u_\lambda,\nabla^2
u_\lambda)\d x\,.
\]
Choose $\rho_j\to 0,\lambda_j\to \infty$ such that $u_{\lambda_j}^{(\rho_j)}\to
U$ in $W^{1,1}(Q_\nu)$ and
\[
\begin{split}
\zeta_3(x_0)&=\lim_{j\to\infty}\rho_j^{-1}\int_{Q_\nu(x_0,\rho_j)}f(\nabla
u_{\lambda_j},\nabla^2
u_{\lambda_j})\d x\,.
\end{split}
\]
We write
$u_j:=u_{\lambda_j}^{(\rho_j)}$.
We set $U_j:=\eta_{\rho_j}*U$. $U_j$ is affine on the slices orthogonal to $\nu$.
With this notation, we have
\[
\zeta_3(x_0)=\lim_{j\to\infty}\int_{Q_\nu}f(\nabla
u_{j},\nabla^2
u_j)\d x\,.
\]
Hence it remains to show
\begin{equation}
\tilde K_f(\nabla u^+(x_0),\nabla u^-(x_0),\nu)\leq \lim_{j\to\infty}\int_{Q_\nu}f(\nabla
u_{j},\nabla^2
u_j)\d x\,.\label{eq:32}
\end{equation}
By the continuity assumption (ii), we may assume that $u_j\in C^\infty(\overline{Q_\nu})$
in the proof of \eqref{eq:32}.
\medskip
For $l\in \mathbb{N}} \newcommand{\T}{\mathcal{T}$, let $K_l\in\mathbb{N}} \newcommand{\T}{\mathcal{T}$ be the smallest integer that satisfies
\[
K_l>l\sup_j\left\{\|u_j\|_{W^{2,1}}+\|U_j\|_{W^{2,1}}\right\}\,,
\]
and
\[
\alpha_l:=\max\left(\frac{1}{l},\sup\{\|u_j-U_j\|_{W^{1,1}}:j>l\}\right)\,,\qquad
s_l:=\frac{\alpha_l}{K_l}\,.
\]
Note that $\alpha_l\to 0$ as $l\to \infty$.
For $i=0,\dots,K_l$, let
\[
Q_{i,l}=(1-\alpha_l+i\, s_l)Q_\nu\,.
\]
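Note that by construction $Q_{0,l}=(1-\alpha_l)Q_\nu$ and $Q_{K_l,l}=Q_\nu$, and that consecutive cubes $Q_{i-1,l}\subset Q_{i,l}$ are separated by a frame of width $s_l/2$ on each side.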
Consider a family of cut-off functions $\{\varphi_{i,l}:i=1,\dots,K_l\}$ with
\[
\varphi_{i,l}\in C_c^\infty(Q_{i,l})\,,\quad 0\leq \varphi_{i,l}\leq 1\,,\quad\varphi_{i,l}=1
\text{ on }
Q_{i-1,l}\,,\quad\|\nabla^k \varphi_{i,l}\|_{L^\infty}=O(s_l^{-k})\text{ for } k=1,2\,.
\]
For $j>l$, we define
\[
\tilde u_j^{i,l}:=\varphi_{i,l} u_j+(1-\varphi_{i,l}) U_j\,.
\]
We have that $\tilde u_j^{i,l}\in \mathcal A_{\nabla u^+(x_0),\nabla
u^-(x_0),\nu}$ (for $j$ large enough).
On $Q_{i,l}\setminus Q_{i-1,l}$, we have
\[
\begin{split}
\nabla^2 \tilde u_j^{i,l}&= (u_j-U_j)\nabla^2\varphi_{i,l}+\nabla
(u_j-U_j)\otimes \nabla\varphi_{i,l}\\
&\quad + \nabla\varphi_{i,l}\otimes \nabla
(u_j-U_j)+\varphi_{i,l}\nabla^2(u_j-U_j)+\nabla^2 U_j\,.
\end{split}
\]
Now we may estimate, for every $i=1,\dots, K_l$,
\begin{equation}
\begin{split}
\int_{Q_\nu} f(\nabla \tilde u_j^{i,l},\nabla^2 \tilde u_j^{i,l})\d x&
\leq \int_{Q_{i-1,l}} f(\nabla u_j,\nabla^2 u_j)\d x
+ C\int_{Q_{i,l}\setminus Q_{i-1,l}}|\nabla^2 \tilde u_j^{i,l}|\d x\\
&\quad + C\int_{Q_\nu\setminus Q_{i,l}}|\nabla^2 U_j|\d x\\
&\leq \int_{Q_{i-1,l}} f(\nabla u_j,\nabla^2 u_j)\d x\\
&\quad + C\int_{Q_{i,l}\setminus Q_{i-1,l}}s_l^{-2}|u_j-U_j|+s_l^{-1}|\nabla u_j-\nabla
U_j|+|\nabla^2 u_j|+|\nabla^2 U_j|\d x\\
&\quad + C\int_{Q_\nu\setminus Q_{i,l}}|\nabla^2 U_j|\d x\\
\end{split}\label{eq:5}
\end{equation}
We write $T_{i,l}=Q_{i,l}\setminus Q_{i-1,l}$, and choose an increasing sequence $j(l)$ with $j(l)>l$ such that for every $i=1,\dots,K_l$,
\[
\begin{split}
\fint_{T_{i,l}} \left|u_j-U_j\right|\d x&<s_l^2\\
\fint_{T_{i,l}} \left|\nabla u_j-\nabla U_j\right|\d x&
<s_l\,.
\end{split}
\]
This is possible since $\|u_j-U_j\|_{W^{1,1}}\to 0$ as $j\to\infty$.
With the help of these estimates,
the second error term in \eqref{eq:5} for $j=j(l)$ can be estimated as follows,
\[
\begin{split}
\int_{T_{i,l}}&s_l^{-2}|u_j-U_j|+s_l^{-1}|\nabla u_j-\nabla
U_j|+|\nabla^2 u_j|+|\nabla^2 U_j|\d x\\
&\leq C \left(\|\nabla^2 u_j\|_{L^1(T_{i,l})}+ \|\nabla^2 U_j\|_{L^1(T_{i,l})}+s_l\right)\,.
\end{split}
\]
Summing over all $i$ and averaging, we obtain
\[
\begin{split}
\frac{1}{K_l}\sum_{i=1}^{K_l} \int_{Q_\nu} f(\nabla \tilde u_j^{i,l},\nabla^2
\tilde u_j^{i,l})\d x &\leq \int_{Q_\nu} f(\nabla u_j,\nabla^2 u_j)\d x
+ \frac{C}{K_l}\int_{Q_\nu}|\nabla^2 U_j|\d x\\
&\quad +\frac{C}{K_l}\int_{Q_\nu}(|\nabla^2 u_j|+|\nabla^2 U_j|+1)\d x+C s_l
\end{split}
\]
Since the error terms vanish for $l\to \infty$, we can choose
$i=i(l)\in\{1,\dots,K_l\}$ such that
\[
\liminf_{l\to \infty}\int_{Q_\nu} f(\nabla \tilde u_{j(l)}^{i(l),l},\nabla^2
\tilde u_{j(l)}^{i(l),l})\d x
\leq \lim_{j\to \infty}\int_{Q_\nu} f(\nabla u_j,\nabla^2
u_j)\d x\,.
\]
Since $\tilde u_{j(l)}^{i(l),l}\in \mathcal A_{\nabla u^+(x_0),\nabla u^-(x_0),\nu}$, the last inequality proves
\eqref{eq:32}.
\end{proof}
\subsection{Geometric considerations}
\label{sec:geometry}
We will need to apply Lemma \ref{lem:MFvariant} to the following particular
choice of integrand:
\[
G_\infty(v,\xi)=2\left| S(v,\xi)\right|_{\infty} \sqrt{1+|v|^2}\,.
\]
By some geometric considerations, we are able to determine $\tilde
K_{G_\infty}$ in Lemma \ref{lem:Kcalc} below. We start with a preparatory
lemma. The assumptions are chosen such that we may apply the lemma to graphs of functions
in $\mathcal{A}_{a,b,\nu}$ as defined in Lemma \ref{lem:MFvariant} with $\nu=e_2$, see Figure \ref{fig:norot}.
\begin{lemma}
\label{lem:slice}
Let $M$ be an oriented $C^2$ submanifold of $\mathbb{R}^3$ with the following
properties:
\begin{itemize}
\item[(i)] $M$ is diffeomorphic to a square
\item[(ii)] There exists $l>0$ and for each $x_1\in[0,l]$ there exists a $C^2$
curve $\gamma_{x_1}$ contained in $\{x_1\}\times[0,1]\times\mathbb{R}$ with its two endpoints
in $\{x_1\}\times\{0\}\times\mathbb{R}$ and $\{x_1\}\times\{1\}\times\mathbb{R}$ respectively, such that
\[
M=\bigcup_{x_1\in[0,l]}\gamma_{x_1}\,.
\]
\item[(iii)] There exist $\mathbf{N}_0,\mathbf{N}_1\in S^2$ such that for each $x_1\in
[0,l]$, the surface normals at the endpoints of $\gamma_{x_1}$ are given by
$\mathbf{N}_0,\mathbf{N}_1$ respectively.
\end{itemize}
Then
\[
\int_{M}|S_{M}|_\infty\d\H^2\geq l \arccos \mathbf{N}_0\cdot\mathbf{N}_1\,,
\]
and equality holds if any two curves $\gamma_{x_1}, \gamma_{x_1'}$ are
parallel translations of each other
in $x_1$ direction, and
their curvature does not change sign.
\end{lemma}
\begin{proof}
Looking at the slices for $x_1=$constant, we have that
\[
\int_{M}|S_M|_\infty\d\H^2 \geq \int_0^l
\int_{\gamma_{x_1}}|S_M|_\infty\d\H^1\d x_1\,.
\]
Denoting by $N_{x_1}$ a differentiable choice of a normal to $M$ along $\gamma_{x_1}$, we have that the derivative of the normal
$DN_{x_1}$ fulfills
\[|DN_{x_1}|\leq
|S_M|_\infty\,.
\]
Hence, by the fundamental theorem of calculus, and letting
$\operatorname{dist}_{S^2}(\cdot,\cdot)$ denote the geodesic distance on $S^2$,
\[
\int_{\gamma_{x_1}}|S_M|_\infty\d\H^1\geq
\int_{\gamma_{x_1}}|D N_{x_1}|\d\H^1\geq \operatorname{dist}_{S^2}(\mathbf{N}_0,\mathbf{N}_1)=
\arccos \mathbf{N}_0\cdot \mathbf{N}_1\,.
\]
The claimed inequality follows. If the curves $\gamma_{x_1}$ are parallel translations of
each other in $x_1$-direction and
their curvature does not change sign, then both inequalities above become equalities.
\end{proof}
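As a simple illustration of the equality case (not needed in the sequel), let $f\in C^2([0,1])$ with $f'$ monotone, and let $M=\{(x_1,s,f(s)):x_1\in[0,l],\,s\in[0,1]\}$ be the cylinder over the graph of $f$. All slices $\gamma_{x_1}$ are parallel translates of each other and their curvature has a fixed sign, so the lemma yields
\[
\int_M|S_M|_\infty\d\H^2=l\arccos\mathbf{N}_0\cdot\mathbf{N}_1=l\left|\arctan f'(1)-\arctan f'(0)\right|\,,
\]
i.e., the length $l$ times the total turning of the unit normal along one slice.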
\begin{figure}[h]
\begin{subfigure}{.45\textwidth}
\includegraphics[height=5cm]{graphnorotcrop.pdf}
\caption{The graph of some function $w$ to which Lemma \ref{lem:slice} may be applied.\label{fig:norot}}
\end{subfigure}
\hspace{5mm}
\begin{subfigure}{.45\textwidth}
\includegraphics[height=5cm]{graphrotcrop.pdf}
\caption{After applying a suitable Euclidean motion $R$, we have $R(\mathrm{gr} \, w|_{[0,1]\times\{0\}})\subset \mathbb{R}\times
\{(0,0)\}$ and $R(\mathrm{gr} \, w|_{[0,1]\times\{1\}})\subset \mathbb{R}\times
\{(1,0)\}$. \label{fig:rot}}
\end{subfigure}
\end{figure}
\begin{lemma}
\label{lem:Kcalc}
Let $a,b\in \mathbb{R}^2$, $\nu\in S^1$ with $a\cdot \nu^\bot=b\cdot\nu^\bot$, and
$G_\infty(v,\xi)=2|S(v,\xi)|_\infty\sqrt{1+|v|^2}$. Then with $\tilde K$
defined as in the statement of Lemma \ref{lem:MFvariant}, we have that
\[
\tilde K_{G_\infty}(a,b,\nu)=2\sqrt{1+|a\cdot \nu^\bot|^2}\arccos \mathbf{N}(a)\cdot \mathbf{N}(b) \,.
\]
\end{lemma}
\begin{proof}
Let $w\in \mathcal{A}_{a,b,\nu}$. After a rotation of the coordinate system, we may
assume that $\nu= e_2$ and $a_1=b_1$. Let $M_1$ denote the graph of $w$. By
applying a suitable
Euclidean motion (namely, a rotation
with axis parallel to $e_2$ and a translation), we may map $\mathrm{gr}\, w|_{[0,1]\times \{0\}}$ to
$[0,\sqrt{1+a_1^2}]\times\{(0,0)\}$ and $\mathrm{gr}\, w|_{[0,1]\times \{1\}}$ to
$[0,\sqrt{1+a_1^2}]\times\{(1,0)\}$ respectively, see Figure \ref{fig:rot}. Let us denote the
resulting submanifold of $\mathbb{R}^3$ by $M_2$. By the periodicity
of $\nabla^k w$ for $k\in \{1,2\}$ in $x_1$-direction, we may translate $M_2\cap [0,l]\times[0,1]\times\mathbb{R}$ in $x_1$-direction by
$l=\sqrt{1+a_1^2}$; the resulting set, which we denote by $M_3$, is still a $C^2$ submanifold, and it satisfies $\int_{M_3}|S_{M_3}|_\infty\d\H^2=\int_{M_1}|S_{M_1}|_\infty\d\H^2$. To
$M_3$, we may apply Lemma \ref{lem:slice} to obtain the claimed lower bound. If $\nabla w$
is constant in $x_1$ direction and $e_2\cdot \nabla w$ is monotone in $x_2$ direction, the second
part of that lemma yields that the bound is also attained.
\end{proof}
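For orientation, we record the special case $\nu=e_2$ in coordinates, assuming the standard graph-normal convention $\mathbf{N}(v)=(1+|v|^2)^{-1/2}(-v,1)^T$ (used here only for this illustration): for $a=(a_1,a_2)$ and $b=(a_1,b_2)$,
\[
\tilde K_{G_\infty}(a,b,e_2)=2\sqrt{1+a_1^2}\,\arccos\frac{1+a_1^2+a_2b_2}{\sqrt{(1+|a|^2)(1+|b|^2)}}\,,
\]
which vanishes if and only if $a=b$.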
\section{Proof of the main theorem}
\label{sec:proof-main-theorem}
\subsection{Compactness}
\begin{proof}[Proof of Theorem \ref{thm:main} (i)]
Using $\|\nabla u_\lambda\|_{L^\infty}<C$, we have that
\begin{equation}
|\nabla^2 u_\lambda|\leq C |S(\nabla
u_\lambda,\nabla^2 u_\lambda)|\label{eq:10}
\end{equation}
By Lemma \ref{lem:hladd} (iii), we have that
\begin{equation}
|\xi|\leq g_\lambda(\xi)\quad \text{ for all } \xi \in \mathbb{R}^{2\times 2}_{\mathrm{sym}}\,.\label{eq:33}
\end{equation}
From \eqref{eq:10} and \eqref{eq:33} it follows that
\[
|\nabla^2 u_\lambda|\leq C\, h_\lambda(\nabla u_\lambda,\nabla^2 u_\lambda)\,,
\]
and hence
\[
\limsup \|\nabla^2 u_\lambda\|_{L^1(\Omega)}\leq C\,.
\]
By Theorem \ref{thm:BHcompact}, we obtain the weak * convergence in $BH$ for
a subsequence.
\end{proof}
\subsection{Lower bound}
\begin{proof}[Proof of Theorem \ref{thm:main} (ii)]
The main tool of the proof is the blow-up technique by Fonseca and M\"uller. We
have that
\[
D\nabla u= \nabla^2 u \L^2 + D^s\nabla u\ecke C_{\nabla u}+(\nabla u^+-\nabla
u^-)\otimes \nu_{\nabla u}\H^1\ecke
J_{\nabla u}\,.
\]
In the sequel, we write $\nu\equiv \nu_{\nabla u}$.
After choosing a subsequence, we may assume that $\lim_{\lambda\to
\infty}\mathcal F_\lambda(u_\lambda)=\liminf_{\lambda\to\infty}\mathcal F_\lambda(u_\lambda)$,
where on the right hand side the $\liminf$ is taken along the original sequence. Recalling the definition \eqref{eq:55} of
$h_\lambda$, we have that $h_\lambda=Q_2f_\lambda\leq
f_\lambda$, and hence there exists a Radon measure $\mu$ such that (after passing to a
further subsequence)
\[
h_\lambda(\nabla u_\lambda,\nabla^2 u_\lambda)\L^2\to \mu\quad\text{ weakly *
in the sense of measures.}
\]
Let $\zeta_1,\zeta_2,\zeta_3$ denote the Radon-Nikodym derivatives of $\mu$
with respect to $\L^2$, $|D^s\nabla u|\ecke C_{\nabla u}$ and $\H^1\ecke
J_{\nabla u}$ respectively. By the non-negativity of $\mu$, we have
\[
\mu\geq \zeta_1 \L^2 +\zeta_2 |D^s\nabla u|\ecke C_{\nabla u}+ \zeta_3
\H^1\ecke J_{\nabla u} \,.
\]
We will show that
\begin{align}
\zeta_1(x)\geq &2\rho^0\left(S\left(\nabla u(x),\nabla^2
u(x)\right)\right)\sqrt{1+|\nabla u|^2} \quad
\text{ for }\L^2-\text{a.e.}\, x\in \Omega \label{eq:2}\\
\zeta_2(x)\geq &2\rho^0\left(S\left(\nabla u(x),\frac{\d (D\nabla u)}{\d
|D\nabla u|}(x)\right)\right)\sqrt{1+|\nabla u|^2} \quad \text{ for
}|D^s\nabla u|-\text{a.e.}\, x\in C_{\nabla u}\label{eq:4}\\
\zeta_3(x)\geq &2\arccos\left(\mathbf{N}(\nabla u^+)\cdot \mathbf{N}(\nabla
u^-)\right)\sqrt{1+|\nu^\bot\cdot\nabla u|^2} \label{eq:3}\quad \text{ for
}\H^1-\text{a.e.}\, x\in J_{\nabla u}\,.
\end{align}
This will prove the lower bound.
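Indeed, by weak * convergence we have $\mu(\Omega)\leq \liminf_{\lambda\to\infty}\int_\Omega h_\lambda(\nabla u_\lambda,\nabla^2 u_\lambda)\d x\leq \lim_{\lambda\to\infty}\mathcal F_\lambda(u_\lambda)$ (using $h_\lambda\leq f_\lambda$), while the decomposition of $\mu$ gives
\[
\mu(\Omega)\geq \int_\Omega \zeta_1\d\L^2+\int_{C_{\nabla u}}\zeta_2\d|D^s\nabla u|+\int_{J_{\nabla u}}\zeta_3\d\H^1\,;
\]
once \eqref{eq:2}--\eqref{eq:3} are established, the right hand side is bounded from below by the three integrals that constitute $\mathcal F(u)$.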
We will first prove \eqref{eq:2}.
We write $v_\lambda=\nabla u_\lambda$.
For $\L^2$-almost every $x_0$, we may
choose a sequence $(\varepsilon_j)_{j\in\mathbb{N}}$ converging to zero, such that $\mu(\partial
Q(x_0,\varepsilon_j))=0$ for every $j\in \mathbb{N}$. When we write $\varepsilon\to 0$ in the sequel, we
actually mean the limit $j\to\infty$ for such a sequence. Also, we will drop the index $j$ in our notation. For every $\varepsilon$, we have
\[
\lim_{\lambda\to\infty}\int_{Q(x_0,\varepsilon)} h_\lambda(v_\lambda,\nabla
v_\lambda)\d x = \mu(Q(x_0,\varepsilon))\,.
\]
Moreover,
\[
\lim_{\varepsilon\to 0}\lim_{\lambda\to\infty} \frac{ Dv_\lambda(Q(x_0,\varepsilon))}{|D
v|(Q(x_0,\varepsilon))}=\frac{\d D v}{\d |D v|}(x_0)\,.
\]
Note that by Theorem \ref{thm:RN} we have
\begin{equation}
\begin{split}
\zeta_1(x_0)=&\lim_{\varepsilon\to 0} \frac{\mu(Q({x_0,\varepsilon}))}{\L^2(Q(x_0,\varepsilon))}\\
=&\lim_{\varepsilon\to 0}\lim_{\lambda\to
\infty}\fint_{Q(x_0,\varepsilon)}h_\lambda(v_\lambda,\nabla v_\lambda)\d x\,.
\end{split}\label{eq:12}
\end{equation}
We write $v_0:=v(x_0)$. For $\varepsilon$ small enough, define $w_{\lambda,\varepsilon}:Q\to \mathbb{R}^2$ by
\[
w_{\lambda,\varepsilon}(x)=\varepsilon^{-1}\left(v_\lambda(x_0+\varepsilon x)-v_0\right)\,.
\]
Furthermore let $w_0(x)=\nabla v_0\cdot x$. Using a change of variables and the
Cauchy-Schwarz inequality we have
\begin{equation}
\begin{split}
\lim_{\varepsilon\to 0}\lim_{\lambda\to\infty} \|w_{\lambda,\varepsilon}-w_0\|_{L^1(Q)}&=
\lim_{\varepsilon\to 0} \frac{1}{\varepsilon}\int_{Q}|v(x_0+\varepsilon x)-v_0-\nabla v_0\cdot
\varepsilon x|\d x\\
&= \lim_{\varepsilon\to 0} \frac{1}{\varepsilon^3}\int_{Q(x_0,\varepsilon)}|v(x)-v_0-\nabla
v_0\cdot
(x-x_0)|\d x\\
&\leq \lim_{\varepsilon\to 0}\frac{1}{\varepsilon^2}\left(\int_{Q(x_0,\varepsilon)}|v(x)-v_0-\nabla
v_0\cdot
(x-x_0)|^2\d x\right)^{1/2}\\
&=0\,.
\end{split}\label{eq:11}
\end{equation}
The last equality above holds for $\L^2$-almost every
$x_0$ by the remark below Theorem 3.83 in
\cite{MR1857292} (which is a slightly stronger variant of approximate differentiability).
Also note that we have
\[
\fint_{Q(x_0,\varepsilon)}h_\lambda(v_\lambda,\nabla v_\lambda)\d
x=\int_{Q}h_\lambda(v_0+\varepsilon w_{\lambda,\varepsilon}(x),\nabla w_{\lambda,\varepsilon}(x))\d x\,.
\]
By \eqref{eq:12} and \eqref{eq:11}, it is possible to choose $\varepsilon\equiv \varepsilon(\lambda)$ depending on $\lambda$ such that
with $w_\lambda:=w_{\lambda,\varepsilon(\lambda)}$ we have that
\[
\begin{split}
\lim_{\lambda\to \infty}\|w_\lambda-w_0\|_{L^1(Q)}&=0\\
\lim_{\lambda\to \infty}\int_{Q}h_{\lambda}(v_0+\varepsilon w_\lambda(x),\nabla w_\lambda)\d
x&=\zeta_1(x_0)\,.
\end{split}
\]
We need to modify $w_\lambda$ such that we get a suitable $L^\infty$-bound for fixing
the lower order terms. Namely, we are going to construct $\tilde w_\lambda$ such that
$\|\tilde w_\lambda-w_0\|_{L^\infty}\leq \varepsilon^{-1/2}$, and
\begin{equation}
\liminf_{\lambda\to\infty}\int_Q h_{\lambda}(v_0+\varepsilon\tilde w_\lambda,\nabla \tilde w_\lambda)\d
x\leq
\liminf_{\lambda\to\infty}\int_Q h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\d
x\,.\label{eq:14}
\end{equation}
Let $K_\lambda$ be the largest integer smaller than $\log_2 \varepsilon^{-1/2}$. For
$k=1,\dots,K_\lambda$, we define
\[
E_k^\lambda:=\left\{x\in Q:\,2^{k-1}<|w_\lambda-w_0|\leq 2^k\right\}\,.
\]
Next we choose $k_\lambda\in\{1,\dots, K_\lambda\}$ such that with $E^\lambda:=E_{k_\lambda}^\lambda$, we have
\begin{equation}
\int_{E^\lambda}\left(1+h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\right)\d
x\leq K_\lambda^{-1}\int_Q \left(1+h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\right)\d
x\,.\label{eq:15}
\end{equation}
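Such a choice of $k_\lambda$ exists by an averaging argument: the sets $E_1^\lambda,\dots,E_{K_\lambda}^\lambda$ are mutually disjoint, so
\[
\sum_{k=1}^{K_\lambda}\int_{E_k^\lambda}\left(1+h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\right)\d x\leq \int_Q \left(1+h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\right)\d x\,,
\]
and hence at least one of the $K_\lambda$ summands on the left is bounded by $K_\lambda^{-1}$ times the right hand side.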
We define $\varphi_\lambda:[0,\infty)\to\mathbb{R}$ such that
\[
\begin{split}
\varphi_\lambda(x)=&1\text{ for } x\leq 2^{k_\lambda-1}\\
\varphi_\lambda(x)=&0\text{ for } x\geq 2^{k_\lambda} \\
|\varphi_\lambda'|\leq &2^{2-k_\lambda}\,.
\end{split}
\]
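Such a function exists since the transition interval $[2^{k_\lambda-1},2^{k_\lambda}]$ has length $2^{k_\lambda-1}$, so that a slope of modulus at most $2^{1-k_\lambda}\leq 2^{2-k_\lambda}$ suffices.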
Now we set
\[
\tilde w_\lambda=w_0+\varphi_\lambda(|w_\lambda-w_0|)(w_\lambda-w_0)\,.
\]
Note that $\|\tilde w_\lambda-w_0\|_{L^\infty}\leq \varepsilon^{-1/2}$ by
construction,
and $\tilde w_\lambda=w_0$ on
$\{x\in Q:|w_\lambda-w_0|\geq 2^{k_\lambda}\}$. We have that
\begin{equation}
\begin{split}
\liminf_{\lambda\to\infty}\int_Q h_{\lambda}(v_0+\varepsilon\tilde w_\lambda,\nabla \tilde
w_\lambda)\d x&\leq \liminf_{\lambda\to\infty}\int_{\{|w_\lambda-w_0|\leq 2^{k_\lambda-1}\}}
h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\d
x\\
&\quad + \int_{E^\lambda} h_{\lambda}(v_0+\varepsilon \tilde w_\lambda,\nabla \tilde w_\lambda)\d x\\
&\quad + \int_{\{|w_\lambda-w_0|\geq 2^{k_\lambda}\}} h_{\lambda}(v_0+\varepsilon w_0,\nabla
w_0)\d x\,.
\end{split}
\label{eq:35}
\end{equation}
We claim that
\begin{equation}\label{eq:24}
\int_{E^\lambda} h_{\lambda}(v_0+\varepsilon \tilde w_\lambda,\nabla \tilde w_\lambda)\d x \leq
C(v_0,\nabla w_0) \int_{E^\lambda} \left(1+h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\right)\d x\,.
\end{equation}
Indeed, we have that on $E^\lambda$, $|\varepsilon w_\lambda-\varepsilon\tilde w_\lambda|\lesssim
\varepsilon^{1/2}$, and hence
\begin{equation}
\begin{split}
|\mathbf{g}(v_0+\varepsilon w_\lambda)-\mathbf{g}(v_0+\varepsilon\tilde w_\lambda)|&\leq C(v_0) \varepsilon^{1/2}\,,\\
\left|\sqrt{1+|v_0+\varepsilon w_\lambda|^2}-\sqrt{1+|v_0+\varepsilon\tilde w_\lambda|^2}\right|&\leq
C \varepsilon^{1/2}\,.
\end{split}\label{eq:34}
\end{equation}
Also,
\[
\begin{split}
|\nabla \tilde w_\lambda|&= \left|\nabla w_0+\varphi_\lambda' (w_\lambda-w_0)\otimes \nabla
|w_\lambda-w_0|+\varphi_\lambda \nabla (w_\lambda-w_0)\right|\\
&\leq C\left( |\nabla w_0|+ (2^{-k_\lambda}|w_\lambda-w_0|+1)|\nabla (w_\lambda-
w_0)|\right)\\
&\leq C(\nabla w_0) (|\nabla w_\lambda|+1)\,.
\end{split}
\]
Hence, we have that
\[
|S(v_0+\varepsilon\tilde w_\lambda, \nabla \tilde w_\lambda)|\leq C(v_0,\nabla w_0)\left(|S(v_0+\varepsilon w_\lambda,
\nabla w_\lambda)|+1\right)
\]
and it follows from Lemma \ref{lem:hladd} (i) that
\begin{equation}
g_{\lambda}\left(S(v_0+\varepsilon\tilde w_\lambda, \nabla \tilde w_\lambda)\right)
\leq
C (v_0,\nabla w_0)\left(g_{\lambda}(S(v_0+\varepsilon w_\lambda,
\nabla w_\lambda))+1\right)\,.\label{eq:25}
\end{equation}
Our claimed inequality \eqref{eq:24} now follows from \eqref{eq:34} and
\eqref{eq:25}.
Using \eqref{eq:15}, \eqref{eq:24} and the fact $K_\lambda\to\infty$ as $\lambda\to \infty$,
as well as $w_\lambda\to w_0$ in $L^1$, the right hand side of \eqref{eq:35} can be
estimated from above by
\[
\begin{split}
\liminf_{\lambda\to\infty}\left(\int_{Q} h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\d x
+ C(v_0,\nabla w_0)\L^2\left(\{|w_\lambda-w_0|\geq 2^{k_\lambda}\}\right)
|\nabla w_0|\right)\\
= \liminf_{\lambda\to\infty}\int_{Q} h_{\lambda}(v_0+\varepsilon w_\lambda,\nabla w_\lambda)\d x\,.
\end{split}
\]
This proves \eqref{eq:14}.
Now we have by Lemma \ref{lem:hbounds} and Lemma \ref{lem:qcL1conv},
\[
\begin{split}
\liminf_{\lambda\to\infty}\int_Q h_{\lambda}(v_0+\varepsilon\tilde w_\lambda,\nabla \tilde
w_\lambda)\d x&\geq
\liminf_{\lambda\to\infty}\frac{1-C(v_0,\nabla w_0)\varepsilon^{1/2}}{1+C(v_0,\nabla w_0)\varepsilon^{1/2}}\int_Q
h_{\lambda}(v_0,\nabla
\tilde w_\lambda)\d x\\
&\geq 2 \sqrt{1+|v_0|^2}\rho^0(S(v_0,\nabla w_0))\,.
\end{split}
\]
This proves equation \eqref{eq:2}.
\medskip
Recall $G(v,p)=2\rho^0(S(v,p))\sqrt{1+|v|^2}$, and
$G_\infty(v,p)=2|S(v,p)|_\infty\sqrt{1+|v|^2}$.
Recall that $v_\lambda=\nabla u_\lambda\to v:=\nabla u$ weakly * in $BV$. We have by Lemma \ref{lem:hladd} (iii) that
\[
h_\lambda(v_\lambda,\nabla v_\lambda)\geq G_\infty(v_\lambda,\nabla v_\lambda)
\]
for all $\lambda$. By Theorem \ref{thm:MFsingularpart}, we have that for
$|D^sv|\ecke C_v$ almost every point $x_0\in\Omega$,
\[
\zeta_2(x_0)\geq G_\infty\left(v(x_0),\frac{\d Dv}{\d|Dv|}(x_0)\right)
\]
which proves \eqref{eq:4}, since $\frac{\d Dv}{\d|Dv|}(x_0)$ is rank one, and
hence
\[
2\rho^0\left(S\left(v(x_0),\frac{\d
Dv}{\d|Dv|}(x_0)\right)\right)=2\left|S\left(v(x_0),\frac{\d
Dv}{\d|Dv|}(x_0)\right)\right|_\infty\,.
\]
By Lemma \ref{lem:MFvariant}, we have in a similar fashion for $|D^sv|\ecke J_v$
almost every $x_0\in \Omega$,
\[
\zeta_3(x_0)\geq \tilde K_{G_\infty}(v^+(x_0),v^-(x_0),\nu(x_0))\,.
\]
By Lemma \ref{lem:Kcalc}, it follows
\[
\zeta_3(x_0)\geq 2\sqrt{1+|\nu^\bot\cdot v(x_0)|^2}\arccos \mathbf{N}(v^+(x_0))\cdot
\mathbf{N}(v^-(x_0))\,.
\]
This proves \eqref{eq:3} and completes the proof of the lower bound.
\end{proof}
\subsection{Upper bound}
For the proof of the upper bound, we will need a modification of the well known
result in the calculus of variations that states that the relaxation of integral
functionals with suitable integrands that depend on $x,u,\nabla u$ is obtained
by the quasiconvexification of the integrand with respect to the gradient
variable. Here, we will need the analogous result for integrands that depend on
$\nabla u, \nabla^2 u$.
\begin{proposition}
\label{prop:relax2}
Let $1\leq p<\infty$, and let $f:\mathbb{R}^{m\times 2}\times\mathbb{R}^{m\times 2\times 2}\to\mathbb{R}$
be such that
\begin{equation}\label{eq:6}
\left.\begin{split}
0\leq &f(v,\xi)\leq C (1+|\xi|^p)\\
|f(v,\xi)-f(\tilde v,\xi)|\leq &C|v-\tilde v|\max (f(v,\xi),f(\tilde v,\xi)) \\
|Q_2f(v,\xi)-Q_2f(\tilde v,\xi)|\leq &C|v-\tilde v| \max (Q_2f(v,\xi),Q_2f(\tilde v,\xi))
\end{split}\right\}
\,\forall
v,\tilde v\in\mathbb{R}^{m\times 2}, \xi\in
\mathbb{R}^{m\times 2\times 2}.
\end{equation}
Furthermore, let $\Omega\subset \mathbb{R}^2$ be open and bounded with smooth boundary, $u\in W^{2,p}(\Omega;\mathbb{R}^m)$ and $\delta>0$. Then there
exists $w\in W^{2,p}(\Omega;\mathbb{R}^m)$ with
\[
\begin{split}
\|u-w\|_{W^{1,p}(\Omega;\mathbb{R}^m)}&<\delta\\
\int_\Omega f(\nabla w,\nabla^2 w)\d x&< \int_\Omega Q_2f(\nabla u,\nabla^2 u)\d x+\delta\,.
\end{split}
\]
\end{proposition}
For the proof of the proposition, we are going to use
\begin{lemma}[Lemma A.3 in \cite{olbermann2017michell}]
\label{lem:approx}
Let $\Omega\subset\mathbb{R}^2$ be open and bounded with smooth boundary, and let $p\in
[1,\infty)$. Furthermore let $u\in C^3(\Omega)$ and $\delta>0$. Then there exists $w\in
W^{2,\infty}(\Omega)$ and $\Omega_w\subset\Omega$ such that $\Omega_w$ is
the union of mutually disjoint closed cubes, $w$ is
piecewise a polynomial of degree $2$ on $\Omega_w$, and furthermore
\[
\begin{split}
\|u-w\|_{W^{2,p}(\Omega)}&<\delta\,,\\
\|w\|_{W^{2,\infty}}&\lesssim \|u\|_{W^{2,\infty}}\\
\int_{\Omega\setminus\Omega_w}(1+|\nabla^2u|^p+|\nabla^2w|^p)\d x&<\delta\,.
\end{split}
\]
\end{lemma}
\begin{remark}
We remark that the inequality $\|w\|_{W^{2,\infty}}\lesssim \|u\|_{W^{2,\infty}}$ does not appear in the statement of Lemma A.3 in \cite{olbermann2017michell}, but is clear from the proof there. Also, the assumptions on the domain $\Omega$ that we make here are sufficient; the assumption of simple connectedness made in \cite{olbermann2017michell} does not play a role in the proof of that lemma. Finally, we note in passing that neither there nor here the assumption that the dimension of the domain is $n=2$ is of any relevance; the respective statements hold true for general $n$ just as well.
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop:relax2}]
First we recall the well known fact that rank-one convex functions are locally
Lipschitz continuous (see e.g. \cite{MR2361288}).
This holds true in particular
for $Q_2f(v,\cdot)$ for any $v$, and hence the assumption \eqref{eq:6}
implies that $Q_2f$ is locally Lipschitz continuous in both arguments.
More precisely, with the assumed
growth properties for $f$, we have $ Q_2f(v,\xi)\leq C(1+|\xi|^p)$ and
hence
\begin{equation}
|Q_2f(v,\xi)-Q_2f(v,\tilde \xi)|\leq C|\xi-\tilde \xi|\left(1+|\xi|^{p-1}+|\tilde
\xi|^{p-1}\right) \quad\forall \xi,\tilde \xi\in \mathbb{R}^{m\times 2\times 2}
\label{eq:9}
\end{equation}
where $C$ is some constant that is independent of $v,\xi,\tilde \xi$ (see
Proposition 2.32 in
\cite{MR2361288}).
We set $u_\varepsilon:=\eta_\varepsilon* u$ and claim that
\begin{equation}
\label{eq:7}
\lim_{\varepsilon\to 0}\int_\Omega Q_2f(\nabla u_\varepsilon,\nabla^2 u_\varepsilon)\d x=\int_\Omega
Q_2f(\nabla u,\nabla^2 u)\d x\,.
\end{equation}
Indeed,
we have that $u_\varepsilon\to u$ in $W^{2,p}$, and hence by \eqref{eq:9} and the
assumption \eqref{eq:6}, we have
\begin{equation}
\label{eq:42}
\begin{split}
\int_\Omega &|Q_2f(\nabla u_\varepsilon,\nabla^2 u_\varepsilon)- Q_2f(\nabla u,\nabla^2 u)|\d
x\\
&\leq \int_\Omega |Q_2f(\nabla u_\varepsilon,\nabla^2 u_\varepsilon)-
Q_2f(\nabla u_\varepsilon,\nabla^2 u)|
+ |Q_2f(\nabla u_\varepsilon,\nabla^2 u)-
Q_2f(\nabla u,\nabla^2 u)|\d x\\
&\leq C\int_\Omega |\nabla^2 u_\varepsilon-\nabla^2 u| (1+|\nabla^2
u|^{p-1}+|\nabla^2 u_\varepsilon|^{p-1})+\min(|\nabla u_\varepsilon-\nabla u|,1) (1+|\nabla^2 u|^p)
\d x
\end{split}
\end{equation}
For $\varepsilon\to 0$, the integral on the right hand side converges to 0, proving the claim \eqref{eq:7}.
Let $\Delta>0$. We choose $u_\varepsilon$ such that $\|u-u_\varepsilon\|_{W^{2,p}}<\Delta$.
By Lemma \ref{lem:approx}, there exists $w_\varepsilon\in W^{2,\infty}$ and a union of
disjoint closed cubes $\Omega_w\subset \Omega$ such that each component of $w_\varepsilon$ is a polynomial
of degree 2 on each of the cubes, and
\[
\begin{split}
\|w_\varepsilon-u_\varepsilon\|_{W^{2,p}}&<\Delta\\
|\Omega\setminus\Omega_w|(1+\|w_\varepsilon\|_{W^{2,\infty}}^p)&<\Delta\,.
\end{split}
\]
By the same kind of estimate as in \eqref{eq:42}, we obtain that additionally, we may choose $w_\varepsilon$, $\Omega_w$
such that
\begin{equation}
\int_\Omega Q_2f(\nabla w_\varepsilon,\nabla^2 w_\varepsilon)\d x<\int_\Omega Q_2f(\nabla
u_\varepsilon,\nabla^2 u_\varepsilon)\d x+\Delta\,.\label{eq:8}
\end{equation}
Moreover, we may choose the cubes to be so small that on each cube $\tilde
Q$ with center $x_0$ in the collection,
\[
\sup_{x\in\tilde Q}|\nabla w_\varepsilon(x)-\nabla w_\varepsilon(x_0)|<\Delta\,.
\]
Let $\tilde Q$ be a cube where the components of $w_\varepsilon$ are quadratic polynomials, with midpoint
$x_0$ and sidelength $r$. Choose $\xi\in W_0^{2,\infty}(\tilde Q)$ such that
\[
\int_{\tilde Q} f(\nabla
w_\varepsilon(x_0),\nabla^2 w_\varepsilon(x_0)+\nabla^2\xi)\d x\leq \mathrm{vol}(\tilde Q) Q_2f(\nabla w_\varepsilon(x_0),\nabla^2 w_\varepsilon(x_0)) +\frac\Delta N\,,
\]
where $N$ is the total number of cubes.
Let us write $\tilde \xi(x)=\xi(x_0+rx)$ for $x\in [-1/2,1/2]^2$, and define $\tilde \xi$ on $\mathbb{R}^2$ by
1-periodic extension.
For $M\in\mathbb{N}$, let
\[
\xi_M(x)=M^{-2} \tilde \xi\left(M(x-x_0)\right)\,.
\]
We have that $\|\xi_M\|_{W^{1,\infty}}\to 0$ for $M\to \infty$, $\|\nabla^2\xi\|_{L^\infty}=\|\nabla^2\xi_M\|_{L^\infty}$, and
\[
\int_{\tilde Q} f(\nabla
w_\varepsilon(x_0),\nabla^2 w_\varepsilon(x_0)+\nabla^2\xi)\d x=\int_{\tilde Q} f(\nabla
w_\varepsilon(x_0),\nabla^2 w_\varepsilon(x_0)+\nabla^2\xi_M)\d x\,.
\]
We choose $M$
so large that $\|\nabla \xi_M\|_{L^\infty}<\Delta$.
This implies
\begin{equation}
\label{eq:45}
\|\nabla w_\varepsilon+\nabla \xi_M-\nabla w_\varepsilon(x_0)\|_{L^\infty(\tilde Q)}<2\Delta\,.
\end{equation}
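Indeed, \eqref{eq:45} follows from the triangle inequality, combined with $\sup_{x\in\tilde Q}|\nabla w_\varepsilon(x)-\nabla w_\varepsilon(x_0)|<\Delta$ and $\|\nabla \xi_M\|_{L^\infty}<\Delta$.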
Using the local Lipschitz continuity in the first argument of $f$ as assumed in \eqref{eq:6},
\[
\int_{\tilde Q} f(\nabla
w_\varepsilon+\nabla \xi_M,\nabla^2 w_\varepsilon(x_0)+\nabla^2\xi_M)\d x<
\frac{1+C\Delta}{1-C\Delta}\int_{\tilde Q} f(\nabla
w_\varepsilon(x_0),\nabla^2 w_\varepsilon(x_0)+\nabla^2\xi_M)\d x\,.
\]
We repeat the same for all cubes $\tilde Q$ in $\Omega_w$, obtaining a corrector
function $\xi_{\tilde Q}\in W^{2,\infty}_0(\tilde Q)$ in each of them.
We set $\bar w= w_\varepsilon+\sum_{\tilde Q} \xi_{\tilde Q}$. Denoting by $x_{\tilde Q}$
the center of the cube $\tilde Q$, we have
\[
\begin{split}
\int_{\Omega} f(\nabla\bar w ,\nabla^2 \bar w)\d x&\leq
\sum_{\tilde Q}\int_{\tilde Q}f(\nabla \bar w,\nabla^2\bar w)\d
x
+\int_{\Omega\setminus\Omega_w}f(\nabla \bar w,\nabla^2\bar w)\d x\\
&\leq \sum_{\tilde Q}\frac{1+C\Delta}{1-C\Delta}\int_{\tilde Q}f(\nabla
w_\varepsilon(x_{\tilde Q}),\nabla^2\bar w)\d x+C \int_{\Omega\setminus\Omega_w}(1+|\nabla^2\bar
w|^p)\d x\\
&\leq \sum_{\tilde Q}\frac{1+C\Delta}{1-C\Delta}\left(\int_{\tilde Q}Q_2f(\nabla
w_\varepsilon
(x_{\tilde Q}),\nabla^2 w_\varepsilon)\d x+\frac{\Delta}{N}\right)+C\Delta\\
&\leq \left(\frac{1+C\Delta}{1-C\Delta}\right)^2 \int_\Omega Q_2f(\nabla
w_\varepsilon,\nabla^2 w_\varepsilon)\d x+C\Delta\,.
\end{split}
\]
Here we used again \eqref{eq:45} in combination with the assumption \eqref{eq:6} to obtain the
last inequality.
By $\|u-w_\varepsilon\|_{W^{2,p}}< 2\Delta$ and \eqref{eq:7}, this last estimate proves the claim
by choosing $\Delta$ small enough.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main} (iii)]
Just as for the lower bound, we will use the blow-up method for the proof of
the upper bound, in combination with a suitable mollification.
We assume that we are given a sequence $\lambda_j\to \infty$, and we will prove
that for any subsequence, there exists a further subsequence fulfilling the
upper bound. We omit the index $j$ from our notation and write $\lambda\to
\infty$ for $j\to \infty$.
\medskip
{\em Step 1}:
Let $\hat \Omega$ be
some neighborhood of $\overline\Omega$ in $\mathbb{R}^2$.
By Theorem \ref{thm:traceop}, there exists a function $v\in
W^{2,1}(\hat\Omega\setminus \overline \Omega)$ such that the traces $\gamma_0(v),\gamma_1(v)$
on $\partial\Omega$
with respect to $\hat\Omega\setminus\overline\Omega$
are identical with $\gamma_0(u)$, $\gamma_1(u)$ (up to the appropriate sign
change for $\gamma_1$). By Theorem 3.84 in \cite{MR1857292}, the function
$w:=u\chi_\Omega+v\chi_{\hat\Omega\setminus\Omega}$ is an element of $BH(\hat\Omega)$, with
$|D\nabla w|(\partial\Omega)=0$.
\medskip
For $\varepsilon>0$ small enough, we have $\{x\in \mathbb{R}^2:\operatorname{dist}(x,\Omega)<\varepsilon\}\subset \hat\Omega$,
and we may set $u_\varepsilon(x)=(\eta_\varepsilon * w)(x)$ for $x\in \Omega$. Here
$\varepsilon=\varepsilon(\lambda,u)$ is chosen such that
\begin{equation}
\|S(\nabla u_\varepsilon,\nabla^2 u_\varepsilon)\|_{L^\infty}\leq
\|\nabla^2 u_\varepsilon\|_{L^\infty}< \sqrt{\lambda}/2\,,\label{eq:36}
\end{equation}
and $\varepsilon(\lambda,u)\to 0$ as $\lambda\to
\infty$.
Such a choice of $\varepsilon$ is possible by the standard estimate
\[
\|\nabla^2 u_\varepsilon\|_{L^\infty(\Omega)}\leq C\varepsilon^{-2}|D\nabla w|(\hat\Omega)\,,
\]
where $C=\sup |\eta|$; the first inequality in \eqref{eq:36} being
valid by Lemma \ref{lem:basicSestimate}.
Writing $u_\lambda=u_{\varepsilon(\lambda,u)}$, $v_\lambda=\nabla u_\lambda$, $v=\nabla u$ and recalling the definition
\eqref{eq:55} of
$h_\lambda$, we have that
\begin{equation}
\begin{split}
h_\lambda(v_\lambda,\nabla v_\lambda)&\leq 2\sqrt{1+|v_\lambda|^2}\rho^0(S(v_\lambda,\nabla v_\lambda))\\
&\leq 2 |\nabla v_\lambda|\,.
\end{split}\label{eq:16}
\end{equation}
Hence we have that after
passing to a subsequence, there exists a measure $\mu$ such that
\begin{equation}
\label{eq:56}
\left.\begin{split}h_\lambda(v_\lambda,\nabla v_\lambda)\L^2\ecke \Omega
&\to\mu\\
\nabla v_\lambda\L^2&\to Dv\\
|\nabla v_\lambda|\L^2&\to |Dv|\end{split}\right\}
\text{ weakly * in the sense of measures}
\end{equation}
Additionally, it follows from $|D\nabla w|(\partial \Omega)=0$ and
\eqref{eq:16} that
${\mu(\partial\Omega)=0}$, and hence \[
\lim_{\lambda\to\infty}\int_\Omega
h_\lambda(v_\lambda,\nabla v_\lambda)\d x= \mu(\Omega)\,.
\]
\medskip
{\em Step 2.}
For every continuous non-negative function $\varphi\in C_0(\Omega)$, we have
\[
\begin{split}
\int_\Omega\varphi\,\d\mu&=
\lim_{\lambda\to\infty}\int_\Omega \varphi\,h_\lambda(
v_\lambda,\nabla
v_\lambda)\d x\\
&\leq 2\lim_{\lambda\to\infty}\int_\Omega\varphi |\nabla v_\lambda|\,\d x\\
&= 2\int_\Omega\varphi \,\d|Dv|\,.
\end{split}
\]
Hence, $\mu$ is absolutely continuous with respect to $|Dv|$, and
the measure $\mu$ can be decomposed into mutually singular measures,
\[
\mu=\zeta_1\L^2+\zeta_2 |D^sv|\ecke C_{v}+\zeta_3 \H^1 \ecke
J_{v}\,.
\]
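Here we use that the three measures $\L^2$, $|D^sv|\ecke C_{v}$ and $\H^1\ecke J_{v}$ are mutually singular, so that the three densities $\zeta_1,\zeta_2,\zeta_3$ can be determined independently of each other.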
We will prove
\begin{align}
\zeta_1(x)\leq &2\rho^0\left(S\left(v(x),\nabla v(x)\right)\right)\sqrt{1+|v|^2} \quad
\text{ for }\L^2-\text{a.e.}\, x\in \Omega \label{eq:17}\\
\zeta_2(x)\leq &2\rho^0\left(S\left(v(x),\frac{\d Dv}{\d
|Dv|}(x)\right)\right)\sqrt{1+|v|^2} \quad \text{ for
}|D^sv|-\text{a.e.}\, x\in C_{v}\label{eq:19}\\
\zeta_3(x)\leq &2\arccos\left(\mathbf{N}(v^+)\cdot \mathbf{N}(
v^-)\right)\sqrt{1+|\nu^\bot\cdot v|^2} \label{eq:18}\quad
\text{ for }\H^1-\text{a.e.}\, x\in J_{v}\,.
\end{align}
Once we have proved these inequalities, we have proved
\[
\limsup_{\lambda\to \infty}\int_\Omega h_\lambda(v_\lambda,\nabla v_\lambda)\d x\leq\mathcal F(u)\,.
\]
The upper bound then follows by $Q_2 f_\lambda=h_\lambda$ and Proposition
\ref{prop:relax2}. Indeed, the assumptions of the proposition (with $p=2$) are
fulfilled for $f_\lambda$ by
Lemma \ref{lem:hbounds}.
\medskip
{\em Step 3.} To prove \eqref{eq:17},
we use that at $\L^2$-almost every $x_0$, $v=\nabla u$ is approximately differentiable, i.e.,
\begin{equation}
\lim_{r\to 0}\frac{1}{r}\fint_{Q(x_0,r)}|v(x)-v(x_0)-\nabla v(x_0)\cdot(x-x_0)|\d
x=0\,.\label{eq:47}
\end{equation}
In particular, we may assume that $x_0$ is a Lebesgue point of $v$,
\[
\lim_{r\to 0}\fint_{Q(x_0,r)} |v(x)-v(x_0)|\d x=0\,.
\]
We define
\[
\begin{split}
v^{(\rho)}(x)&=\rho^{-1}(v(x_0+\rho x)-v(x_0))\\
V(x)&=\nabla v(x_0)\cdot x
\end{split}
\]
and have by \eqref{eq:47} that
\[
v^{(\rho)}\to V \text{ in }L^1(Q(0,2))
\]
and by Theorem \ref{thm:RN} that
\[
\left.\begin{split}
Dv^{(\rho)}&\to \nabla v(x_0)\L^2\\
|Dv^{(\rho)}|&\to |\nabla v(x_0)|\L^2\end{split}\right\}
\text{ weakly * in the sense of measures.}
\]
Now we choose $\rho(\lambda)$ with $\varepsilon(\lambda,u)<\rho(\lambda)/2$.
In the sequel, we write $\rho\equiv \rho(\lambda)$. We set
\[
v^{(\rho)}_\lambda(x)=\rho^{-1}(v_\lambda(x_0+\rho x)-v(x_0))
\]
and have
\[
v^{(\rho)}_\lambda\to V \text{ in }L^1(Q(0,2))
\]
and
\begin{equation}
\left.\begin{split}
\nabla v^{(\rho)}_\lambda \L^2&\to \nabla v(x_0)\L^2\\
|\nabla v^{(\rho)}_\lambda|\L^2&\to |\nabla v(x_0)|\L^2\end{split}\right\}
\text{ weakly * in the sense of measures.}\label{eq:57}
\end{equation}
Moreover
\begin{equation}
\label{eq:58}
\begin{split}
\sup_{x\in Q(x_0,\rho)} |v_\lambda(x)-v(x_0)|
&\leq \sup_{ Q(x_0,\rho)} |\eta_{\varepsilon(\lambda,u)}*(v-v(x_0))|\\
&\leq \|v_\lambda-v(x_0)\|_{L^1(Q(x_0,2\rho))}\\
&\to 0\text{ for }\lambda\to \infty\,.
\end{split}
\end{equation}
By the Radon-Nikodym Theorem,
\[
\zeta_1(x_0)=\lim_{\lambda\to \infty} \fint_{Q(x_0,\rho)}
h_\lambda(v_\lambda,\nabla v_\lambda)\d x\,.
\]
We recall that $G:\mathbb{R}^2\times \mathbb{R}^{2\times 2}_{\mathrm{sym}}\to \mathbb{R}$ is defined by
\[G(v,p)=2\rho^0(S(v,p))\sqrt{1+|v|^2}\,.
\]
By Lemma \ref{lem:hbounds} we have that
\begin{equation}
|G(v,p)-G(\tilde v,p)|\lesssim |v-\tilde v| \max
(G(v,p),G(\tilde v,p))\,.\label{eq:59}
\end{equation}
By combining \eqref{eq:36} with the definition \eqref{eq:55} of $h_\lambda$,
we have that $h_\lambda( v_\lambda,\nabla v_\lambda)\leq G(
v_\lambda,\nabla v_\lambda)$, and hence
\[
\zeta_1(x_0)\leq \limsup_{\lambda\to \infty} \fint_{Q(x_0,\rho)}
G(v_\lambda,\nabla v_\lambda)\d x\,.
\]
By \eqref{eq:58} and \eqref{eq:59}, we obtain
\[
\begin{split}
\zeta_1(x_0)&\leq \limsup_{\lambda\to\infty}\fint_{Q(x_0,\rho)}
G(v(x_0),\nabla v_\lambda)\d x \\
&=\limsup_{\lambda\to\infty}\int_{Q(0,1)} G(v(x_0),\nabla
v_\lambda^{(\rho)})\d x\,.
\end{split}
\]
By Theorem \ref{thm:delladio} and \eqref{eq:57}, we finally get
\[
\begin{split}
\zeta_1(x_0)&\leq \int_{Q(0,1)} G(v(x_0),\nabla v(x_0))\d x\\
&=G(v(x_0),\nabla v(x_0))\,,
\end{split}
\]
which proves \eqref{eq:17}.
\medskip
{\em Step 4.}
For $|D^sv|\ecke C_{v}$-almost every $x_0$, we have that by Alberti's rank one Theorem \cite{alberti1993rank} and Theorem \ref{thm:BVblow}
the following holds true: There exists a sequence
$\rho_l\downarrow 0$ and
a monotone function $\psi\in BV(-1/2,1/2)$
such that the rescaled functions
\[
v^{(\rho_l)}(x):=\frac{\rho_l}{|Dv|(Q_\xi(x_0,\rho_l))}\left(
v(x_0+\rho_l x)-\fint_{Q_\xi(x_0,\rho_l)}v (x')\d x'\right)
\]
converge for $l\to \infty$ in $L^1(Q_\xi;\mathbb{R}} \newcommand{\Z}{\mathbb{Z}^2)$ to
the function
\begin{equation*}
\Psi:x\mapsto \xi \psi(x\cdot \xi)\,,
\end{equation*}
where $\xi\in S^1$ fulfills $\frac{\d(Dv)}{\d|Dv|}(x_0)=\xi\otimes
\xi$.
Also, from Theorem \ref{thm:RN}, we have that
\begin{equation}
\label{eq:61}
\left.\begin{split} Dv^{(\rho_l)}&\to D\Psi \\ |Dv^{(\rho_l)}|&\to |D\Psi|\end{split}\right\}\text{ weakly * in the sense of measures.}
\end{equation}
From now on, in order to alleviate the notation, we are going to omit the index
$l$ from $\rho_l$, and we write $\lim_{\rho\to 0}$ for $\lim_{l\to\infty}$.
As a consequence of the convergence of $v^{(\rho)}\to\Psi$ in $L^1$, we have in particular that $x_0$ is a Lebesgue point of $v$,
\[
\lim_{\rho\to 0}\fint_{Q(x_0,\rho)}|v-v(x_0)|\d x=0\,.
\]
By $|Dv^{(\rho)}|(Q_\xi)=1$ and the lower semicontinuity of total variation, we have
that $|D\Psi|(Q_\xi)=|D\psi|(-1/2,1/2)\leq 1$.
\medskip
Now we choose $\rho(\lambda)\to 0$ such that $\mu(\partial
Q(x_0,\rho(\lambda)))=0$, $\varepsilon/\rho\to 0$, and
\begin{equation*}
\zeta_2(x_0)=
\lim_{\lambda\to \infty}\frac{1}{|D^sv
|(Q_\xi(x_0,\rho(\lambda)))}\int_{Q_\xi(x_0,\rho(\lambda))}h_\lambda(v_\lambda,\nabla v_\lambda)\d x\,.
\end{equation*}
Writing $\rho\equiv\rho(\lambda)$, we note that
\begin{equation}
\begin{split}
(v^{(\rho)})_{\varepsilon/\rho}(x)&=\frac{\rho}{|Dv|(Q_\xi(x_0,\rho))}\left(v_\lambda(x_0+\rho x)-\fint_{Q_\xi(x_0,\rho)}v(x')\d x'\right)\\
\left(\nabla (v^{(\rho)})_{\varepsilon/\rho}\right)(x)
&=\frac{\rho^2}{|Dv|(Q_\xi(x_0,\rho))} \left(\nabla v_\lambda\right)(x_0+\rho x)\,.
\end{split}\label{eq:44}
\end{equation}
As in the previous step, using the fact that $x_0$ is a Lebesgue point for $v$, we obtain
\begin{equation}
\label{eq:60}
\sup_{x\in Q(x_0,\rho)}|v_\lambda(x)-v(x_0)|\to 0 \text{ as }\lambda\to \infty\,.
\end{equation}
Using $h_\lambda\leq G$ (see again step 3),
we have that
\begin{equation}
\label{eq:50}
\begin{split}
\limsup_{\lambda\to\infty}&\frac{1}{|D^sv
|(Q_\xi(x_0,\rho))}\int_{Q_\xi(x_0,\rho)} h_\lambda\left(v_\lambda
,\nabla v_\lambda
\right)\d x\\
&\leq \limsup_{\lambda\to\infty}\frac{1}{|D^s
v|(Q_\xi(x_0,\rho))}\int_{Q_\xi(x_0,\rho)} G\left(v_\lambda
,\nabla v_\lambda
\right)\d x\,.
\end{split}
\end{equation}
By a change of variables, we obtain
\begin{equation*}
\label{eq:37}
\int_{Q_\xi(x_0,\rho)} G\left(v_\lambda,\nabla v_\lambda
\right)\d x=\rho^2\int_{Q_\xi}G\left(v_\lambda
(x_0+\rho y),\nabla v_\lambda
(x_0+\rho y)\right)\d y\,.
\end{equation*}
Combining this with \eqref{eq:44} and \eqref{eq:50}, we obtain
\begin{equation*}
\begin{split}
\limsup_{\lambda\to\infty}&\frac{1}{|D^sv
|(Q_\xi(x_0,\rho))}\int_{Q_\xi(x_0,\rho)} h_\lambda\left(v_\lambda
,\nabla v_\lambda
\right)\d x\\
&\leq \limsup_{\lambda\to\infty}\int_{Q_\xi} G\left(v_\lambda(x_0+\rho y)
,\nabla \left(v^{(\rho)}\right)_{\varepsilon/\rho}
\right)\d y\,.
\end{split}
\end{equation*}
By \eqref{eq:60}, this yields
\begin{equation}
\label{eq:51}
\begin{split}
\limsup_{\lambda\to\infty}&\frac{1}{|D^sv
|(Q_\xi(x_0,\rho))}\int_{Q_\xi(x_0,\rho)} h_\lambda\left(v_\lambda
,\nabla v_\lambda
\right)\d x\\
&\leq \limsup_{\lambda\to\infty}\int_{Q_\xi} G\left(v(x_0)
,\nabla \left(v^{(\rho)}\right)_{\varepsilon/\rho}
\right)\d x\,.
\end{split}
\end{equation}
By \eqref{eq:61} and $\varepsilon/\rho\to 0$, we have that
\begin{equation*}
\label{eq:62}
\left.\begin{split} \nabla\left(v^{(\rho)}\right)_{\varepsilon/\rho}\L^2&\to D\Psi \\ \left|\nabla \left(v^{(\rho)}\right)_{\varepsilon/\rho}\right|\L^2&\to |D\Psi|\end{split}\right\}\text{ weakly * in the sense of measures.}
\end{equation*}
By Theorem \ref{thm:delladio}, it follows that
\[
\begin{split}
\limsup_{\lambda\to\infty}\int_{Q_\xi} G\left(v(x_0)
,\nabla \left(v^{(\rho)}\right)_{\varepsilon/\rho}
\right)\d x &= \int_{Q_\xi}G\left(v(x_0)
,\xi\otimes\xi
\right)\d|D\Psi| \\
&\leq G\left(v(x_0)
,\xi\otimes\xi
\right)\,,
\end{split}
\]
which (together with \eqref{eq:51}) proves
\eqref{eq:19}.
\medskip
{\em Step 5.} For $\H^1\ecke J_{\nabla u}$-almost every $x_0$, we have that by Theorem \ref{thm:BVblow}
the following holds true:
The rescaled functions
\[
v^{(\rho)}(x)=v
(x_0+\rho x)-\fint_{Q_\nu(x_0,\rho)}v(x')\d x'
\]
converge in $L^1(2Q_\nu;\mathbb{R}} \newcommand{\Z}{\mathbb{Z}^2)$ to
the function
\begin{equation*}
\Psi:x\mapsto \begin{cases} v^+(x_0)&\text{ if } x\cdot\nu >0\\
v^-(x_0) &\text{ if } x\cdot\nu \leq 0\,.\end{cases}
\end{equation*}
Additionally we have, for every $\beta>0$,
\[
\lim_{\rho\to 0}\int_{(1+\beta)Q_\nu}|\nabla v^{(\rho)}|\d x=|D\Psi|((1+\beta)Q_\nu)\,.
\]
Now we choose $\rho(\lambda)$ such that $\rho\to 0$,
and $\varepsilon/\rho\to 0$ as $\lambda\to \infty$.
Then we may write, again using $h_\lambda\leq G$,
\begin{equation}
\label{eq:26}
\begin{split}
\lim_{\lambda\to \infty}\frac{1}{\rho}\int_{Q_\nu(x_0,\rho)} h_\lambda(v_\lambda
,\nabla v_\lambda)\d x
&\leq \liminf_{\lambda\to \infty}\frac{1}{\rho}\int_{Q_\nu(x_0,\rho)} G(v_\lambda
,\nabla v_\lambda)\d x\\
&\leq \liminf_{\lambda\to \infty}\int_{Q_\nu} \rho G(v_\lambda
(x_0+\rho x),\nabla v_\lambda(x_0+\rho x))\d x\\
&=\liminf_{\lambda\to \infty}\int_{Q_\nu} G\left(\eta_{\varepsilon/\rho}*v^{(\rho)},
\nabla\eta_{\varepsilon/\rho}*v^{(\rho)}\right)\d x\,.
\end{split}
\end{equation}
Since $\varepsilon/\rho\to 0$, we have that
\[
\begin{split}
\eta_{\varepsilon/\rho}*v^{(\rho)}&\to \Psi \quad\text{ in }L^1(Q_\nu)\\
\lim_{\lambda\to \infty}\int_{Q_\nu}|\nabla \eta_{\varepsilon/\rho}*v^{(\rho)}|\d x&=|D\Psi|(Q_\nu)\,.
\end{split}
\]
By Proposition \ref{prop:up_jump_blowup}, which will be proved in Section \ref{sec:proof-jump} below, this implies
\[
\begin{split}
\limsup_{\lambda\to\infty}\int_{Q_\nu}& G(\eta_{\varepsilon/\rho}*v^{(\rho)}, \nabla \eta_{\varepsilon/\rho}*v^{(\rho)})\d x\\
&\leq 2\sqrt{1+|\nu^\bot\cdot v(x_0)|^2} \arccos(\mathbf{N}(v^+(x_0))\cdot\mathbf{N}(v^-(x_0)))\,,
\end{split}
\]
and combining the latter with \eqref{eq:26} proves \eqref{eq:18}, since the left hand side of \eqref{eq:26} is just $\zeta_3(x_0)$.
\end{proof}
\section{Proof of the upper bound for the blow-up of the jump part}
\label{sec:proof-jump}
We recall that $G(v,\xi)=2\rho^0(S(v,\xi))\sqrt{1+|v|^2}$. In the present section, we write $I=[-1/2,1/2]$.
\begin{lemma}
\label{lem:cap_estim}
Let $v_j\in C^1( Q)$ such that $v_j\to 0$ in $W^{1,1}(Q)$. Furthermore, let $P:\mathbb{R}^2\to\mathbb{R}$ be the projection $P(x_1,x_2)=x_1$, $\Delta>0$ and for $j\in \mathbb{N}$,
\[
A_j:=\{x\in Q:|v_j(x)|>\Delta\}\,.
\]
Then $\L^1(P(A_j))\to 0$ as $j\to \infty$.
\end{lemma}
\begin{proof}
By the continuity of the trace operator $W^{1,1}(Q)\to L^1(\partial Q)$, we have that
\[
v_j(\cdot,-1/2)\to 0,\quad v_j(\cdot,1/2)\to 0 \quad\text{ in }L^1(I)\,.
\]
Setting
\[
\begin{split}
J^1_j&:=\left\{t\in I: |v_j(t,-1/2)|+|v_j(t,1/2)|>\frac{\Delta}{2}\right\}\,,\\
J^2_j&:=\left\{t\in I:\int_I|\partial_{x_2}v_j(t,x_2)|\d x_2>\frac{\Delta}{2}\right\}\,,
\end{split}
\]
we have that $P(A_j)\subset J^1_j\cup J^2_j$ by the fundamental theorem of calculus applied on each vertical slice, and $\L^1(J_j^1)+\L^1(J_j^2)\to 0$ by Chebyshev's inequality together with the $L^1$ convergences above. This proves the claim.
\end{proof}
\begin{lemma}
\label{lem:G1d_estim} Let $a_1\in\mathbb{R}$ and $w\in C^1(I)$. Then
\[
\begin{split}
\int_{-1/2}^{1/2}G&\left( \left(\begin{array}{c}a_1\\w(t)\end{array}\right),\left(\begin{array}{cc} 0& 0\\0&w'(t)\end{array}\right)\right)\d t \\
&\leq 2 \sqrt{1+a_1^2}\left(\arccos \mathbf{N}\left(\begin{array}{c}a_1\\w(-1/2)\end{array}\right)\cdot \mathbf{N}\left(\begin{array}{c}a_1\\w(+1/2)\end{array}\right)
+2\int_{-1/2}^{1/2}|\min(0,w'(t))|\d t\right)\,.
\end{split}
\]
\end{lemma}
\begin{proof}
First we recall the geometric meaning of the integral: Let
\[
W(x_1,x_2)=\int_0^{x_2}w(t)\d t+a_1 x_1\,.
\]
Then we have that
\[
\begin{split}
\int_{-1/2}^{1/2} G\left( \left(\begin{array}{c}a_1\\w(t)\end{array}\right),\left(\begin{array}{cc} 0& 0\\0&w'(t)\end{array}\right)\right)\d t
& =\int_Q G(\nabla W,\nabla^2 W)\d x\\
&= \int_{\mathrm{gr} W|_Q} 2\rho^0(S_{\mathrm{gr} W|_Q})\d\H^2\,.
\end{split}
\]
As in the proof of Lemma \ref{lem:Kcalc}, we may rotate, cut and glue the graph $\mathrm{gr}\, W|_Q$ to obtain the graph of the function
\[
\tilde W:\left[-\frac{\sqrt{1+a_1^2}}{2},\frac{\sqrt{1+a_1^2}}{2}\right]\times[-1/2,1/2]\to \mathbb{R}\,,
\]
defined by
\[
\tilde W(x_1,x_2)=\frac{1}{\sqrt{1+a_1^2}} \int_0^{x_2} w(t)\d t\,,
\]
without changing the ``energy'', i.e., $\tilde W$ satisfies
\[
\int_{\mathrm{gr} W|_Q} 2\rho^0(S_{\mathrm{gr} W|_Q})\d\H^2=\int_{\mathrm{gr} \tilde W|_{\tilde Q}} 2\rho^0(S_{\mathrm{gr} \tilde W|_{\tilde Q}})\d\H^2\,,
\]
where we have written $\tilde Q=\left[-\frac{\sqrt{1+a_1^2}}{2},\frac{\sqrt{1+a_1^2}}{2}\right]\times[-1/2,1/2]$. Now $\tilde W$ is constant in $x_1$-direction, which implies that the normal vector is contained in the $x_2$-$x_3$ plane,
\[\mathbf{N}(\nabla\tilde W)\in\{(0,x_2,x_3)^T:x_2^2+x_3^2=1\}\,,\]
which implies
\[\rho^0(S_{\mathrm{gr} \tilde W})=\left(1+|\partial_{x_2}\tilde W|^2\right)^{-1/2}|\partial_{x_2}\mathbf{N}(\nabla\tilde W)|\,.
\]
This leaves us, after the passage back to a one-dimensional setting, with the following calculation:
\[
\int_{\mathrm{gr} \tilde W|_{\tilde Q}} 2\rho^0(S_{\mathrm{gr} \tilde W|_{\tilde Q}})\d\H^2
=
2\sqrt{1+a_1^2}\int_{-1/2}^{1/2} \left|\partial_t \mathbf{N}\left(\begin{array}{c}0\\w(t)/\sqrt{1+a_1^2}\end{array}\right)\right|\d t\,.
\]
We observe $\mathbf{N}((0,w(t)/\sqrt{1+a_1^2})^T)=\mathbf{N}((0,w(t))^T)$. By giving $\{(0,x_2,x_3)^T:x_2^2+x_3^2=1\}\simeq S^1$ an orientation (the one corresponding to increasing $w(t)$), the total variation of the curve
$\mathbf{N}((0,w(\cdot))^T)$ can be estimated from above by the distance of its endpoints on $S^1$, plus twice the integral of $|\partial_t\mathbf{N}((0,w(t))^T)|$ over the region where $\partial_t\mathbf{N}((0,w(t))^T)$ is anti-parallel to the orientation:
\[
\begin{split}
\int_{-1/2}^{1/2} \left|\partial_t \mathbf{N}\left(\begin{array}{c}0\\w(t)\end{array}\right)\right|\d t
&\leq \arccos \mathbf{N}\left(\begin{array}{c}0\\w(-1/2)\end{array}\right)\cdot \mathbf{N}\left(\begin{array}{c}0\\w(+1/2)\end{array}\right)\\
&\quad+2\int_{\{t:w'(t)\leq 0\}} \left|\partial_t \mathbf{N}\left(\begin{array}{c}0\\w(t)\end{array}\right)\right|\d t\\
&\leq
\arccos \mathbf{N}\left(\begin{array}{c}a_1\\w(-1/2)\end{array}\right)\cdot \mathbf{N}\left(\begin{array}{c}a_1\\w(+1/2)\end{array}\right)\\
&\quad +2 \int_{-1/2}^{1/2}|\min(0,w'(t))|\d t\,.
\end{split}
\]
In the last inequality, we have used the fact that the angle between the normals at the endpoints does not change when applying a rotation, and that $|\partial_t\mathbf{N}(\tilde w(t))|\leq|\partial_t \tilde w(t)|$ for any $C^1$ curve $t\mapsto\tilde w(t)$. This completes the proof of the lemma.
\end{proof}
\begin{proposition}
\label{prop:up_jump_blowup}
Let $a\in \mathbb{R}^2$, $\nu\in S^1$, $t\in \mathbb{R}$, $b=a+t\nu$, and $\Psi:Q_\nu\to\mathbb{R}^2$ defined by
\[
\Psi(x)=\begin{cases}a&\text{ if } x\cdot \nu\leq 0\\b& \text{ if }x\cdot \nu>0\,.\end{cases}
\]
Assume that $v_j\in C^1(Q_\nu;\mathbb{R}^2)$ is a sequence such that
\[
\begin{split}
v_j&\to\Psi \quad\text{ in } L^1(Q_\nu;\mathbb{R}^2)\\
\int_{Q_\nu}|\nabla v_j|\d x &\to |D\Psi|(Q_\nu)=|t|\,.
\end{split}
\]
Then
\[
\limsup_{j\to\infty} \int_{Q_\nu} G(v_j,\nabla v_j)\d x\leq 2\sqrt{ 1+|a\cdot\nu^\bot|^2}\arccos \mathbf{N}(b)\cdot \mathbf{N}(a)\,.
\]
\end{proposition}
\begin{proof}
We may assume without loss of generality that $t\geq 0$, $\nu=e_2$, which implies $b=(a_1,a_2+t)^T$. Let $P:\mathbb{R}^2\to\mathbb{R}$ be defined by $P(x_1,x_2)=x_1$.
\medskip
Fix $\Delta>0$. For $j\in\mathbb{N}$, we will split $Q\equiv Q_\nu$ into a ``good'' and a ``bad'' set, and estimate the energy on these sets separately. We write $v_j=((v_j)_1,(v_j)_2)^T$ and define the bad set by setting
\[
\begin{split}
\tilde A_j&:=\{x\in Q: |(v_j)_1( x)-a_1|>\Delta\}\,,\\
A_j&:=P^{-1}(P(\tilde A_j))\,.
\end{split}
\]
The assumptions of the present proposition imply in particular that
\[
\liminf_{j\to\infty}\int_Q|\nabla (v_j)_2|\d x\geq |D\Psi_2|(Q_\nu)=t\,,
\]
and hence
\begin{equation}
\label{eq:27}
\begin{split}
(v_j)_1-a_1&\to 0 \quad\text{ in }W^{1,1}(Q)\\
\partial_{x_1}(v_j)_2&\to 0 \quad\text{ in }L^{1}(Q)\,.
\end{split}
\end{equation}
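A way to see \eqref{eq:27}, sketched here for the reader's convenience: the assumptions say that $v_j\to\Psi$ strictly in $BV(Q;\mathbb{R}^2)$, so Reshetnyak's continuity theorem applies to the positively one-homogeneous integrands $A\mapsto|Ae_1|$ and $A\mapsto|e_1\cdot Ae_2|$. Since $\frac{\d D\Psi}{\d|D\Psi|}=e_2\otimes e_2$, both limits vanish, i.e., $\nabla(v_j)_1\to 0$ and $\partial_{x_1}(v_j)_2\to 0$ in $L^1(Q)$, while $(v_j)_1-a_1\to 0$ in $L^1(Q)$ follows directly from $v_j\to\Psi$ in $L^1$.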
Next we claim that there exists a sequence $\beta_j\downarrow 0$ such that for any (measurable) $J\subset [-1/2,1/2]$ and any $j\in\mathbb{N}$, we have that
\begin{equation}
\label{eq:54} \left| \int_J |v_j(x_1,1/2)-v_j(x_1,-1/2)|\d x_1- t\L^1(J)\right|\leq \beta_j\,.
\end{equation}
To prove this claim, we note that, thanks to the continuity of the trace operator $W^{1,1}(Q)\to L^1(\partial Q)$, we have
\[
\left.\begin{split}
v_j(\cdot,-1/2)&\to a \\
v_j(\cdot,1/2)&\to b
\end{split}\right\}\quad\text{ in }L^1(I)\,.
\]
Hence, with
\[
\beta_j:= \int_{-1/2}^{+1/2} |v_j(x_1,1/2)-v_j(x_1,-1/2)-(b-a)|\d x_1\,,
\]
we obtain the claim \eqref{eq:54} by the triangle inequality.
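Explicitly, since $|b-a|=t$ and $J\subset I$,
\[
\left| \int_J |v_j(x_1,1/2)-v_j(x_1,-1/2)|\d x_1- t\L^1(J)\right|\leq \int_J \Big||v_j(x_1,1/2)-v_j(x_1,-1/2)|-|b-a|\Big|\d x_1\leq \beta_j\,.
\]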
Additionally, by possibly increasing $\beta_j$, but still retaining the property $\beta_j\downarrow 0$, we may achieve
\begin{equation}
\label{eq:64}
\left|\int_Q|\nabla v_j|\d x-t\right|\leq\beta_j\,.
\end{equation}
We set $I_j=P(A_j)$. By \eqref{eq:27} and Lemma \ref{lem:cap_estim}, we have that $\L^1(I_j)\to 0$.
We are now in a position to estimate the contribution of the bad sets $A_j$:
\begin{equation}
\label{eq:65}\begin{split}
\limsup_{j\to\infty} \int_{A_j}|\nabla v_j|\d x& =\limsup_{j\to\infty}
\left|\int_{A_j}|\nabla v_j|\d x-t\L^1(I_j)\right|\\
&\leq \limsup_{j\to\infty} \left|\int_{Q}|\nabla v_j|\d x-t\right|
+\left|\int_{Q\setminus A_j}|\nabla v_j|\d x-t\L^1(I\setminus I_j)\right|\\
&=0
\end{split}
\end{equation}
where we have used in the last step that by \eqref{eq:54} and \eqref{eq:64},
\[
t\L^1(I\setminus I_j)-\beta_j\leq
\int_{Q\setminus A_j}|\nabla v_j|\d x\leq t+\beta_j\,,
\]
which yields $t=\lim_{j\to\infty}\int_{Q\setminus A_j}|\nabla v_j|\d x$. The estimate \eqref{eq:65} implies that
\[
\limsup_{j\to\infty}\int_{A_j} G(v_j,\nabla v_j)\d x=0\,.
\]
We now turn our attention to the contribution of the good sets
\[
E_j:=Q\setminus A_j\,.
\]
By Lemma \ref{lem:hbounds}, we have that on $E_j$,
\begin{equation}\label{eq:66}
\begin{split}
\Big|G(v_j,\nabla v_j)&-G\left((a_1,(v_j)_2)^T, \nabla v_j\right)\Big|\\
&\leq C\Delta \max \left(G(v_j,\nabla v_j),G\left((a_1,(v_j)_2)^T, \nabla v_j\right)\right)\,.
\end{split}
\end{equation}
By the subadditivity of $\xi\mapsto G(v,\xi)$, $G(v,\xi)\leq C|\xi|$, $\nabla (v_j)_1\to 0$ in $L^1$ and $\partial_{x_1} (v_j)_2\to 0$ in $L^1$ (see \eqref{eq:27}), we have that
\begin{equation}
\label{eq:67}
\begin{split}
\limsup_{j\to\infty} &\int_{E_j}
G\left((a_1,(v_j)_2)^T, \nabla v_j\right)\d x\\
& \leq \limsup_{j\to\infty} \int_{E_j} G\left((a_1,(v_j)_2)^T, \left(\begin{array}{cc}0& 0\\0&\partial_{x_2} (v_j)_2\end{array}\right)\right)\d x\,.
\end{split}
\end{equation}
Combining \eqref{eq:66} and \eqref{eq:67}, we obtain
\[
\begin{split}
\limsup_{j\to\infty} &\int_{E_j}
G\left(v_j, \nabla v_j\right)\d x\\
&\leq
\frac{1+C\Delta}{1-C\Delta}
\limsup_{j\to\infty} \int_{E_j} G\left((a_1,(v_j)_2)^T, \left(\begin{array}{cc}0& 0\\0&\partial_{x_2} (v_j)_2\end{array}\right)\right)\d x\,.
\end{split}
\]
By Lemma \ref{lem:G1d_estim}, we obtain
\[
\begin{split}
\frac{1-C\Delta}{1+C\Delta} \limsup_{j\to\infty}& \int_{E_j}
G\left(v_j, \nabla v_j\right)\d x\\
&\leq
2\sqrt{1+a_1^2}\int_{-1/2}^{1/2} \arccos \mathbf{N}(v_j(x_1,-1/2))\cdot\mathbf{N}(v_j(x_1,1/2))\d x_1\\
&\quad+4\sqrt{1+a_1^2}\int_Q\left|\min(0,\partial_{x_2}(v_j)_2)\right|\d x\,.
\end{split}
\]
Since $v_j(\cdot,-1/2)\to a$ and $v_j(\cdot,1/2)\to b$ in $L^1(I)$, and
\[
\int_Q\left|\min(0,\partial_{x_2}(v_j)_2)\right|\d x\to 0\,,
\]
the proof of the proposition is completed by sending $\Delta\to 0$.
\end{proof}
|
1,108,101,565,736 | arxiv | \section{Introduction}
\label{sec:intro}
Deep learning has witnessed great success in image recognition \cite{he2015delving} using convolutional neural networks (CNNs) and has been widely explored in the neuroimaging field \cite{vieira2017dlreview}.
Most previous studies in the neuroimaging field either analyze features extracted from predefined regions of interest (ROIs) \cite{autism} or feed the 3D imaging volume directly into 3D convolutional neural networks.
The former approach potentially introduces too much prior into the model and limits the input representation.
While the latter approach has the advantage of being agnostic and prior-free, adequate priors from previous neuroimaging studies could be helpful to regularize the input information.
The human cortex is commonly modeled as a 2D manifold sheet-like structure, despite the presence of sulci/gyri folds.
Therefore, 2D CNNs can, in principle, be applied on the cortical sheet after flattening onto a 2D plane \cite{fischl1999cortical}. However, inevitable distortions in the flattening process affect the data representation.
Surface cutting has been proposed to alleviate distortions caused by the intrinsic curvature of the cortical surface,
but this again introduces artificial changes to the topology of the surface \cite{fischl1999cortical}.
Modeling the cortical surface of each hemisphere with a sphere is more accurate and desirable \cite{fischl1999cortical},
and the spherical coordinate system is common practice in the neuroimaging field,
as it can preserve the topological structure of the cortical surface.
But 2D CNNs cannot be directly applied on a sphere.
A spherical CNNs framework was recently introduced \cite{s2cnn} and is explored for the first time in this study to analyze the human cortex in a spherical representation.
Spherical CNNs were proposed to model spherical data, e.g., in molecular modeling and 3D shape analysis \cite{s2cnn,eccv_scnn}, and have shown promising performance.
Alzheimer's disease (AD) is a neurodegenerative disease impacting a large population and is the most common cause of dementia.
Accurate diagnosis of AD and mild cognitive impairment (MCI) is of increasing importance.
Cortical morphometric measures such as cortical thickness derived from T1-weighted structural MRI
have been demonstrated to be important biomarkers for the diagnosis of AD and MCI, which are characterized by cortical gray matter atrophy.
In this work, we apply a spherical CNNs based framework on the cortical thickness data derived from structural MRI in Alzheimer's Disease Neuroimaging Initiative (ADNI)\footnote{\url{http://adni.loni.usc.edu/}} cohort,
for the AD versus cognitively normal (CN) classification task, and for MCI conversion prediction within two years.
To the best of our knowledge, this is the first work applying spherical CNNs on human cortex data and demonstrates the potential for diverse studies on discriminative analyses of human cortex neuroimaging data.
\begin{figure*}[!hbtp]
\centering
\includegraphics[width=17.5cm]{figs_framework_new.pdf}
\vspace*{-3mm}
\caption{\footnotesize{Illustration of the spherical CNNs framework proposed for AD diagnosis based on cortical morphometric data. The basic operation blocks are denoted as arrows and listed under the network structure.}}
\label{Fig:framework}
\end{figure*}
\section{Method}
\label{sec:method}
\subsection{Cortical Modeling}
The cortical surfaces were reconstructed using FreeSurfer \cite{dale1999cortical} and morphed to the spherical representation by minimizing areal and distance distortions.
All the individual cortical surfaces were registered to a spherical atlas in the \textit{fsaverage} space matching cortical folding patterns \cite{fischl1999cortical}.
At each vertex of the atlas cortical surface, multiple measures including thickness, surface area, volume, curvature, sulc, Jacobian determinant (warping to the atlas) can be derived from FreeSurfer.
The sensitivities of different measures vary across diseases, and each measure can naturally be regarded as a channel in the data representation.
In this study, we used cortical thickness as it has been previously demonstrated to be highly sensitive for AD diagnosis \cite{tenmethod,eskildsen2013prediction}.
We used a sampling grid with a bandwidth of $64$ to sample the cortical surfaces,
generating a $128\times 128$ matrix for each hemisphere.
For each point in the sampling grid, we queried the closest $10$ vertices in the cortical surface in geodesic distance and used the average measure as the matrix value.
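A sketch of this sampling step is given below (illustrative code: \texttt{vertices} holds the spherical-surface coordinates, assumed normalized to the unit sphere, and \texttt{thickness} the per-vertex values; a Euclidean nearest-neighbor query via a k-d tree is used here as a stand-in for the geodesic query described above).
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def sample_to_grid(vertices, thickness, bandwidth=64, k=10):
    b = bandwidth
    # equiangular grid: beta_j = pi(2j+1)/(4b), alpha_k = 2*pi*k/(2b)
    beta = np.pi * (2 * np.arange(2 * b) + 1) / (4 * b)
    alpha = 2 * np.pi * np.arange(2 * b) / (2 * b)
    B, A = np.meshgrid(beta, alpha, indexing="ij")
    grid = np.stack([np.sin(B) * np.cos(A),
                     np.sin(B) * np.sin(A),
                     np.cos(B)], axis=-1).reshape(-1, 3)
    tree = cKDTree(vertices)          # vertices: (n_vert, 3) unit vectors
    _, idx = tree.query(grid, k=k)    # k nearest vertices per grid point
    return thickness[idx].mean(axis=1).reshape(2 * b, 2 * b)
\end{verbatim}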
\subsection{Spherical CNNs}
Spherical CNNs are extensions of regular CNNs formulation on the plane to spherical data, migrating the translational equivariance to rotational equivariance.
Hence, specially-designed convolution operations are re-formulated on the sphere S2 and on the 3D rotation group SO(3) (SO = `special orthogonal group').
More theoretical underpinnings can be found in \cite{s2cnn}.
Elements in the SO(3) space are represented in the Euler ZYZ data format as:
\vspace{0cm}
\begin{equation}
Z(\alpha)Y(\beta)Z(\gamma)
\end{equation}
\vspace{-0.6cm}
\noindent where $Z(\cdot)$ denotes rotation around the $Z$ axis, $Y(\cdot)$ denotes rotation around the $Y$ axis, $\alpha \in [0,2\pi], \beta \in [0,\pi], \gamma \in [0,2\pi]$ are the rotation angles.
Elements in the S2 space can be similarly represented as:
\vspace{-0.1cm}
\begin{equation}
Z(\alpha)Y(\beta)Z(0)
\end{equation}
\vspace{-0.6cm}
The network architecture is similar to regular CNNs, with spherical convolutional blocks hierarchically layered.
The main parameters include bandwidth $b$, which is similar to the spatial dimension in regular CNNs, and number of channels $c$ at each convolution block.
In this work, we use a simple network structure with three convolutional layers interleaved with 3D batch normalization (BN) and rectifier linear unit (ReLU).
Illustration of the network structure is shown in Fig. \ref{Fig:framework}.
The number of channels doubles and the spatial dimensions halve with depth.
Specifically, we denote the S2 convolution with bandwidth $b$ and channel $c$ as S2Conv($b$, $c$),
and the SO(3) convolution with bandwidth $b$ and channel $c$ as SO3Conv($b$, $c$).
The fully convolutional part of the network is sequenced as: S2Conv(32, 32) - BN - ReLU - SO3Conv(16, 64) - BN - ReLU - SO3Conv(8, 128) - BN - ReLU.
The three dimensions $\alpha,\beta,\gamma$ of the feature maps at each layer are all $2b$.
Then we apply a weighted global average pooling (wGAP) step,
consisting of integrating over the spatial dimensions of the convolutional feature maps and correcting for the non-uniformity of the grid in the Y axis.
The two hemispheres of the human cortex are treated as two sets of spherical data sharing the same diagnosis label.
We therefore share the fully convolutional part of the network between left and right hemispheres.
The integrated features from left and right hemispheres are concatenated and fed into the last fully connected layer with softmax activation function for the final disease classification.
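A minimal sketch of this shared-trunk design is shown below (PyTorch-style pseudocode; \texttt{S2Conv}, \texttt{SO3Conv} and \texttt{so3\_integrate} are hypothetical wrappers around the spherical convolution and wGAP operations of \cite{s2cnn}, not a verbatim API).
\begin{verbatim}
import torch
import torch.nn as nn
# hypothetical wrappers around the spherical layers of the s2cnn reference code
from spherical_layers import S2Conv, SO3Conv, so3_integrate

class HemiTrunk(nn.Module):
    """Fully convolutional trunk shared by both hemispheres."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            S2Conv(b_in=64, b_out=32, c_in=1, c_out=32),
            nn.BatchNorm3d(32), nn.ReLU(),
            SO3Conv(b_in=32, b_out=16, c_in=32, c_out=64),
            nn.BatchNorm3d(64), nn.ReLU(),
            SO3Conv(b_in=16, b_out=8, c_in=64, c_out=128),
            nn.BatchNorm3d(128), nn.ReLU())

    def forward(self, x):            # x: (batch, 1, 128, 128) on S2
        f = self.net(x)              # (batch, 128, 16, 16, 16) on SO(3)
        return so3_integrate(f)      # wGAP -> (batch, 128)

class SphericalAD(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.trunk = HemiTrunk()     # parameters shared between hemispheres
        self.fc = nn.Linear(2 * 128, n_classes)

    def forward(self, left, right):
        h = torch.cat([self.trunk(left), self.trunk(right)], dim=1)
        return self.fc(h)            # softmax is applied inside the loss
\end{verbatim}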
We also compared to regular CNNs on the same sampled input,
with the same architecture using regular 2D convolutions, replacing 3D BN with 2D BN, and doubling the channel dimensions to ensure approximately same number of parameters.
Denoting the convolution operation with $c$ channels as Conv($c$),
the fully convolutional part of the network tested for comparison is:
Conv(64) - BN - ReLU - Conv(128) - BN - ReLU - Conv(256) - BN - ReLU.
The convolution layers have a stride of $2$ and a kernel size of $3$.
For model training, we used the cross-entropy loss and optimized using stochastic gradient descent (SGD) with momentum 0.9.
We used a batch size of 8 and ran the algorithm for 200 epochs, with a 0.1 learning rate at the first 100 epochs and 0.01 learning rate for the last 100 epochs.
\subsection{Activation Maps}
Spatial localization of the features used by CNNs can be explored using the class activation map \cite{cam}, which has been applied in the medical imaging field \cite{nam}.
In this study, we extend the class activation map to spherical CNNs, generating class activation maps on the sphere.
The activation maps in spherical CNNs are defined in SO(3) space.
According to Equations 1 and 2, we selected the activation maps at $\gamma=0$ to explore the activation patterns and corrected for the non-uniformity of the grid in the Y axis, as in the wGAP step.
We performed weighted average of the corrected activation maps using the weights from the fully-connected layer.
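A sketch of this weighting is given below (assuming per-class weights \texttt{w} from the fully connected layer and SO(3) feature maps \texttt{F} stored in $(\alpha,\beta,\gamma)$ order, both hypothetical names; the $\sin\beta$ factor is the usual quadrature correction for the non-uniform equiangular grid).
\begin{verbatim}
import numpy as np

def class_activation_map(F, w, b=8):
    """Weighted average of the SO(3) feature maps at gamma = 0."""
    maps = F[:, :, :, 0]                       # gamma = 0 slice: (C, 2b, 2b)
    beta = np.pi * (2 * np.arange(2 * b) + 1) / (4 * b)
    maps = maps * np.sin(beta)[None, None, :]  # correct beta non-uniformity
    return np.tensordot(w, maps, axes=1)       # (2b, 2b) activation map
\end{verbatim}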
\section{Result}
\label{sec:result}
\subsection{Data and Setup}
We used the data from ADNI-1 cohort.
We screened subjects per diagnosis group as follows: CN subjects as having stayed cognitively normal during a follow-up period of at least two years,
MCI stable (MCI-s) subjects as having stayed MCI during a follow-up period of at least two years,
MCI progression (MCI-p) subjects as having converted to AD within two years,
and AD patients.
Subject information for each diagnosis group can be found in Table \ref{Table:adniscan}.
\begin{table}[!hbtp]
\centering
\small{
\caption{\small{Subject Information}}\label{Table:adniscan}
\begin{tabular}{c|c|c|c|c|c}
\hline
Diagnosis & CN & \makecell{MCI-s} & \makecell{MCI-p} & AD & Total\\
\hline
N & 151 & 114 & 136 & 188 & 589 \\ \hline
\makecell{Age\\(std)} & \makecell{75.64 \\(5.25)} & \makecell{74.90\\(7.33)} & \makecell{74.69\\(6.95)} & \makecell{75.18\\(7.50)} & \makecell{75.13\\(6.82)} \\ \hline
\makecell{Gender\\M/F} & 74/77 & 72/42 & 85/51 & 99/89 & 330/259 \\ \hline
\end{tabular}}
\end{table}
We used the baseline T1-weighted MRI scans acquired using 1.5 T MRI scanners,
pre-processed with the standard Mayo Clinic pipeline\footnote{http://adni.loni.usc.edu/methods/mri-analysis/mri-pre-processing/}
and post-processed by UCSF using FreeSurfer 4.3 \cite{jack2008adni}.
We performed two binary classification tasks: AD vs. CN and MCI-p vs. MCI-s.
In each classification task, we performed 10-fold cross-validation with the fold split generated from random stratified sampling ensuring similar distribution of diagnosis, age, and gender in each split.
In each experiment, we set out one fold as test set, one fold as validation set, and the rest of the folds as training set.
At each fold, the model with the maximum validation accuracy is selected as the optimal model.
The probability output of all test sets using the optimal models are aggregated together.
We reported the area under curve (AUC) of the receiver operating characteristic (ROC) curve in Fig. \ref{fig:roc},
and also reported the accuracy, sensitivity and specificity.
\subsection{AD vs. CN classification}
The ROC curve for AD vs. CN classification can be found in Fig. \ref{fig:roc} (left). The AUC values for spherical vs. standard CNNs are: 0.915 vs. 0.895. The accuracy (ACC), sensitivity (SEN) and specificity (SPE) values (with 0.5 as threshold) for spherical vs. standard CNNs are: 90.0\% vs. 84.6\%, 89.9\% vs. 84.0\%, 90.1\% vs. 85.4\%. The performance is higher than a previous study also using cortical thickness patterns in ADNI cohort (ACC: 84.5\%, SEN: 79.4\%, SPE: 88.9\%, AUC: 0.905) \cite{eskildsen2013prediction}.
\begin{figure}[t]
\center
\includegraphics[width=7.5cm]{figs_auc.png}
\vspace*{-3mm}
\caption[ ] {\small{ROC of (left) AD vs. CN classification and (right) MCI-p vs. MCI-s classification.}}
\label{fig:roc}
\end{figure}
\subsection{MCI progression prediction}
We further test our model on a more challenging MCI progression prediction task using the same network setting. The ROC curve can be found in Fig. \ref{fig:roc} (right). The ROC AUC values for spherical vs. standard CNNs are: 0.707 vs. 0.657. The accuracy, sensitivity and specificity values (with 0.5 as threshold) for spherical vs. standard CNNs are: 71.6\% vs. 66.4\%, 80.2\% vs. 69.9\%, 61.4\% vs. 62.3\%. The performance is higher than a previous study on 2-year MCI progression prediction also using cortical thickness patterns in ADNI cohort (ACC: 66.7\%, SEN: 59.0\%, SPE: 70.2\%, AUC: 0.673) \cite{eskildsen2013prediction}.
\subsection{Exploratory Visualization}
A population-average AD class activation map of left hemisphere at $\gamma=0$, generated with the spherical CNNs, is shown in Fig. \ref{fig:cam} together with a reference label map from the Desikan-Killiany atlas \cite{aparc} sampled in the same way as the thickness measures.
The colors and orders of the regions in the reference label map are displayed according to the FreeSurfer color lookup table.
We observed two blobs of AD predictive regions: the lower left blob corresponding to regions around medial temporal lobe,
and the upper blob corresponding to regions in the vicinity of supramarginal gyrus.
Both regions are implicated in AD, according to \cite{regions}.
\begin{figure}[t]
\center
\includegraphics[width=7cm]{figs_cam.png}
\vspace*{-3mm}
\caption[ ] {\small{ (Left) Class activation map for AD classification task from the proposed spherical CNN; (Right) Desikan-Killiany atlas in the same space \cite{aparc}.}}
\label{fig:cam}
\end{figure}
\section{Discussion}
Despite promising results obtained via our application of spherical CNNs to cortical measures,
there are several limitations and potential future improvements to be considered:
the input omits subcortical structures, such as the hippocampus, which is one of the brain structures affected by AD and a sensitive biomarker for AD diagnosis \cite{tenmethod,adhc}.
In future work, the hippocampus can be modeled in the same way as general 3D structures \cite{s2cnn,eccv_scnn} and incorporated into the classification model.
We could also use multi-channel input including other measures, such as volume, to obtain a multi-faceted characterization of the cortex.
We shared the fully convolutional part between left and right hemispheres, while we can also use two different sets of parameters, which however doubles the number of parameters for the network. Left and right hemispheres could also be registered into the same space and concatenated as two channels. By doing so, the asymmetry in the input information could be embedded and utilized by the CNNs for the diagnosis or prediction tasks.
Since the spherical CNN formulation is new to the field, there remain variant architectures to test, such as \cite{eccv_scnn}.
A more thorough exploration of parameters (bandwidth, channel), architectures, and properties (fully-convolutional property) is still necessary to fully exploit its potential.
\vspace{-0.2cm}
\section{Conclusion}
In this study, we demonstrate for the first time that the newly introduced spherical CNNs formulation can be an effective deep learning framework for modeling human cortex and performing AD diagnosis task using MRI-based cortical measures. Our results on the ADNI cohort show state-of-the-art classification performance using structural MRI information only.
The spherical CNNs formulation has the potential to be applied to further structural MRI studies, on other neurological diseases, and other modalities such as fMRI and PET, as long as the measures can be projected onto the cortical sphere.
\vspace{0.2cm}
\noindent\footnotesize{\textbf{Acknowledgments:} Thanks for funding from NIH/NHLBI R01-HL121270. Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: \url{http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf}}
|
1,108,101,565,737 | arxiv | \section{Introduction}
The determination of the thermodynamic state of matter formed in
high-energy nuclear collisions is of
great importance for understanding the behaviour of matter at high
temperature and/or energy density.
A set of basic macroscopic quantities,
such as temperature, pressure, volume, entropy, and energy density, as
well as a set of response functions, including specific heat,
compressibility and different susceptibilities define the
thermodynamic properties of the system. These quantities are related
by the equation of state (EOS), which on the other hand, governs the
evolution of the system. One of the basic goals of calculating the
thermodynamic quantities, such as the
specific heat ($c_v$) and isothermal compressibility ($k_{\rm{T}}$)
is to obtain the EOS of the matter~\cite{mrow,mekjian,stokic,wang,muller,lacey,sierk}.
The $c_v$ is the amount of energy per unit change in
temperature and is related to the fluctuation in the temperature of
the system~\cite{stodolsky,shuryak1}.
The $k_{\rm{T}}$~describes the relative variation of the volume of a system
due to a change in the pressure at constant temperature. Thus $k_{\rm{T}}$~is
linked to density fluctuations and can be expressed in terms of
the second derivative of the free energy with respect to the pressure.
In a second order phase transition $k_{\rm{T}}$~is expected to show a
singularity. The determination of $k_{\rm{T}}$~as well as $c_v$ can elucidate the
existence of a phase transition and its nature.
Heavy-ion collisions at
ultra-relativistic energies produce matter at extreme conditions of
energy density and temperature, where a phase
transition from normal hadronic matter to a deconfined state of
quark-gluon plasma (QGP) takes place.
Lattice QCD calculations have affirmed a crossover transition at zero
baryonic chemical potential ($\mu_{\rm B}$) ~\cite{aoki,baza}. On the other hand,
QCD inspired phenomenological models~\cite{gottlieb,fukugita,schaefer,herpay}
predict a first order phase transition at high $\mu_{\rm B}$.
This suggests the possible existence of a QCD critical point where the first order
transition terminates. The current focus of theoretical and
experimental programs is to understand the nature of the phase transition
and to locate the critical point by exploring multiple signatures.
Since $k_{\rm{T}}$~is sensitive to the phase transition, its dependence on the
$\mu_{\rm B}$~or the collision energy provides one of the basic measurements on
this subject.
Recently, collision energy dependence of $c_v$
has been reported by analysing the event-by-event mean transverse momentum
($\langle p_T\rangle$) distributions~\cite{sumit}.
In this approach, the $\langle p_T\rangle$ distributions in finite $p_{\rm{T}}$~ranges are
converted to distributions of effective temperatures. The
dynamical fluctuations in temperature are extracted by subtracting
widths of the corresponding mixed event distributions.
In the present work, we have calculated the isothermal compressibility
of matter formed in high energy collisions using experimentally
observed quantities, as prescribed in Ref.~\cite{mrow}.
This method uses the fluctuations of particle multiplicities produced in the
central rapidity region. It may be noted that enhanced fluctuation of
particle multiplicity had
earlier been proposed as signatures of
phase transition and critical point~\cite{stephanov,heiselberg,gazdzicki,begun,mrow2}.
Thus the study of event-by-event multiplicity fluctuations and
estimation of $k_{\rm{T}}$~are important for understanding the nature of matter at extreme
conditions.
The experimental data of event-by-event multiplicity fluctuations at
the Relativistic Heavy-Ion Collider (RHIC) at Brookhaven
National Laboratory (BNL) and Super Protron Synchrotron (SPS) of CERN
have been used in combination with
temperatures and volumes of the system at the chemical freeze-out to
extract the values of $k_{\rm{T}}$.
These results are compared to that of three event generators and
the hadron resonance gas (HRG) model. Our results
provide important measures for the beam energy scan program of RHIC
and the experiments at the CERN Large Hadron Collider (LHC), and
gives guidance for experiments at the Facility for
Antiproton and Ion Research (FAIR) at GSI and the Nuclotron-based Ion
Collider facility (NICA) at JINR, Dubna.
\section{Methodology}
Isothermal compressibility is the measure of
the relative change in volume with respect to change in
pressure~\cite{mrow},
\begin{eqnarray}
\left.k_T\right|_{T,\langle N\rangle} &=&
-\frac{1}{V}\left.\left(\frac{\partial V}{\partial P}\right)\right|_{T,\langle N\rangle}
\label{eq.kT}
\end{eqnarray}
where $V, T, P$ represent volume, temperature, and pressure of the
system, respectively, and $\langle N\rangle$ stands for the mean yield of the particles.
In the Grand Canonical Ensemble (GCE) framework, the variance ($\sigma^{\rm 2}$)
of the number of particles ($N$)
is directly related to isothermal compressibility~\cite{mrow,adare}, i.e,
\begin{eqnarray}
\sigma^{\rm 2} = \frac{k_{\rm B}T \langle N\rangle ^{\rm 2}}{V}k_{\rm T},
\label{ket}
\end{eqnarray}
where $k_{\rm B}$ is the Boltzmann constant.
Charged particle multiplicity fluctuations have been characterised by the scaled variances of
the multiplicity distributions, defined as,
\begin{eqnarray}
\omega_{\rm ch}=\frac{\langle N_{\rm ch}^{\rm 2} \rangle - \langle
N_{\rm ch} \rangle^{\rm 2}}{\langle N_{\rm ch} \rangle}=\frac{\sigma^{\rm 2}}{\mu}
\end{eqnarray}
where $N_{\rm ch}$ is the charged particle multiplicity per event,
and $\mu = \langle N_{\rm ch}\rangle$.
Following the above two equations, we obtain,
\begin{eqnarray}
\omega_{\rm ch} = \frac{k_{\rm B}T\mu}{V}k_{\rm T},
\label{imp}
\end{eqnarray}
which makes a connection between multiplicity fluctuation and $k_{\rm{T}}$.
This formalism, using GCE properties, may be applied to experimental measurements at
mid-rapidity, as energy and conserved quantum numbers are exchanged
with the rest of the system~\cite{jeon}.
At the chemical freeze-out surface, the inelastic collisions cease,
and thus the hadron multiplicities get frozen. While the ensemble average
thermodynamic properties like the temperature and volume can be
extracted from the mean hadron yields, $k_{\rm{T}}$~can be accessed through
the measurements of the event-by-event multiplicity fluctuations.
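Operationally, Eq.~\eqref{imp} is simply inverted for $k_{\rm T}$; a minimal sketch (our illustrative function, in natural units with $k_{\rm B}=1$, $T$ in MeV and $V$ in fm$^3$, so that $k_{\rm T}$ comes out in fm$^3$/MeV):
\begin{verbatim}
def isothermal_compressibility(omega_dyn, T, V, mu):
    """Invert omega = k_B * T * mu * k_T / V for k_T (k_B = 1)."""
    return omega_dyn * V / (T * mu)
\end{verbatim}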
\section{Multiplicity fluctuations: experimental data}
The multiplicity fluctuations have been measured for a range of
collision energies by the E802 collaboration~\cite{abbott} at BNL-AGS,
WA98~\cite{aggarwal}, NA49~\cite{alt,alt1}, NA61~\cite{na61a,na61b} and CERES~\cite{sako}
experiments at CERN-SPS, and PHENIX experiment~\cite{adare} at RHIC.
The results of these measurements could not be compared directly
because of differences in the kinematic acceptances and detection efficiencies.
The experimental results are normally reported after correcting for
detector efficiencies. But the acceptances in pseudorapidity~($\eta$)
need not be the same for these experiments.
The results from the different experiments have therefore been scaled to
mid-rapidity so that they can be presented on the same footing~\cite{adare, mait}.
Fig.~\ref{wch} shows the values of $\omega_{\rm ch}$ for
$|\eta|<0.5$ in central (0-5\%) collisions as a function of the
collision energy~\cite{mait}.
The solid circles represent experimental measurements. An
increase in the scaled variances with the increase in collision energy
has been observed from these data.
It is to be noted that the widths of the charged particle distributions
and $\omega_{\rm ch}$ get their contributions from several sources,
some of which are statistical in nature and the rest have dynamical origins.
The dynamical components are connected to thermodynamics and have been
used in the present work to extract $k_{\rm{T}}$~\cite{mrow}. Thus an estimation of the statistical part is necessary to
infer about the dynamical component of multiplicity fluctuations.
\begin{figure}
\includegraphics[width=0.45\textwidth]{omega_ch.pdf}
\caption{Beam-energy dependence of scaled variances of multiplicity
distributions ($\omega_{\rm ch}$) for central (0-5\%) Au-Au (Pb-Pb) collisions from the
available experimental data~\cite{adare,abbott,alt,alt1,na61a,na61b,sako}.
The statistical components of fluctuations ($\omega_{\rm ch,stat}$)
using the participant model calculations have been shown.
The dynamical components of the fluctuations ($\omega_{\rm ch,dyn}$)
are obtained by subtracting
the statistical components from the measured values.
}
\label{wch}
\end{figure}
One of the major contributions to statistical fluctuations comes from the
geometry of the collision, which includes variations in the impact
parameter or the number of participating nucleons. In a participant
model~\cite{heiselberg}, the nucleus-nucleus collisions
are treated as a superposition of nucleon-nucleon interactions. Thus the
fluctuation in multiplicity arises because of the fluctuation in
number of participants ($N_{\rm part}$) and the fluctuation in the
number of particles produced per participant. In this formalism, based
on Glauber type of initial conditions, $\omega_{\rm ch}$ can be expressed as,
\begin{eqnarray}
\omega_{\rm ch} = \omega_{\rm n} + \langle n \rangle \omega_{N_{\rm part}},
\label{om}
\end{eqnarray}
where $n$ is the number of charged particles produced per participant,
$\omega_{\rm n}$ denotes fluctuations in $n$, and $\omega_{N_{\rm part}}$ is the
fluctuation in $N_{\rm part}$.
The value of $\omega_{\rm n}$ has a
strong dependence on acceptance. The fluctuations in the number of
accepted particles ($n$) out of the total number of produced particles
($m$) can be calculated by assuming that the distribution of $n$ follows a
binomial distribution. This is given as~\cite{heiselberg,aggarwal},
\begin{eqnarray}
\omega_{\rm n} = 1 - f + f \omega_{\rm m},
\end{eqnarray}
where $f$ is the fraction of accepted particles.
The values of $f$ and $\omega_{\rm m}$ are obtained from
proton-proton collision data of the number of charged particles
within the mid-rapidity range and the total number of charged
particles produced in the collision~\cite{whitmore,phobos,alice1,cms,aggarwal}.
Using these, we obtain the values of
$\omega_{\rm n}$ as a function of collision energy.
The values of $\omega_{\rm n}$ vary within 0.98 to 2.0
corresponding to $\sqrt{s_{\rm NN}}$~=7.7~GeV to 2.76~TeV, and are in agreement with
those reported for SPS energies~\cite{aggarwal}.
The distribution of $N_{\rm part}$ for narrow centrality bins yields
the value of $\omega_{N_{\rm part}}$. With the choice of narrow
bins in centrality selection, $\omega_{N_{\rm part}}$ values
remain close to unity from peripheral to central collisions.
With the knowledge of $\omega_{\rm n}$, $\langle n \rangle$ and
$\omega_{N_{\rm part}}$, the statistical components of
$\omega_{\rm ch}$ from the participant model have been extracted. The
values of $\omega_{\rm ch,stat}$ are presented as open symbols in Fig.~\ref{wch} as a function of collision energy.
The uncertainties in $\omega_{\rm ch,stat}$ are derived from the
statistical and systematic uncertainties in $n$ and $\omega_{\rm n}$.
The dynamical fluctuations of $\omega_{\rm ch}$ (denoted as $\omega_{ch,dyn}$)
are extracted by subtracting the statistical fluctuations from the
measured ones. In Fig.~\ref{wch}, the values of $\omega_{ch,dyn}$ are plotted (as diamond
symbols) as a function of collision energy.
Within the quoted errors, $\omega_{ch,dyn}$ is
seen to remain constant as a function of collision energy. However,
a decreasing trend may be seen for $\sqrt{s_{\rm NN}} > 20$ GeV.
More experimental data at low and intermediate collision energies are needed to
conclude the nature of the fluctuations as a function of the collision
energy.
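A sketch of this subtraction, following Eq.~\eqref{om} and the binomial acceptance relation above (the inputs $f$, $\omega_{\rm m}$, $\langle n\rangle$ and $\omega_{N_{\rm part}}$ are the estimates described in the text):
\begin{verbatim}
def omega_dynamical(omega_ch, f, omega_m, n_mean, omega_npart):
    """Participant-model statistical baseline, then subtract it."""
    omega_n = 1.0 - f + f * omega_m          # binomial acceptance folding
    omega_stat = omega_n + n_mean * omega_npart
    return omega_ch - omega_stat
\end{verbatim}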
\section{Multiplicity fluctuations from event generators}
In order to validate the results from experimental data, we have
analysed the results from three different event generators, which are:
AMPT (A Multi Phase
Transport)~\cite{ampt,ampt2,ampt3}, UrQMD (Ultra-relativistic Quantum
Molecular Dynamics)~\cite{UrQMD1,UrQMD2}, and EPOS~\cite{EPOS1,EPOS3,EPOS4}.
Multiplicity fluctuations using the AMPT model have been studied for
the default (DEF) and string melting (SM) modes~\cite{mait}.
In the default mode, hadronization takes place via the string fragmentation,
whereas in the SM mode, hadronization takes place via quark coalescence.
The UrQMD is a
microscopic transport model, where the hadron-hadron interactions and the space-time
system evolution are studied based
on the covariant propagation of all hadrons in combination with
stochastic binary scatterings, color string formation, and resonance
decay. UrQMD has been previously used to simulate production of different particles
and analysis of their event-by-event
fluctuations~\cite{UrQMDapply1,UrQMDapply2,Bleichersus,Sahoo,Bhanu,Arghya}.
The EPOS(3+1) viscous hydrodynamical model incorporates multiple
scattering approach based upon the Gribov-Regge (GR) theory and
perturbative QCD~\cite{EPOS3}. The hydrodynamical
evolution starts from flux tube (or relativistic strings) initial conditions, generated by the
GR framework. The string formation occurs due to initial
scatterings, which later breaks into segments identified as
hadrons. One of the salient features of the model is the
classification of two regions of physical interest on the basis of
density, such as core (high density) and corona (low density)~\cite{EPOS4}.
For the centrality dependence of observables, the corona plays a major role at large rapidity
and low multiplicity events and contributes to hadronization. However,
for most central collisions, a core with collective hadronization is
created from corona because of a large number of nucleons suffering
inelastic collisions. Results from EPOS match experimental data at
RHIC and LHC for particle multiplicities, transverse momenta and
correlation patterns~\cite{EPOS1,EPOS3,EPOS4,EPOS5}.
For the present study, a large number of events are generated using the
event generators for Au-Au collisions from $\sqrt{s_{\rm NN}}$ = 7.7
to 200 GeV, corresponding to the RHIC energies, and
for Pb-Pb collisions at $\sqrt{s_{\rm NN}}$~= 2.76~TeV.
In all cases, the centrality of the collision has been
selected using minimum bias distributions of charged particle multiplicities in the
range, $0.5 < |\eta| < 1.0$. The
multiplicities and multiplicity fluctuations have been obtained within
the kinematic range, $|\eta| < 0.5$ and $0.2 < p_{\rm T}<2.0$~GeV/c.
The $\eta$-range used for the centrality selection is different from
the one for the fluctuation study, and thus poses almost no bias on the
fluctuation analysis.
To minimise the geometrical fluctuations, calculations are first done for
narrow (1\%) centrality bins. These results are then combined to
make wider bins by using centrality
bin width correction method which takes care of the impact
parameter variations~\cite{mait}.
\begin{figure}
\includegraphics[width=0.49\textwidth]{Omega_model.pdf}
\caption{
Collision energy dependence of scaled variances of charged particle multiplicity
distributions for central (0-5\%) Au-Au (Pb-Pb) collisions from event generators,
UrQMD, EPOS and AMPT. The dynamical multiplicity fluctuations
($\omega_{ch,dyn}$) are obtained after subtracting the statistical
fluctuations from participant model.
}
\label{wch_model}
\end{figure}
Fig.~\ref{wch_model} shows the collision energy dependence of
$\omega_{ch}$ for central (0-5\%) collisions from the
event generators. Statistical
errors are calculated using the Delta
theorem~\cite{delta}. It is observed that the fluctuations remain
somewhat constant over the energy range considered, except for the AMPT
events, where a small rise is seen at higher energies.
The statistical components of the fluctuations have been calculated
from the participant model calculations, using the same procedure as
discussed in the previous section.
The dynamical components, $\omega_{ch,dyn}$, are obtained after
subtracting the statistical fluctuations, and are also shown in the Fig.~\ref{wch_model}.
In all cases, the dynamical multiplicity fluctuations decrease with
increasing collision energy up to $\sqrt{s_{\rm NN}} \sim 62.4$~GeV, beyond which the
fluctuations are close to zero.
\section{$k_{\rm{T}}$~from HRG model}
The values of $k_{\rm{T}}$~can be obtained by employing a hadron resonance gas
model, which is based on a list of the majority of the known hadrons and their resonances as per the
Particle Data Book~\cite{PDG}. It works within the framework of a multi-species non-interacting
ideal gas in complete thermal and chemical equilibrium~\cite{alba,andronic,cleymans}.
The HRG model
has been found to provide a good description of the mean hadron
yields using a few thermodynamic parameters at freeze-out (for a
recent compilation of the freeze-out parameters, see Ref.~\cite{sandeep1}).
The goal of the HRG model calculation is to obtain $k_{\rm{T}}$~directly from Eq.~\eqref{eq.kT}, working with the individual hadron species (indexed by $i$) rather than with the total number of charged particles. The differential for the pressure $P\left( T, \{\mu_i\}\right)$
can be written as,
\begin{eqnarray}
dP = \left(\frac{\partial P}{\partial T}\right) dT + \sum_i\left(\frac{\partial
P}{\partial \mu_i}\right) d\mu_i\label{eq.diffP1},
\end{eqnarray}
and so:
\begin{eqnarray}
\left.\left(\frac{\partial P}{\partial V}\right)\right|_{T,\{\langle N_i\rangle\}} = \sum_i\left(\frac{\partial P}{\partial \mu_i}\right)
\left.\left(\frac{\partial \mu_i}{\partial V}\right)\right|_{T,\{\langle N_i\rangle\}}\label{eq.diffP2}.
\end{eqnarray}
While the first factor is straightforward to compute from the expression for $P$, the
second factor $\left.\left(\frac{\partial \mu_i}{\partial V}\right)\right|_{T,\{\langle N_i\rangle\}}$ is obtained
from the condition of constancy of $N_i$ as follows,
\begin{eqnarray}
dN_i = \left(\frac{\partial N_i}{\partial T}\right) dT + \left(\frac{\partial N_i}{\partial V}\right) dV +
\left(\frac{\partial N_i}{\partial \mu_i}\right) d\mu_i\label{eq.diffNi}.
\end{eqnarray}
For fixed $N_i$, the above equation becomes,
\begin{eqnarray}
\left.\left(\frac{\partial \mu_i}{\partial V}\right)\right|_{T,\{\langle N_i\rangle\}} = -\frac{\left(\frac{\partial N_i}{\partial V}\right)}
{\left(\frac{\partial N_i}{\partial \mu_i}\right)}\label{eq.dmudv}.
\end{eqnarray}
Within the HRG model, $\frac{\partial N_i}{\partial V}=\frac{\partial P}{\partial\mu_i}$ for each species $i$. Thus, Eq.~\ref{eq.diffP2} becomes
\begin{eqnarray}
\left.\left(\frac{\partial P}{\partial V}\right)\right|_{T,\{\langle N_i\rangle\}}
= -\sum_i\frac{\left(\frac{\partial P}{\partial \mu_i}\right)^2}{\left(\frac{\partial N_i}{\partial \mu_i}\right)}
\end{eqnarray}
which is used to get $k_T$ using Eq.~\ref{eq.kT},
\begin{eqnarray}
\left.k_T\right|_{T,\{\langle N_i\rangle\}} =
\frac{1}{V}\frac{1}{\sum_i{\left(\frac{\partial P}{\partial \mu_i}\right)^2}/{\left(\frac{\partial N_i}{\partial \mu_i}\right)}}.
\end{eqnarray}
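As a quick consistency check (an aside under the additional assumption of classical Boltzmann statistics, for which $\partial N_i/\partial \mu_i = N_i/T$ and $\partial P/\partial \mu_i = N_i/V$), the expression above reduces to the textbook ideal-gas result,
\[
k_T = \frac{1}{V}\left[\sum_i \frac{(N_i/V)^2}{N_i/T}\right]^{-1} = \frac{V}{T\sum_i N_i} = \frac{1}{P}\,.
\]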
This prescription of the HRG model has been used to calculate $k_{\rm{T}}$~for
Au-Au collisions as a function of collision energy, which are
presented in terms of the solid curve in Fig.~\ref{final_kt}.
With the increase of collision energy, the values of $k_{\rm{T}}$~decrease up to
$\sqrt{s_{\rm NN}}$~$=$20~GeV.
However, at higher energies, $k_{\rm{T}}$~remains almost constant.
This follows primarily from the behaviour of chemical freeze-out temperature as
a function of collision energy.
\begin{figure}
\includegraphics[width=0.49\textwidth]{All_kT.pdf}
\caption{Isothermal compressibility, $k_{\rm{T}}$, as a function of $\sqrt{s_{\rm NN}}$~ for
available experimental data for central (0-5\%) Au-Au (Pb-Pb) collisions.
Results for three event generators are presented.
Results from HRG calculations are superimposed.
}
\label{final_kt}
\end{figure}
\section{Compilation of $k_{\rm{T}}$}
Finally, the values of $k_{\rm{T}}$~are calculated from the available
experimental data and event generators using
the dynamical fluctuations, $\omega_{\rm ch,dyn}$, which are presented in the
figures~\ref{wch} and \ref{wch_model}. The mean charged particle multiplicities are
obtained under the same kinematic conditions.
The calculation of $k_{\rm{T}}$~requires temperature and volume, which are
obtained from different sets of measurements.
The chemical freeze-out temperature ($T_{\rm ch}$) and the corresponding volume of the system
have been obtained by
fitting the measured identified particle yields using thermal model
calculations~\cite{cleymans,sandeep1,cleymans2,pbm,star1,alice2}.
For the calculation of $k_{\rm{T}}$, both $T_{\rm ch}$ and $V$
have been obtained from Ref.~\cite{sandeep1}.
A compilation of $k_{\rm{T}}$~as a function of $\sqrt{s_{\rm NN}}$~for central Au-Au (Pb-Pb)
collisions is presented in Fig.~\ref{final_kt}. In the
absence of experimental data at the LHC, calculations from AMPT and EPOS have
been presented.
From the available experimental data, it is observed that $k_{\rm T}$~remains
almost constant within the assigned errors. The
results from the event generators are seen to decrease with an
increase in the
collision energy and remain constant at higher energies.
The results from HRG calculations show a sharp decrease in $k_{\rm{T}}$~at low collision energies.
Thus more experimental data points at collision
energies below $\sqrt{s_{\rm NN}}$~$\sim$~20~GeV are needed to validate our findings.
The extraction of $k_{\rm{T}}$~may be affected by several sources of
uncertainty. The evaluation of the statistical component of the
fluctuation poses one of the largest uncertainties. We have
used a participant model calculation to obtain the
$\omega_{ch,stat}$ based on the Glauber type of initial
conditions. Another effect which affects
the charged particle production is the resonance decay of particles.
This is studied for Au-Au collisions at $\sqrt{s_{\rm NN}}$~=~200 GeV
and Pb-Pb collisions at $\sqrt{s_{\rm NN}}$~=~2.76~TeV using AMPT and EPOS event
generators by turning off and on the higher order resonances.
The differences between the two cases are very small and within the errors, implying that
resonance decay effects are negligible for multiplicity fluctuations.
Other sources of fluctuations which affect the extraction of
$\omega_{ch,dyn}$ include uncertainty in the initial state
fluctuations and fluctuations in the amount of stopping.
In view of the uncertainties from different sources
which could not be considered presently, the extracted values are the upper limits of $k_{\rm{T}}$.
\section{Summary}
We have studied the isothermal
compressibility of the system formed at the time of chemical
freeze-out in relativistic nuclear collisions for $\sqrt{s_{\rm NN}}$~from 7.7~GeV to 2.76~TeV.
We have shown that $k_{\rm{T}}$~is related to the fluctuation in particle
multiplicity in the central rapidity region. Multiplicity fluctuations
have been obtained from available experimental data and event
generators. The dynamical fluctuations are extracted
from the total fluctuations
by subtracting the statistical components using
contributions from the number of
participating nucleons.
For the calculation of $k_{\rm{T}}$, the temperature and volume were taken from
the thermal model fits of the measured particle yields
at the chemical freeze-out.
Within quoted errors, the values of $k_{\rm{T}}$~from the experimental
data remain almost constant as a function of energy.
Using the event generators, we have seen that $k_{\rm{T}}$~decreases with
an increase of the collision
energy. The estimation of $k_{\rm{T}}$~presented in the present manuscript relies on several
assumptions, most importantly on the estimation of dynamical
fluctuations. The results of $k_{\rm{T}}$~represent the upper limits because of
unknown contributions to the statistical components.
We have calculated the values of $k_{\rm{T}}$~from the HRG model for a wide
range of collision energy. With the increase of collision energy,
$k_{\rm{T}}$~values decrease up to $\sqrt{s_{\rm NN}}$~$\sim$~20~GeV, beyond which
the $k_{\rm{T}}$~values remain almost constant.
The nature of $k_{\rm{T}}$~as a function of collision energy is similar to what
has been observed for $c_v$~\cite{sumit}.
A higher value of $k_{\rm{T}}$~at low energies compared
to higher energies indicates that the collision system is more
compressible at the lower energies.
This study gives a strong impetus for the second phase of the beam energy scan program of
RHIC and planned experiments at FAIR and NICA.
\medskip
\noindent{\bf Acknowledgements}\\
The authors would like to thank
Stanislaw Mrowczynski, Jean Cleymans, Victor Begun and
Pradip K. Sahu for discussions on the concepts leading to this work.
SPA is grateful to Klaus Werner for providing the EPOS code.
MM is thankful to the High Energy Physics group of Bose Institute for
useful discussions. SB wishes to thank Claude A. Pruneau for fruitful discussions
during the preparation of the manuscript.
SB is supported by the U.S. Department of Energy, Office of Science,
Office of Nuclear Physics under Award Number DE-FG02-92ER-40713.
SC is supported by the Polish Ministry of Science and Higher
Education (MNiSW) and the National Science Centre grant 2015/17/B/ST2/00101.
This research used resources of the LHC grid computing centers at
Variable Energy Cyclotron Center, Kolkata and at Bose Institute, Kolkata.
\medskip
\noindent{\bf References}\\
|
1,108,101,565,738 | arxiv | \section{Introduction}
Tverberg's theorem and the weak $\varepsilon$-net theorem for convex sets are two central results in the combinatorial study of convexity. Their statements are the following
\begin{theorem}[Tverberg 1966, \cite{Tverberg:1966tb}]
Let $r,d$ be positive integers. Given a set $X$ of $(r-1)(d+1)+1$ points in $\mathds{R}^d$, there is a partition of $X$ into $r$ sets whose convex hulls intersect.
\end{theorem}
We call a partition into $r$ sets as above a \textit{Tverberg partition}. For a set $Y \subset \mathds{R}^d$, we denote by $\conv Y$ its convex hull.
\begin{theorem}[Weak $\varepsilon$-net; Alon, B\'ar\'any, F\"uredi, Kleitman 1992, \cite{Alon:1992ek}]
Let $d$ be a positive integer and $\varepsilon >0$. Then, there is an integer $n = n(\varepsilon, d)$ such that the following holds. For any finite set $X$ of points in $\mathds{R}^d$, there is a set $K \subset \mathds{R}^d$ of $n(\varepsilon, d)$ points such that for all $Y \subset X$ with $|Y| \ge \varepsilon |X|$, we have that $\conv Y$ intersects $K$.
\end{theorem}
For an overview of both theorems and how they have shaped discrete geometry, consult \cite{Matousek:2002td, barany2017survey}. One key aspect of the weak $\varepsilon$-net theorem is that $n(\varepsilon, d)$ does not depend on $|X|$. The two theorems are closely related to each other. Tverberg's theorem is an important tool in the proof of the ``first selection lemma'' \cite{Barany:1982va}, which in turn is used to prove the weak $\varepsilon$-net theorem. Finding upper and lower bounds for $n(\varepsilon, d)$ is a difficult problem. As an upper bound, for any fixed $d$ we have $n(\varepsilon, d) = O(\varepsilon^{-d}\operatorname{polylog}(\varepsilon^{-1}))$ \cite{CEGGSW95, MW04}. There are also lower bounds superlinear in $1/\varepsilon$: for any fixed $d$ we have $n(\varepsilon,d) = \Omega ((1/\varepsilon)\ln^{d-1}(1/\varepsilon))$ \cite{Bukh:2011vs}.
The purpose of this paper is to provide a different link between these two theorems. Just as the weak $\varepsilon$-net gives you a fixed-size set which intersects the convex hull of each not too small subset of $X$, now we seek a fixed number of partitions of $X$, such that for every not too small subset $Y \subset X$, at least one of the partitions induces a Tverberg partition on $Y$. Unlike the weak $\varepsilon$-net problem, we get an exact value for the number of partitions needed.
Given a partition $\mathcal{P}$ of $X$ and $Y \subset X$, we denote by $\mathcal{P} (Y)$ the restriction of $\mathcal{P}$ on $Y$,
\[
\mathcal{P}(Y) = \{K \cap Y: K \in \mathcal{P}\}.
\]
If $\mathcal{P}$ is a partition into $r$ sets, then $\mathcal{P}(Y)$ is also a partition into $r$ sets, though some may be empty. With this notation, we can state the main result of this paper.
\begin{theorem}\label{theorem-main}
Let $1\ge\varepsilon > 0$ be a real number and $r,d$ be positive integers. Then, there is an integer $m=m(\varepsilon, r)$ such that the following is true. For every sufficiently large finite set $X \subset \mathds{R}^d$, there are $m$ partitions $\mathcal{P}_1, \ldots, \mathcal{P}_m$ of $X$ into $r$ parts each such that, for every subset $Y \subset X$ with $|Y| \ge \varepsilon |X|$, there is a $k$ such that $\mathcal{P}_k(Y)$ is a Tverberg partition. Moreover, we have
\[
m(\varepsilon,r) = \left\lfloor \frac{\ln\left(\frac{1}{\varepsilon}\right)}{\ln\left(\frac{r}{r-1}\right)}\right\rfloor + 1.
\]
\end{theorem}
An equivalent statement is that $\varepsilon > ((r-1)/r)^m$ if and only if $m(\varepsilon, r) \le m$. One should notice that $1/\ln(r/(r-1)) \sim r$, so $m(\varepsilon, r) \sim r \ln (1/\varepsilon)$. One surprising aspect of this result is that $m$ does not depend on the dimension. The effect of the dimension only appears when we look at how large $X$ must be for the theorem to kick in. The value for $|X|$ where the theorem starts working is, up to polylogarithmic terms, $m d r^3 (\varepsilon - ((r-1)/r)^m)^{-2}$.
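To make the bound concrete, $m(\varepsilon,r)$ is a two-line computation; for instance, $\varepsilon=0.1$ and $r=3$ give $m=6$. A sketch:
\begin{verbatim}
import math

def m(eps, r):
    # floor(ln(1/eps) / ln(r/(r-1))) + 1
    return math.floor(math.log(1.0 / eps) / math.log(r / (r - 1.0))) + 1

assert m(0.1, 3) == 6
\end{verbatim}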
The proof of Theorem \ref{theorem-main} follows from a repeated application of the probabilistic method, contained in section \ref{section-proof}. We build on the techniques of \cite{soberon2016robust} to prove Tverberg-type results by making random partitions. The key new observation is that, given $m$ partitions of $X$, the number of containment-maximal subsets $Y$ such that $\mathcal{P}_k(Y)$ is not a Tverberg partition for any $k$ is polynomial in $|X|$.
This result is also closely related to Tverberg's theorem with tolerance.
\begin{theorem}[Tverberg with tolerance; Garc\'ia-Col\'in, Raggi, Rold\'an-Pensado 2017, \cite{GRR17tolerant}]\label{theorem-tolerance}
Let $r,t,d$ be positive integers, where $r,d$ are fixed. There is an integer $N(r,t,d)=rt+o(t)$ such that the following holds. For any set $X$ of $N$ points in $\mathds{R}^d$, there is a partition of $X$ into $r$ sets $X_1, \ldots, X_r$ such that, for all $C \subset X$ of cardinality $t$, we have
\[
\bigcap_{j=1}^r \conv (X_j \setminus C) \neq \emptyset.
\]
\end{theorem}
This result is motivated by earlier work of Larman \cite{Larman:1972tn}, who studied the case $t=1, r=2$. Theorem \ref{theorem-tolerance} determines the correct leading term as $t$ becomes large. The bound has since been improved to $N = rt + \tilde{O}(\sqrt{t})$, where the $\tilde{O}$ term hides polylogarithmic factors and is polynomial in $r,d$ \cite{soberon2016robust}. In the notation of Theorem \ref{theorem-main}, Theorem \ref{theorem-tolerance} says that if $\varepsilon > 1 - 1/r$, then $m(\varepsilon, r) = 1$. Improved bounds for small values of $t$ can be found in \cite{Soberon:2012er, MZ14tolerant}.
As the driving engine in the proof of Theorem \ref{theorem-main} is Sarkaria's tensoring technique, described in section \ref{section-preliminaries}, it can be easily modified to get similar versions of a multitude of variations of Tverberg's theorem. This includes Tverberg ``plus minus'' \cite{barany2016tverberg}, colorful Tverberg with equal coefficients \cite{soberon2015equal} and asymptotic variations of Reay's conjecture \cite{soberon2016robust}. We do not include those variations explicitly. We do include an $\varepsilon$-version for the colorful Tverberg theorem in section \ref{section-colored}, as it is closely related to a conjecture in \cite{soberon2016robust}.
A natural question that follows the results of this paper is to determine whether a topological version of Theorem \ref{theorem-main} also holds.
\section{Preliminaries}\label{section-preliminaries}
\subsection{Sarkaria's technique.}
We start discussing the preliminaries for the proof of Theorem \ref{theorem-main}. At the core of the proof is Sarkaria's technique to prove Tverberg's theorem via tensor products \cite{Sarkaria:1992vt, BaranyOnn}.
The goal is to reduce Tverberg's theorem to the colorful Carath\'eodory theorem.
\begin{theorem}[Colorful Carath\'eodory; B\'ar\'any 1982 \cite{Barany:1982va}]\label{theorem-colorcarath}
Let $F_1, \ldots, F_{n+1}$ be sets of points in $\mathds{R}^n$. If $0 \in \conv(F_i)$ for all $i=1, \ldots, n+1$, then we can choose points $x_1 \in F_1, \ldots, x_{n+1} \in F_{n+1}$ so that $0 \in \conv \{x_1, \ldots, x_{n+1}\}$.
\end{theorem}
The set $\{x_1, \ldots, x_{n+1}\}$ is called a \textit{transversal} of $\mathcal{F}=\{F_1, \ldots, F_{n+1}\}$. Each set $F_i$ is called a \textit{color class}. For the sake of brevity we do not reproduce Sarkaria's proof, but point out the main ingredients. We distinguish between Tverberg-type results and colorful Carath\'eodory-type results by denoting the dimension of their ambient spaces by $d$ and $n$, respectively.
Let $X = \{x_1, \ldots, x_N\}$ be a set of points in $\mathds{R}^d$ and $r$ a positive integer. We define $n = (d+1)(r-1)$. Let $v_1, \ldots, v_r$ be the vertices of a regular simplex in $\mathds{R}^{r-1}$ centered at the origin. We construct the points
\[
\bar{x}_{i,j} = (x_i,1) \otimes v_j \in \mathds{R}^{(d+1)(r-1)}= \mathds{R}^n,
\]
where $\otimes$ denotes the standard tensor product. Given two vectors ${v_1} \in \mathds{R}^{d_1}, {v_2}\in \mathds{R}^{d_2}$, their tensor product ${v_1} \otimes {v_2}$ is simply the $d_1 \times d_2$ matrix $v_1v_2^{T}$ interpreted as a $d_1d_2$-dimensional vector. These tensor products carry all the information about Tverberg partitions into $r$ parts.
\begin{lemma}\label{lemma-magic}
Let $X = \{x_1, \ldots, x_N\}$ be a finite set of points in $\mathds{R}^d$, $r$ be a positive integer. Then, a partition $X_1, \ldots, X_r$ of $X$ is a Tverberg partition if and only if
\[
{0} \in \conv \{\bar{x}_{i,j} : i,j \ \mbox{are such that } x_i \in X_j\}
\]
\end{lemma}
A lucid explanation of the lemma above can be found in \cite{baranytensors}. Lemma \ref{lemma-magic} implies that, given $X$, if we consider the sets
\[
F_i =\{\bar{x}_{i,j} : j=1,\ldots, r\} \qquad i=1,\ldots,N,
\]
then finding a Tverberg partition of $X$ into $r$ parts corresponds to finding a transversal of $\mathcal{F}=\{F_1, \ldots, F_N\}$ whose convex hull contains the origin in $\mathds{R}^n$. Since $0 \in \conv F_i$ for each $i$, Theorem \ref{theorem-colorcarath} or a variation can be applied. Then, by Lemma \ref{lemma-magic}, we obtain a Tverberg partition.
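For experimentation, the construction of the points $\bar{x}_{i,j}$ is a few lines of code; the sketch below (with illustrative helper names, and with the simplex vertices obtained by projecting the standard basis of $\mathds{R}^r$ onto the hyperplane of coordinate sum zero) produces, for each $x_i$, the color class $F_i$ as an $r\times n$ array. Combined with the membership test sketched after Lemma \ref{lemma-halfspace} below, this makes Lemma \ref{lemma-magic} directly checkable on small examples.
\begin{verbatim}
import numpy as np

def simplex_vertices(r):
    # r equidistant points in R^{r-1}, centered at the origin
    P = np.eye(r) - np.full((r, r), 1.0 / r)  # centering projection, rank r-1
    _, _, Vt = np.linalg.svd(P)
    return (Vt[: r - 1] @ P).T                # shape (r, r-1)

def sarkaria_points(X, r):
    # bar{x}_{i,j} = (x_i, 1) tensor v_j, flattened to R^{(d+1)(r-1)}
    V = simplex_vertices(r)
    lifted = np.hstack([X, np.ones((len(X), 1))])
    return np.array([[np.outer(x, v).ravel() for v in V] for x in lifted])
\end{verbatim}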
For transversals, there is also a natural notion of restriction. Given a family $\mathcal{F}$ of sets in $\mathds{R}^n$, $\mathcal{G} \subset \mathcal{F}$, and $P$ a transversal of $\mathcal{F}$, we define
\[
P(\mathcal{G})=\{x \in P: x \mbox{ came from a set in } \mathcal{G}\}.
\]
Alternatively, $P(\mathcal{G}) = P \cap (\cup \mathcal{G})$. In order to prove Theorem \ref{theorem-main}, it is sufficient to prove the following.
\begin{theorem}\label{theorem-epsilon-caratheodory}
Let $r, n$ be positive integers and $1\ge \varepsilon >0$ a real number. Then, there is an integer $m=m(\varepsilon, r)$ such that the following is true. For every sufficiently large $N$, if we are given a family $\mathcal{F}$ of $N$ sets in $\mathds{R}^n$, such that $0 \in \conv F$ and $|F|=r$ for all $F \in \mathcal{F}$, then there are $m$ transversals $P_1, \ldots, P_m$ of $\mathcal{F}$ with the following property. For every $\mathcal{G} \subset \mathcal{F}$ with $|\mathcal{G}| \ge \varepsilon|\mathcal{F}|$ there is a $k$ with $0 \in \conv P_k(\mathcal{G})$.
Moreover, we have
\[
m(\varepsilon,r) = \left\lfloor \frac{\ln\left(\frac{1}{\varepsilon}\right)}{\ln\left(\frac{r}{r-1}\right)}\right\rfloor + 1.
\]
\end{theorem}
Indeed, let us sketch how Theorem \ref{theorem-epsilon-caratheodory} implies Theorem \ref{theorem-main}.
\begin{proof}
Assume $r,d,\varepsilon,m$ are given, satisfying the last equality of Theorem \ref{theorem-epsilon-caratheodory}. Let $n = (d+1)(r-1)$. Assume that we are given a set $X$ of $N$ points in $\mathds{R}^d$, $X = \{x_1, \ldots, x_N\}$, where $N$ is a large positive integer. For $v_1, \ldots, v_r \in \mathds{R}^{r-1}$ as before, we construct the sets
\[
F_i = \{(x_i,1) \otimes v_j : j = 1, \ldots, r\} \subset \mathds{R}^n.
\]
Then, we apply Theorem \ref{theorem-epsilon-caratheodory} to the family $\mathcal{F} = \{F_1, \ldots, F_N\}$ and find $m$ transversals $P_1, \ldots, P_m$. Given a set of indices $I \subset [N]$ such that $|I| \ge \varepsilon N$, consider ${\mathcal{G}}_I = \{F_i : i \in I\}$. Then, there must be a transversal $P_{i_0}$ such that ${0} \in \conv P_{i_0}({\mathcal{G}}_I)$. By Lemma \ref{lemma-magic}, this means that the partition $\mathcal{P}_{i_0}$ of $X$ induced by $P_{i_0}$ is a Tverberg partition even when restricted to the set $X_I = \{x_i : i \in I\}$. In other words, the partitions induced by $P_1, \ldots, P_m$ satisfy the conclusion of Theorem \ref{theorem-main}.
\end{proof}
We also need the following lemma. It bounds the complexity of verifying if $0 \in \conv Y$ if $Y \subset X$ and $X$ is given in advance. For our purposes, we need a slightly weaker version than the one presented in \cite{soberon2016robust} (see also \cite{Clarksonradon}).
\begin{lemma}\label{lemma-halfspace}
Let $X \subset \mathds{R}^n$ be a finite set. Then, there is a family $\mathcal{H}$ of $|X|^n$ half-spaces in $\mathds{R}^n$, each containing $0$, such that the following holds. For every subset $Y \subset X$, we have $ 0 \in \conv Y$ if and only if $Y \cap H \neq \emptyset$ for all $H \in \mathcal{H}$.
\end{lemma}
\begin{proof}[Sketch of proof]
$0$ belongs to $\conv Y$ if and only if there is no hyperplane separating $0$ from $Y$. There are infinitely many candidate hyperplanes, but they can be grouped into equivalence classes according to which subset of $X$ they separate from $0$. We just need one representative from each class. The number of such possible subsets is equal, under duality, to the number of cells into which $|X|$ hyperplanes partition $\mathds{R}^n$.
\end{proof}
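In practice, the condition $0\in\conv Y$ need not be tested through the half-space family at all: it is a feasibility linear program. A minimal sketch using SciPy (an illustrative function, not used in the proof):
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def origin_in_hull(Y):
    # 0 in conv(Y) iff some lambda >= 0 has sum(lambda) = 1, Y^T lambda = 0
    Y = np.asarray(Y, dtype=float)             # shape (k, n)
    k, n = Y.shape
    A_eq = np.vstack([Y.T, np.ones((1, k))])
    b_eq = np.append(np.zeros(n), 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success
\end{verbatim}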
\subsection{Hoeffding's inequality}
Our main probabilistic tool will be Hoeffding's inequality.
\begin{theorem}[Hoeffding 1963, \cite{Hoe63}]
Given $N$ independent random variables $x_1, \ldots, x_N$ such that $0 \le x_i \le 1$, let $y=x_1 + \ldots + x_N$. For all $\lambda \ge 0$, we have
\[
\mathbb{P}\left[y < \mathbb{E}(y) - \lambda\right] < e^{-2\lambda^2/N}.
\]
\end{theorem}
The expert reader may know that Hoeffding proved a slightly different inequality: $\mathbb{P}\left[y >\mathbb{E}(y) + \lambda\right] < e^{-2\lambda^2/N}$. It suffices to apply the inequality to the variables $z_i = 1 - x_i$ to obtain the other bound. This is a special case of Azuma's inequality (with a slightly different constant in the exponent, which would not change the main result significantly) \cite{Azu67}. These inequalities carry at their heart the central limit theorem, which is why such an exponential decay is expected in the tails of the distribution. See \cite{alonspencer} for references on the subject.
\section{Proof of Theorem \ref{theorem-epsilon-caratheodory}}\label{section-proof}
\begin{proof}
We first prove that $\varepsilon > ((r-1)/r)^m$ is necessary for Theorem \ref{theorem-main}, which also implies the lower bound for Theorem \ref{theorem-epsilon-caratheodory}. Given $N$ points in $\mathds{R}^d$ and $m$ partitions $P_1, \ldots, P_m$, of them, let us find a subset of size greater than or equal to $N((r-1)/r)^m$ in which no $P_k$ induces a Tverberg partition. First, notice that one of the parts of $P_1$ must have at most $N/r$ points. If we remove them, then there are at least $N(1-1/r)$ points left. We can repeat the same argument, and, among the points we have left, one of the parts induced by $P_2$ must have at most a $(1/r)$-fraction of them. Removing those leaves us with at least $N(1-1/r)^2$ points. We proceed this way and end up with a set ${Y}$ of at least $N(1-1/r)^m$ points, such that $P_k({Y})$ has at least one empty component for each $k = 1,\ldots, m$. Therefore, none of these is a Tverberg partition.
Assume now that $\varepsilon > ((r-1)/r)^m$. We want to prove that there are $m$ transversals as the theorem required. We choose (with foresight) $A = (Nr)^n$, and $\lambda > \sqrt{m N \ln A}$. Define a sequence $N_0, N_1, \ldots$ by $N_0 = N$ and $N_k = N_{k-1}(1-1/r) + \lambda$ for $k \ge 1$. If we apply Lemma \ref{lemma-halfspace} to $\cup \mathcal{F}$, we obtain a family $\mathcal{H}$ of $A$ halfspaces, all containing $0$, which are enough to check if the convex hulls of the transversals we construct contain $0$.
We consider each $F \in \mathcal{F}$ as a color class. For $k=1, \ldots, m$, we will construct $P_k$ and a family $\gimel_k$ of sets of color classes such that the following properties hold:
\begin{itemize}
\item given $\mathcal{G} \subset \mathcal{F}$ such that $0 \not\in \conv(P_{k'}(\mathcal{G}))$ for all $k'=1,\ldots,k$, there must be a $\mathcal{V} \in \gimel_k$ such that $\mathcal{G} \subset \mathcal{V}$,
\item if $\mathcal{V} \in \gimel_k$, then $|\mathcal{V}|\le N_k$, and
\item $|\gimel_k| \le A^k$.
\end{itemize}
We can consider $\gimel_0 = \{\mathcal{F}\}$. We construct $P_k$ inductively, assuming $\gimel_{k-1}$ and $P_{k'}$ have been constructed for $k' < k$. We start by choosing $P_k$ randomly. For each $F \in \mathcal{F}$, we pick $y_F^k \in F$ uniformly and independently. Then, we denote $P_k = \{y^k_F : F \in \mathcal{F}\}$.
Given a half-space $H \in \mathcal{H}$, consider the random variable
\[
x^k_F(H) = \begin{cases}
1 & \mbox{if } y^k_F \in H \\
0 & \mbox{otherwise.}
\end{cases}
\]
Since $0 \in \conv(F)$, we know that $\mathbb{E}(x^k_F(H)) \ge 1/r$. By linearity of expectation, for each $\mathcal{V} \in \gimel_{k-1}$ we have
\[
\mathbb{E}\left[\sum_{F \in \mathcal{V}}x^k_F(H)\right] \ge \frac{1}{r}|\mathcal{V}|.
\]
Since all variables $x^k_F(H), x^k_{F'}(H)$ are independent for $F \neq F'$, Hoeffding's inequality gives
\[
\mathbb{P}\left[\sum_{F \in \mathcal{V}}x^k_F(H) < \frac{|\mathcal{V}|}{r}-\lambda\right]< e^{-2\lambda^2/|\mathcal{V}|} \le e^{-2\lambda^2/N}.
\]
Therefore the union bound gives
\begin{align*}
\mathbb{P}\left[\exists H \in \mathcal{H} \ \exists \mathcal{V} \in \gimel_{k-1}\mbox{ such that } \sum_{F \in \mathcal{V}}x^k_F(H) < \frac{|\mathcal{V}|}{r}-\lambda\right] & \le A \cdot |\gimel_{k-1}| \cdot e^{-2\lambda^2/ N} \\
& \le A^k e^{-2\lambda^2/N} < 1
\end{align*}
by the choice of $\lambda$.
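Indeed, since $\lambda^2 > mN\ln A$ and $k \leq m$, we have
\[
A^{k}e^{-2\lambda^2/N} < A^{k-2m} \leq A^{-m} < 1.
\]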
Therefore, there is a choice of $P_k$ such that for all $\mathcal{V} \in \gimel_{k-1}$ and all half-spaces $H \in \mathcal{H}$, we have
\[
\sum_{F \in \mathcal{V}}x^k_F(H) \ge \frac{|\mathcal{V}|}{r}-\lambda.
\]
We fix $P_k$ to be this choice. We are ready to construct $\gimel_k$. For each $\mathcal{V} \in \gimel_{k-1}$ and each half-space $H \in \mathcal{H}$, we construct the set $\mathcal{V}' = \{F \in \mathcal{V} : x^k_F(H) = 0\}$. We let $\gimel_k$ be the family of all sets that can be formed this way. Let us prove that $\gimel_k$ satisfies all the desired properties.
\begin{claim}
Given $\mathcal{G} \subset \mathcal{F}$ such that $0 \not\in \conv(P_{k'}(\mathcal{G}))$ for all $k'=1,\ldots,k$, there must be a $\mathcal{V}' \in \gimel_k$ such that $\mathcal{G} \subset \mathcal{V}'$.
\end{claim}
\begin{proof}
If $0 \not\in \conv(P_{k'}(\mathcal{G}))$ for all $k'=1,\ldots,k$, we already know that there must be a $\mathcal{V} \in \gimel_{k-1}$ such that $\mathcal{G} \subset \mathcal{V}$. Since $0 \not\in \conv(P_k(\mathcal{G}))$, there must be a half-space $H\in \mathcal{H}$ containing $0$ such that $x^k_F(H) = 0$ for all $F \in \mathcal{G}$. Therefore, there is a $\mathcal{V}' \in \gimel_k$ with $\mathcal{G} \subset \mathcal{V'}$.
\end{proof}
\begin{claim}
If $\mathcal{V'} \in \gimel_k$, then $|\mathcal{V'}|\le N_k$.
\end{claim}
\begin{proof}
Let $\mathcal{V} \in \gimel_{k-1}$, $H\in \mathcal{H}$ be the family and half-space that defined $\mathcal{V'}$, respectively. Then,
\[
|\mathcal{V}'| = \sum_{F \in \mathcal{V}}(1-x^k_F(H)) \le |\mathcal{V}|\left(1-\frac{1}{r}\right) + \lambda \le N_{k-1}\left(1-\frac{1}{r}\right)+ \lambda = N_k.
\]
\end{proof}
\begin{claim}
We have $|\gimel_k| \le A^k$.
\end{claim}
\begin{proof}
By construction, $|\gimel_k| \le |\gimel_{k-1}| \cdot A \le A^k$.
\end{proof}
This concludes the construction of $P_1, \ldots, P_m$.
If $\mathcal{G} \subset \mathcal{F}$ is such that $0 \not \in \conv P_k (\mathcal{G})$ for $k=1, \ldots, m$, then there must be a $\mathcal{V} \in \gimel_m$ such that $\mathcal{G} \subset \mathcal{V}$.
Recall that $m$ was chosen so that $((r-1)/r)^m < \varepsilon$. Therefore
\[
|\mathcal{G}| \le |\mathcal{V}| \le N_m \le N\left(\frac{r-1}{r}\right)^m + r\lambda < \varepsilon N,
\]
where the last inequality holds if $N$ is large enough, as $\lambda = O(\sqrt{N\ln N})$.
\end{proof}
\section{A Colorful version}\label{section-colored}
Another important variation of Tverberg's theorem is the following conjecture by B\'ar\'any and Larman.
\begin{conjecture}[Colorful Tverberg; B\'ar\'any, Larman 1992 \cite{Barany:1992tx}]
For any given $d+1$ sets $F_1, \ldots, F_{d+1}$ of $r$ points each in $\mathds{R}^d$, there is a Tverberg partition $X_1, \ldots, X_r$ of their union such that for all $i,j$ we have $|F_i \cap X_j|=1$.
\end{conjecture}
A partition $X_1, \ldots, X_r$ with $|F_i \cap X_j|=1$ for all $i,j$ is called a \textit{colorful partition}. Consult \cite{Blagojevic:2014js, Blagojevic:2011vh, blago15} and the references therein for the currently solved cases and techniques. We present an $\varepsilon$-version of the conjecture above in the following theorem. Let $p_r\sim 1 - 1/e$ be the probability that a random permutation of a set with $r$ elements has at least one fixed point.
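For concreteness, inclusion-exclusion gives
\[
p_r = \sum_{j=1}^{r}\frac{(-1)^{j+1}}{j!} \longrightarrow 1 - \frac{1}{e} \quad \text{as } r \to \infty.
\]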
\begin{theorem}\label{theorem-tverbergcolored}
Let $r,d$ be positive integers and $\varepsilon >0$ be a real number. There is an $m_{\col} = m_{\col}(\varepsilon, r)$ such that the following holds. For a sufficiently large $N$, if we are given $N$ sets $F_1, \ldots, F_N$ of $r$ points in $\mathds{R}^d$ each, then there are $m_{\col}$ colorful partitions of $\mathcal{F} = \{F_1, \ldots, F_N\}$ such that for any $\mathcal{G} \subset \mathcal{F}$ with $|\mathcal{G}| \ge \varepsilon |\mathcal{F}|$, at least one of the partitions induces a colorful Tverberg partition on $\mathcal{G}$. Moreover, we have
\[
m_{\col} \le \left\lfloor \frac{\ln\left({\varepsilon} \right)}{\ln (1-p_r)} \right\rfloor +1.
\]
\end{theorem}
We should note that the theorem above gives $m \sim 1 + \ln (1/\varepsilon)$ if $r$ is large enough. This is related to the colorful version from \cite{soberon2016robust}, which seeks the smallest $\varepsilon$ for which $m_{\col}(\varepsilon, r) = 1$. Using our notation, the main conjecture in that paper states the following.
\begin{conjecture}
For all $\varepsilon >0$ and any positive integer $r$, we have
\[
m_{\col}(\varepsilon, r) = 1.
\]
\end{conjecture}
To prove Theorem \ref{theorem-tverbergcolored}, we also use Sarkaria's transformation. In order to translate the conditions on the colors through the tensor products, we need the following definition.
A set $B$ is an $r$-block if it is an $r \times r$ array of points in $\mathds{R}^n$ such that the convex hull of each column contains the origin. A \textit{colorful transversal} of an $r$-block $B$ is a subset of $r$ points of $B$ that has exactly one point of each column and exactly one point of each row. Given a family $\mathcal{B}$ of $r$-blocks, a \textit{colorful transversal} for $\mathcal{B}$ is the result of putting together a colorful transversal for each block. If we apply Sarkaria's technique, colorful partitions in $\mathds{R}^d$ become colorful transversals of $r$-blocks in $\mathds{R}^n$. Theorem \ref{theorem-tverbergcolored} is then implied by the following.
\begin{theorem}\label{theorem-coloredcarth-epsilon}
Let $n,r$ be positive integers and $\varepsilon >0$ be a real number. There is an $m_{\col} = m_{\col}(\varepsilon, r)$ such that the following holds. For a sufficiently large $N$, if we are given $N$ $r$-blocks $B_1, \ldots, B_N$ in $\mathds{R}^n$, there are $m_{\col}$ colorful transversals $P_1, \ldots, P_{m_{\col}}$ of $\mathcal{B} = \{B_1, \ldots, B_N\}$ such that for any $\mathcal{G} \subset \mathcal{B}$ with $|\mathcal{G}| > \varepsilon |\mathcal{B}|$ we have $0 \in \conv (P_k (\mathcal{G}))$ for at least one $k$. Moreover, we have
\[
m_{\col} \le \left\lfloor \frac{\ln\left({\varepsilon} \right)}{\ln (1-p_r)} \right\rfloor +1.
\]
\end{theorem}
We also need the observation from \cite{soberon2016robust} that, for any $r$-block and any half-space $H$ that contains the origin, the probability that a random colorful transversal has points in $H$ is greater than or equal to $p_r$.
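Let us briefly indicate where the constant $p_r$ comes from; the full argument appears in \cite{soberon2016robust}. Since $0 \in H$ and the convex hull of each column of an $r$-block contains $0$, every column must contain at least one point of $H$; mark one such point in each column. A colorful transversal chosen uniformly at random corresponds to a uniformly random permutation matching columns to rows, and in the case where the $r$ marked points occupy pairwise distinct rows, the transversal meets a marked point (and hence $H$) precisely when this permutation agrees with the marked one in at least one position, which happens with probability exactly $p_r$.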
\begin{proof}
We proceed in a similar fashion to the proof of Theorem \ref{theorem-epsilon-caratheodory}.
Assume that $\varepsilon > (1-p_r)^m$. We want to prove that there are $m$ transversals as the theorem requires. We choose (with foresight) $A = (Nr^2)^n$, and $\lambda > \sqrt{mN\ln A}$. Define a sequence recursively by $N_0 = N$ and $N_k = N_{k-1}(1-p_r) + \lambda$. If we apply Lemma \ref{lemma-halfspace} to $\cup \mathcal{B}$, we obtain a family $\mathcal{H}$ of $A$ half-spaces, all containing $0$, which are enough to check if the convex hulls of the colorful transversals we construct contain $0$.
For $k = 1, \ldots, m$, we will construct $P_k$ and a family $\gimel_k$ of sets of $r$-blocks with the following properties.
\begin{itemize}
\item Given $\mathcal{G} \subset \mathcal{B}$ such that $0 \not\in \conv(P_{k'}(\mathcal{G}))$ for all $k'=1,\ldots, k$, there must be a $\mathcal{V} \in \gimel_k$ such that $\mathcal{G} \subset \mathcal{V}$,
\item if $\mathcal{V} \in \gimel_k$, then $|\mathcal{V}| \le N_k$, and
\item $|\gimel_k| \le A^k$.
\end{itemize}
We can consider $\gimel_0 = \{\mathcal{B}\}$ to start the induction. We construct $P_k$ inductively, assuming $\gimel_{k-1}$ and $P_{k'}$ have been constructed for $k' < k$. We first choose $P_k$ randomly. For each $B \in \mathcal{B}$, we pick a colorful transversal $y^k_B$ randomly and independently. Then, we denote $P_k = \{y^k_B : B \in \mathcal{B}\}$.
Given a half-space $H \in \mathcal{H}$, consider the random variable
\[
x^k_B(H) = \begin{cases}
1 & \mbox{if } y^k_B \cap H \neq \emptyset \\
0 & \mbox{otherwise.}
\end{cases}
\]
Since $\mathbb{E} [x^k_B(H)]\ge p_r$ for each $B \in \mathcal{B}, H \in \mathcal{H}$, we have that for any $\mathcal{V} \in \gimel_{k-1}$
\[
\mathbb{E} \left[\sum_{B \in \mathcal{V}}x^k_B(H)\right]\ge |\mathcal{V}|p_r.
\]
Since all variables $x^k_B(H)$, $x^k_{B'}(H)$ are independent for $B \neq B'$, Hoeffding's inequality gives
\[
\mathbb{P}\left[\sum_{B \in \mathcal{V}}x^k_B(H) < |\mathcal{V}|p_r - \lambda \right] < e^{-2\lambda^2/|\mathcal{V}|} \le e^{-2\lambda^2/N}.
\]
Therefore
\begin{align*}
\mathbb{P}\left[\exists H \in \mathcal{H}\ \exists \mathcal{V} \in \gimel_{k-1}\mbox{ such that } \sum_{B \in \mathcal{V}}x^k_B(H) < |\mathcal{V}|p_r - \lambda \right] & < A \cdot |\gimel_{k-1}| e^{-2\lambda^2/N} \\
& \le A^k e^{-2\lambda^2/N} < 1
\end{align*}
by the choice of $\lambda$.
Therefore, there must be a choice of $P_k$ such that for all $H \in \mathcal{H}$ and all $\mathcal{V} \in \gimel_{k-1}$ we have
\[
\sum_{B \in \mathcal{V}}x^k_B(H) \ge |\mathcal{V}|p_r - \lambda.
\]
We fix $P_k$ to be this choice. In order to form $\gimel_k$, for each $\mathcal{V} \in \gimel_{k-1}$ and $H \in \mathcal{H}$, we include the set $\{B \in \mathcal{V}: x^k_B(H) = 0\}$. Proving that $\gimel_k$ satisfies the desired properties and that this implies the conclusion of Theorem \ref{theorem-coloredcarth-epsilon} follows from arguments analogous to those at the end of section \ref{section-proof}.
\end{proof}
\section{Acknowledgments}
The author would like to thank the careful comments of two anonymous referees, which have significantly improved the quality of this paper.
\section{Introduction}
Randomised algorithms have come to occupy a central place within theoretical computer science and have had a profound effect on the development of algorithms and complexity theory \cite{Karp91,MotRag}. Most randomised algorithms assume access to a source of unbiased independent random bits. In practice, however, truly independent unbiased random bits are inconvenient, if not impossible, to obtain. We can generate pseudo-random bits on a computer fairly effectively \cite{GoldPseudo} but if computational resources are constrained the quality of these bits may suffer; in particular they may be biased or correlated. Another reason to consider the dependency of randomised algorithms on the random bits they use, other than imperfect generation, is that an adversary may seek to tamper with a source of randomness to influence the output of a randomised algorithm. This raises the natural question of whether relaxing the unbiased and independent assumptions has a notable effect on the efficacy of randomised algorithms. This is a question many researchers have studied since early in the development of randomised algorithms \cite{AlonBiasedCoin,BopNarCoin,VazRandPoly}.
Motivated by this question Azar, Broder, Karlin, Linial and Phillips \cite{ABKLPbias} introduced the $\varepsilon$-biased random walk ($\varepsilon$-BRW). This process is a walk on a graph where at each step with probability $\varepsilon$ a controller can choose a neighbour of the current vertex to move to, otherwise a uniformly random neighbour is selected. One can see this process from two different perspectives. The first interpretation is to see the model as a simple random walk (SRW) with some adversarial noise. That is, the SRW moves to a uniformly and independently sampled neighbour in each time step, however there is an adversarial controller who with probability $\varepsilon$ can change the random bits used to sample the next step to their advantage. In particular one may consider this as a basic model for any randomised algorithm which uses its random bits to find a correct solution. A more specific class of examples is the search for a witness to the truth of a given statement, for example \cite{UniTrans,SoSt}, where the objective of the adversary may be to prevent us finding a witness. Other concrete instances are Pollard's Rho and Kangaroo algorithms for solving the discrete logarithm problem \cite{BirthdayParadoxPollard,Kangaroo,Pollard}. The second perspective sees the controller as an advisor who guides an agent to some objective, however with probability $1-\varepsilon $ at each step their signal is lost and the agent moves to a neighbouring vertex sampled at random. In this setting the $\varepsilon$-BRW and very closely related models have been studied in the contexts of graph searching with noisy advice \cite{Binsearh,LocatingTarget,SearchNoise} and collective navigation \cite{antblazed}. Another work \cite{Navigate} studies the problem of searching for an adversarially placed target in a tree where a ``signpost'' pointing toward the target appears at each vertex with probability $\varepsilon$. Thus in \cite{Navigate} the controller hints appear randomly in space, as opposed to randomly in time as in the $\varepsilon$-BRW.
Azar et al.~\cite{ABKLPbias} consider a pair of different objectives for the controller. The first objective is to maximise/minimise weighted sums of stationary probabilities, where they obtain bounds on how much the controller can influence the stationary probabilities of certain graph classes. This fits most naturally with the first perspective on the walk. The second controller objective they study is that of minimising the expected hitting time of a given set of vertices. This fits more naturally with the second perspective; however, one motivation for obtaining bounds on the hitting times is that they can also be used to bound stationary probabilities (via return times). They show that optimal strategies for maximising or minimising stationary probabilities or hitting times can be computed in polynomial time. Since finding an optimal controller strategy for either objective can be cast as a Markov decision process (MDP) \cite{Derman}, it follows that there exists an optimal strategy for these tasks which is independent of time. Consequently, Azar et al.\ only consider fixed strategies which are independent of time.
We extend the work of Azar et al.\ \cite{ABKLPbias} by studying the cover time of $\varepsilon$-biased random walks, which is the expected time for the walk to visit every vertex of the graph. In the setting of memoryless search algorithms with noisy advice, the cover time of a time-dependent $\varepsilon$-biased walk is the run time of the probabilistic following algorithm \cite[Supplementary material]{antblazed} applied to the task of collecting a token from every vertex (as opposed to just one token as in \cite{Binsearh,LocatingTarget,SearchNoise}).
\subsection{Our Results}
In \cref{S:regular} we introduce a new method which is crucial to cope with the time dependencies of the $\varepsilon$-TBRW. We first consider a ``trajectory-tree'' which encapsulates all walks of a given length from a fixed start vertex in a connected graph $G$ by embedding them into a rooted tree. We also introduce a symmetric operator on real vectors which describes the action of the $\varepsilon$-TBRW. The combination of the operator and trajectory-tree allows us to show that the $\varepsilon$-TBRW can significantly increase the probabilities of rare events described by trajectories, that is:
\begin{enumerate}[label=(\arabic*)]
\item Let $u\in V$, $t > 0$, $0\leq \varepsilon \leq 1$ and $S$ be a subset of trajectories of length $t$ from $u$. Let $p$ be the probability the SRW samples a trajectory from $S$. Then a controller can increase the probability that the first $t$ steps from $u$ form a trajectory in $S$ from $p$ to $p^{1-\varepsilon}$. (See \cref{nonregboostnew}.)
\end{enumerate}
This result can be applied to bound cover and hitting times in terms of the number of vertices $n$, the minimum and average degrees $d_{\mathsf{min}}$ and $d_{\mathsf{avg}} $, and finally $t_{\mathsf{hit}}$, $t_{\mathsf{rel}}$ and $t_{\mathsf{mix}}$ which are the hitting, relaxation and mixing times of the lazy random walk; see Section \ref{formaldef} for full definitions.
\begin{enumerate}[resume*]
\item For any vertex $u$ there is a strategy so that the $\varepsilon$-TBRW started from $u$ covers $G$ in expected time at most
\[\BO{\frac{t_{\mathsf{hit}}}{\varepsilon}\cdot \log\left( \frac{d_{\mathsf{avg}}\cdot t_{\mathsf{rel}} \cdot \log n}{d_{\mathsf{min}}} \right) }.\]
It should be noted that, for regular graphs, this upper bound breaks the lower bound of $\Omega(n \log n)$ for the cover time of simple random walks
if $t_{\mathsf{rel}} = o( (\log n/ \log \log n)^2)$; in particular, for expanders we obtain a nearly-optimal cover time bound of $O(n \log \log n)$.
\item For any two vertices $u,v\in V$ there is a strategy so that for the $\varepsilon$-TBRW the expected time to reach $v$ from $u$ is at most \[\BO{ \left(\frac{n\cdot d_{\mathsf{avg}}}{ d_{\mathsf{min}}}\right)^{1-\varepsilon}\cdot \left( t_{\mathsf{mix}}\right)^{\frac{2+\varepsilon}{3}}}.\]
\end{enumerate}
(See Theorems \ref{trelbdd} and \ref{trelhit} for the two results above.)
In \cref{AzarConjSec} we study how much the controller can affect the stationary distribution of any vertex in our graph. Azar et al.\ \cite{ABKLPbias}
introduced this problem and showed that for any bounded degree regular graph a controller can increase the stationary probability of any vertex from $p$ to $p^{1-\Omega(\varepsilon)} $. By applying the results from \cref{S:regular} we prove a stronger bound for graphs with small relaxation time and sub-polynomial degree ratio:
\begin{enumerate}[resume*]
\item In any graph a controller can increase the stationary probability of any vertex from $p$ to $p^{1-\varepsilon+\delta} $, where $\delta=\ln\left( 16\cdot t_{\mathsf{mix}} \right)/\abs{\ln p} $. (See \cref{azarconj}.)
\end{enumerate}
Azar et al.\ \cite{ABKLPbias} conjectured that in any graph the controller can boost the stationary probability of any chosen vertex from $p$ to $p^{1-\varepsilon}$ (see Conjecture~\ref{abklp}), thus we confirm their conjecture (up to a negligible error in the exponent) for the class of graphs above, including expanders.
Motivated by this conjecture and a comment of Azar et al.\ stating that for regular graphs the interesting case is when $\varepsilon$ is not substantially larger than $1/d_{\mathsf{max}}$ (the reciprocal of the maximum degree), we try to quantify the effect of a controller in this regime. By establishing several bounds and counter-examples we reveal the following trichotomy in terms of the density of the graph:
\begin{enumerate}[resume*]
\item For any graph with $d_{\mathsf{max}}= \lo{\log n/\log\log n}$, a controller for the $\varepsilon$-BRW can increase the stationary probability of any vertex by more than a constant factor.
\item For any graph which is everywhere dense, i.e., has a minimum degree of $\Omega(n)$, a controller cannot increase any entry in the stationary distribution by more than a constant factor.
\item However, any polynomial but sublinear degree regime contains regular graphs for which entries in the stationary distribution can be increased by a polynomial factor, but for almost all almost-regular graphs, no entry can be increased by more than a constant factor.
\end{enumerate}
(See \cref{cor-small-poly}, and Propositions \ref{prop:dense}, \ref{GnpNoBoost} and \ref{RegCycleBoost} respectively for the above results.)
In \cref{complexsec} we consider the complexity of finding an optimal strategy to cover a graph in minimum expected time. Azar et al.\ considered this problem for hitting times and showed that there is a polynomial-time algorithm to determine an optimal strategy on directed graphs; we establish a dichotomy by proving complexity-theoretic lower bounds for the cover time.
\begin{enumerate}[resume*]
\item The problems of deciding between two neighbours as the next step in order to minimise the cover time, and deciding if the cover time of a vertex subset is less than a given value are both $\PSPACE$-complete on directed graphs. (See Theorems \ref{covinPSPACE} and \ref{allhard}.)
\end{enumerate}
Adapting previous results for the related choice random walk process \cite{ITCSpaper}, we also conclude:
\begin{enumerate}[resume*]
\item The two problems mentioned above in (8) are $\NP$-hard on undirected graphs. (See Theorem \ref{NextIsNPHard}.)
\end{enumerate}
Finally in \cref{Conclude} we conclude with some open problems and conjectures.
\section{Preliminaries} \label{formaldef}
We shall now formally describe the $\varepsilon$-biased and $\varepsilon$-time-biased random walk model and introduce some notation.
Throughout this paper we shall always consider a connected $n$-vertex simple graph $G=(V,E)$, which unless otherwise specified, will be unweighted. We write $\Gamma(v)$ for the neighbourhood of a vertex $v$ and call $d(v)=|\Gamma(v)|$ the degree of $v$. We use $d_{\mathsf{max}}$, $d_{\mathsf{min}}$ and $d_{\mathsf{avg}}$ to denote the maximum, minimum and average degrees of a graph respectively. Given a Markov chain $\mathbf{H}=(h_{x,y})_{x,y\in V}$ with transition probabilities $h_{x,y}$, let $h_{x,y}^{(t)}$ denote the probability the walk started at state $x$ is at $y$ after $t$ steps. Let $\pi_{\mathbf{H}}$ denote the stationary distribution of $\mathbf{H}$, and throughout we let $\pi=\pi_{\mathbf{P}}$ where $\mathbf{P}$ is the transition matrix of a simple random walk (SRW), thus $\mathbf{P}=(p_{x,y})_{x,y\in V}$ where $p_{x,y}=1/d(x)$ if $xy\in E$ and $0$ otherwise.
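Recall also the explicit form of the stationary distribution of the SRW on a connected graph: for every $x\in V$,
\[
\pi(x) = \frac{d(x)}{2|E|} = \frac{d(x)}{n\cdot d_{\mathsf{avg}}},
\]
a fact we use repeatedly when comparing stationary probabilities to degrees.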
Azar et al.\ \cite{ABKLPbias}, building on earlier work \cite{ben1987collective}, introduced the $\varepsilon$-biased random walk ($\varepsilon$-BRW) on a graph $G$. Each step of the $\varepsilon$-BRW is preceded by an $(\varepsilon, 1 - \varepsilon)$-coin flip. With probability $1 -\varepsilon$ a step of the simple random walk is performed, but with probability $\varepsilon$ the controller gets to select which neighbour to move to. The selection can be probabilistic, but it is time independent. Thus if $\mathbf P$ is the transition matrix of the simple random walk, then the transition matrix $\mathbf Q^{\varepsilon\text{B}}$ of the $\varepsilon$-biased random walk is given by
\begin{equation}\label{bias}\mathbf Q^{\varepsilon\text{B}} = (1 - \varepsilon)\mathbf P + \varepsilon\mathbf B,\end{equation}
where $\mathbf B$ is an arbitrary stochastic matrix chosen by the controller, with
support restricted to $E(G)$. The controller of an $\varepsilon$-BRW has full knowledge of $G$.
Azar et al.\ focused on the problems of bias strategies which either minimise or maximise the stationary probabilities of sets of vertices or which minimise the hitting times of vertices. Azar et al.\ \cite[Sec.\ 4]{ABKLPbias} make the connection between Markov decision processes and the $\varepsilon$-biased walk; in particular they observe that the two tasks they study can be identified as the expected average cost and optimal first-passage problems respectively in this context \cite{Derman}. As a result of this, the existence of time independent optimal strategies for both objectives follow from Theorems $2$ and $3$ respectively in \cite[Ch.\ 3]{Derman}. For this reason Azar et al.\ restrict to the class of unchanging strategies, where we say that an $\varepsilon$-bias strategy is \textit{unchanging} if it is independent of both time and the history of the walk.
It is clear that if we wish to consider optimal strategies to cover a graph (visit every vertex) in shortest expected time then we must include strategies which depend on the set of vertices already visited by the walk. Let $\mathcal{H}_t$ be the history of the random walk up to time $t$, that is the sigma algebra $\mathcal{H}_t= \sigma\left(X_0, \dots,X_t \right)$ generated by all steps of the walk up to and including time $t$. Thus we consider a time-dependent version, where the bias matrix $\mathbf B_t$ may depend on the time $t$ and the history $\mathcal{H}_t$; we refer to this as the $\varepsilon$-time-biased walk ($\varepsilon$-TBRW).
Let $\ETBcov{v}{G}$ denote the minimum expected time (taken over all strategies) for the $\varepsilon$-TBRW to visit every vertex of $G$ starting from $v$, and define the \textit{cover time} $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(G):=\max_{v\in V}\ETBcov vG$. Similarly let $\Heb xy$ denote the minimum expected time for the $\varepsilon$-biased walk to reach $y$, which may be a single vertex or a set of vertices, starting from a vertex $x$. We do not need to provide notation for the hitting times of the $\varepsilon$-TBRW since, as mentioned before, there is always a time-independent optimal strategy for hitting a given vertex \cite[Thm.\ 11]{ABKLPbias}, thus hitting times in the $\varepsilon$-TBRW and $\varepsilon$-BRW are the same. We also define the \textit{hitting time} $t_{\mathsf{hit}}^{\varepsilon\mathsf{B}}(G): = \max_{x,y\in V}\Heb xy$. Any unchanging strategy of the $\varepsilon$-BRW on a finite connected graph results in an irreducible Markov chain $\mathbf{Q}$ and thus, when appropriate, we refer to its stationary distribution as $\pi_{\mathbf{Q}}$.
Let $\mathbf{I}$ denote the identity matrix. Given a Markov chain $\mathbf{H}$ we call $\tilde{\mathbf{H}}=(\mathbf{I}+\mathbf{H})/2$ the \textit{lazy chain} of $\mathbf{H}$, and note that $\pi_{\tilde{\mathbf{H}}}=\pi_{\mathbf{H}}$. One important case is $\tilde{\mathbf{P}}$, where $\mathbf{P}$ is the SRW; we refer to this as the lazy random walk (LRW). Let $1=\lambda_1> \lambda_2\geq \cdots\geq \lambda_n\geq -1 $ be the eigenvalues of a simple random walk (SRW) on a connected $n$-vertex graph $G$ and define $\lambda_* =\max\left\{|\lambda_i| : i = 2, \dots, n \right\}$. Let $t_{\mathsf{rel}} := (1-\tilde{\lambda}_2)^{-1}$ be the relaxation time of $G$, where $\tilde{\lambda}_2$ is the second largest eigenvalue of $\tilde{\mathbf{P}}$, the LRW on $G$. We let \[t_{\mathsf{mix}}=\min_{t\geq 1}\left\{t:\;\max_{x\in V}||\tilde{p}_{x,\cdot }^{(t)}- \pi(\cdot)||_{\mathsf{TV}}\leq 1/4\right\}, \quad \text{where }\quad ||\tilde{p}_{x,\cdot }^{(t)}- \pi(\cdot)||_{\mathsf{TV}}= \frac{1}{2}\sum_{y\in V}\left|\tilde{p}^{(t)}_{x,y}-\pi(y) \right|, \] denote the \textit{total variation mixing time} of $G$. For the lazy random walk (LRW) $\tilde{\mathbf{P}}$ we define \begin{equation}\label{eq:sep}t_{\mathsf{sep}}= \inf\left\{t : \max_{x,y\in V}\left[1-\frac{\tilde{p}^{(t)}_{x,y}}{\pi(y)}\right]\leq \frac{1}{\mathrm{e}}\right\}, \qquad \text{and}\qquad t_{\infty}= \inf\left\{t : \max_{x,y\in V}\left|\frac{\tilde{p}^{(t)}_{x,y}}{\pi(y)}-1\right|<\frac{1}{\mathrm{e}}\right\},\end{equation} to be the \textit{separation time} and the \textit{$\ell^\infty$-mixing time} respectively.
\section{Hitting and Cover Times}\label{S:regular}
In this section we prove that the $\varepsilon$-TBRW has the power to increase the probability of certain events. As a consequence of this result we obtain bounds on the cover and hitting times of the $\varepsilon$-TBRW on a graph $G$ in terms of $n$, the extremal and average degrees, the relaxation time, and the hitting time of the SRW.
The approach used to prove these results is, for a given graph $G$, to consider events which depend only on the trajectory of the walker (that is, the sequence of vertices visited) up to some fixed time $t$. We use a ``trajectory-tree'' to encode all possible trajectories. This then allows us to relate the probability of a given event in the $\varepsilon$-TBRW to that for the SRW; the role of the technical lemma is to recursively bound the effects of an optimal strategy for the $\varepsilon$-TBRW at each level of the tree. This section follows the conference version of this paper \cite{ITCSpaper} where the method was initially developed for the $\varepsilon$-TBRW. This method is flexible in the sense that it can be applied to other random processes with choice, in particular in \cite{POTC} we adapt this method to the choice random walk.
Fix a vertex $u$, a non-negative integer $t$ and a set $S$ of trajectories of length $t$ (here the length is the number of steps taken). Write $p_{u,S}$ for the probability that running a SRW starting from $u$ for $t$ steps results in a member of $S$. Let $q_{u,S}(\varepsilon)$ be the corresponding probability for the $\varepsilon$-TBRW, which depends on the particular strategy used. It is important that neither the $\varepsilon$-TBRW ($q$) nor the SRW ($p$) is lazy. We prove the following result relating $q_{u,S}(\varepsilon)$ to $p_{u,S}$.
\begin{theorem}\label{nonregboostnew}Let $G$ be a graph, $u\in V$, $t > 0$, $0\leq \varepsilon \leq 1$ and $S$ be a set of trajectories of length $t$ from $u$. Then there exists a strategy for the $\varepsilon$-TBRW
such that
\[
q_{u,S}(\varepsilon) \geq \left( p_{u,S} \right)^{1-\varepsilon}.
\]
\end{theorem}
Here we typically think of $S$ as encoding events such as ``the walker is in a set $W\subset V$ at time $t$'' or ``the walker has visited $v\in V$ by time $t$''; however, the result applies to any event measurable at time $t$. This theorem can be used to bound the cover time of the $\varepsilon$-TBRW.
\begin{theorem}\label{trelbdd}
For any graph $G$, and any $\varepsilon\in(0,1)$,
\[ t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(G)=\BO{\frac{t_{\mathsf{hit}}}{\varepsilon}\cdot \log\left( \frac{d_{\mathsf{avg}}\cdot t_{\mathsf{rel}} \cdot \log n}{d_{\mathsf{min}}} \right) }.\]
\end{theorem}
\cref{trelbdd} has the following consequence for expanders; a sequence of graphs $(G_n)$ is a \emph{sequence of expanders} if $t_{\mathsf{rel}}(G_n) = \BT{1}$.
\begin{corollary}\label{trelbddcor}For every sequence $(G_n)_{n\in \mathbb N}$ of $n$-vertex bounded degree expanders and any fixed $\varepsilon >0$, we have \[t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(G_n)=\BO{\frac{n}{\varepsilon}\cdot \log \log n}.\]
\end{corollary}
We can also use \cref{nonregboostnew} to bound the hitting times of the $\varepsilon$-BRW.
\begin{theorem}\label{trelhit}For any graph $G$, any $x,y \in V $ and any $\varepsilon\in(0,1)$, we have
\[\quad\Heb{x}{y}\leq 16 \cdot \pi(y)^{\varepsilon-1}\cdot t_{\mathsf{mix}},\]
where $\pi$ is the stationary distribution of the SRW;
this bound also holds for return times. Additionally, \[t_{\mathsf{hit}}^{\varepsilon\mathsf{B}}(G)\leq 120\cdot \left(\frac{n\cdot d_{\mathsf{avg}}}{ d_{\mathsf{min}}}\right)^{1-\varepsilon}\cdot \left( t_{\mathsf{mix}}\right)^{\frac{2+\varepsilon}{3}}.\]
\end{theorem}
We shall prove Theorem \ref{nonregboostnew} in \cref{gadget} after proving a key lemma in \cref{game}. Theorem \ref{trelbdd} is an analogue for the $\varepsilon$-TBRW of \cite[Theorem 6.1]{POTC}, and the derivation from \cref{nonregboostnew} follows that given in \cite[Section 6.1]{POTC} exactly. We include this for completeness in the Appendix. The statement of Theorem \ref{trelhit} is an improvement over the analogous result in \cite[Theorem 6.2]{POTC} (the main improvement is replacing $t_{\mathsf{rel}} \log n$ with $ t_{\mathsf{mix}}$). We give the proof of this result in \cref{sec:Hitproof} and note that the same proof will give the same improvement to \cite[Theorem 6.2]{POTC}.
The main difference between the results here and those in \cite{POTC} is that each relies on an operator which describes the random walk process being studied. The operator used here is different to those introduced in \cite{POTC}, and as a result so is the strength of the boosting obtainable. This highlights the versatility of the technique used to prove \cref{nonregboostnew} in that it can be used to analyse several different random processes with non-deterministic interventions, such as the $\varepsilon$-TBRW and the choice random walk (CRW) of \cite{POTC}.
\subsection{The \texorpdfstring{$\varepsilon$}{e}-Max/Average Operation}\label{game}
In this subsection we shall introduce an operator which models the action of the $\varepsilon$-TBRW. We shall then prove a bound on the output of the operator, which is used to show that the $\varepsilon$-TBRW can boost probabilities indexed by paths.
For $0<\varepsilon <1 $ define the $\varepsilon$-max/average operator $\operatorname{MA}_{\varepsilon}:[0,\infty)^m\to [0,\infty)$ by \begin{equation*}
\operatorname{MA}_{\varepsilon}\left(x_1,\dots , x_m \right) = \varepsilon \cdot \max_{1\leq i \leq m} x_i + \frac{1 - \varepsilon }{m} \cdot \sum_{i=1}^m x_i.
\end{equation*}
This can be seen as an average which is biased in favour of the largest element, indeed it is a convex combination between the largest element and the arithmetic mean.
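As a small concrete example, $\operatorname{MA}_{1/3}\left(0,1\right) = \frac{1}{3}\cdot 1 + \frac{2}{3}\cdot\frac{1}{2} = \frac{2}{3}$ and $\operatorname{MA}_{1/3}\left(0,0,1\right) = \frac{1}{3}\cdot 1 + \frac{2}{3}\cdot\frac{1}{3} = \frac{5}{9}$; applying the operator once more, $\operatorname{MA}_{1/3}\left(\frac{5}{9},\frac{2}{3},\frac{5}{9}\right) = \frac{1}{3}\cdot\frac{2}{3} + \frac{2}{3}\cdot\frac{16}{27} = \frac{50}{81}$. These are exactly the computations which produce the red values in the figure in \cref{gadget}.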
For $p\in \mathbb R\setminus\{0\}$, the $p$-power mean $M_p$ of non-negative reals $x_1,\ldots,x_m$ is defined by \[M_p(x_1,\ldots,x_m)=\left(\frac{x_1^p+\cdots+x_m^p}{m}\right)^{1/p},\] and \[M_{\infty}(x_1,\ldots,x_m)=\max\{x_1,\ldots,x_m\}=\lim_{p\to \infty}M_p(x_1,\ldots,x_m).\] Thus we can express the $\varepsilon$-max/average operator as $\operatorname{MA}_{\varepsilon}(\cdot)= (1-\varepsilon)M_1(\cdot)+\varepsilon M_{\infty}(\cdot)$. We use a key lemma, \cref{anticonv}, which could be described as a multivariate anti-convexity inequality.
\begin{lemma}\label{anticonv}Let $0<\varepsilon <1$, $m \geq 1$ and $\delta \leq \varepsilon /(1-\varepsilon) $. Then for any $x_1,\dots, x_m \in [0,\infty)$,
\[M_{1+\delta}\left(x_1,\dots, x_m \right) \leq \operatorname{MA}_{\varepsilon}\left(x_1,\dots, x_m \right) . \]
\end{lemma}
\begin{proof} We begin by establishing the following claim.
\begin{claim}Let $\eta\in(0,1)$, and suppose $a,b,c\in\mathbb R^+$ with $c=(1-\eta)a+\eta b$. Then
\begin{equation}\label{mean-inequality}M_c\leq M_a^{(1-\eta)a/c}M_b^{\eta b/c}.\end{equation}
\end{claim}
\begin{poc}H\"older's inequality states for positive reals $y_1,\ldots,y_m$ and $z_1,\ldots,z_m$ that
\[y_1z_1+\cdots+y_mz_m\leq\bigl(y_1^p+\cdots+y_m^p\bigr)^{1/p}\bigl(z_1^q+\cdots+z_m^q\bigr)^{1/q},\]
where $p,q\geq 1$ satisfy $1/p+1/q=1$.
The desired result follows by setting $y_i=x_i^{(1-\eta)a}$, $z_i=x_i^{\eta b}$, $p=1/(1-\eta)$, $q=1/\eta$, dividing both sides by $m$ and then taking $c$\textsuperscript{th} roots.
\end{poc}
Applying \eqref{mean-inequality}, we have for any $k>\delta$ that
\begin{align*}
M_{1+\delta}&\leq M_1^{\frac{1-\delta/k}{1+\delta}}M_{k+1}^{\frac{(k+1)\delta/k}{1+\delta}}\\
&\leq\frac{1-\delta/k}{1+\delta}M_1+\frac{(k+1)\delta/k}{1+\delta}M_{\infty},
\end{align*}
using the weighted AM-GM inequality and the fact that $M_p\leq M_{\infty}$ for any $p$. Taking limits as $k\to\infty$, noting that $\varepsilon\geq\delta/(1+\delta)$, gives the required inequality.
\end{proof}
\begin{remark}The dependence of $\delta$ on $\varepsilon$ given in \cref{anticonv} is best possible. This can be seen by setting $x_1=0$ and $x_i=1$ for $2\leq i\leq m$, and letting $m$ tend to $\infty$.
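Indeed, for this choice $M_{1+\delta}=\left(\frac{m-1}{m}\right)^{1/(1+\delta)} = 1 - \frac{1}{(1+\delta)m}+\BO{m^{-2}}$, while $\operatorname{MA}_{\varepsilon} = \varepsilon + (1-\varepsilon)\cdot\frac{m-1}{m} = 1 - \frac{1-\varepsilon}{m}$, so requiring $M_{1+\delta}\leq \operatorname{MA}_{\varepsilon}$ for all $m$ forces $\frac{1}{1+\delta}\geq 1-\varepsilon$, that is, $\delta \leq \varepsilon/(1-\varepsilon)$.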
\end{remark}
\subsection{The Trajectory-Tree for Graphs}\label{gadget}
In this section we show how the ``trajectory-tree'' can be used to prove \cref{nonregboostnew}. This tree encodes walks of length at most $t$ from $u$ in a rooted graph $(G,u)$ by vertices of an arborescence $(\mathcal{T}_t,\mathbf{r})$, i.e.\ a tree with all edges oriented away from the root $\mathbf{r}$. Here we use bold characters to denote trajectories, and $\mathbf r$ will be the length-$0$ trajectory consisting of the single vertex $u$. The tree $\mathcal T_t$ consists of one node for each trajectory of length $i\leq t$ starting at $u$, and has an edge from $\mathbf{x}$ to $\mathbf{y}$ if $\mathbf{x}$ may be obtained from $\mathbf{y}$ by deleting the final vertex; we refer to such $\mathbf{y}$ as `offspring' of $\mathbf{x}$.
The proof of \cref{nonregboostnew} will follow the corresponding proof in \cite{POTC} closely, but we give a full proof here in order to clarify the role played by the $\varepsilon$-max/average operator.
We write $d^+(\mathbf{x})$ for the number of offspring in $\mathcal T_t$ of $\mathbf x$, and $\Gamma^+(\mathbf{x})$ for the set of offspring of $\mathbf x$. Denote the length of the walk $\mathbf x$ by $\abs{\mathbf{x}}$. We shall extend our notation $p_{u,S}$ and $q_{u,S}(\varepsilon)$ to $p_{\mathbf x,S}$ and $q_{\mathbf x,S}(\varepsilon)$, defined to be the probabilities that extending $\mathbf x$ to a trajectory of length $t$, using the laws of the SRW and $\varepsilon$-TBRW respectively, results in an element of $S$. Additionally, let $W_u(k):=(X_0,\dots,X_k)$ be the trajectory of a simple random walk $X_t$ on $G$ up to time $k$, with $X_0=u$. Note that if $\abs{\mathbf{x}}<t$ then $d^+(\mathbf{x})$ equals the degree in $G$ of the final vertex of $\mathbf{x}$.
\begin{figure}
\begin{subfigure}{.35\textwidth}
\begin{tikzpicture}[xscale=.85,yscale=0.9,knoten/.style={thick,circle,draw=black,minimum size=.8cm,fill=white},wknoten/.style={thick,circle,draw=black,minimum size=.6cm,fill=white},edge/.style={black},dedge/.style={thick,black,-stealth}]
\node[knoten] (u) at (2,2) {$u$};
\node[knoten] (v) at (4,2.65) {$v$};
\node[knoten] (w) at (4,1.35) {$w$};
\node[knoten] (x) at (6,3.25) {$x$};
\node[knoten] (y) at (6,2) {$y$};
\node[knoten] (z) at (6,0.75) {$z$};
\draw[edge] (u) to (v);
\draw[edge] (u) to (y);
\draw[edge] (w) to (x);
\draw[edge] (w) to (z);
\draw[fill=red!20,opacity=0.4] (6,1.375) ellipse (0.75cm and 1.3cm);
\draw[edge] (v) to (x);
\draw[edge] (z) to (y);
\draw[edge] (u) to (w);
\draw[edge] (v) to (z);
\end{tikzpicture}
\end{subfigure}%
\begin{subfigure}{.65\textwidth}
\begin{tikzpicture}[xscale=0.8,yscale=0.76,knoten/.style={thick,circle,draw=black,minimum size=.6cm,fill=white},wknoten/.style={thick,circle,draw=black,minimum size=.6cm,fill=white},edge/.style={black},dedge/.style={thick,black,-stealth}]
\node[knoten] (u1) at (2,6) [label=left:$\textcolor{blue}{\frac{7}{18}}$,label=right:$\textcolor{red}{\frac{50}{81}}$]{$\phantom{x}u\phantom{x}$};
\node[knoten] (v2) at (-2,4)[label=left:$\textcolor{blue}{\frac{1}{3}}$,label=right:$\textcolor{red}{\frac{5}{9}}$] {$\;uv\;$};
\node[knoten] (y2) at (2,4)[label=left:$\textcolor{blue}{\frac{1}{2}}$,label=right:$\textcolor{red}{\frac{2}{3}}$] {$\;uy\;$};
\node[knoten] (w2) at (6,4)[label=left:$\textcolor{blue}{\frac{1}{3}}$,label=right:$\textcolor{red}{\frac{5}{9}}$] {$\;uw\;$};
\node[wknoten] (u3) at (-3.4,2)[label=below:$0$] {$uvu$};
\node[wknoten] (x3) at (-2,2)[label=below:$0$] {$uvx$};
\node[wknoten] (z3) at (-0.6,2)[label=below:$1$] {$uvz$};
\node[wknoten] (u31) at (1.1,2)[label=below:$0$] {$uyu$};
\node[wknoten] (z31) at (2.9,2)[label=below:$1$] {$uyz$};
\node[wknoten] (u32) at (4.6,2) [label=below:$0$]{$uwu$};
\node[wknoten] (x32) at (6,2) [label=below:$0$]{$uwx$};
\node[wknoten] (z32) at (7.4,2) [label=below:$1$]{$uwz$};
\draw[dedge] (u1) to (v2);
\draw[dedge] (u1) to (y2);
\draw[dedge] (u1) to (w2);
\draw[dedge] (v2) to (u3);
\draw[dedge] (v2) to (x3);
\draw[dedge] (v2) to (z3);
\draw[dedge] (y2) to (u31);
\draw[dedge] (y2) to (z31);
\draw[dedge] (w2) to (u32);
\draw[dedge] (w2) to (x32);
\draw[dedge] (w2) to (z32);
\end{tikzpicture}
\end{subfigure}
\caption{Illustration of a (non-lazy) walk on a non-regular graph starting from $u$ with the objective of being at $\{y,z\}$ at step $t=2$. The probabilities of achieving this are given in blue (left) for the SRW and in red (right) for the $\frac{1}{3}$-TBRW.}
\end{figure}
\begin{proof}[Proof of \cref{nonregboostnew}]For convenience we shall suppress the notational dependence of $q_{\mathbf{x},S}(\varepsilon)$ on $\varepsilon$. To each node $\mathbf{x}$ of the trajectory-tree $\mathcal{T}_t$ we assign the value $q_{\mathbf{x},S} $ under the $\varepsilon$-TB strategy of biasing towards a neighbour in $G$ which extends $\mathbf{x}$ to a walk $\mathbf{y}\in \Gamma^+(\mathbf{x})$ maximising $q_{\mathbf{y},S}$. This is well defined because both the strategy and the values $q_{\mathbf{x},S}$ can be computed in a ``bottom up'' fashion starting at the leaves, where if $\mathbf{x} \in V(\mathcal{T}_t)$ is a leaf then $q_{\mathbf{x},S} $ is $1$ if $\mathbf x\in S$ and $0$ otherwise.
Suppose $\mathbf{x}$ is not a leaf. Then with probability $1-\varepsilon$ we choose the next step of the walk uniformly at random in which case the probability of reaching $S$ from $\mathbf{x}$ is just the average of $q_{\mathbf{y},S}$ over the offspring $\mathbf{y}$ of $\mathbf{x}$, otherwise we choose a maximal $q_{\mathbf{y},S}$. Thus the value of $\mathbf{x}$ is given by the $\varepsilon$-max/average of its offspring, that is \begin{equation}\label{qqqq}q_{\mathbf{x},S} =\operatorname{MA}_{\varepsilon}\left( \left(q_{\mathbf{y},S}\right)_{\mathbf{y}\in \Gamma^+(\mathbf{x})} \right).\end{equation}
We define the following potential function $\Phi^{(i)}$ on the $i$\textsuperscript{th} generation of the trajectory-tree $\mathcal{T}_t$, where $\delta>0$ is a parameter to be fixed at the end of the proof: \begin{equation}\label{Phi}\Phi^{(i)}= \sum\limits_{\abs{\mathbf{x}}=i}q_{\mathbf{x},S}^{1+ \delta}\cdot \Pr{W_u(i) = \mathbf{x}}. \end{equation}
Notice that if $\mathbf{x}\mathbf{y}\in E(\mathcal{T}_t)$ then \[\Pr{W_u(\abs{\mathbf{y}}) = \mathbf{y}} = \Pr{W_u(\abs{\mathbf{x}}) = \mathbf{x}}/d^+(\mathbf{x}) .\] Also since each $\mathbf{y}$ with $\abs{\mathbf{y}}=i$ has exactly one parent $\mathbf{x}$ with $\abs{\mathbf{x}}=i-1$ we can write
\begin{equation}\label{PPhi}\Phi^{(i)} = \sum\limits_{\abs{\mathbf{x}}=i-1}\sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})}q_{\mathbf{y},S}^{1+ \delta}\cdot \frac{\Pr{W_u(i-1) = \mathbf{x}}}{d^+(\mathbf{x}) }.\end{equation} We now show that $\Phi^{(i)} $ is non-increasing in $i$. By combining \eqref{Phi} and \eqref{PPhi} we can see that the difference $\Phi^{(i-1)}-\Phi^{(i)}$ is given by
\begin{align*}
&\sum\limits_{\abs{\mathbf{x}}=i-1} \left(q_{\mathbf{x},S}^{1+ \delta}-\frac{1}{d^+(\mathbf{x})}\sum_{\mathbf{y} \in \Gamma^+(\mathbf{x})}q_{\mathbf{y},S}^{1+ \delta} \right) \Pr{W_u(i-1) = \mathbf{x}}.
\end{align*}Recalling \eqref{qqqq}, to establish $\Phi^{(i-1)}-\Phi^{(i)}\geq 0$ it is sufficient to show the following inequality holds whenever $\mathbf{x}$ is not a leaf:
\[ \operatorname{MA}_{\varepsilon}\left(\left( q_{\mathbf{y},S}\right)_{\mathbf{y} \in \Gamma^+(\mathbf{x})} \right)^{1+ \delta} \geq \frac{1}{d^+(\mathbf{x} ) }\sum_{\mathbf{y}\in \Gamma^+(\mathbf{x})}q_{\mathbf{y},S}^{1+ \delta}.\]
By taking $(1+\delta)$\textsuperscript{th} roots this inequality holds for any $\delta \leq \varepsilon /(1-\varepsilon) $ by \cref{anticonv}, and thus for $ \delta$ in this range $\Phi^{(i)} $ is non-increasing in $i$.
Observe $\Phi^{(0)} = q_{u,S}^{1+\delta}$. Also if $\abs{\mathbf{x}}=t$ then $q_{\mathbf{x},S}=1 $ if $\mathbf{x} \in S$ and $0$ otherwise; it follows that
\[\Phi^{(t)} = \sum_{\abs{\mathbf{x}}=t}q_{\mathbf{x},S}^{1+\delta}\cdot \Pr{W_u(t) = \mathbf{x}}=\sum_{\abs{\mathbf{x}}=t}\mathbf{1}_{\mathbf{x}\in S}\cdot \Pr{W_u(t) = \mathbf{x}} = p_{u,S} .\] Thus, since $\Phi^{(i)}$ is non-increasing in $i$, we have $q_{u,S}^{1+\delta} = \Phi^{(0)}\geq \Phi^{(t)} = p_{u,S} $. The result for the $\varepsilon$-TBRW follows by taking $\delta = \varepsilon /(1-\varepsilon) $.
\end{proof}
\subsection{Proof of Theorem \ref{trelhit}}\label{sec:Hitproof}
We now prove Theorem \ref{trelhit}. The idea of the proof is to use Theorem \ref{nonregboostnew} to boost the probability that a random walk hits a vertex within $\Theta(t_{\mathsf{mix}})$ steps.
\begin{proof}Observe that for any non-negative integer random variable $Z$ the following holds\begin{equation}\label{eq:posrvtrick} \Pr{Z\geq 1} = \frac{\Ex{Z}}{\Ex{Z\mid Z\geq 1}}.\end{equation} Let $ N_y(T) = |\{t\leq T: \tilde{X}_t = y\}|$ be the number of visits to $y\in V$ up to time $T\geq 0$ by the lazy random walk $\tilde{X}_t$ on $G$. We shall now apply \eqref{eq:posrvtrick} to $N_y(T)$ for a suitable $T$.
Recall the definition $t_{\mathsf{mix}}:=t_{\mathsf{mix}}(1/4)$ of the total variation mixing time. It follows that with probability $3/4$ we can couple a lazy random walk $\tilde{X}_t$ from any start vertex with a stationary walk by time $t_{\mathsf{mix}}$. Then, for any $x,y\in V$ we have \begin{equation}\label{eq:bddonvisits}\Exu{x}{N_y(2 t_{\mathsf{mix}})}\geq \frac{3}{4}\cdot \Exu{\pi}{N_y(t_{\mathsf{mix}})}\geq \frac{3\pi (y)t_{\mathsf{mix}}}{4}.\end{equation} Now, if $N_y(T)\geq 1$ then $\tilde{X}_t$ first visited $y$ at some random time $0\leq s\leq T$. Taking $s=0$ gives \begin{equation}\label{eq:bddonreturns}\Exu{x}{N_y(2t_{\mathsf{mix}})\mid N_y(2t_{\mathsf{mix}})\geq 1} \leq \sum_{t=0}^{2\cdot t_{\mathsf{mix}}}p_{y,y}^{(t)}\leq 2\cdot t_{\mathsf{mix}}+1\leq 3\cdot t_{\mathsf{mix}}.\end{equation} If we apply \eqref{eq:posrvtrick} to $N_y(2t_{\mathsf{mix}})$, then it follows from \eqref{eq:bddonvisits} and \eqref{eq:bddonreturns} that for any $x,y\in V$ we have
\begin{equation}\label{eq:hitprobbdd}\Pru{x}{ \tau_{y}\leq 2t_{\mathsf{mix}}}\geq \frac{\Exu{x}{N_y(2t_{\mathsf{mix}})} }{\Exu{x}{N_y(2t_{\mathsf{mix}}) \mid N_y(2t_{\mathsf{mix}})\geq 1}} \geq \frac{3\pi (y)t_{\mathsf{mix}}}{8}\cdot \frac{1}{3\cdot t_{\mathsf{mix}}} = \frac{\pi (y)}{8}. \end{equation} By the natural coupling between trajectories of the simple and lazy random walks (adding in the lazy steps) it follows that \eqref{eq:hitprobbdd} also holds for the simple random walk.
Now, applying Theorem \ref{nonregboostnew} to \eqref{eq:hitprobbdd} shows that for any $x,y\in V$ there exists a strategy for the $\varepsilon$-TBRW to hit $y$ within $2\cdot t_{\mathsf{mix}}$ steps which has success probability at least $(\pi(y)/8)^{1-\varepsilon}$. Thus if we run this strategy for $2\cdot t_{\mathsf{mix}}$ steps then repeat if necessary, we see that the expected time for the $\varepsilon$-TBRW to hit $ y$ from $x$ is at most $2\cdot t_{\mathsf{mix}}/(\pi(y)/8)^{1-\varepsilon} \leq 16\pi(y)^{\varepsilon-1}\cdot t_{\mathsf{mix}} $. Since there exists an optimal strategy for hitting any vertex which is independent of time \cite[Theorem 5]{ABKLPbias} we conclude that this bound also holds for the $\varepsilon$-BRW.
To prove the second bound we shall get a different bound on returns (replacing \eqref{eq:bddonreturns}) which is independent of $y$. By \cite[Lemma 1]{oliveira2018random} and \cite[Lemma 2]{oliveira2018random}, for any $T\geq 0$ and $y\in V$, we have
\begin{equation}\label{eq:refinedret1}
\sum_{t=0}^T p_{y,y}^{(t)} \leq \frac{e }{e-1}\sum_{t=0}^{t_{\mathsf{rel}}} p_{y,y}^{(t)} + T\cdot \pi(y) \leq \frac{e }{e-1}\cdot 6\pi(y)\frac{nd_{\mathsf{avg}}}{d_{\mathsf{min}}}\sqrt{t_{\mathsf{rel}}+1} + T\cdot\pi(y).
\end{equation} Now since $t_{\mathsf{rel}} \leq t_{\mathsf{mix}}\leq 2t_{\mathsf{hit}} +1 \leq 2n^3 + 1 \leq 3n^3$ by \cite[(10.24)]{levin2009markov} and \cite[(6.14)]{aldousfill} we have \begin{equation}\label{eq:refinedret2}n\sqrt{t_{\mathsf{rel}} +1 } + t_{\mathsf{mix}} \leq 2n\sqrt{t_{\mathsf{mix}}} + t_{\mathsf{mix}} \leq 2n(t_{\mathsf{mix}})^{2/3} + t_{\mathsf{mix}}\leq 4n(t_{\mathsf{mix}})^{2/3}.\end{equation} Thus by \eqref{eq:refinedret1} and \eqref{eq:refinedret2} and since $12e/(e-1)< 19$ we have \begin{equation}\label{eq:refinedret3} \sum_{t=0}^{2t_{\mathsf{mix}}} p_{y,y}^{(t)} \leq \pi(y)\left( \frac{6e}{e-1}\frac{d_{\mathsf{avg}}}{d_{\mathsf{min}}}\cdot n\sqrt{t_{\mathsf{rel}}+1} + 2t_{\mathsf{mix}}\right)\leq 21\pi(y)\frac{nd_{\mathsf{avg}}}{d_{\mathsf{min}}}\left(t_{\mathsf{mix}}\right)^{2/3}. \end{equation}Now, using the bound \eqref{eq:refinedret3} on $\Exu{x}{N_y(2t_{\mathsf{mix}})\mid N_y(2t_{\mathsf{mix}})\geq 1}$ instead of \eqref{eq:bddonreturns} in \eqref{eq:hitprobbdd} gives us \begin{equation*}\Pru{x}{ \tau_{y}\leq 2t_{\mathsf{mix}}}\geq \frac{3\pi (y)t_{\mathsf{mix}}}{8}\cdot \frac{1}{21\pi(y)\frac{nd_{\mathsf{avg}}}{d_{\mathsf{min}}}\left(t_{\mathsf{mix}}\right)^{2/3}} \geq \frac{d_{\mathsf{min}} (t_{\mathsf{mix}})^{1/3}}{60 nd_{\mathsf{avg}} }. \end{equation*}Then, by the same steps as before, there is a strategy for the $\varepsilon$-BRW to hit any $y$ from any $x$ in time at most $\left(\frac{60 nd_{\mathsf{avg}} }{d_{\mathsf{min}} (t_{\mathsf{mix}})^{1/3}}\right)^{1-\varepsilon}\cdot 2t_{\mathsf{mix}} \leq 120 \left( nd_{\mathsf{avg}} /d_{\mathsf{min}}\right)^{1-\varepsilon}(t_{\mathsf{mix}})^{\frac{2+\varepsilon}{3}} $ as claimed.\end{proof}
\section{Increasing Stationary Probabilities}\label{AzarConjSec}
In this section we shall consider the problem of how much an unchanging strategy can affect the stationary probabilities in a graph. Azar et al.\ studied this question and made an appealing conjecture. Our result on the hitting times of the $\varepsilon$-BRW allows us to make progress towards this conjecture. We also derive some more general bounds on stationary probabilities for classes of Markov chains which include certain regimes for the $\varepsilon$-BRW, and tackle the question of when the stationary probability of a vertex can be changed by more than a constant factor.
\subsection{A Conjecture of Azar et al.}
Azar, Broder, Karlin, Linial and Phillips make the following conjecture for the $\varepsilon$-BRW \cite[Conjecture 1]{ABKLPbias}. Their motivation was that a corresponding bound holds for the related process studied by Ben-Or and Linial \cite{ben1987collective}.
\begin{conjecture}[ABKLP Conjecture]\label{abklp}
In any graph, a controller can increase the stationary probability of any vertex from $p$ (for the SRW) to $p^{1-\varepsilon}$.
\end{conjecture}
This conjecture becomes particularly attractive in the context of \cref{nonregboostnew}, which implies that in the $\varepsilon$-TBRW a controller may increase the probability of being at any given vertex at time $t$ from $p_t$ to $p_t^{1-\varepsilon}$, where for non-bipartite graphs we have $p_t\to p$. However, a crucial point is that the strategy guaranteed by \cref{nonregboostnew} depends on $t$, and so we cannot necessarily achieve this boosting uniformly over $t$, or by using only the $\varepsilon$-BRW.
In fact, the conjecture fails for the graph $K_2$, as no strategy for the $\varepsilon$-BRW can increase the stationary probability over that of a simple random walk. This motivates weakening the conjecture by replacing $ p^{1-\varepsilon}$ by $p^{1- \varepsilon + o_n(1)} $; however this fails for the star on $n$ vertices, and non-bipartite counterexamples may be obtained by adding a small number of extra edges to the star. In each of these counterexamples there is a vertex with constant stationary probability, and for large graphs this can only happen if there is a large degree discrepancy.
We believe the following should hold.
\begin{conjecture}\label{reformulated}
In any graph a controller can increase the stationary probability of any vertex from $p$ to $p^{1-\varepsilon+\delta} $, where $\delta \rightarrow 0$ as $p \rightarrow 0$.
\end{conjecture}
Azar et al.\ prove a weaker bound of $p^{1-\mathcal{O}(\varepsilon)}$ for bounded-degree regular graphs. As a corollary of \cref{trelhit} we confirm \cref{reformulated} for any graph where $t_{\mathsf{mix}}$ is sub-polynomial in $n$. Our techniques are different to those of Azar et al.\ and allow us to cover a larger class of graphs, including dense graphs as well as sparse ones. In addition, for graphs where $d_{\mathsf{max}}/d_{\mathsf{avg}}$ and $t_{\mathsf{mix}}$ are both sub-polynomial our result achieves the same exponent (up to lower order terms) as the conjectured bound.
\begin{theorem}\label{azarconj}In any graph a controller can increase the stationary probability of any vertex from $p$ to $p^{1-\varepsilon+\delta}$, where $\delta:=\delta_G=\ln\left( 16\cdot t_{\mathsf{mix}}\right)/\abs{\ln p}$.
\end{theorem}
\begin{proof}By \cref{trelhit} for each vertex $v$ there exists a strategy so that the return time to $v$ is at most $16\cdot \pi(v)^{\varepsilon-1}\cdot t_{\mathsf{mix}} $. Let $q$ denote the stationary probability of $v$ for this $\varepsilon $-B walk. Then, as the stationary probability is equal to the reciprocal of the return time by \cite[Prop.\ 1.14]{levin2009markov}, we have $q\geq \pi(v)^{1-\varepsilon}/(16t_{\mathsf{mix}}) $; for the simple random walk $p=\pi(v)$. For $\delta = \ln\left( 16t_{\mathsf{mix}}\right)/\abs{\ln\pi(v)}$ we have \begin{align*}q/p^{1-\varepsilon+\delta } &\geq \frac{\pi(v)^{1-\varepsilon}}{16t_{\mathsf{mix}}}\cdot\frac{\pi(v)^{-\delta}}{\pi(v)^{1-\varepsilon}} = \frac{\exp\left(-\ln\pi(v) \cdot \frac{\ln \left(16\cdot t_{\mathsf{mix}}\right)}{\abs{\ln\pi(v)}} \right) }{16t_{\mathsf{mix}}} = 1.\qedhere\end{align*}
\end{proof}
The dependence of $\delta$ on $\abs{\ln p}$ in \cref{azarconj} imposes the condition that any vertex we wish to boost must have sub-polynomial degree. This condition is tight in some sense as no stationary probability bounded from below can be boosted by more than a constant factor. In \cref{s:polyboost} we prove a weaker boosting effect which holds in any sublinear polynomial-degree regime.
In the context of $d$-regular graphs, Azar et al.\ state, \begin{displayquote}[\cite{ABKLPbias}][]The interesting situation is when $\varepsilon$ is not substantially larger than $1/d$; otherwise, the process is dominated by the controller's strategy. \end{displayquote}
Note that for $d$-regular graphs with $d=\omega(\log n)$ the conjectured boost from $p$ to $p^{1-\varepsilon}$ does not change the stationary probabilities by more than a constant factor in this regime. For this reason we shall focus on the following question for $d$-regular graphs. \begin{quest}\label{const-fac}When can we boost the stationary probability by more than a constant factor in the $\varepsilon$-BRW with $\varepsilon=\BT{1/d}$?\end{quest}
As noted, when $d=\omega(\log n)$ such a boost is stronger than the one predicted by \cref{abklp}, and we think \cref{const-fac} is quite natural.
We will consider not only regular graphs but also \textit{almost-regular} ones, that is, graphs in which degrees differ by at most a constant factor. An interesting class is the almost-regular graphs of linear degree; we say that a graph is \textit{everywhere dense} if it has minimum degree $\Omega(n)$. We consider \cref{const-fac} for such graphs in \cref{const-boost}. In particular, we show that the answer to \cref{const-fac} is negative for everywhere dense graphs.
This is essentially best possible, since we show that the corresponding result does not hold for $n^{\alpha}$-regular graphs for any $\alpha<1$. However, it does hold for almost every almost-regular graph in this regime.
\subsection{Boosting in the polynomial degree regime}\label{s:polyboost}
In this section we prove the following boosting result for graphs whose degree is bounded by a polynomial function of $n$.
\begin{corollary}\label{cor-small-poly}Let $G$ be any graph satisfying $d_{\mathsf{max}}\leq n^{\alpha}$ for some $\alpha\in(0,1)$. Then a controller for the $\varepsilon$-BRW can increase the stationary probability of any vertex from $p$ to $p^{1-c_\alpha\varepsilon/\ln d_{\mathsf{max}}}$ for some $c_\alpha>0$.\end{corollary}
Let $G=(V,E)$ be any connected, undirected graph. We will associate to every edge a positive weight given by the function $w:E \rightarrow \mathbb{R}^{+}$. We consider a random walk that picks an incident edge with probability proportional to its weight. Recall that the stationary distribution of this walk is given by $\pi(x)= \sum_{y \sim x} w(y,x)/(2W)$,
where $W := \sum_{ \{r,s\} \in E(G) } w(r,s)$ is the total sum of weights assigned.
Fix a vertex $u \in V$ and let $-1< a < \infty$. We consider the weight function given by
\begin{equation}\label{weightedwalk}
w(r,s) = \left( 1 + a \right)^{ \max\{d(u,r),d(u,s) \}},
\end{equation}
where $d(\cdot,\cdot)$ is the graph distance.
Note that this particular weight function satisfies the following property:
\begin{equation}\label{ratiocond}
\forall x,y,z \colon \{x,y\},\{x,z\} \in E(G) \colon \frac{w(x,y)}{w(x,z)} \in\{1 + a,(1+a)^{-1},1\}.
\end{equation}
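For example, if $G$ is a path $u=v_0,v_1,v_2,\ldots$ with $u$ an endpoint, then \eqref{weightedwalk} gives $w(v_i,v_{i+1})=(1+a)^{i+1}$, so the ratio of the weights of any two edges sharing a vertex is exactly $1+a$ or $(1+a)^{-1}$, and \eqref{ratiocond} holds.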
\begin{proposition}
Let $-1< a< \infty$, and let $G$ be an edge-weighted graph whose weights satisfy \eqref{ratiocond}. Then, provided $\varepsilon \geq-a$ if $a\leq 0$ and $\varepsilon \geq a/(1+a) $ if $a>0$, the $\varepsilon$-BRW can emulate the walk given by those weights.
\end{proposition}
\begin{proof}
It suffices to prove that we may emulate a step of the walk from any given vertex $x$. If all edges meeting $x$ have the same weight, we simply ``bias'' towards the uniform distribution on neighbours of $x$. Otherwise $a\neq 0$, $d=d(x)\geq 2$, and exactly two distinct weights $w_1$ and $w_2$ appear on the edges incident to $x$; these satisfy $w_1=(1+a)w_2$. Suppose there are $k$ incident edges of weight $w_1$ and $d-k$ of weight $w_2$; clearly $1\leq k\leq d-1$.
Now we need to construct a bias matrix $\mathbf B$ which will satisfy the walk probabilities given by \eqref{weightedwalk}. Note that if $w(xy)=w_1$ then $p_{x,y} = w_1/(kw_1 + (d-k)w_2) = (1+a)/(ak + d)$ and otherwise $p_{x,y}=1/(ak +d )$.
We first consider the case $a>0$, i.e.\ $w_1>w_2$. It is sufficient to assume $\varepsilon=\frac{a}{1+a}$, since if it is larger we may use the $\varepsilon$-BRW to emulate the $\frac{a}{1+a}$-BRW.
In this case set
\[\mathbf B_{x,z}=\begin{cases}\frac{da+2d-k}{dak+d^2}&\text{ if }w(x,z)=w_1\\
\frac{d-k}{dak+d^2}&\text{ if }w(x,z)=w_2.\end{cases}\]
This gives $\sum_{z\sim x}\mathbf{B}_{x,z}=1$, all entries are positive and
\[p_{x,z}=\frac{a}{1+a}\cdot\mathbf{B}_{x,z}+\frac{1}{1+a}\cdot\frac 1d=\begin{cases}\frac{a+1}{ka+d}&\text{ if }w(x,z)=w_1\\
\frac{1}{ka+d}&\text{ if }w(x,z)=w_2.\end{cases}\]
The case $a<0$ may be reduced to the previous case by replacing $a$ with $a'=\frac{-a}{1+a}>0$: since $1+a'=(1+a)^{-1}$, the set of ratios permitted by \eqref{ratiocond} is unchanged, and $\varepsilon\geq -a$ is equivalent to $\varepsilon\geq\frac{a'}{1+a'}$.
\end{proof}
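As a concrete check of the construction (not needed in what follows), take $d=3$, $k=1$ and $a=1$, so that $\varepsilon=\frac{a}{1+a}=\frac12$. Then $\mathbf B_{x,z}=\frac{3+6-1}{3+9}=\frac23$ on the unique edge of weight $w_1$ and $\mathbf B_{x,z}=\frac{2}{12}=\frac16$ on each of the two edges of weight $w_2$, giving
\[p_{x,z}=\frac12\cdot\frac23+\frac12\cdot\frac 13=\frac12=\frac{1+a}{ka+d}\quad\text{and}\quad p_{x,z}=\frac12\cdot\frac16+\frac12\cdot\frac 13=\frac14=\frac{1}{ka+d},\]
as required.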
\begin{theorem}\label{boostingp}Let $G$ be any graph such that $d_{\mathsf{max}}\geq 3$ and let $\varepsilon >0$. Then a controller for the $\varepsilon$-BRW can increase the stationary probability of any vertex from $p$ to $p^{1-\tilde{\varepsilon}}$, where \[\tilde{\varepsilon}= \frac{\ln(1-\varepsilon)\ln p}{\ln(d_{\mathsf{max}} -1)\ln n}>0.\]
\end{theorem}
\begin{proof}Consider a walk $\mathbf{Q}$ with weighting scheme $w(r,s) = \left( 1 - \varepsilon \right)^{\max\{d(u,r),d(u,s) \}}$. Note there are at most $d_{\mathsf{max}}(d_{\mathsf{max}} -1)^{i-1} $ vertices at distance exactly $i$ from $u$ (and also edges from vertices at distance $i-1$ to those at $i$). Thus, writing $W$ for the total weight of the graph, for any $r$,
\begin{align*}W&\leq \sum_{i=1}^r d_{\mathsf{max}} (d_{\mathsf{max}} -1)^{i-1}\cdot (1-\varepsilon)^{i-1} + n \cdot d_{\mathsf{avg}} \cdot (1-\varepsilon)^{r}\\ &\leq \left( 2(d_{\mathsf{max}} -1)^r + n\cdot d_{\mathsf{avg}} \right)\cdot (1-\varepsilon)^{r}. \end{align*} Thus if we let $ r =\lfloor\ln(n)/\ln( d_{\mathsf{max}} -1)\rfloor $ then $W \leq d_{\mathsf{avg}} \cdot n^{1+\kappa }$, where $\kappa =\ln(1-\varepsilon)/ \ln(d_{\mathsf{max}} -1) <0$. For any $u \in V$ it follows that $\pi_{\mathbf{Q}}(u) \geq d(u) /\left(d_{\mathsf{avg}} \cdot n^{1+\kappa }\right) = n\cdot\pi(u) /n^{1+\kappa }$ and so for $\delta\geq 0$,
\[\frac{\pi_{\mathbf{Q}}(u)}{ \pi(u)^{1+\kappa + \delta } } \geq \frac{ n\cdot\pi(u) }{n^{1+\kappa }} \cdot \frac{n^{1+\kappa + \delta }}{ (n\cdot \pi(u))^{1+\kappa + \delta } }= (n\cdot \pi(u))^{-\kappa - \delta } \cdot n^{\delta} \geq 1,\] where the final inequality holds by taking $\delta = \abs{\kappa \ln(n\pi(u) )}/\ln n $.
\end{proof}
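To illustrate the exponent, note that for a $d$-regular graph we have $n\pi(u)=1$ for every $u$, so the final step of the proof applies with $\delta=0$: the controller can boost the stationary probability of any vertex from $1/n$ to at least $n^{-1-\kappa}$, where $\kappa=\ln(1-\varepsilon)/\ln(d-1)$.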
\begin{proof}[Proof of \cref{cor-small-poly}]The statement holds for paths and cycles. For graphs with $d_{\mathsf{max}} \geq 3$ it follows from \cref{boostingp}: we have $-\ln(1-x) \geq x$ for any $x< 1$, and since $d_{\mathsf{max}}\leq n^{\alpha}$ every vertex satisfies $p\leq d_{\mathsf{max}}/n\leq n^{\alpha-1}$, so $\abs{\ln p}\geq (1-\alpha)\ln n$; hence $\tilde{\varepsilon}\geq (1-\alpha)\varepsilon/\ln (d_{\mathsf{max}}-1)\geq c_\alpha\varepsilon/\ln d_{\mathsf{max}}$ with $c_\alpha=1-\alpha$.
\end{proof}
\subsection{Boosting by more than a constant factor}\label{const-boost}
In this section we show that in the case of an everywhere-dense graph, stationary probabilities for the $\varepsilon$-BRW cannot exceed those for the SRW by more than a constant factor, giving a negative answer to \cref{const-fac}. In fact we show that this bound applies more generally to a class of (not necessarily reversible) Markov chains which resemble simple walks on everywhere-dense graphs. In contrast, we show that there exist regular graphs with polynomial degree arbitrarily close to linear for which the answer to \cref{const-fac} is positive. However, such graphs are rare: the answer is negative with high probability for a random graph with the same density, and hence for almost all almost-regular graphs in the polynomial regime.
Let $\mathbf{Q}=(q_{u,v})_{u,v\in V}$ be a transition matrix supported on $G$. For $c,C$ such that $0<c \leq C<\infty $ we say that the corresponding Markov chain is a \textit{$(c,C)$-simple walk} on $G$ if for every $uv \in E(G)$,
\[\frac{c}{ d(u)} \leq q_{u,v} \leq \frac{C}{d(u)}.\]
\begin{proposition}\label{prop:dense}
For any graph $G$ with minimum degree $d_{\mathsf{min}}\geq \alpha \cdot n$ for some constant $\alpha>0$, any strategy $\mathbf{Q}$ for the $\varepsilon$-BRW with $\varepsilon\leq\beta/n$ satisfies $\pi_\mathbf{Q}(u)\leq(1+\beta)\alpha^{-2}\pi(u)$ for every $u\in V$.
\end{proposition}
\begin{proof}
Note that any strategy for the $\varepsilon$-BRW satisfies
\[\frac{1}{d(u)}(1-\varepsilon)\leq q_{u,v}\leq\varepsilon+\frac{1}{d(u)}(1-\varepsilon),\]
and, since $\varepsilon\leq\beta/n\leq\beta/d(u)$, this is a $((1-\varepsilon),(1+\beta))$-simple walk on $G$. Noting that
\[\pi(u)=\frac{d(u)}{\sum_{v\in V}d(v)}\geq\frac{\alpha}{n},\]
it is sufficient to verify that for any $(c,C)$-simple walk on $G$, the stationary probability $\pi_{\mathbf{Q}}$ satisfies $\pi_{\mathbf{Q}}(u)\leq C/(\alpha n)$ for every $u\in V$. This is true since
\[\pi_{\mathbf{Q}}(u) = \sum_{v\in V}\pi_{\mathbf{Q}}(v)q_{v,u} \leq \frac{C}{d_{\mathsf{min}} }\sum_{v\in V}\pi_{\mathbf{Q}}(v)\leq\frac{C}{\alpha n}.\qedhere\]
\end{proof}
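For example, any graph with minimum degree at least $n/2$ satisfies the hypothesis with $\alpha=1/2$, so no strategy for the $(\beta/n)$-BRW can increase any stationary probability by more than a factor of $4(1+\beta)$.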
We shall now give some bounds on the stationary distribution of $(c,C)$-simple walks based on variants of the mixing time. Recall the definitions of the $\ell^\infty$ mixing time $t_{\infty}$ and the separation time $t_{\mathsf{sep}}$ from Section~\ref{formaldef} and note that throughout we follow the convention that $1/0 =\infty$.
\begin{proposition}\label{pibdd} Let $G$ be a connected graph and ${\mathbf{Q}}$ be a $(c,C)$-simple walk on $G$. Let $\tau_1 =\min\left\{t_{\mathsf{sep}}, \;3\log(n) / |\log \lambda_*| \right\}$ and $\tau_2 =\min\left\{t_{\infty}, \;3\log(n) / |\log \lambda_*| \right\}$. Then
\[
\phantom{\qquad \text{for all }x \in V}\frac{c^{\tau_1}}{2}\cdot \pi(x) \leq \pi_{\mathbf{Q}}(x) \leq 2 C^{\tau_2} \cdot \pi(x)\qquad \text{for all }x \in V.
\]
\end{proposition}
\begin{proof} Let $\tilde{\mathbf{P}}$ and $\tilde{\mathbf{Q}}$ be the LRW and lazy $(c,C)$-simple walk on $G$ respectively, and observe that $\tilde{q}_{u,v} = {q}_{u,v}/2\geq c{p}_{u,v}/2= c\tilde{p}_{u,v}$ for any edge $uv\in E$, while on the diagonal $\tilde{q}_{u,u} \geq 1/2 = \tilde{p}_{u,u}\geq c\tilde{p}_{u,u}$ since $c\leq 1$. Thus $\tilde{q}^t_{y,x}\geq c^t\cdot \tilde{p}^t_{y,x} $ for any $t\geq 1$ and $x,y\in V$. Recall also, from the definition of the separation time, that $\tilde{p}_{x,y}^{(t_{\mathsf{sep}})}\geq \frac{e-1}{e}\cdot \pi(y) $ for any $x,y \in V$. Thus for any $x\in V$ we have
\begin{equation}\label{eq:statbounds}\begin{aligned}
\pi_{\mathbf{Q}}(x) &= \sum_{y \in V} \pi_{\mathbf{Q}}(y) \cdot \tilde{q}^{(t_{\mathsf{sep}})}_{y,x} \\
&\geq c^{t_{\mathsf{sep}}} \cdot \sum_{y \in V} \pi_{\mathbf{Q}}(y) \cdot \tilde{p}^{(t_{\mathsf{sep}})}_{y,x} \\
&\geq c^{t_{\mathsf{sep}}} \cdot \sum_{y \in V} \pi_{\mathbf{Q}}(y)\cdot \frac{e-1}{e}\pi(x) \\
&= c^{t_{\mathsf{sep}}} \cdot\frac{e-1}{e}\cdot \pi(x).
\end{aligned}\end{equation}
For the upper bound recall the definition of the $\ell^\infty$-mixing time $t_{\infty}<\infty$ from \eqref{eq:sep} and observe that $\tilde{p}_{x,y}^{(t_{\infty})}\leq \frac{e+1}{e}\cdot \pi(y)$ for any $x,y\in V$. Thus by similar steps as \eqref{eq:statbounds} we have \[ \pi_{\mathbf{Q}}(x)\leq C^{t_{\infty}}\cdot \frac{e+1}{e}\cdot \pi(x).\] If the graph $G$ is aperiodic then $\lambda_*<1$ for the SRW $\mathbf{P}$. In this case we recall the inequality $\left|p_{x,y}^{(t)}/\pi(y)-1\right|\leq \lambda_*^t/\min_{x\in V}\pi(x) $, valid for any $t\geq 1$ and $x,y\in V$ by \cite[(12.11)]{levin2009markov}. Thus, since $\min_{x\in V}\pi(x)\geq 1/n^2$, if we take $t= 3\log(n)/|\log \lambda_*|$ then we have \[\pi(y)/2\leq \pi(y)\left(1-1/n\right)\leq p_{x,y}^{(t)}\leq \pi(y)\left(1+1/n\right)\leq 2\pi(y),\] as we can assume $n\geq 2$ or else the result holds vacuously. Consequently, again similarly to \eqref{eq:statbounds}, we have $c^{t}\cdot \pi(x)/2 \leq \pi_{\mathbf{Q}}(x)\leq C^{t}\cdot 2 \pi(x) $. The result follows by taking, for the lower and upper bounds respectively, the better of the bound via $t_{\mathsf{sep}}$ (resp.\ $t_{\infty}$) and the bound via $t$, and observing that $\frac{e-1}{e}\geq 1/2 $ and $\frac{e+1}{e}\leq 2$.
\end{proof}
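For example, if $\mathbf{Q}$ is a $(c,C)$-simple walk with $C=\BO{1}$ on a graph satisfying $\lambda_* \leq 1-\gamma$ for a constant $\gamma>0$, then $\tau_2 = \BO{\log n}$ and \cref{pibdd} shows that such a walk cannot boost any stationary probability by more than a polynomial factor $2C^{\BO{\log n}} = n^{\BO{1}}$.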
Now we show that for the Erd\H{o}s--R\'{e}nyi random graph in the polynomial average degree regime the answer to \cref{const-fac} is negative w.h.p.
\begin{proposition}\label{GnpNoBoost}Let $0\leq \beta<\infty $ be a fixed real and $\mathcal{G}\overset{d}{\sim}\mathcal{G}(n,p)$ where $np\sim n^\alpha$ for some fixed real $0<\alpha\leq 1$. Then w.h.p.\ for every vertex $u$ the controller of the $(\beta /np)$-BRW can only increase the stationary probability of $u$ from $\pi(u)$ to at most $3 \left(1+\beta\right)^{6/\alpha}\cdot \pi(u) $.
\end{proposition}
\begin{proof} To begin, by the union and Chernoff bounds \cite[Cor. 4.6]{MitzUpfal} we have \[\Pr{\cup_{x\in V}\left\{|d(x) - np|> 3\sqrt{np \log n}\right\}}\leq n\cdot 2\exp\left(-np\left(3\sqrt{\frac{\log n}{np }}\right)^2/3 \right) \leq \frac{2}{n^2} .\] Thus w.h.p., for any $(\beta/np)$-BRW strategy $Q$ we have
\[q_{x,y} \leq \frac{\beta}{np} + \frac{1-\beta/np}{d(x)} \leq \frac{1 }{d(x)}\left(1+\beta +100\beta\cdot \sqrt{\frac{\log n}{np}}\right).\] Since also $q_{x,y} \geq (1-\varepsilon)/d(x) = (1-\beta/np)/d(x)$, we see that for any fixed strategy, $Q$ is a $\left(1-\beta/np, \; 1+\beta+100\beta\cdot \sqrt{ (\log n)/np }\right)$-simple walk.
For a graph $G$ let $\mathcal{L} = \mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2} $ be the \textit{normalised Laplacian}, where $\mathbf{D}$ is a diagonal matrix with $ d_{x,x} = d(x)$, $\mathbf{A}$ is the adjacency matrix, and $\mathbf{I}$ is the identity matrix. By \cite[Thm.\ 1.2]{CojaEigen} there exists some $c<\infty $ such that if $np \geq c\log n$ then w.h.p.\ we have \begin{equation}\label{eq:lap}1- (4+\lo{1})/\sqrt{np} \leq \lambda_2(\mathcal{L}(\mathcal{G}(n,p)))\leq \lambda_n(\mathcal{L}(\mathcal{G}(n,p)))\leq 1+ (4+\lo{1})/\sqrt{np}.\end{equation} Observe that since the diagonal matrix $\mathbf{D}$ is invertible, the matrices $\mathcal{L}$ and $\mathbf{D}^{-1/2}\mathcal{L}\mathbf{D}^{1/2} = \mathbf{I} - \mathbf{P}$ are similar (and thus have the same eigenvalues). Thus, by shifting the eigenvalues of $\mathcal{L}$ to correspond to the SRW $\mathbf{P}$, \cite[Thm.\ 1.2]{CojaEigen} implies that for $np\geq c\log n$ we have $\lambda_* \leq (4+\lo{1})/\sqrt{np}$ w.h.p. Thus we have $|\log \lambda_*|\geq (\alpha/2) \log n - \log4 -\lo{1}$ w.h.p.\ and consequently $3\log(n)/|\log \lambda_*| \leq (6/\alpha)( 1 + 2/\log n) $ w.h.p.\ for large $n$. Thus by Proposition \ref{pibdd} we have \[\pi_{\mathbf{Q}}(x)\leq 2\left( 1+\beta+100\beta\cdot \sqrt{\frac{\log n}{n^\alpha}}\right)^{(6/\alpha)(1+2/\log n)}\pi(x) \leq 3 \left(1+\beta\right)^{6/\alpha}\pi(x) ,\]w.h.p.\ for suitably large $n$ when $0<\alpha\leq 1$ and $ 0\leq \beta<\infty$ are fixed, as claimed.
\end{proof}
Finally, we give a general $d$-regular example with $d=\poly(n)$ for which we can answer \cref{const-fac} in the affirmative. These graphs have the largest possible diameter $\approx n/d$ and feature several bottlenecks.
\begin{proposition}\label{RegCycleBoost}
Fix any $0<\alpha < 1$ and let $d= n^{\alpha}$, $\varepsilon=\Theta(1/d)$. Then there exists a $d$-regular graph for which the stationary probability of any given vertex can be boosted by the $\varepsilon$-TBRW from $1/n$ to $\Omega(1/n^{\alpha})$.
\end{proposition}
\begin{proof}Let $d= n^{\alpha}$ and $\ell =n^{1-\alpha}$ and consider the $(\ell,K_{d,d})$-ring pictured in \cref{fig:my-label}. The $(\ell,K_{d,d})$-ring has $ N=2\ell (d+1)$ vertices and is a $(d+1)$-regular graph; thus in our case $N \sim 2n$.
Let $x,u$ be the endpoints of one of the edges which connects two units, and $u_1,\dots, u_d$ be the vertices in the $K_{d,d}$ attached to $u$ (see \cref{fig:my-label}). Assuming that $x$ is closer to the target vertex we wish to boost, the $\varepsilon$-BRW strategy is clear: we should prefer the walk at $u$ to visit $x$, and thus set $\mathbf{B}_{u,x}=1$ and $\mathbf{B}_{u,u_i}=0 $ for all $1\leq i\leq d$, where $\mathbf{B}$ is the bias matrix. Now we see that \[\frac{w(u,x)}{w(u,u_i)} = \frac{\varepsilon + (1-\varepsilon)/(d+1)}{(1-\varepsilon)/(d+1)} = 1 + \frac{\varepsilon (d+1)}{(1-\varepsilon)} = 1+ \Omega(1). \]We seek to bound the total weight $W$. If we sum from the target $v$, where we set the adjacent weights to $1$, then we see that the $i$th $K_{d,d}$ away from $v$ must have weights that are at most $(1+ \Omega(1))^{-i}$, thus
\[ W\leq 2 \sum_{i=0}^{\ell} (1+\Omega(1) )^{-i}(d^2 + 2d +1) = \mathcal{O}(d^2). \]
Now we see a boost under this strategy from $1/N$ to $p'$ where \[p'\geq d/\mathcal{O}(d^2) = \Omega (1/d ) = \Omega(N^{-\alpha}).\qedhere\]
\end{proof}
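For a concrete instance, take $\alpha=1/2$: the $(\sqrt n,K_{\sqrt n,\sqrt n})$-ring has $N\sim 2n$ vertices, and the strategy above boosts the stationary probability of the target vertex from $1/N$ to $\Omega(N^{-1/2})$, a polynomial rather than constant-factor gain.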
\begin{figure}[!htb]
\center\begin{tikzpicture}
\foreach \x in {0,4,5,9}
\draw[fill] (\x,1) circle (.1);
\foreach \x in {1,3,6,8}
\foreach \y in {0,1,2}
\draw[fill] (\x,\y) circle (.1);
\foreach \x in {1,6}
\foreach \y in {0,1,2}{%
\draw (\x,\y) -- (\x-1,1);
\draw (\x+2,\y) -- (\x+3,1);
\foreach \z in {0,1,2}
\draw (\x,\y) -- (\x+2,\z);}
\foreach \x in {-1,4,9}
\draw (\x,1) -- (\x+1,1);
\draw (4,1.1) node[anchor=south]{$x$};
\draw (5,1.1) node[anchor=south]{$u$};
\foreach \x in {1,2,3}
\draw (6,3.1-\x) node[anchor=south]{$u_{\x}$};
\draw[dashed] (10,1) arc(0:180:5.5cm and 2.5cm);
\end{tikzpicture}
\caption{\label{fig:my-label}The $(\ell,K_{d,d})$-ring consists of $\ell$ copies of the complete bipartite graph $K_{d,d}$ arranged in a cycle, with consecutive copies joined through single vertices. The $(\ell,K_{3,3})$-ring, for some $\ell\geq 2$, is shown above.}
\end{figure}
\section{Computing Optimal Choice Strategies} \label{complexsec}
In this section we focus on the following problem: given a graph $G$ and an objective, how can we compute a strategy for the $\varepsilon$-TBRW which achieves the given objective in optimal expected time? Unless otherwise specified, this section considers walks on the more general class of strongly-connected directed graphs. A strategy consists of a family of controller bias matrices $\{\textbf{B}(\mathcal H_t)\}$, where $t\geq 0$ is the time and $\mathcal H_t$ is the history of the walk up to time $t$. Azar et al.\ \cite{ABKLPbias} considered the following computational problems:
\begin{labeling}{$\mathtt{Hit}\left(G,v,S\right)$:}
\item[$\mathtt{Stat}\left(G,w\right)$:] Find an $\varepsilon$-bias strategy min/maximising $\sum_{v \in V}w_v \cdot \pi_{v}$ for vertex weights $w_v\geq 0$.
\item[$\mathtt{Hit}\left(G,v,S\right)$:] Find an $\varepsilon$-bias strategy minimising $\sum_{u \in V}\ell_u\cdot \Heb{u}{S}$ for a given $S\subseteq V(G)$, $v\in V(G)$ and vertex weights $\ell_u\geq 0$.
\end{labeling}
Notice that for $\mathtt{Stat}$ to make sense we must fix an unchanging strategy; for $\mathtt{Hit}$ an unchanging optimal strategy exists, see \eqref{bias}. Azar et al.\ showed that $\mathtt{Stat}$ and $\mathtt{Hit}$ are tractable.
\begin{theorem}[Theorems 6 \& 12 in \cite{ABKLPbias}]\label{azarpoly} Let $G$ be any connected directed graph, $v\in V(G)$ and $S\subseteq V(G)$. Then $\mathtt{Stat}\left(G,w\right)$ and $\mathtt{Hit}\left(G,v,S\right)$ can be solved in polynomial time.
\end{theorem}We introduce the following computational problem not considered by Azar et al.
\begin{description}
\item[$\mathtt{Cov}\left(G,v\right)$:] Find an $\varepsilon$-TB strategy minimising $\ETBcov{v}{G}$ for a given $v \in V(G)$.
\end{description}
Unlike for $\mathtt{Stat}$ and $\mathtt{Hit}$, an optimal strategy for $\mathtt{Cov}$ on essentially any graph cannot be unchanging, as it will need to adapt as vertices become visited (consider the walk on a path started from the midpoint). \cref{Covunchanging} shows that there is an optimal strategy for $\mathtt{Cov}$ which is conditionally independent of time, in that no more information from $\mathcal H_t$ than the set of uncovered vertices is used. This fact means that an optimal strategy for $\mathtt{Cov}$ can be described using only finitely many bias matrices.
Additionally one can show that, for undirected graphs, the $\varepsilon$-TBRW exhibits the same dichotomy as the CRW studied in \cite{POTC}, by a simple adaptation of the hardness proof in \cite{POTC}. That is, while optimising $\mathtt{Hit}$ admits a polynomial-time algorithm, even computing an individual bias matrix $\textbf{B}(\mathcal H_t)$ from an optimal strategy for $\mathtt{Cov}$ is $\NP$-hard. We may view this as an on-line approach to solving $\mathtt{Cov}$, where we compute only the specific bias matrices needed as the random walk progresses; clearly this is an easier problem than precomputing an entire optimal strategy. Note that at most $n$ bias matrices will need to be computed in the course of any given walk, since an optimal bias matrix only depends on the uncovered set, which changes at most $n$ times; however, a full optimal strategy may require exponentially many such matrices.
In fact we will prove $\PSPACE$-completeness for the (online) covering problem in the more general setting of directed graphs. Again we consider the on-line version of the problem, which represents computing a single row of the bias matrix. The input is a (directed) graph $G$, a current vertex $u$, and a visited set $X$ containing $u$. We require $G$ to be strongly connected, so that the walk will almost surely eventually visit all vertices. The visited set $X$ must have the property that a single walk ending at $u$ could have visited precisely those vertices; in particular, any set $X$ which contains $u$ and induces a strongly connected subgraph is feasible.
\begin{labeling}{$\mathtt{NextStep}\left(G,u,X\right)$:}
\item[$\mathtt{NextStep}\left(G,u,X\right)$:] Output a probability distribution over the neighbours of $u$ (a row of the bias matrix) which minimises the expected time for the $\varepsilon$-TBRW to visit every vertex not in $X$, assuming an optimal strategy is followed thereafter.
\end{labeling}
Any such problem may arise during the $\varepsilon$-TBRW on $G$ starting from some vertex in $X$, no matter what strategy was followed up to that point, since with positive probability the bias coin did not allow the controller to influence any previous walk steps. We also introduce the following decision version of $\mathtt{NextStep}\left(G,u,X\right)$ for $X\subset V$, $u\in X$ and $y,z \in \Gamma(u)$:
\begin{labeling}{$\mathtt{BestStep}\left(G,X,y,z\right)$:}
\item[$\mathtt{BestStep}\left(G,X,y,z\right)$:] Is $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(y,X\cup\{ y\})< t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(z,X\cup\{ z\}) $?
\end{labeling}
We can also consider the decision problem for the optimal expected time to cover the graph from a vertex $u$, given that the set $X$ has already been visited:
\begin{labeling}{$\mathtt{Cost}\left(G,u,X,C\right)$:}
\item[$\mathtt{Cost}\left(G,u,X,C\right)$:] Is $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)< C$?
\end{labeling}
We show that all three of the problems above can be solved in polynomially bounded space.
\begin{theorem}\label{covinPSPACE}Let $G$ be any strongly connected directed weighted graph and $u \in V$ and $X\subseteq V$ be any connected vertex subset containing $u$. Further let $x,y \in \Gamma(u)$ and $C<\infty$. Then $\mathtt{Cost}\left(G,u,X,C\right)$ and $\mathtt{BestStep}\left(G,X,x,y\right)$ are in $\PSPACE$.
\end{theorem}
\begin{remark}\label{rmk:nextorbest}Note that $\mathtt{NextStep}(G,u,X)$ is not a decision problem, and so not in $\PSPACE$; however, it can be solved by using a polynomial number of
calls to $\mathtt{BestStep}$ to identify an optimal neighbour of $u$. This is since there is an optimal solution to $\mathtt{NextStep}$ supported on a single neighbour by \cref{Covunchanging}.
\end{remark}
We show all three problems are $\PSPACE$-hard, thus $\mathtt{Cost}$ and $\mathtt{BestStep}$ are $\PSPACE$-complete.
\begin{theorem}\label{allhard} For any fixed $\varepsilon\in(0,1)$ the problems $\mathtt{Cost}$, $\mathtt{BestStep}$ and $\mathtt{NextStep}$ are $\PSPACE$-hard on strongly connected directed graphs.
\end{theorem}
In \cite{ITCSpaper} we proved that the $\mathtt{NextStep}$ problem for the related CRW on undirected graphs is $\NP$-hard. The same argument holds for the $\varepsilon$-biased random walk, and in \cref{S:NPadapt} we shall provide some details of how to adapt the proof to give the following.
\begin{theorem}\label{NextIsNPHard} For any fixed $\varepsilon\in(0,1)$ the problems $\mathtt{Cost}$, $\mathtt{BestStep}$ and $\mathtt{NextStep}$ are $\NP$-hard on undirected graphs, even under the restriction $d_{\mathsf{max}}\leq 3$. \end{theorem}
In a similar vein, the proofs of Theorems \ref{allhard} and \ref{covinPSPACE} can also be fairly easily adapted so the same results hold for the CRW of \cite{POTC,ITCSpaper}.
\subsection{Properties of Optimal Covering Strategies}\label{covalgsec}
The following result from \cite{POTC} says that one can encode the cover time problem as a hitting time problem on a (significantly) larger graph. In \cite{POTC} this is proved for the CRW; the same proof applies to the $\varepsilon$-TBRW.
\begin{lemma}[Lemma 7.7 of \cite{POTC}]\label{covashit}
For any graph $G=(V,E)$ let the (directed) auxiliary graph $\tilde{G}=(\tilde{V},\tilde{E})$ be given by $\tilde{V}=V\times \mathcal{P}(V)$ (where $\mathcal{P}(V)$ is the power set) and $\tilde{E}= \left\{((i,S),(j,S\cup \{j\}))\mid ij \in E,\,S\subseteq V\right\}$. Then solutions to $\mathtt{Cov}\left(G,v\right)$ correspond to solutions to $\mathtt{Hit}\bigl(\tilde{G},(v,\{v\}),W\bigr)$ and vice versa, where $W=\{(u,V)\mid u\in V\}$.
\end{lemma}
Recall that if the next step is a bias step then the $\varepsilon$-TBRW strategy will output a probability distribution over the neighbours of the current vertex which depends on the history of the walk.
\begin{corollary}\label{Covunchanging}There exists an optimal strategy for the $\varepsilon$-TBRW cover time problem which is unchanging between times when a new vertex is visited. Moreover, given a fixed visited set $X$, for each vertex $x\in X$ there is a fixed $y\in \Gamma(x)$ such that whenever the walk is at $x$ the distribution over neighbours of $x$ given by the strategy is $\delta_y$; that is, it always moves to $y$ when given the choice.
\end{corollary}
\begin{proof}[Proof of \cref{Covunchanging}]We shall appeal to \cref{covashit} and consider the problem of covering $G$ as hitting the set $W$ in the auxiliary graph $\tilde{G}$. This is now an instance of the optimal first-passage problem in the context of Markov decision processes \cite{Derman} (see also \cite{ABKLPbias}), and the existence of a time independent deterministic optimal policy follows from \cite[Thm.\ 3, Ch.\ 3]{Derman}.
Regarding time independence, notice that although the strategy for hitting the set $W$ in $\tilde{G}$ is independent of time, this is not strictly true of the original cover time problem. Recall that $\tilde{G}$ is a directed graph which consists of a series of undirected graphs linked by directed edges: the undirected graphs represent the subgraphs of $G$ induced by possible visited sets, and the directed edges correspond to the walk in $G$ visiting a new vertex. Since the strategy for $\tilde{G}$ is independent of time, between the times when a new vertex is added to the covered set the strategy on $G$ is fixed.
Regarding the term deterministic: using the terminology from \cite{Derman}, the set of actions available at a given time is the set of neighbours of the current vertex, and a policy/strategy is a probability distribution over this set of actions. Derman \cite{Derman} states that a policy is deterministic if at every possible step in the process these distributions are supported on a single action. Since in our case there is a function taking the vertices of $\tilde{G}$ to those of $G$, this corresponds to a strategy always choosing the same fixed neighbour of a given vertex during epochs when the visited set does not change.\end{proof}
\subsection{The \texttt{BestStep} and \texttt{Cost} problems are in \PSPACE}
In light of \cref{covashit} we can solve $\mathtt{Cov}(G,v)$ in exponential time using \cref{azarpoly}, by solving the associated hitting time problem on the (exponentially sized) auxiliary graph $\tilde{G}$. We shall now prove that the problems $\mathtt{BestStep}, \mathtt{NextStep}$ and $\mathtt{Cost}$ can be solved using polynomially bounded space for any finite irreducible Markov chain; here solving $\mathtt{NextStep}$ equates to computing the optimal strategy for one step in the on-line cover time problem.
\begin{proof}[Proof of \cref{covinPSPACE}]For a set $S\subset V$ let $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S) $ be the optimal expected cover time of $G$ from $s\in S$ by the $\varepsilon$-TBRW assuming that $S$ has already been visited. Let $\partial S= \{y \in V\setminus S: \exists x\in S:xy\in E \}$. By \cref{Covunchanging} if we consider steps of the walk between times when a new vertex is added to the set of visited vertices then the strategy can be just thought of as a fixed bias matrix.
\begin{claim}Let $S \subset V$, $ s \in S$ and assume for each $x \in \partial S$ we have access to the value $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,S\cup\{ x\} ) $. Then we can compute $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S)$ and a bias matrix $\mathbf{B}$, which is an optimal bias matrix while $S$ is the visited set, in $\poly(n)$ space. \end{claim}
\begin{poc} Given $S\subset V$, $s \in S$ and a bias matrix $\mathbf{B}$, let $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B}) $ be the expected cover time from $s$ assuming that $S$ has been covered and strategy $\mathbf{B}$ is followed until the first time the walk exits $S$ and an optimal strategy is followed thereafter. It follows that \begin{equation}\label{etep}t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S) = \inf_{\mathbf{B}}t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B}),\end{equation}where the infimum is over stochastic matrices supported on the edges of $G$. Since $G$ is strongly connected the random walk on $G$ is irreducible, and so for any $\varepsilon<1$ and $\mathbf{B}$ the value $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B})$ is finite and can be stored using polynomially many bits.
The idea is that for a fixed $\mathbf{B}$, $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B})$ is the solution to a discrete harmonic equation with boundary values $\{t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,S\cup\{ x\} ) \}_{x \in \partial S }$. Indeed, let $\mathbf{P}$ be the transition matrix of the SRW on $G$, and $ h_x :=t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,S\cup \{x\} ) $ for any $x \in S \cup \partial S $. Then
\[h_x=\begin{cases}1+\sum_y \left(p_{xy}(1-\varepsilon) + \varepsilon b_{x,y}\right) \cdot h_y&\quad\text{if }x\in S\\
t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,S\cup\{x\} )&\quad\text{if }x\in \partial S.\end{cases}\]
We can then solve this in polynomial space since the values $\{t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,S\cup\{ x\} ) \}_{x \in \partial S }$ are known. Since by \cref{Covunchanging} there is an optimal strategy minimising cover time where the bias distributions are only supported on a single neighbour, it suffices to only consider matrices $\mathbf{B}$ with a single $1$ in each row. There are at most $n^n$ of these and so by \cref{etep} we can determine $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S)$ by calculating $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B})$ for each such $\mathbf{B}$ sequentially and only storing the best pair $\mathbf{B}$, $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S,\mathbf{B})$ found so far. \end{poc}
We now use the claim to show that we can calculate the value $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)$ in $\poly(n)$ space, consisting of the space required for the claim plus additional space to store up to $n^2$ other values, for each pair $u,X$. To be precise, we prove by induction on $n-\abs{X}$ that we may calculate $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)$ using additional storage for at most $(n-\abs{X})n$ other values. If $\abs{X}=n$ then $X=V$ and $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)=0$. If $\abs{X}=n-k$ and the result holds for all larger sets then we may compute each of $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,X\cup\{x\})$ for $x\in\partial X$ using only $(k-1)n$ additional storage spaces, storing the results in at most $n$ further storage spaces, and then use the claim to compute $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)$ from these values. Thus the result holds for all pairs $u,X$ by induction, and so computing $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X)$ and comparing it with $C$ solves $\mathtt{Cost}\left(G,u,X,C\right)$ in $\poly(n)$ space.
The claim also gives us the matrix $\mathbf{B}$ minimising $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(u,X,\mathbf{B})$, and the row of this matrix corresponding to the vertex $u$ solves $\mathtt{NextStep}\left(G,u,X\right)$. Finally, we can solve the problem $\mathtt{BestStep}\left(G,X,x,y\right)$ by computing both $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(x,X\cup\{ x\})$ and $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(y,X\cup\{ y\})$ and comparing them.
\end{proof}
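To make the recursion above concrete, the following minimal Python sketch (illustrative only, and not part of the formal argument; the function name and graph encoding are ours) computes $t_{\mathsf{cov}}^{\varepsilon\mathsf{TB}}(s,S)$ on a small digraph by enumerating, for each visited set, the single-$1$-per-row bias matrices and solving the associated harmonic equations. For readability it memoises the recursion, which uses exponential space; recomputing values instead, as in the proof, gives the polynomial-space bound at the cost of exponential time.
\begin{verbatim}
import itertools
import numpy as np

def opt_cover_time(adj, s, visited, eps=0.25, memo=None):
    # adj: dict mapping each vertex to its list of out-neighbours
    # (a strongly connected digraph). Returns the optimal expected
    # eps-TBRW cover time from s given that 'visited' is covered.
    memo = {} if memo is None else memo
    S = frozenset(visited) | {s}
    if len(S) == len(adj):
        return 0.0
    if (s, S) in memo:
        return memo[(s, S)]
    # Boundary values: optimal cover times just after first leaving S.
    out = set().union(*(adj[u] for u in S)) - S
    h_bdry = {x: opt_cover_time(adj, x, S | {x}, eps, memo) for x in out}
    inside = sorted(S)
    idx = {u: i for i, u in enumerate(inside)}
    best = float("inf")
    # Enumerate deterministic strategies: one preferred out-neighbour
    # per vertex of S (the single-1-per-row bias matrices).
    for choice in itertools.product(*(adj[u] for u in inside)):
        A = np.eye(len(inside))
        b = np.ones(len(inside))
        for i, u in enumerate(inside):
            for v in adj[u]:
                q = (1 - eps) / len(adj[u]) + (eps if v == choice[i] else 0)
                if v in idx:
                    A[i, idx[v]] -= q      # harmonic part inside S
                else:
                    b[i] += q * h_bdry[v]  # known boundary values
        h = np.linalg.solve(A, b)
        best = min(best, h[idx[s]])
    memo[(s, S)] = best
    return best

# Tiny example: a directed triangle with a chord back to the start.
adj = {0: [1], 1: [2, 0], 2: [0]}
print(opt_cover_time(adj, 0, {0}))
\end{verbatim}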
\subsection{The \texttt{Cost} problem is \PSPACE-hard}
We aim to show that $\mathtt{Cost}$ is $\PSPACE$-hard via a reduction from quantified satisfiability, which is the canonical \PSPACE-complete problem \cite{AroraBarak}. To define this problem let $\phi$ be a formula in conjunctive normal form in the variables $x_1,\ldots,x_{2n}$, where we can assume that each clause contains three literals. The decision problem is then as follows.
\begin{labeling}{$\mathtt{QSAT}(\phi)$:}
\item[$\mathtt{QSAT}(\phi)$:] $\exists x_1, \forall x_2,\exists x_3, \dots , \forall x_{2n}$ such that $\phi(x_1, x_2, \dots , x_{2n})$ holds?
\end{labeling}
Let $N(\phi,x)$ be the number of clauses of $\phi$ featuring the literal $x$ (where $x\in\{x_i,\overline{x_i}\mid i\in\{1,\ldots,2n\}\}$) and $r$ be the total number of clauses. We can assume that no two complementary literals $x_i$ and $\overline{x_i}$ appear in the same clause, since otherwise this clause is trivially satisfied. We shall now introduce some gadgets which will help us make the reduction between the two problems. For simplicity, we shall assume $\varepsilon=1/4$ throughout; the proof can be adapted to a general constant value of $\varepsilon$ with suitable changes to the length parameters $\ell$ of the various gadgets.
\subsubsection{The Gadgets}
\begin{gadget}{The Quincunx Gadget $Q(\ell)$} This gadget allows the walker to choose between two alternatives with very high probability. It consists of vertices $v_{i,j}$ for $0\leq i\leq j\leq\ell$, where the parameter $\ell$ is an odd integer, together with two other vertices $x,y$. The walker enters at $v_{0,0}$ and leaves at either $x$ or $y$. Each vertex $v_{i,j}$ for $j<\ell$ has two outedges to $v_{i,j+1}$ and $v_{i+1,j+1}$; each vertex $v_{i,\ell}$ has a single outedge, which goes to $x$ if $2i<\ell$ and to $y$ if $2i>\ell$. We refer to $v_{0,0}$ as the ``entrance'', $x$ as the ``left exit'' and $y$ as the ``right exit''. Note that the time taken to cross the quincunx is $\ell+1$ deterministically.\end{gadget}
\begin{lemma}\label{quincunx}
If the controller of the $1/4$-TBRW wishes to exit $Q(\ell)$ at $x$ (or $y$) then they may achieve this with probability at least $1-0.99^{\ell}$.
\end{lemma}
\begin{proof}
We think of each step from $v_{i,j}$ to $v_{i,j+1}$ as moving ``left'', and each step from $v_{i,j}$ to $v_{i+1,j+1}$ as moving ``right''. In order to maximise the probability of exiting at $x$, the controller should choose to move left whenever possible. In this case each step moves right with probability $(1-\varepsilon)/2=3/8$, so the number of times the walk moves right, $R$, is a binomial random variable with mean $\mu=3\ell/8$, and by the multiplicative Chernoff bound (see e.g~\cite[Thm.\ 4.4]{MitzUpfal})
\[\Pr{R>\frac{\ell}{2}}=\Pr{R>\tfrac{4}{3}\mu}<\left(e^{1/3}(3/4)^{4/3}\right)^\mu=\left(3e^{1/4}/4\right)^{\ell/2}<0.99^\ell.\qedhere\]
\end{proof}
\begin{figure}
\begin{subfigure}{.65\textwidth}
\begin{tikzpicture}[label/.style={thick,circle}]
\usetikzlibrary{arrows.meta}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{decorations.pathreplacing}
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {Stealth[length=4mm]}},postaction={decorate}},>=stealth'}
\foreach \y in {0,1,2,3}
\foreach \x in {0,...,\y}{
\draw[fill] (2*\x-\y,-\y) circle (.1);
\draw (2*\x-\y+.5,-\y+.2) node[anchor=north]{{$v_{\x,\y}$}};}
\foreach \x in {0,4}{
\draw[fill] (2*\x-4,-4) circle (.1);
}
\draw (-4.1 ,-4.2) node[anchor=north]{{$x$}};
\draw (4 ,-4.2 ) node[anchor=north]{{$y$}};
\draw[->] (-3,-3) -- (-3.9,-3.9);
\draw[->] (3,-3) -- (3.9,-3.9);
\draw[->] (-1,-3) to[out=270,in=15] (-3.8,-4.);
\draw[->] (1,-3) to[out=270,in=165] (3.8,-4);
\foreach \y in {0,1,2}
\foreach \x in {0,...,\y}{
\draw[->] (2*\x-\y,-\y) -- (2*\x-\y-.9,-\y-.9);
\draw[->] (2*\x-\y,-\y) -- (2*\x-\y+.9,-\y-.9);}
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\begin{tikzpicture}[label/.style={thick,circle}]
\usetikzlibrary{arrows.meta}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{decorations.pathreplacing}
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {Stealth[length=4mm]}},postaction={decorate}},>=stealth'}
\foreach \x in {2,...,7}{
\draw[fill] (4*\x/5- 3/5 ,-4*\x/5) circle (.1);}
\foreach \x in {0,...,3}{
\draw (12/5-4*\x/5+1.6 ,4*\x/5-24/5) node[anchor=north]{{$v_{\x}$}};}
\draw (4.4,-5.6) node[anchor=north]{$\mathsf{Start}$};
\draw (1.5 ,-1) node[anchor=north]{$\mathsf{Finish}$};
\foreach \x in {3,...,5}{
\draw[->] (4*\x/5+1/5 ,-4*\x/5-4/5) --(4*\x/5- 3/5 +.1,-4*\x/5-.1) ;}
\draw[red,->] (9/5,-12/5) -- (1.1 ,-8/5-.1);
\draw[red,->] (5 ,-28/5)--(21/5+.1 ,-24/5-.1);
\foreach \x in {2,...,4}{
\draw[->] (4*\x/5+1/5 ,-4*\x/5-4/5) to[out=20,in=\x*30 -30 ] (4.41-\x/20 ,-4.85+\x/25);
}
\end{tikzpicture}
\end{subfigure}
\caption{A Quincunx Gadget $Q(3)$ (left) and a Slow Path Gadget $P(3)$ (right). Removing $\mathsf{start}$, $\mathsf{finish}$ and the adjacent red edges from $P(3)$ leaves a Steep Hill $H(3)$.}
\end{figure}
\begin{gadget}{The Steep Hill Gadget $H(\ell)$}This consists of vertices $v_0, \dots, v_\ell$ with directed edges $(v_{i-1},v_i)$ and $(v_i,v_0)$ for each $i\in\{1,\ldots,\ell\}$. Note that $H(\ell)$ is strongly connected, but (for $\ell>1$) it is much easier to reach $v_0$ from $v_{\ell}$ than vice versa. We refer to $v_0$ as the ``bottom'' and $v_\ell$ as the ``top''.\end{gadget}
\begin{gadget}{The Slow Path Gadget $P(\ell)$}This consists of a steep hill $H(\ell)$ together with two extra vertices, a ``start'' vertex and ``finish'' vertex, and directed edges from the start vertex to the bottom of the hill and from the top of the hill to the finish vertex.\end{gadget}
The slow path gadget will play the part of a very long path in the construction which follows; we use a slow path instead of a simple path so that the expected time to traverse it is exponentially large even though the gadget has polynomial size. We now calculate the expected traversal time.
\begin{lemma}\label{LengthSlowPath}For any $\varepsilon<1$, the expected time taken for the $\varepsilon$-TBRW to traverse $P(\ell)$ from start to finish, using an optimal strategy, is given by
\[ L(\ell):=\frac{11}{3}\bfrac{8}{5}^{\ell} - \frac{5}{3}.\]
\end{lemma}
\begin{proof}
Let $H_{i}$ be the expected time for the walk to reach the finish from vertex $v_i$, and set $H_{\ell+1}=0$. Observe that for any $1\leq i\leq\ell $ we have
\[ H_{i} =1 + \frac{3}{8}H_0 + \frac{5}{8}H_{i+1}, \] and $H_0 = 1 + H_1$. Using this relation one can show by induction that for any $2\leq j\leq \ell+1$,
\[ H_0 = 2\bfrac{8}{5}^{j-1} + \sum_{i=1}^{j-2}\bfrac{8}{5}^i + H_{j}.\]
The result follows by setting $j=\ell+1$ and summing the geometric series, noting that the expected time to traverse the gadget is $1+H_0$.
\end{proof}
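As a quick check, for $\ell=1$ the relations read $H_1=1+\frac38H_0$ and $H_0=1+H_1$, giving $H_0=\frac{16}{5}$ and an expected traversal time of $1+H_0=\frac{21}{5}=\frac{11}{3}\cdot\frac{8}{5}-\frac{5}{3}=L(1)$.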
\begin{gadget}{The Roundabout Gadget $R(\ell_p,\ell_q,k)$}This consists of a cyclic arrangement of $k$ copies of the slow path $P(\ell_p)$ and $k$ copies of the quincunx $Q(\ell_q)$. Identify the finish vertex of each slow path with the entrance of a quincunx, and identify the right exit of each quincunx with the start vertex of the next slow path. We say that the left exits of the quincunxes are the ``departure vertices'' of the roundabout, and the right exits of the quincunxes are the ``arrival vertices''; arrival and departure vertices are ``corresponding'' if they are exits of the same quincunx.\end{gadget}
\begin{gadget}{The Star Connector Gadget $S(\ell,k)$}
The purpose of this gadget is to allow us to make the visited set of our graph strongly connected.
It consists of $k$ steep hills $H(\ell)$, with their top vertices identified. The bottoms of the hills we call the ``ports'' of the star connector, and the identified top vertices are the ``nexus''.\end{gadget}
We will use the following simple lemma to bound the time spent inside the star connector.
\begin{lemma}\label{octopus}Consider a star connector $S(\ell,k)$, with each port having at least one outgoing edge to some vertex which is not part of the star connector. Start a $1/4$-TBRW at any port. Then, no matter what strategy is employed, the expected time spent in the star connector before leaving is less than $14$ and the probability of reaching the nexus before leaving is less than $\bfrac{13}{14}^\ell$.
\end{lemma}
\begin{proof}
Note that from any vertex which is not a port, the next step reaches a port with probability at least $\frac 38$, since either there are only two outedges, each chosen with probability at least $\frac38$ and one of which leads to a port, or we are at the nexus and all outedges lead to ports. Similarly, from any port all but one of the outedges leave the star connector, and so the next step leaves the star connector with probability at least $\frac 38$. Consequently, from any vertex in the star connector there is a probability of at least $\bfrac 38^2$ of leaving the star connector within two steps.
It follows that the number of steps taken before leaving is dominated by $2X-1$, where $X$ is a geometric random variable with success probability $\frac{9}{64}$; this has mean $\frac{128}{9}-1<14$. In order to reach the nexus the walk needs to take at least $\ell+1$ steps before leaving, and so the probability of this is bounded by $\Pr{X>\lceil\ell/2\rceil}\leq\bfrac{55}{64}^{\ell/2}<\bfrac{13}{14}^{\ell}$.
\end{proof}
\begin{figure}
\begin{subfigure}{.6\textwidth}
\begin{tikzpicture}[label/.style={thick,circle}]
\usetikzlibrary{arrows.meta}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{decorations.pathreplacing}
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {Stealth[length=4mm]}},postaction={decorate}},>=stealth'}
\def 27 {27}
\def 2.5cm {2.5cm}
\def \margin {4}
\foreach \s in {1,...,27}
{
\draw[fill] ({360/27 * (\s - 1)}:2.5cm) circle (.08);
}
\foreach \s in {1,...,27}
{
\draw[->] ({360/27 * (\s - 1)}:2.5cm)
arc ({360/27 * (\s - 1)}:{360/27 * (\s)-2.5}:2.5cm);
}
\foreach \x in {0,9,18}
\foreach \s in {6,7,8}
{
\draw[->] ({360/27 * (\s +\x)}:2.5cm) to[out=180/27 * \s +360/27 *\x +250 ,in=1200/27 * \s +360/27 *\x-80] ({360/27 * \x +380/27 *5 -\s/2}:2.5cm-2.6) ;
}
\foreach \x in {0,9,18}
{
\draw[fill,green] ({360/27 * (\x+4) }:2.5cm ) circle (.08);
\foreach \r in {1,...,4}{
\def .6cm {.6cm}
\draw[fill] ({360/27 * (\x+\r/1.5) }:2.5cm + \r*.6cm) circle (.08);
\draw[fill,red] ({360/27 * (\x+8/3) }:2.5cm + 4*.6cm) circle (.08);
\draw[->] ({360/27 * (\x+\r/1.5-2/3) }:2.5cm + \r*.6cm -.6cm) -- ({360/27 * (\x+\r/1.5)-1.5}:2.5cm + \r*.6cm-1.5);
}
\def .6cm {.6cm}
\draw[->] ({360/27 * (\x+2/3) }:2.5cm + .6cm ) -- ({360/27 * (\x+2/3+1) -1.5}:2.5cm + .6cm );
\draw[->] ({360/27 * (\x+4/3) }:2.5cm + 2*.6cm ) -- ({360/27 * (\x+4/3+1) -1.5}:2.5cm + 2*.6cm );
\foreach \r in {1,2}{
\draw[fill] ({360/27 * (\x+\r/1.5 +1) }:2.5cm + \r*.6cm) circle (.08);
\draw[->] ({360/27 * (\x+\r/1.5-2/3 + 1) }:2.5cm + \r*.6cm -.6cm) -- ({360/27 * (\x+\r/1.5+1)-1.5}:2.5cm + \r*.6cm-1.5);
}
\draw[->] ({360/27 * (\x+2/3+1) }:2.5cm + .6cm ) -- ({360/27 * (\x+2/3+2) -1.7}:2.5cm + .6cm );
\draw[->] ({360/27 * (\x+2/1.5 +1) }:2.5cm + 2*.6cm) to[out=+360/27 *\x +60 ,in=360/27 *\x-130] ({360/27 * \x +960/27-.5}:2.5cm + 4*.6cm-2.5) ;
\foreach \r in {1}{
\draw[fill] ({360/27 * (\x+\r/1.5 +2) }:2.5cm + \r*.6cm) circle (.08);
\draw[->] ({360/27 * (\x+\r/1.5-2/3 + 2) }:2.5cm + \r*.6cm -.6cm) -- ({360/27 * (\x+\r/1.5+2)-1.5}:2.5cm + \r*.6cm-1.5);
\draw[->] ({360/27 * (\x+\r/1.5 +2) }:2.5cm + \r*.6cm) to[out=+360/27 *\x +120 ,in=360/27 *\x +30] ({360/27 * \x +380/27 *4-4}:2.5cm+2.7) ;
}
}
\end{tikzpicture}%
\end{subfigure}%
\begin{subfigure}{.4\textwidth}
\begin{tikzpicture}[label/.style={thick,circle}]
\usetikzlibrary{arrows.meta}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{decorations.pathreplacing}
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {Stealth[length=4mm]}},postaction={decorate}},>=stealth'}
\foreach \x in {3,...,6}{
\draw[fill] (4*\x/5- 3/5 ,-4*\x/5) circle (.1);}
\foreach \x in {3,...,5}{
\draw[->] (4*\x/5+1/5 ,-4*\x/5-4/5) --(4*\x/5- 3/5 +.1,-4*\x/5-.1) ;}
\foreach \x in {2,...,4}{
\draw[->] (4*\x/5+1/5 ,-4*\x/5-4/5) to[out=-30,in=\x*30 -30 ] (4.41-\x/20 ,-4.85+\x/25);
}
\foreach \x in {3,...,6}{
\draw[fill] (9/5 ,-\x+3/5) circle (.1);}
\foreach \x in {3,...,5}{
\draw[->] (9/5 ,-\x-2/5) --(9/5,-\x+3/5-.15) ;}
\foreach \x in {2,...,4}{
\draw[->] (9/5 ,-\x-2/5) to[out=290,in=\x*30-70 ] (1.95 -\x/50 ,-5.5 +\x/20);}
\foreach \x in {3,...,6}{
\draw[fill] (-4*\x/5+ 21/5 ,-4*\x/5) circle (.1);}
\foreach \x in {3,...,5}{
\draw[->] (-4*\x/5+17/5 ,-4*\x/5-4/5) --(-4*\x/5+ 21/5 -.1,-4*\x/5-.1) ;}
\foreach \x in {2,...,4}{
\draw[->] (-4*\x/5+17/5 ,-4*\x/5-4/5) to[out=-115,in=-110+\x*30 ] (-.58+\x/40 ,-4.96+\x/25);
}
\draw (1,-2) node[anchor=north]{{Nexus}};
\draw (1.8,-5.7 ) node[anchor=north]{{Ports}};
\end{tikzpicture}
\end{subfigure}
\caption{A Roundabout Gadget $R(3,3,3)$ (left), with arrival vertices in \textcolor{green}{green} and departure vertices in \textcolor{red}{red}, and a Star Connector Gadget $S(3,3)$ (right).}
\end{figure}
We are now able to describe how we encode an instance of $\mathtt{QSAT}$ as a graph.
\begin{gadget}{The QSAT Graph $G(\phi)$}
We shall encode a given $\mathtt{QSAT}$ problem on a $2n$-variable $3$-CNF $\phi$ with $r$ clauses as the QSAT Graph $G(\phi)$ with a certain unvisited set $U$. We shall build this up in stages. The construction depends on certain length parameters $\ell_p,\ell_q,\ell_s$ for the gadgets which we choose later.
For each clause take one roundabout gadget $R(\ell_p,\ell_q,3)$ and label its arrival vertices with the literals appearing in that clause. Take a star connector gadget $S(\ell_s,6r)$, and identify its ports with the start vertices of the slow paths and the entrances of the quincunxes in these roundabout gadgets. Mark as unvisited every vertex, other than the start vertices, in the slow paths of the roundabout gadgets. These will form the entire unvisited set $U$.
For each literal $x$, we construct a chain of $N(\phi,x)$ quincunxes $Q(\ell_q)$ as follows. For each clause containing $x$ in turn, take a quincunx and two slow paths. Identify the right exit of the quincunx with the start vertex of one of the slow paths, and identify the finish vertex of that slow path with the arrival vertex of the clause roundabout labelled with $x$. Identify the corresponding departure vertex with the start vertex of the other slow path, and identify the finish vertex of that slow path with the left exit of the quincunx. Add a directed edge from the left exit of the quincunx to the entrance of the next quincunx; for the final quincunx, instead add a directed edge to a new vertex $\mathsf{out}_x$. Label the entrance of the first quincunx as $\mathsf{in}_x$. We refer to this chain of quincunxes as the $x$-cascade.
Now, for each $i\leq 2n$ we connect the $x_i$-cascade and the $\overline{x_i}$-cascade as follows. Identify $\mathsf{out}_{x_i}$ and $\mathsf{out}_{\overline{x_i}}$ to form a new vertex $\mathsf{last}_i$. If $i$ is even, add a new vertex $\mathsf{first}_i$ with directed edges to $\mathsf{in}_{x_i}$ and $\mathsf{in}_{\overline{x_i}}$. If $i$ is odd, instead add a quincunx, with entrance $\mathsf{first}_i$ and left and right exits identified with $\mathsf{in}_{x_i}$ and $\mathsf{in}_{\overline{x_i}}$. The odd values are the existentially quantified variables, and here the controller has a very high probability of being able to choose whether to set $x_i$ as true or false; for even values (universally quantified) this choice is approximately random, and the controller must therefore cope with an unfavourable sequence of choices for these variables with some probability which is not too small.
Finally, for each $i<2n$ identify $\mathsf{last}_i$ and $\mathsf{first}_{i+1}$. Add a slow path from $\mathsf{last}_{2n}$ to $\mathsf{first}_1$. Designate $\mathsf{first}_1$ as the starting vertex of the walk.\end{gadget}
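Note that the construction has polynomial size: each quincunx $Q(\ell_q)$ has $\Theta(\ell_q^2)$ vertices, each slow path $P(\ell_p)$ has $\ell_p+3$, and the star connector $S(\ell_s,6r)$ has $6r\ell_s+1$, so with $\ell_p,\ell_q,\ell_s=\Theta(n+r)$ (as chosen below) the graph $G(\phi)$ has $\poly(n+r)$ vertices and can be constructed in time polynomial in the size of $\phi$.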
\begin{figure}[ht]
\begin{tikzpicture}[scale=.83]
\usetikzlibrary{shapes.geometric}
\usetikzlibrary{arrows.meta}
\usetikzlibrary{decorations.markings}
\usetikzlibrary{decorations.pathreplacing}
\tikzset{->-/.style={decoration={
markings,
mark=at position .5 with {Stealth[length=9mm]}},postaction={decorate}},>=stealth'}
\path [use as bounding box] (-2.5,-3.5) rectangle (16.5,12.8);
\begin{scope}
\draw[thick,red!80,->>] (8.55,10) to[out=0,in=100] (10.95,7.7);
\draw[thick,red!80,->>] (10.55,7.85) to[out=120,in=0] (7.8,9.53);
\draw[thick,red!80,->>] (8.55,8.5) to[out=340,in=100] (10.95,1.2);
\draw[thick,red!80,->>] (10.55,1.35) to[out=105,in=0] (7.8,8.03);
\draw[thick,red!80,->>] (1.5,8.85) to[out=0,in=220] (4.18,10-.866*.5-.05);
\draw[thick,red!80,->>] (3.5,10) to[out=160,in=70] (1.05,8.7);
\draw[thick,red!80,->>] (0.05,6.3) to[out=350,in=200] (4.2,5.03);
\draw[thick,red!80,->>] (3.5,5.5) to[out=180,in=350](0.05,6.75) ;
\draw[thick,red!80,->>] (12, 5.3) to[out=200,in=340](7.8,5.5-.866*.5-.05 ) ;
\draw[thick,red!80,->>] (8.55,5.5) to[out=350,in=200](11.95,5.75) ;
\draw[thick,red!80,->>] (8.55,1.98) to[out=340,in=200] (11.94,-.75) ;
\draw[thick,red!80,->>] (11.94,-1.2) to[out=200,in=350] (7.82,2-.866*.5) ;
\draw[thick,red!80,->>] (-1.5,8.8) .. controls (-2.5,2.5) and (3,1.7) ..(4.2,1.53);
\draw[thick,red!80,->>] (3.5,2) .. controls (-2,4) and (-1.6,7.8) ..(-1.15,8.6) ;
\draw[thick,red!80,->>] (8.52,-1) .. controls (12,-3) and (14.5,0) ..(13.1,1.1);
\draw[thick,red!80,->>] (13.5,1.3) .. controls (15,-1) and (12,-3.5) .. (7.8,-1.45);
\draw[thick,red!80,->>] (3.5,-1) .. controls (5,-4) and (20,-6) ..(13.15,7.6) ;
\draw[thick,red!80,->>] (13.45,7.85) .. controls (21,-8.5) and (6,-1) ..(4.3,-1.42);
\ReflectedRoundClause{1.2}{0}{8}{$\overline{x_1}\vee x_2\vee \overline{x_3}$}
\RoundClause{1.2}{12}{7}{$x_1\vee \overline{x_2}\vee x_4$}
\RoundClause{1.2}{12}{.5}{$x_1\vee x_3\vee \overline{x_4}$}
\Quincrux{.5}{6}{11.5}{}
\draw (5.8,12) node[anchor=east]{$\mathsf{First}_{x_1}$};
\CascadeQuincruxR{.5}{8}{10}{}
\CascadeQuincruxR{.5}{8}{8.5}{}
\CascadeQuincruxL{.5}{4}{10}{}
\draw[fill] (6,7) circle (.05);
\draw (5.7,7) node[anchor=east]{$\mathsf{Last}_{x_1}/\mathsf{First}_{x_2}$};
\draw[thick,->] (6+.866*.5,11.25) -- (7.70,10+.866*.5+.05);
\draw[thick,->] (6-.866*.5,11.25) -- (4.30,10+.866*.5+.05);
\draw[thick,->] (7.75,10+.866*.5) -- (7.75,8.5+.866*.5+.08);
\draw[thick,->] (7.75,8.5-.866*.5) -- (6+.05,7+.03);
\draw[thick,->,rounded corners] (4.25,10-.866*.5) -- (4.25,8.5-.866*.5) -- (6-.05,7+.03);
\draw (5.4,9.3) node[anchor=west]{{ \Large $ \exists x_1$}};
\draw (4.2,10.2) node[anchor=west]{$\mathsf{in}_{\overline{x_1}}$};
\draw (7.8,10.2) node[anchor=east]{$\mathsf{in}_{x_1}$};
\CascadeQuincruxR{.5}{8}{5.5}{}
\CascadeQuincruxL{.5}{4}{5.5}{}
\draw[thick,->] (6 ,7) -- (7.70,5.5+.866*.5+.05);
\draw[thick,->] (6 ,7) -- (4.30,5.5+.866*.5+.05);
\draw[thick,->] (7.75,5.5-.866*.5)--(6+.06,4+.03);
\draw[thick,->] (4.25,5.5-.866*.5)--(6-.06,4+.03);
\draw (4.2,5.75) node[anchor=west]{$\mathsf{in}_{x_2}$};
\draw (7.8,5.75) node[anchor=east]{$\mathsf{in}_{\overline{x_2}}$};
\draw (5.4,5.5) node[anchor=west]{{ \Large $ \forall x_2$}};
\draw (5.7,4) node[anchor=east]{$\mathsf{Last}_{x_2}/\mathsf{First}_{x_3}$};
\Quincrux{.5}{6}{3.5}{}
\CascadeQuincruxR{.5}{8}{2}{}
\CascadeQuincruxL{.5}{4}{2}{}
\draw[fill] (6,.5) circle (.05);
\draw[thick,->] (6+.866*.5, 3.25) -- (7.70,2+.866*.5+.05);
\draw[thick,->] (6-.866*.5,3.25) -- (4.30,2+.866*.5+.05);
\draw[thick,->] (7.75,2-.866*.5)--(6+.06,.5+.03);
\draw[thick,->] (4.25,2-.866*.5)--(6-.06,.5+.03);
\draw (4.2,2.25) node[anchor=west]{$\mathsf{in}_{\overline{x_3}}$};
\draw (7.8,2.25) node[anchor=east]{$\mathsf{in}_{x_3}$};
\draw (5.4,2) node[anchor=west]{{ \Large $ \exists x_3$}};
\draw (5.7,.5) node[anchor=east]{$\mathsf{Last}_{x_3}/\mathsf{First}_{x_4}$};
\CascadeQuincruxR{.5}{8}{-1}{}
\CascadeQuincruxL{.5}{4}{-1}{}
\draw[thick,->] (6 ,.5) -- (7.70,-1+.866*.5+.05);
\draw[thick,->] (6 ,.5) -- (4.30,-1+.866*.5+.05);
\draw[thick,->] (7.75,-1-.866*.5)--(6+.06,-2.5+.03);
\draw[thick,->] (4.25,-1-.866*.5)--(6-.06,-2.5+.03);
\draw[fill] (6,-2.5) circle (.05);
\draw (5.7,-2.5) node[anchor=east]{$\mathsf{Last}_{x_4}$};
\draw (5.4,-1) node[anchor=west]{{ \Large $ \forall x_4$}};
\draw (4.2,-.75) node[anchor=west]{$\mathsf{in}_{x_4}$};
\draw (7.8,-.75) node[anchor=east]{$\mathsf{in}_{\overline{x_4}}$};
\draw[thick,red!80,->>,rounded corners] (6,-2.5) -- (6,-3) -- (-2,-3) -- (-2,12.6) -- (6,12.6) -- (6,12.07);
\begin{scope}[shift={(-.7,1.8)}]
\draw[line width=1.8pt] (-1,0) -- (3.1,0) -- (3.1,-4.5) -- (-1,-4.5) -- (-1,0) -- (3.1,0);
\draw (.5,-.35) node[anchor=west]{\underline{\textbf{Key}}};
\draw (0,-1) node[anchor=west]{Quincunx};
\Quincrux{.3}{-.5}{-1}{}
\RoundClause{.25}{-.5}{-2}{}
\draw (0,-2) node[anchor=west]{Roundabout};
\draw[->] (-.8,-3.2)--(-.2,-2.8);
\draw (0,-3) node[anchor=west]{Directed edge};
\draw[red!80,->>] (-.8,-4.2)--(-.2,-3.8);
\draw (0,-4) node[anchor=west]{Slow Path};
\end{scope}
\end{scope}
\end{tikzpicture}
\caption{The QSAT Graph for the \texttt{QSAT} problem $\exists x_1, \forall x_2, \exists x_3, \forall x_4:\phi(x_1,x_2,x_3,x_4)$, where $\phi(x_1,x_2,x_3,x_4) = \left(\overline{x_1}\vee x_2\vee \overline{x_3}\right) \wedge \left(x_1\vee \overline{x_2}\vee x_4 \right) \wedge \left(x_1\vee x_3\vee \overline{x_4} \right) $. For clarity we omit the star connector, which has six arms attached to each roundabout.}
\end{figure}
\begin{proof}[Proof of \cref{allhard}]Our analysis of the time taken to cover the unvisited vertices will focus on the number of slow paths traversed (counted with multiplicity). Note that once the walk has crossed the first edge of a slow path, there is no way to leave the whole slow path until it has been entirely traversed, and clearly it is optimal to do so as quickly as possible, taking a random time with expectation $L:=L(\ell_p)$ independently of the decision to start the slow path.
Suppose that a walker visits the whole set $U$ without visiting the nexus. Then it must have crossed at least $5r-1$ slow paths, since it must cross three slow paths in each roundabout to visit $U$, one slow path to reach each roundabout, and one slow path to leave each roundabout except the last one visited. However, in order to do this while crossing exactly $5r-1$ slow paths, the walker must visit each roundabout exactly once, and must arrive and depart from each roundabout (except the last) via corresponding vertices, since otherwise it would either fail to cross all paths in that roundabout or cross one of them twice. It also cannot cross the slow path from $\mathsf{last}_{2n}$ to $\mathsf{first}_1$. The combination of these factors means that the walker must start from $\mathsf{first}_1$, visit either the $x_1$-cascade or the $\overline{x_1}$-cascade, visit zero or more roundabouts accessible from that cascade, returning to the same cascade each time, then reach $\mathsf{first}_2$ and continue in a like manner, visiting every roundabout before reaching $\mathsf{last}_{2n}$. In particular, the cascades visited correspond to a (possibly incomplete) truth assignment to the variables, and the fact that every roundabout is accessible from some visited cascade means this truth assignment satisfies $\phi$.
The comments above apply to \textit{any} walker; we now analyse the performance of the $1/4$-TBRW. If the instance of $\mathtt{QSAT}$ is satisfiable, then there exists a strategy to visit $U$ while only crossing $5r-1$ slow paths, which succeeds provided the walker avoids the nexus and makes the desired choice from each quincunx encountered. This is because the walker can choose which of the two cascades to visit for each existentially quantified variable, based on which earlier cascades have been visited, in such a way that these cascades give a satisfying assignment, and visit each roundabout at the first opportunity.
We first introduce two ``failure'' events. The first, $F_n$, is that the walker reaches the nexus before crossing $5r$ slow paths. Note that the walker can only enter the star connector at most $10r$ times before crossing $5r$ slow paths, and so \cref{octopus} implies that $\Pr{F_n}<10r\bfrac{13}{14}^{\ell_s}$; this bound is independent of both the strategy followed and the start vertex, provided that this start vertex is outside the star connector or is one of its ports. Setting $\ell_s=a(n+r)$, for some suitable constant $a$, this is less than $\frac{1}{2000r}\bfrac{3}{8}^n$.
The second failure event, $F_q$, is that the walker fails to make the desired decision at a quincunx on the first occasion that quincunx is traversed. Since there are $6r+n$ quincunxes in the graph in total, this has probability at most $(6r+n)\bfrac{99}{100}^{\ell_q}$ by \cref{quincunx}. Setting $\ell_q=b(n+r)$, for some suitable constant $b$, this is less than $\frac{1}{2000r}\bfrac{3}{8}^n$.
We now bound the expected time for an optimal strategy given that the instance is satisfiable. The walker can succeed in visiting $U$ while crossing exactly $5r-1$ slow paths with probability at least $1-\frac{1}{1000r}\bfrac{3}{8}^n$. We can control the extra time not spent in slow paths while attempting to do this. The walker enters the star connector at most $10r$ times, and each time spends a random amount of time in the star connector. By \cref{octopus}, the expectation of this time is less than $14$. The time spent in quincunxes is at most $(6r+n)\ell_q$, and there are a small number of other steps, at most $3r+2n$, coming from single edges linking quincunxes etc. Thus the expected time for the attempt is at most $(5r-1)L+(n+6r)(\ell_q+30)$.
If the attempt was unsuccessful, the walker attempts to ``reset'' by returning to $\mathsf{first}_1$ and restarting. By taking at most one more step, it is outside the star connector or at one of its ports. From here, it can reach $\mathsf{first}_1$ crossing at most three slow paths with probability at least $1-\frac{1}{1000r}\bfrac{3}{8}^n$. A similar analysis applies to this attempt. Consequently the expected number of attempts taken to return to $\mathsf{first}_1$ is at most $(1-\frac{1}{1000r}\bfrac{3}{8}^n)^{-1}<1.001$, each taking expected time $3L+(n+6r)(\ell_q+30)$. Overall the expected number of additional attempts needed, given that the first failed, is less than $0.001$, and the expected time to ``reset'' after each attempt is less than $1.001(3L+(n+6r)(\ell_q+30))$, giving a total expected time until $U$ is visited of at most \[(5r-1)L+(n+6r)(\ell_q+30)+\frac{1}{1000r}\bfrac{3}{8}^n((5r+5)L+3(n+6r)(\ell_q+30)).\]
We may choose an appropriate constant $c$ and set $\ell_p=c(n+r)$ to satisfy $(n+6r)(\ell_q+30)<\frac{1}{1000}\bfrac{3}{8}^nL$.
This ensures the value above is at most
\[T_{\mathrm{sat}}:=\left(5r-1+\frac{1}{100}\bfrac{3}{8}^n\right)L.\]
Next we consider the case where the instance of $\mathtt{QSAT}$ is not satisfiable. In that case, no matter how the existentially quantified variables are assigned, there is a way to choose values for the universally quantified variables, depending on values of earlier variables, which avoids $\phi$ being satisfied. As the walker proceeds through the graph, assuming it does not reach the nexus, each universally quantified variable is determined by a single step, and though the controller can influence this step he cannot decrease the probability of either alternative below $\frac38$. Thus, with probability at least $\bfrac38^n$, the truth assignment corresponding to cascades visited does not satisfy $\phi$; recall that in this case the walker must cross at least $5r$ slow paths (or visit the nexus before crossing this number of slow paths, which has probability $\Pr{F_n}$). Thus for the unsatisfiable case the expected time taken is at least
\[T_{\mathrm{unsat}}:=\left(5r-1+\frac{99}{100}\bfrac{3}{8}^n\right)L.\]
Thus, for these values of $\ell_p,\ell_q,\ell_s$, we have a Cook reduction from $\mathtt{QSAT}(\phi)$ to $\mathtt{Cost}(G(\phi),\allowbreak\mathsf{start}_1,U,(T_{\mathrm{sat}}+T_{\mathrm{unsat}})/2)$, so $\mathtt{Cost}$ is $\PSPACE$-complete.
We next briefly describe how to adapt this argument to prove that $\mathtt{BestStep}$ is $\PSPACE$-hard. Choose a value $\ell'=\BO{n+r}$ to satisfy
\[\frac13\bfrac{3}{8}^nL<L(\ell')<\frac23\bfrac{3}{8}^nL;\]
this is possible since incrementing $\ell'$ increases $L(\ell')$ by a factor of less than $2$ (and since $\ell'<\ell_p=\BO{n+r}$). We write $L':=L(\ell')$.
Now we modify the construction above to create a graph $G'(\phi)$ as follows. Make each roundabout a copy of $R(\ell_p,\ell_q,4)$ instead of $R(\ell_p,\ell_q,3)$. Add an extra cascade, with extremal vertices labelled $\mathsf{in}_*$ and $\mathsf{out}_*$ connected by slow paths $P(\ell_p)$ to the spare arrival and departure points of every roundabout. Add a new vertex $\mathsf{start}_0$, with two outedges: one to $\mathsf{start}_1$ and the other leading to a slow path $P(\ell')$ which in turn leads to $\mathsf{in}_*$. Finally, add an edge from $\mathsf{out}_*$ to $\mathsf{last}_{2n}$.
In this modified graph, if the walker starts at $\mathsf{start}_1$ the same analysis as above applies, with $(5r-1)L$ replaced by $(6r-1)L$ (to account for the extra slow path in each roundabout). Thus if the instance is satisfiable the expected time started from this point is at most $T_{\mathrm{sat}}+rL$, and if it is not satisfiable it is at least $T_{\mathrm{unsat}}+rL$ (since in order to make use of the new cascade from this starting point, the walker must traverse more than $6r-1$ slow paths).
However, starting from the beginning of the slow path of length $\ell'$, the expected time is at most $T_{\mathrm{sat}}+rL+L'$ (since after traversing this path the walker can, assuming $F_n$ and $F_q$ do not occur, visit all of $U$ using $6r-1$ other slow paths). It is also at least $(6r-1)L+L'-\Pr{F_n}$. By choice of $L'$ these values lie between $T_{\mathrm{sat}}+rL$ and $T_{\mathrm{unsat}}+rL$.
Thus, starting at $\mathsf{start}_0$, the optimal strategy is to prefer $\mathsf{start}_1$ if the instance is satisfiable and the other outneighbour if not. This gives a Cook reduction from $\mathtt{QSAT}(\phi)$ to $\mathtt{BestStep}(G'(\phi),\mathsf{start}_0,U)$. Notice that the unique solution to $\mathtt{BestStep}(G'(\phi),\mathsf{start}_0,U)$ is to give full weight to one of the two neighbours, thus both problems are $\PSPACE$-hard. $\PSPACE$-hardness for $\mathtt{BestStep}$ follows from \cref{rmk:nextorbest}.
\end{proof}
\section{Concluding Remarks and Open Problems}\label{Conclude}
In this paper we extended the previous work on the $\varepsilon$-biased random walk to include strategies which may depend on the history of the walk. Our motivation for this is the cover time problem, for which we obtained bounds using a new technique that allows us to relate the probability of any event for the $\varepsilon$-biased walk to the corresponding event for a simple random walk. This technique also allowed us to make progress on a conjecture of Azar et al.\ \cite{ABKLPbias}. We note that this conjecture requires some further technical conditions not given in the original statement. However, as discussed in \cref{AzarConjSec}, the only case necessitating this extra condition appears to be that of graphs with large entries in the stationary vector, and we believe that the following slightly refined version of their conjecture should hold.
\newtheorem*{conj:reformulated}{\cref{reformulated}}
\begin{conj:reformulated}
\textit{In any graph a controller can increase the stationary probability of any vertex from $p$ to $p^{1-\varepsilon+\delta} $, where $\delta:=\delta(G)\rightarrow 0$ as $p\rightarrow 0$.}
\end{conj:reformulated}
We also showed that computing an optimal next step for the $\varepsilon$-TBRW to take in the online version of the covering problem is $\PSPACE$-complete on directed graphs. The class $\PSPACE$ is a natural candidate for the covering problem given that some suitably intricate Markov decision problems and route planning problems are $\PSPACE$-complete \cite{Markovhard}. We believe that the problem is also $\PSPACE$-hard for undirected graphs, although we can only show it is $\NP$-hard.
\begin{conjecture}\label{hardconj}
For undirected graphs $\mathtt{BestStep}$ is $\PSPACE$-hard.
\end{conjecture}
The difficulty in establishing \cref{hardconj} is that on undirected graphs it is difficult to force the walk to make irreversible decisions and so it is not clear how to create gadgets with the sort of one-way nature typical in $\PSPACE$ reductions \cite{DemHenLyn}. In particular there does not seem to be an easy way to adapt our proof for directed graphs to the undirected case.
\section*{Acknowledgements}
J.H.\ was supported by ERC Starting Grant no.\ 639046 (RGGC) and by the UK Research and Innovation Future Leaders Fellowship MR/S016325/1. T.S.\ and J.S.\ were supported by ERC Starting Grant no.\ 679660 (DYNAMIC MARCH). J.S.\ was also supported by EPSRC project EP/T004878/1. J.S.\ would like to thank Dylan Hendrickson and Jayson Lynch for some interesting discussion about $\PSPACE$. We thank Sam Olesker-Taylor for spotting an error in an earlier version of this work.
\bibliographystyle{abbrv}
\section{Introduction}
Let $\left\{ f(x,\theta) \right\}_{\theta \in \Theta }$ be a family
of probability densities with respect to some $\sigma$-finite measure $\lambda$. The
parameter set $\Theta $ is always assumed to be a compact subset of
$\R$ with non-empty interior. A finite mixture model with $m$
components is given by
\begin{equation}
\label{fxG}
f(x,G)=\int_{\Theta}f(x,\theta)\dd G(\theta)
\end{equation}
where $G$ is a distribution on $\Theta$ supported on $m$ points, called the \emph{mixing distribution}. The class
of such $m$-mixing distributions $G$ is denoted by $\Gm{}$ and $\Glm$
will be the union of $\Gj$ for $j\in \lb 1,m\rb$.
In Section~\ref{sec:lowerbound} we will show that a consistent estimator $\widehat{G}_n\in \Glm$
of an unknown mixing distribution $G_1$ cannot converge uniformly faster than $n^{-1/(4(m-m_0) + 2)}$ in the neighborhood of $G_0\in\Gm{0}$, in
the ($L^1$-)Wasserstein metric,
where $n$ is the sample
size. Recall that this metric can be defined by
\begin{equation}
\label{defW}
W(G_1,G_2)=\int_{\R}|G_1(-\infty,t]-G_2(-\infty,t]|\dd t,
\end{equation}
and that by the Kantorovich-Rubinstein dual representation,
\begin{equation}
\label{dualW}
W(G_1,G_2)=\sup_{|f|_{\mathrm{Lip}}\le 1}\int_{\Theta}f(\theta)\dd (G_1-G_2)(\theta).
\end{equation}
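For instance, for $G_1=\delta_0$ and $G_2=\tfrac12\delta_0+\tfrac12\delta_1$ on $\Theta=[0,1]$, the distribution functions differ by $\tfrac12$ exactly on $[0,1)$, so \eqref{defW} gives $W(G_1,G_2)=\tfrac12$; the supremum in \eqref{dualW} is attained, e.g., by the $1$-Lipschitz function $f(\theta)=-\theta$, for which $\int f\,\dd(G_1-G_2)=\tfrac12(f(0)-f(1))=\tfrac12$.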
In Section~\ref{sec:upperbound}, we prove that the rate
$n^{-1/(4(m-m_0) + 2)}$ is optimal, under strong identifiability conditions. Finally, Section~\ref{sec:class} exhibits natural families satisfying these strong identifiability conditions.
Some auxiliary or too long computations are
postponed to Appendix~\ref{app}.
\section{The optimal rate can not be better than $n^{-1/(4(m-m_0) + 2)}$}
\label{sec:lowerbound}
The main idea is to build families of mixing distributions $G_{n}(u)$ with the same $2(m - m_0)$ first moments, and $u n^{-1/2}$ as rescaled shifted $(2 (m - m_0) + 1)$-th moment. Hence the Wasserstein distance between $G_n(u_1)$ and $G_n(u_2)$ will be of order $n^{-1/(4(m-m_0) + 2)}$, while of order $n$ observations are needed to tell them apart. Theorem \ref{LAN} makes this precise. We first need a few tools.
We give a far-from-general definition of local asymptotic normality \citep{LeCam}, but it is sufficient for our purposes.
\begin{defin}
\label{defLAN} Given densities $f_{n,u}$ with respect to a measure
$\lambda$, consider the sequence of experiments $\mathcal{E} _n =\left\{ f_{n,u}, u\in \mathcal{U} _n \right\} $ with each point of $\mathbb{R} $ in $\mathcal{U}_n$ for $n$ large enough. Let $X$ have density $f_{n,0}$ and consider the log-likelihood ratios:
\begin{align*}
Z_{n, 0}(u) & = \ln \left( \frac{f_{n, u}(X)}{f_{n,0}(X)} \right) .
\end{align*}
Suppose that there is a positive constant $\Gamma $ and a sequence of random variables $Z_n$ with $Z_n \xrightarrow[]{d} \mathcal{N} (0, \Gamma )$, such that for all $u\in \mathbb{R} $:
\begin{align}
\label{ELAN}
Z_{n, 0} (u) - u Z_n + \frac{u^2}{2} \Gamma & \xrightarrow[n\to\infty]{P} 0
\end{align}
The sequence of experiments is said \emph{locally asymptotically
normal} (LAN) and \emph{converging} to the Gaussian shift experiment $\left\{ \mathcal{N} (u\Gamma, \Gamma ), u \in \mathbb{R} \right\} $.
\end{defin}
Of course, here $\xrightarrow[]{d}$ (resp. $\xrightarrow[]{P}$) stands for convergence in distribution (resp. in probability). Intuitively, (almost) anything that can be done in a Gaussian shift experiment can be done asymptotically in a locally asymptotically normal sequence of experiments.
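A canonical instance, recorded here only to fix ideas: for the Gaussian shift model $f_{n,u}(x)=\prod_{i=1}^n\varphi(x_i-un^{-1/2})$, with $\varphi$ the standard normal density, a direct computation gives
\[Z_{n,0}(u)=u\,n^{-1/2}\sum_{i=1}^nX_i-\frac{u^2}{2},\]
so that \eqref{ELAN} holds exactly (and not merely in the limit) with $Z_n=n^{-1/2}\sum_{i=1}^nX_i\sim\mathcal{N}(0,1)$ and $\Gamma=1$.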
\begin{defin}
\label{Eia}
Let $\left\{ f(x, \theta ) \right\}_{\theta\in\Theta} $ be a family of
densities with respect to a $\sigma$-finite measure $\lambda $.
Let us consider, for $p\in \N$ and $q>0$, the functions:
\begin{align}
\label{Eiaeq}
E_{p,q} : \qquad \Theta ^3 \quad & \to [0,\infty] \nonumber\\
\left(\theta _1, \theta _2, \theta _3\right) & \mapsto \mathbb{E}_{\theta _1}\left|\frac{ f^{(p)}(x, \theta _2)}{f(x,\theta _3)} \right|^q .
\end{align}
We say that the family of densities is $(p,q )$-smooth if $E_{p,q
}$ is well-defined and continuous on $\Theta^3$, and if there exists
$\varepsilon > 0$ such that for all $\theta_1$,
\begin{align}
\label{proche}
|\theta _2 - \theta _3| < \varepsilon & \implies E_{p,q }(\theta _1, \theta _2, \theta _3) < \infty.
\end{align}
\end{defin}
\begin{ex}
Let us consider an exponential family with natural parameter
$\theta\in \Theta _0 $, so that $f(x, \theta ) = h(x) g(\theta
) \exp(\theta T(x))$, with $g \in C^{\infty}$. Consider
$\Theta $ such that its $\varepsilon $-neighbourhood $\Theta
\oplus B(0, \varepsilon )$ is included in $\Theta_0 $. Then
$\left\{ f(x, \theta), \theta \in \Theta \right\} $ is $(p,q
)$-smooth for any $p$ and $q$. Indeed,
\begin{eqnarray*}
f^{(p)}(x, \theta _2) & = &
h(x)\e^{\theta_2T(x)}\left[\sum_{k=0}^p\binom{p}{k}g^{(k)}(\theta_2)T^{p-k}(x)\right]\\
\frac{ f^{(p)}(x, \theta _2)}{ f(x, \theta _3)}& = &
\frac{\e^{(\theta_2-\theta_3)T(x)}}{g(\theta_3)}\left[\sum_{k=0}^p\binom{p}{k}g^{(k)}(\theta_2)T^{p-k}(x)\right]\\
\left|\frac{ f^{(p)}(x, \theta _2)}{ f(x, \theta _3)}\right|^q& = &
\frac{\e^{q(\theta_2-\theta_3)T(x)}}{g^q(\theta_3)}\left|\sum_{k=0}^p\binom{p}{k}g^{(k)}(\theta_2)T^{p-k}(x)\right|^q
\end{eqnarray*}
so that
\[E_{p,q}(\theta _1, \theta _2, \theta _3) = \frac{g(\theta_1) \E_{\theta_1+q(\theta_2-\theta_3)}\left|\sum_{k=0}^p\binom{p}{k}g^{(k)}(\theta_2)T^{p-k}(x)\right|^q}{g^q(\theta_3)g(\theta_1+q(\theta_2-\theta_3))}.\]
Since all the moments of the sufficient statistic $T(x)$ are finite under a distribution in the exponential family, and since $\theta _1 + q\theta _2 - q \theta _3$ is in $\Theta _0$ for $|\theta _2 - \theta _3| < \eps/q$, we have finiteness of $E_{p,q } (\theta _1, \theta _2, \theta _3) $. Continuity is clear.
\end{ex}
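As a concrete case: the Gaussian family $\mathcal{N}(\theta,1)$ is of this form with $h(x)=\e^{-x^2/2}/\sqrt{2\pi}$, $T(x)=x$ and $g(\theta)=\e^{-\theta^2/2}$, so that the formula above becomes
\[E_{p,q}(\theta_1,\theta_2,\theta_3)=\e^{\frac{(\theta_1+q(\theta_2-\theta_3))^2-\theta_1^2}{2}+\frac{q\theta_3^2}{2}}\;\E_{\theta_1+q(\theta_2-\theta_3)}\left|\sum_{k=0}^p\binom{p}{k}g^{(k)}(\theta_2)x^{p-k}\right|^q,\]
which is finite and continuous on $\Theta^3$ for any compact $\Theta\subset\R$, since here $\Theta_0=\R$.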
Being $(p, q )$-smooth ensures finiteness of similar integrals when some $\theta _j$ are replaced with mixing distributions with components close to the $\theta _j$:
\begin{prop}
\label{tversmix}
Given $\pi_0>0$ and two positive integers $m_0\le m$, define
mixing distributions
\[G_n = \sum_{j=1}^m \pi_{j,n} \delta _{\theta _{j,n}}\]
such that $\theta _{j,n}\to \theta _0$ for all $ j \in\lb m_0,m\rb$ and $\sum_{j=
m_0}^m \pi_{j,n} \geq \pi_0$ for all $n$ large enough.
Consider a $(p, q)$-smooth family of densities
$\{f(x,\theta)\}_{\theta\in\Theta} $ with respect to some $\sigma$-finite measure $\lambda$.
Then there is a finite $C$ depending only on $\theta _0$ and $\pi_0$ such that for any $\theta$ satisfying $\left|\theta - \theta _0 \right| \le \eps / 2$, for $n$ large enough, for any mixture $f(x,G)$:
\begin{align*}
\mathbb{E}_{G}\left|\frac{f^{(p)}(x, \theta)}{f(x,G_n)} \right|^q & \le C.
\end{align*}
If, in addition, the function $\left|f^{(p)}(x, \theta _0)\right|$ has
nonzero integral under $\lambda$, then there is a positive $c$ depending only on $\theta _0$ such that for any mixture $f(x,G)$:
\begin{align*}
\mathbb{E}_{G}\left|\frac{f^{(p)}(x, \theta _0)}{f(x,G)} \right|^q & \geq c.
\end{align*}
\end{prop}
\begin{proof}
For $n$ large enough, we have $\left|\theta _{j,n} - \theta _0
\right|\le \eps / 2$ for all $ j\in\lb m_0,m\rb$. Hence
$\left|\theta _{j,n} - \theta\right|\le \eps$ for all $\theta$
such that $\left|\theta - \theta _0 \right| \le \eps / 2$. So
that we may use \eqref{proche}. By compactness and continuity,
there is a finite $C$ such that
\[\mathbb{E}_{\theta_1}\left|\frac{f^{(p)}(x,
\theta)}{f(x, \theta _{j,n})} \right|^q \le C\]
for all such $(j,n)$ and all $\theta _1$. Since $f(x,G)$ is a convex
combination of some $f(x,\theta_1)$, we may replace $\theta _1$ by
$G$ in the former expression. Since the function $1/y^q$ is
convex on the positive reals, by Jensen's inequality, setting
$A=\sum_{j=m_0}^m \pi_{j,n}$,
\[ \sum_{j=m_0}^m \frac{\pi_{j,n}}{A} \left| \frac{f^{(p)}(x, \theta)}{f(x,
\theta _{j,n})} \right|^q\ge\left|
\frac{f^{(p)}(x, \theta)}{\sum_{j=m_0}^m \frac{\pi_{j,n}}{A}f(x, \theta _{j,n})}\right|^q \geq A^q\left|
\frac{f^{(p)}(x, \theta)}{f(x, G_{n})}\right|^q,\]
and taking expectations with respect to $G$ we obtain the upper bound
\[\E_G\left|
\frac{f^{(p)}(x, \theta)}{f(x, G_{n})}\right|^q\le \frac{C}{A^q}\le \frac{C}{\pi_0^q}.\]
The lower bound does not depend on $(p, q )$-smoothness. It is a simple consequence of rewriting:
\begin{align*}
\mathbb{E}_{G}\left|\frac{ f^{(p)}(x, \theta _0)}{f(x,G)} \right|^q & = \int\left|\frac{ f^{(p)}(x, \theta _0)^q}{f(x,G)^{q-1}} \right| \dd\lambda(x)
\end{align*}
and noticing $\int\left|f(x,G) \right|\dd\lambda(x) =1$ since
$f(x, G)$ is a probability density. By assumption, there is a
set $B$ of measure $\lambda(B)=M>0$ on which $|f^{(p)}(x, \theta
_0)|$ exceeds some $\eps >0$. Now, the set $B\cap
\{f(x,G)\le 2/M\}$ has measure at least $M/2$ and thus
\[\int\left|\frac{ f^{(p)}(x, \theta_0)^q}{f(x,G)^{q-1}} \right|
\dd\lambda(x) \ge \frac{M}{2}\left(\frac{M}{2}\right)^{q-1} \eps^q=\left[\frac{M}{2}\right]^{q} \eps^q.\]
\end{proof}
\begin{thm}
\label{LAN}
Let $m_0 \leq m$. Let $G_0 = \sum_{j=1}^{m_0} \pi_j \delta_{\theta _j} \in \mathcal{G} _{m_0}$ be a mixing distribution whose $m_0$-th component is
in the interior of $\Theta $, that is
$\theta _{m_0} \in \interior{\Theta
}$.
Then there are mixing distributions $G_n(u)$ ($n\ge 0,u\in\R$)
all in
$\Gm{}$ such that:
\begin{enumerate}[(i)]
\item \label{lan1} $\W(G_{n}(u), G_0) \to 0$ for all $u\in\R$. More precisely,
for some $C(u)>0$, we have
\[\W(G_{n}(u), G_0) \le C(u) n^{-1/ (4(m -m_0) +2)}.\]
\item \label{lan2} The mixing distributions get closer at rate $n^{-1/(4(m
- m_0) + 2)} $: for all $u_1$ and $u_2$, there are constants
$c(u_1,u_2)>0$ such that
\[\W(G_{n}(u_1), G_{n}(u_2)) \geq c(u_1,u_2) n^{-1/ (4(m -m_0) +
2)}.\]
\item \label{lan3} Suppose that a family of densities $\left\{ f(x, \theta ), \theta \in
\Theta \right\} $ with respect to $\lambda$ is $(p,q )$-smooth for
all $p \in\lb 1, 2(m-m_0+1)\rb$ and $q\in \lb 1, 4\rb$. Assume moreover that
\[\int \left|f^{(2(m-m_0)+1)}(x,\theta_{m_0})\right|\dd\lambda(x)>0.\]
There is a number $\Gamma >0$ and an infinite subset $\N_0$ of $\mathbb{N} $ along
which the experiments $\mathcal{E} _n = \left\{
\prod_{i=1}^nf\left(x_i,G_n(u)\right), |u|
\le u_{\max}(n)\right\} $ with $u_{\max}(n) \to\infty$ converge to the Gaussian shift
experiment $\left\{ \mathcal{N} (u\Gamma, \Gamma), u \in \mathbb{R}
\right\}$.
\item \label{lan4} $u$ is the rescaled $(2 (m - m_0) + 1)$-th moment of
the components of the mixing distribution near $\theta
_{m_0}$.
\end{enumerate}
\end{thm}
The theorem shows that when the first moments of the components of the mixing distribution $G$ near $\theta _{m_0}$ are known, all remaining knowledge we may acquire is on the next moment, and that's the ``right'' parameter: it is exactly as hard to make a difference between, say, $10$ and $11$ as between $0$ and $1$.
On the other hand, for our original problem the cost function is the transportation distance between mixing distributions, so that an estimator of $u$ which is optimal in mean square error is not optimal for our original problem. Moreover, just taking the loss function $ c(u_1,u_2)$ in the limit experiment runs into technical problems, since this might go to zero as $u_2$ goes to infinity. These could be overcome, but it is easier to state a lower bound on risk using just contiguity and two points:
\begin{cor}
\label{lower_bound}
The optimal local minimax rate of estimation around $G_0$ of a mixture cannot be better than $ n^{-1/ (4(m - m_0) + 2)} $ in general: for any sequence of estimators $\hat{G}_n$ and any $\epsilon>0 $, we have:
\begin{align}
\label{local_minimax}
\liminf_{n\to \infty} \!\!\!\!\!\!\sup_{\substack{G_1
\text{s.t.}\\ W(G_1, G_0) < n^{-1/ (4(m - m_0) + 2)+\eps}}}
\!\!\!\!\!\! n^{1/ (4(m - m_0) + 2)} \mathbb{E}_{f(\cdot,G_1)^{\otimes n}}\W(G_1, \hat{G}_n) & > 0,
\end{align}
where the true distribution $G_1$ lies in $\Gm{}$.
\end{cor}
\begin{proof}[Proof of corollary~\ref{lower_bound}]
Fix $u > 0$ and consider the densities $f_{n,u}(x)=\prod_{i=1}^nf\left(x_i,G_n(u)\right)$ with associated probability
measures $\Pnu$ as in Theorem~\ref{LAN} \eqref{lan3}. We have
\begin{equation}
\label{eq:liminf}
\liminf_{n\to \infty}\inf_{A:\Pno(A)\ge 3/4}\Pnu(A)\ge
\frac14\e^{-\frac{u^2}{2}\Gamma}.
\end{equation}
Indeed, the LAN property \eqref{ELAN} can be written as
\[\rho_n:=\frac{f_{n,u}(X)}{f_{n,0}(X)}\e^{-uZ_n+\frac{u^2}{2}\Gamma}\xrightarrow[]{P}1,\]
with $X$ of density $f_{n,0}$ and $Z_n$ with asymptotic distribution $\mathcal{N}(0,\Gamma)$. For any event $A$,
\begin{equation*}
\Pnu(A) = \Eno\left(\frac{f_{n,u}(X)}{f_{n,0}(X)}\1_A\right) = \Eno\left(\rho_n\e^{uZ_n-\frac{u^2}{2}\Gamma}\1_A\right).
\end{equation*}
Furthermore, by restriction on the event $\{Z_n>0\}$ and by using
$\rho_n \xrightarrow[]{P}1$, we get that $\Pnu(A)$ is bounded below by
\begin{equation*}
\e^{-\frac{u^2}{2}\Gamma}\left[\Pno(A)-\Pno(Z_n\le 0)\right]+o(1).
\end{equation*}
Taking now the infimum on events $A$ such that $\Pno(A)\ge 3/4$ and passing to the limit as $n\to \infty$, we obtain \eqref{eq:liminf}.
We now consider, for any sequence of estimators $\hat{G}_n$, the event
\[A = \{ n^{1/ (4(m - m_0) + 2)} \W(G_{n}(0), \hat{G}_n) \geq a\}\]
for some $a>0$ to choose. Notice that by the triangle's inequality its complement $A^c$ satisfies
\[A^c\subset \{n^{1/ (4(m - m_0) + 2)} \W(G_{n}(u), \hat{G}_n) \geq
c(u,0) -a\}\]
where $c(u,0)>0$ is given by Theorem~\ref{LAN} \eqref{lan2}. Choose
$a=c(u,0)/2$. Then either $\Pno(A) \geq 1/4 $, which gives
\[\sup_{G_1 \in \{G_n(0)\}} n^{1/ (4(m
- m_0) + 2)}\mathbb{E}_{f(\cdot,G_1)^{\otimes n}}\W(G_1, \hat{G}_n)
\ge \frac{a}{4},\]
or $\Pnu(A^c) \geq \e^{-\frac{u^2}{2}\Gamma}/4$ in
the limit, by \eqref{eq:liminf}, so that
\[ \liminf_{n\to \infty} \sup_{G_1 \in \{G_n(u)\}} n^{1/ (4(m
- m_0) + 2)}\mathbb{E}_{f(\cdot,G_1)^{\otimes n}}\W(G_1, \hat{G}_n)
\ge \frac{a}{4}\e^{-\frac{u^2}{2}\Gamma}.\]
Thus, gathering the two inequalities, we get
\[ \liminf_{n\to \infty} \sup_{G_1 \in \{G_n(0), G_n(u)\}} n^{1/ (4(m
- m_0) + 2)}\mathbb{E}_{f(\cdot,G_1)^{\otimes n}}\W(G_1, \hat{G}_n)
\ge \frac{a}{4}\e^{-\frac{u^2}{2}\Gamma}.\]
Note to finish that by Theorem~\ref{LAN} \eqref{lan1}, each $G_n(0)$ or
$G_n(u)$ is at $\W$-distance at most $n^{-1/ (4(m - m_0) + 2)+\eps}$
from $G_0$, for $n$ large enough.
\end{proof}
\begin{rems}
\label{remLAN}
We want only an example of this slow convergence, and that it be somewhat typical. That's why we have chosen the regularity conditions to make the proof easy, while still being easy to check, in particular for exponential families.
In particular, it would probably be possible to lower $q$ in
$(p,q)$-smoothness to $2+\eps$ and still get the uniform bound we
use in the law of large numbers below. Similarly, fewer derivatives might be necessary if we tried to imitate differentiability in quadratic mean.
In the opposite direction, the variance $\Gamma $ in the limit
experiment is really expected to be $\frac{\pi_{m_0}^2}{((2d-1)!)^2}\mathbb{E}_{G_0}
\left| \frac{f^{(2d-1)}(x, \theta _{m_0}) }{f(x, G_0)} \right|^2$ in most cases, but more stringent regularity conditions may be needed to prove it.
\end{rems}
\begin{proof}[Proof of Theorem~\ref{LAN}]
In this proof and the rest of the paper, we need to compare asymptotic sequences.
The notation $a_n\preccurlyeq b_n$ (or even $a\preccurlyeq b$ if $n$ is kept
implicit) means that there is a positive constant
$C$ such that $a_n\le C b_n$ ; in other words,
$a_n=O(b_n)$. We will also use $a_n \succcurlyeq b_n$ for $a_n \ge C b_n$, and $a_n\asymp b_n$ for $b_n \preccurlyeq a_n\preccurlyeq b_n$. Finally $a_n \preccurlyeq_u b_n$ means that the constant may depend on $u$, that is $a_n\le C(u) b_n$.
We use the following theorem by \citet[Theorem 2A]{Lindsay} on
the matrix of moments; the idea is close to the Hankel criterion
developed by \cite{Gass} to estimate the order of a mixture.
\begin{thm}
\label{Lindsay}
Given numbers $1,m_1,\ldots,m_{2d}$, write $M_k$ for the
$k+1$ by $k+1$ (Hankel) matrix with entries $(M_k)_{i,j} =
m_{i+j-2}$ for $k=1,\ldots,d$.
\begin{enumerate}[(a)]
\item The numbers $1, m_1, \dots, m_{2d}$ are the moments of a
distribution with exactly $d$ points of support if and only if
$\det M_k > 0$ for $k=1,\ldots,d-1$ and $\det M_d = 0$.
\item If the numbers $1, m_1, \dots, m_{2d-2}$ satisfy $\det M_k > 0$ for $k=1,\ldots,d-1$ and $m_{2d-1}$ is any scalar, then there exists a unique distribution with exactly $d$ points of support and those initial $2d-1$ moments.
\end{enumerate}
\end{thm}
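For instance, for $d=2$, part (b) reads: if $m_2-m_1^2=\det M_1>0$ (i.e.\ the prescribed variance is positive), then for any scalar $m_3$ there is a unique distribution $\pi_1\delta_{h_1}+\pi_2\delta_{h_2}$, with $\pi_i>0$ and $h_1<h_2$, whose first three moments are $m_1,m_2,m_3$.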
Set $d=m-m_0+1$ and consider any numbers $1, m_1, \dots,
m_{2d-2}$ such that $\det M_1 > 0, \dots, \det M_{d-1} > 0$. By Theorem~\ref{Lindsay}, we may then define for any $u\in \mathbb{R} $ a distribution $G(u) = \sum_{j= m_0}^m \pi_j(u) \delta _{h_j(u)}$ such that its initial moments are $1, m_1, \dots, m_{2d-2}, u$.
Moreover, the uniqueness in Theorem \ref{Lindsay} implies that, with $\pi_i > 0$ and $h_1 < \dots < h_{d}$, the following map
is injective:
\begin{align*}
\phi :(\pi_1,\ldots,\pi_d,h_1,\ldots,h_d)\mapsto
\left(\sum_1^d\pi_j,\sum_1^d\pi_j h_j,\sum_1^d\pi_j h_j^2,\ldots,\sum_1^d\pi_j h_j^{2d-1}\right)
\end{align*}
Now, its Jacobian is non-zero (see Appendix~\ref{jaco} for a proof):
\begin{align}
\label{Jac}
J(\phi)=(-1)^{\frac{(d-1)d}{2}}\,\pi_1\cdots\pi_d \prod_{1\le j<k\le d}(h_j- h_k)^4.
\end{align}
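(As a sanity check: for $d=1$, $\phi(\pi_1,h_1)=(\pi_1,\pi_1h_1)$ has Jacobian $\det\begin{pmatrix}1&0\\ h_1&\pi_1\end{pmatrix}=\pi_1$, in agreement with \eqref{Jac}, the sign and the product over pairs being trivial.)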
Thus the inverse of $\phi$ is locally continuous, so that the $h_j(u)$
are all continuous. In particular, they are bounded if $u$ is bounded:
for any $U>0$, there is a finite $H(U)$ such that if $|u| < U$, then
$|h_j(u)| \le H(U)$. We may then find and use a sequence $u_{\max}(n)$ such that $u_{\max}(n) \to \infty$ and $H(u_{\max}(n)) n^{-1/(4d-2)} \to 0$.
We now define the mixing distributions
\begin{equation}
\label{mixdist}
G_n(u) = \sum_{j=1}^{m_0 - 1} \pi_j \delta_{\theta _j} + \pi_{m_0}
\sum_{j= m_0}^m \pi_j(u) \delta _{\theta_{j,n}(u)}
\end{equation}
with
\[\theta _{j,n}(u) = \theta _{m_0} + n^{-1/(4d-2)} h_j(u).\]
This definition satisfies \eqref{lan4}. The form of $G_n(u)$ makes it clear that it converges to $G_0$ at
speed $n^{-1/(4d - 2)} $: it is easily seen from the dual
representation of $\W$ that for $|u|\le U$
\[\W(G_{n}(u), G_0)\le \pi_{m_0}H(U) n^{-1/(4d - 2)}.\]
This proves \eqref{lan1}.
Moreover, since all other points and proportions are equal, the
transportation distance $ \W(G_{n}(u_1), G_{n}(u_2))$ is equal to the
transportation distance between the last $d$ components. Since those
support points keep the same weights and are homothetic with scale
$n^{-1/(4d - 2)}$ around $\theta _{m_0}$, we have exactly
\[ \W(G_{n}(u_1), G_{n}(u_2)) = \W(G_{1}(u_1), G_{1}(u_2)) n^{-1/(4d - 2)}.\]
This proves \eqref{lan2}.
We now prove local asymptotic normality. In order to shorten
notations, the probability under the mixing distribution $G_n(0)$ will
be denoted by $\Pno$ and the corresponding expectation
$\Eno$. Let $X_{1,n}, \dots, X_{n,n}$ be an
i.i.d. sample with joint density $\prod_{i=1}^nf\left(x_i,G_n(0)\right)$. Then, we can write the log-likelihood ratio as
\begin{equation*}
Z_{n,0}(u)=\ln \left(\frac{\prod_{i=1}^nf(X_{i,n}, G_n(u))}{\prod_{i=1}^nf(X_{i,n}, G_n(0))}\right)=\sum_{i=1}^n \ln \left( 1 + Y_{i,n} \right).
\end{equation*}
with
\begin{equation}
\label{Yinu}
Y_{i,n} (u)= \frac{f(X_{i,n}, G_n(u)) - f(X_{i,n}, G_n(0)) }{ f(X_{i,n}, G_n(0))}.
\end{equation}
By definition, we have
\begin{equation*}
f(x,G_n(u))- f(x,G_0)=\pi_{m_0}\sum_{j=m_0}^m\pi_{j}(u)\left[f(x,\theta_{j,n}(u))-f(x,\theta_{m_0})\right].
\end{equation*}
Moreover, by Taylor expansion with integral remainder,
\begin{align*}
f(x,\theta_{j,n}(u))-f(x,\theta_{m_0})=\sum_{k=1}^{2d-1}&\frac{1}{k!}\left(\frac{h_j(u)}{n^{1/(4d-2)}}\right)^kf^{(k)}(x,\theta_{m_0})\\
&+\int_{\theta_{m_0}}^{\theta_{j,n}(u)}f^{(2d)}(x,\theta)\frac{(\theta_{j,n}(u)-\theta)^{2d-1}}{(2d-1)!}\dd\theta
\end{align*}
so that we get by linearity
\begin{align} \label{FxGnu}f(x,G_n(u))-f(x,G_0)=\pi_{m_0}\left[\sum_{k=1}^{2d-1}\frac{m_k}{k!\,n^{k/(4d-2)}}f^{(k)}(x,\theta_{m_0})+R_n(x,u)\right]
\end{align}
with moments $m_1,\ldots,m_{2d-2}$ that do not depend on $u$ but
$m_{2d-1}=u$ and
\begin{equation}
\label{Rnxu}
R_n(x,u)=\sum_{j=m_0}^m\pi_{j}(u) \int_{\theta_{m_0}}^{\theta_{j,n}(u)}f^{(2d)}(x,\theta)\frac{(\theta_{j,n}(u)-\theta)^{2d-1}}{(2d-1)!}\dd\theta.
\end{equation}
Thus, we can write from \eqref{Yinu}, \eqref{FxGnu} and \eqref{Rnxu}
\begin{equation}
\label{Yinubis}
Y_{i,n}(u)= \pi_{m_0}\left[u n^{-1/2}Z_{i,n}+R_{i,n}(u)-R_{i,n}(0)\right]
\end{equation}
with
\begin{equation*}
R_{i,n}(u)=\frac{R_n(X_{i,n},u)}{f(X_{i,n}, G_n(0))}, \quad Z_{i,n}=\frac{f^{(2d-1)}(X_{i,n},\theta_{m_0})}{(2d-1)!\,f(X_{i,n}, G_n(0))}.
\end{equation*}
For each fixed $n$ and $u$, the $(Y_{i,n}(u),Z_{i,n},R_{i,n}(u)) $ are i.i.d. and centered under $G_n(0)$. Indeed, from \eqref{Yinu}, we have
\[\Eno Y_{i,n}(u)=\int
[f(x,G_n(u))-f(x,G_n(0))]\dd\lambda(x)=0;\]
furthermore by expanding $f$ around $\theta_{m_0}$, we get iteratively using
$(p,q)$-smoothness
that for $k=1,\ldots,2d-1$
\[\Eno \left[\frac{f^{(k)}(X_{i,n},\theta_{m_0})}{f(X_{i,n},G_n(0))}\right]=0\]
and in particular $\Eno Z_{i,n}=0$. And dividing \eqref{FxGnu} by
$f(x,G_n(0))$ gives as a result $\Eno R_{i,n}(u)=0$ for all $u$.
Consider
\begin{equation}
\label{Zn}
Z_n=\pi_{m_0}n^{-1/2}\sum_{i=1}^nZ_{i,n}.
\end{equation}
By Proposition~\ref{tversmix}, there are positive finite constants $c$ and $C$ independent of $n$ such that, for $n$ large enough, $c \le \Eno \left|Z_{1,n}\right|^2 \le C $. Up to taking a subsequence, we may then assume $\Eno \left|Z_{1,n}\right|^2\to \sigma ^2$ for some positive $\sigma $. By Proposition \ref{tversmix} again, we have $ \Eno \left|Z_{1,n}\right|^3\le C'< \infty$ for all $n$ large enough.
We may then apply Lyapunov's theorem \cite[Theorem 23.7]{Bill} to prove that, with $\Gamma = \sigma ^2\pi_{m_0}^2$,
\begin{align}
\label{cvZn}
Z_n &\xrightarrow[]{d} \mathcal{N} (0, \Gamma ).
\end{align}
Indeed, setting $ s_n^2:=\sum_{i=1}^n\Eno
\left|Z_{i,n}\right|^2\sim n\sigma^2$, we see that the Lyapunov condition
\[s_n^{-3}\sum_{i=1}^n \Eno \left|Z_{i,n}\right|^3\sim n^{-1/2}\sigma^{-3}\Eno
\left|Z_{1,n}\right|^3\xrightarrow[n\to\infty]{} 0\]
is satisfied so that $s_n^{-1}\sum_{i=1}^nZ_{i,n}$ converges in
distribution to $\mathcal{N} (0,1)$ and \eqref{cvZn} follows from the
equality $Z_n=\pi_{m_0}\left[\Eno
\left|Z_{1,n}\right|^2\right]^{1/2}s_n^{-1}\sum_{i=1}^nZ_{i,n}$.
Now, to get the convergence in probability of
$Z_{n,0}-uZ_n+\frac{u^2}{2}\Gamma$ to zero, it's enough to show the following convergences for all $u$:
\begin{eqnarray}
\sum_{i=1}^nY_{i,n}(u) -uZ_n & \xrightarrow[]{L^2} & 0, \label{p1}\\
\sum_{i=1}^nY_{i,n}(u)^2-u^2\Gamma& \xrightarrow[]{L^1} &0, \label{p2}\\
\sum_{i=1}^n|Y_{i,n}(u)|^3& \xrightarrow[]{L^1}& 0. \label{p3}
\end{eqnarray}
Indeed, we will have, since $|\ln (1+y)-y+y^2/2|\le C|y|^3$ for $|y|\le
1/2$,
\[\left|Z_{n,0}-\sum_{i=1}^nY_{i,n}(u)+\frac12
\sum_{i=1}^nY_{i,n}(u)^2\right|\le C \sum_{i=1}^n|Y_{i,n}(u)|^3\]
with probability going to one with $n$, so that
\begin{align*}
Z_{n,0}-uZ_n+\frac{u^2}{2}\Gamma = \sum_{i=1}^nY_{i,n}(u)
-uZ_n&+\frac{1}{2}[u^2\Gamma -\sum_{i=1}^nY_{i,n}(u)^2]\\
&+ Z_{n,0}-\sum_{i=1}^nY_{i,n}(u)+\frac12 \sum_{i=1}^nY_{i,n}(u)^2
\end{align*}
will tend to $0$ in probability if \eqref{p1}, \eqref{p2} and \eqref{p3} hold.
To prove \eqref{p1}, note that from \eqref{Yinubis} and \eqref{Zn}
\[\sum_{i=1}^nY_{i,n}(u) -uZ_n =
\pi_{m_0}\left(\sum_{i=1}^n R_{i,n}(u)-\sum_{i=1}^n R_{i,n}(0)\right),\]
and the equalities
\[\Eno\left|\sum_{i=1}^n R_{i,n}(u)\right|^2=\sum_{i=1}^n\Eno R_{i,n}(u)^2= n\Eno|R_{1,n}(u)|^2\]
will give the desired $L^2$-convergence if we can prove that for each $u$,
\begin{equation}
\label{cvR1nu}
n\Eno|R_{1,n}(u)|^2\xrightarrow[n\to\infty]{} 0.
\end{equation}
To this end, we look at the
expression \eqref{Rnxu} of $R_{n}(x,u)$ for fixed $u$. We have
$|\theta _{j,n}(u) - \theta|^{2d - 1} \le H(u)^{2d-1} n^{-1/2}$ for any
$\theta$ in the integrand and any $j$ and $n$. We may thus write
\begin{align*}
\left|R_n(x,u)\right| & \le \sum_{j=m_0}^m \pi_j(u) \int_{\theta _{m_0} - H(u) n^{-\frac{1}{4d-2}}}^{\theta _{m_0} + H(u) n^{-\frac{1}{4d-2}}} \left\lvert f^{(2d)}(x, \theta) \right\rvert \frac{H(u)^{2d-1}n^{-1/2}}{(2d-1)!} \mathrm{d}\theta \\
& \preccurlyeq_u n^{-1/2} \int_{\theta _{m_0} - H(u) n^{-\frac{1}{4d-2}}}^{\theta _{m_0} + H(u) n^{-\frac{1}{4d-2}}} \left\lvert f^{(2d)}(x, \theta) \right\rvert \mathrm{d}\theta.
\end{align*}
Since the measures involved are $\sigma$-finite, we may use Fubini's
theorem. Since moreover $\theta$ in the
integrand lies between $\theta _{m_0}$ and $\theta _{j,n}(u)$, which
converges to $\theta _{m_0}$, we may then apply
Proposition~\ref{tversmix}. For $q\in \lb 1,4\rb$, using the convexity of $x \mapsto x^q$ on the second line, we may write:
\begin{align*}
\Eno\left|R_{1,n}(u)\right|^q & \preccurlyeq_u n^{-q/2} \Eno\left|\frac{ \int_{|\theta-\theta _{m_0} |\preccurlyeq_u n^{-\frac{1}{4d-2}}} \left\lvert f^{(2d)}(x, \theta) \right\rvert \dd\theta }{f(x, G_n(0))}\right|^q \\
& \preccurlyeq_u n^{-\frac{q}{2}-\frac{q-1}{4d-2}} \int_{|\theta-\theta _{m_0}|\preccurlyeq_u n^{-\frac{1}{4d-2}}} \Eno\left|\frac{f^{(2d)}(x, \theta) }{f(x, G_n(0))}\right|^q \dd\theta \\
& \preccurlyeq_u n^{-\frac{q}{2}-\frac{q}{4d-2}} C \\
& \preccurlyeq_u n^{-\frac{q}{2}-\frac{q}{4d-2}}
\end{align*}
with $C$ from Proposition \ref{tversmix}. In particular,
\begin{equation}
\label{cvEn0}
n^{q/2} \Eno\left|R_{1,n}(u)\right|^q \preccurlyeq_u n^{-q/(4d-2)} \to 0.
\end{equation}
Take $q=2$ to obtain \eqref{cvR1nu}; the proof of \eqref{p1} is complete.
To prove \eqref{p2}, note first that from \eqref{Yinubis} and \eqref{Zn},
\begin{align*}
\sum_{i=1}^nY_{i,n}(u)^2-\frac{u^2\pi_{m_0}^2}{n}\sum_{i=1}^nZ_{i,n}^2=\pi_{m_0}^2&\sum_{i=1}^n(R_{i,n}(u)-R_{i,n}(0))^2\\
&+\frac{2u \pi_{m_0}^2}{\sqrt{n}}\sum_{i=1}^n(R_{i,n}(u)-R_{i,n}(0))Z_{i,n}
\end{align*}
so that taking the $L^1$-norm and by the Cauchy-Schwarz inequality,
\begin{multline*}
\Eno\left|
\sum_{i=1}^nY_{i,n}(u)^2-\frac{u^2\pi_{m_0}^2}{n}\sum_{i=1}^nZ_{i,n}^2\right|\preccurlyeq_u
n\Eno
|R_{1,n}(u)|^2+n\Eno
|R_{1,n}(0)|^2\\
+\sqrt{n\Eno |R_{1,n}(u)|^2+n\Eno |R_{1,n}(0)|^2}\sqrt{\Eno Z_{1,n}^2}
\end{multline*}
and the r.h.s. tends to $0$ by \eqref{cvR1nu} and the fact that $\Eno Z_{1,n}^2\to\sigma^2$.
Moreover, setting $\delta_n:=|\Eno Z_{1,n}^2-\sigma^2|$, we have
\begin{eqnarray*}
\Eno \left|n^{-1}\sum_{i=1}^nZ_{i,n}^2-\sigma^2\right|^2&\preccurlyeq &\Eno
\left|n^{-1}\sum_{i=1}^n(Z_{i,n}^2-\Eno
Z_{1,n}^2)\right|^2+\delta_n^2\\
&\preccurlyeq &
n^{-1}\mathrm{Var}_{n,0}(Z_{1,n}^2)+\delta_n^2
\end{eqnarray*}
which goes to zero since $\delta_n\to 0$ by definition and $\Eno
Z_{1,n}^4\le C$ for some constant $C$ by Proposition~\ref{tversmix}.
We thus have
\[\frac1n\sum_{i=1}^nZ_{i,n}^2\xrightarrow[]{L^2} \sigma^2\quad\text{and}\quad\left|\sum_{i=1}^nY_{i,n}(u)^2-u^2\pi_{m_0}^2\frac{1}{n}\sum_{i=1}^nZ_{i,n}^2\right|\xrightarrow[]{L^1} 0,\]
which proves \eqref{p2}.
We turn to the proof of \eqref{p3}. It is easily seen from \eqref{Yinubis} that
\begin{equation*}
\sum_{i=1}^n|Y_{i,n}(u)|^3\preccurlyeq_u
n^{-3/2}\sum_{i=1}^n|Z_{i,n}|^3+\sum_{i=1}^n|R_{i,n}(u)|^3+\sum_{i=1}^n|R_{i,n}(0)|^3
\end{equation*}
so that taking expectations
\begin{equation*}
\Eno \sum_{i=1}^n|Y_{i,n}(u)|^3\preccurlyeq_u
n^{-1/2}\Eno|Z_{1,n}|^3+n\Eno |R_{1,n}(u)|^3+n\Eno|R_{1,n}(0)|^3.
\end{equation*}
But each of the three terms on the r.h.s. tends to $0$: the first one
because $\Eno|Z_{1,n}|^3\le C$ by Proposition~\ref{tversmix}, the
second and the third ones because of \eqref{cvEn0} for $q=3$. Thus
$\sum_{i=1}^n|Y_{i,n}(u)|^3$ converges to $0$ in $L^1$.
\end{proof}
\begin{ex}
Let's take $m=2$, $m_0=1$ and $\theta_{m_0}=0$ so that $G_0 = \delta _0$. Then $G_{1,n} = \frac{1}{2} \left( \delta
_{- 2 n^{-1/6}} + \delta _{2n^{-1/6}} \right) $ and $G_{2,n} =
\frac{4}{5} \delta _{-n^{-1/6}} + \frac{1}{5} \delta _{4
n^{-1/6}}$ both have $0$ as first moment, and $4n^{-1/3}$ as
second moment. The third moments are respectively zero for $G_{1,n}$ and $12n^{-1/2}$ for $G_{2,n}$. With the notation \eqref{mixdist} in the proof of Theorem~\ref{LAN},
we have $G_{1,n}=G_n(0)$ and $G_{2,n}=G_n(12)$. A direct computation gives $\W(G_{1,n} ,G_{2,n})= \frac95\, n^{-1/6} $ for all $n$
and as a by-product of Theorem~\ref{LAN} \eqref{lan3}, $\{G_{1,n}\}$ and $\{G_{2,n}\} $ are contiguous.
\end{ex}
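For the record, the computations behind this example: the second moments are $\frac12(2n^{-1/6})^2+\frac12(2n^{-1/6})^2=4n^{-1/3}$ and $\frac45 n^{-1/3}+\frac15(4n^{-1/6})^2=4n^{-1/3}$; the third moments are $0$ by symmetry and $-\frac45 n^{-1/2}+\frac15(4n^{-1/6})^3=\frac{-4+64}{5}\,n^{-1/2}=12\,n^{-1/2}$; finally, the two distribution functions differ by $\frac12$, $\frac{3}{10}$ and $\frac15$ on the intervals $[-2n^{-1/6},-n^{-1/6})$, $[-n^{-1/6},2n^{-1/6})$ and $[2n^{-1/6},4n^{-1/6})$ respectively, so that \eqref{defW} gives $\W(G_{1,n},G_{2,n})=\big(\tfrac12+3\cdot\tfrac{3}{10}+2\cdot\tfrac15\big)n^{-1/6}=\tfrac95\,n^{-1/6}$.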
\section{The rate $n^{-1/(4(m-m_0) + 2)}$ is optimal}
\label{sec:upperbound}
We follow \citet{Deely&Kruse} and \citeauthor{Chen}'s \citeyearpar{Chen} strategy of estimating $G$ by
minimizing the $L^{\infty}$ distance to the empirical distribution function, see \eqref{gn}. We then need to control this distance in terms of the Wasserstein metric (Theorem \ref{orders}), under appropriate identifiability conditions. To do so, we consider sequences of couples $(G_{1,n}, G_{2,n})$ minimizing the relevant ratios, and express $F(x, G_{1,n}) - F(x,G_{2,n})$ as a sum over their components $F(x, \theta_{j,n})$ and relevant derivatives. A difficulty arises: distinct components $\theta_{j,n}$ may converge to the same $\theta_j$, leading to cancellations in the sums. Overlooking this case was the mistake of \citet{Chen} in the proof of their Lemma 2. We deal with it by using a coarse-graining tree: each node corresponds to a set of components that converge to the same point at a given rate. We may then use Taylor expansions on each node and its descendants, while ensuring that we keep non-zero terms (Lemma \ref{lemrec}).
\subsection{Strong identifiability of order $k$}
In what follows $\|\cdot\|_\infty$ is the supremum norm with respect to $x$ and $\|\cdot\|$ is any fixed norm on finite-dimensional vectors, say the Euclidean norm. Recall that
$ F^{(p)}(x, \theta )$ is the $p$-th derivative of $F(x,\theta)$ with respect to $\theta$.
\begin{defin}
\label{identifiability}
A family $\left\{ F(x,\theta ), \theta \in \Theta \right\} $ of
distribution functions is \emph{$k$-strongly identifiable} if for any finite set of, say, $m$ distinct $\theta_j$, the equality
\begin{align*}
\norm{ \sum_{p=0}^k \sum_{j=1}^m \alpha _{p,j} F^{(p)}(x, \theta _j) }_{\infty} = 0
\end{align*}
implies $\alpha _{p,j} = 0$ for all $p$ and $j$.
\end{defin}
\begin{rem}
\label{infimum}
For a $k$-strongly identifiable family and fixed distinct $\theta _1,\ldots,\theta_m$, we may consider
\begin{align*}
\inf_{\norm{\alpha} = 1} \norm{ \sum_{p=0}^k \sum_{j=1}^m \alpha _{p,j} F^{(p)}(x, \theta _j) }_{\infty}.
\end{align*}
Since the inner norm is a continuous function of $\alpha$ and the
sphere is compact, this infimum is attained, and hence not
zero: for some $c(\theta_1,\ldots,\theta_m) > 0 $, we have:
\begin{align}
\label{alpha_bound}
\norm{ \sum_{p=0}^k \sum_{j=1}^m \alpha _{p,j} F^{(p)}(x, \theta _j) }_{\infty} \geq c(\theta_1,\ldots,\theta_m) \norm{\alpha }.
\end{align}
\end{rem}
\subsection{Main result and corollaries}
\begin{thm}
\label{orders}
Assume that $\left\{ F(x, \theta ), \theta \in \Theta \right\} $ is
$2m$-strongly identifiable and that $F(x, \theta )$ is $2m$-differentiable with respect to
$\theta $ for all $x$, with
\begin{equation}\label{po}
F^{(2m)}(x,\theta _1) - F^{(2m)}(x,\theta _2) = o(\theta _1 - \theta _2)
\end{equation}
uniformly in $x$. Then, for any $G_0\in \mathcal{G}_{m_0}$, there are $\varepsilon >0$ and $\delta>0 $ such that
\begin{align}
\label{local}
\inf_{\substack{G_1,G_2\in\Glm \\G_1\ne G_2\\ \W(G_1,G_0) \vee \W(G_2,G_0)\le \eps} } \frac{\left\lVert F(x, G_1) - F(x, G_2) \right\rVert _{\infty}}{\W(G_1, G_2)^{2m - 2m_0 + 1}} > \delta.
\end{align}
\end{thm}
\begin{cor}
\label{global}
Under the conditions of Theorem~\ref{orders}, there exists $\delta > 0$ such that
\begin{equation}
\label{general}
\inf_{\substack{G_1,G_2\in\Glm\\G_1\ne G_2}} \frac{\left\lVert F(x, G_1) - F(x, G_2) \right\rVert _{\infty}}{\W(G_1, G_2)^{2m-1}} > \delta.
\end{equation}
\end{cor}
\begin{proof}[Proof of Corollary~\ref{global}]
Consider a sequence
$(G_{1,n},G_{2,n})$ in $\Glm^2$ with $G_{1,n}\ne G_{2,n} $ for
each $n$ and such that
\begin{equation}
\label{mi}
\frac{\left\lVert F(x, G_{1,n}) - F(x, G_{2,n}) \right\rVert _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m-1}}\xrightarrow[n\to\infty]{}\inf_{\substack{G_1,G_2\in\Glm\\G_1\ne G_2}} \frac{\left\lVert F(x, G_1) - F(x, G_2) \right\rVert _{\infty}}{\W(G_1, G_2)^{2m-1}} .
\end{equation}
Up to extracting a subsequence, we can assume that
$(G_{1,n},G_{2,n})$ converges to some limit $(G_{1,\infty},G_{2,\infty})$ in the compact set
$\Glm^2$. We distinguish two cases.
Suppose first that $G_{1,\infty}\ne G_{2,\infty}$. Set
$w:=\W(G_{1,\infty}, G_{2,\infty})>0$ and let $x_0$ be such that
$z_0:=|F(x_0,G_{1,\infty})-F(x_0,G_{2,\infty})|>0$; such an $x_0$ exists by identifiability. Then, for all $n$
\begin{equation}
\label{eq:1}
\frac{\left\| F(x, G_{1,n}) - F(x, G_{2,n}) \right\|_{\infty}}{\W(G_{1,n}, G_{2,n})^{2m-1}}\ge \frac{\left| F(x_0, G_{1,n})- F(x_0, G_{2,n}) \right|}{\W(G_{1,n}, G_{2,n})^{2m-1}}.
\end{equation}
The numerator of the r.h.s. of \eqref{eq:1} tends to $z_0$ since $| F(x_0, G_{i,n})- F(x_0,
G_{i,\infty}) |$ is bounded by $K_0 \W(G_{i,n},G_{i,\infty})$ with $K_0=\max_{\theta\in
\Theta}|F^{(1)}(x_0,\theta)|$ ($i=1,2$). And by
assumption, $\W(G_{1,n}, G_{2,n})$ tends to $w$. As a
consequence, \eqref{eq:1} and \eqref{mi} give \eqref{general} by choosing, say, $\delta:=z_0/(2w^{2m-1})$.
Suppose now that $G_{1,\infty}= G_{2,\infty}$. Set $G_0:= G_{1,\infty}$
which is in $\mathcal{G}_{m_0}$ with some $m_0$ at most $m$. Consider $\eps
>0$ and $\delta>0$ as defined in \eqref{local} ; for $n$ large enough,
say $n\ge n_0$, $\W(G_{i,n},G_0)$ ($i=1,2$) is less than $\eps$ so that by \eqref{local}
\[\inf_{n\ge n_0}\frac{\left\lVert F(x, G_{1,n}) - F(x, G_{2,n}) \right\rVert _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m -2m_0+1}}>\delta.\]
Moreover, for $n$ large enough, say $n \ge n_1$, $ \W(G_{1,n}, G_{2,n})$
is small so that
$\W(G_{1,n}, G_{2,n})^{2m -2m_0+1}$ is more than $ \W(G_{1,n},G_{2,n})^{2m-1}$
and thus for all $n\ge n_0+n_1$,
\begin{equation*}
\frac{\left\lVert F(x, G_{1,n}) - F(x, G_{2,n})
\right\rVert _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m-1}}\ge \inf_{n\ge n_0+n_1}\frac{\left\lVert F(x, G_{1,n}) - F(x, G_{2,n})\right\rVert _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m -2m_0+1}}> \delta.
\end{equation*}
\end{proof}
\begin{cor}
\label{main}
Let $\eps >0$. Under the assumptions of Theorem~\ref{orders}, let $G_0\in\Gm0$ and $F_n$ be the empirical distribution of $n$ i.i.d. random variables with distribution $F(x,G_1)$. Let $\widehat{G}_n$ be a near optimal estimator of $G_1$ in the following sense:
\begin{equation}
\label{gn}
\|F(x,\widehat{G}_n)-F_n(x)\|_\infty\le \inf_{G\in \Glm}\|F(x,G)-F_n(x)\|_\infty+\frac{1}{n}.
\end{equation}
Then,
\[W(\widehat{G}_n,G_1)\preccurlyeq \frac{1}{n^{1/(4(m-m_0)+2)}}\]
in probability under $G_1$, uniformly for $G_1\in \Glm$ such that $\W(G_1,G_0)<\eps$.
\end{cor}
\begin{proof}[Proof of Corollary~\ref{main}] We simply follow
\citet[Theorem 2]{Chen}. By the triangle inequality and \eqref{gn} (choose $G=G_1$), we have
\[\|F(x,\widehat{G}_n)-F(x,G_1)\|_\infty\le 2\|F(x,G_1)-F_n(x)\|_\infty+\frac{1}{n}.\]
Moreover by the DKW inequality \citep{Ma}, we have
\[\|F(x,G_1)-F_n(x)\|_\infty\preccurlyeq \frac{1}{\sqrt{n}}, \]
and thus
\begin{equation}
\label{cvpb}
\|F(x,\widehat{G}_n)-F(x,G_1)\|_\infty\preccurlyeq \frac{1}{\sqrt{n}}
\end{equation}
in probability under $G_1$, uniformly in $G_1$.
We also have $\W(\widehat{G}_n,G_1)\to 0$. Otherwise, since $\widehat{G}_n$ is in the compact space $\Glm$, there would be a subsequence $\widehat{G}_{n_k}$ which converges to some $G_2\ne G_1$ and thus we would have for all $x$:
\[|F(x, \widehat{G}_{n_k})-F(x,G_2)|\le \max_{\theta\in\Theta}|F^{(1)}(x,\theta)|\, \W(\widehat{G}_{n_k},G_2)\to 0.\]
This, together with \eqref{cvpb}, would imply $|F(x,G_1)-F(x,G_2)|=0$ for all $x$, which contradicts identifiability.
Consequently, if $\W(G_1,G_0)<\eps$, we have $\W(\widehat{G}_n,G_0)<2\eps$ for $n$ large enough, and by Theorem~\ref{orders} and \eqref{cvpb},
\[\W(\widehat{G}_n,G_1)^{2m-2m_0+1}\preccurlyeq \|F(x,\widehat{G}_n)-F(x,G_1)\|_\infty\preccurlyeq \frac{1}{\sqrt{n}}\]
in probability under $G_1$, uniformly in $G_1\in \Glm$ such that $\W(G_1,G_0)<\eps$.
\end{proof}
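Although our analysis is purely theoretical, the near-minimiser in \eqref{gn} is easy to approximate numerically. The following Python sketch is ours and only illustrative: the function name, the choice of a Gaussian location family $F(x,\theta)=\Phi(x-\theta)$ with $\Phi$ the standard normal distribution function, and all tuning constants are our own assumptions, not part of the results above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_mixing_distribution(x, m, theta_init_range=(-5.0, 5.0),
                            n_starts=10, seed=0):
    # Minimum-distance estimator, cf. eq. (gn), for a Gaussian
    # location mixture F(t, theta) = Phi(t - theta).
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # For a continuous F(., G), the sup over t of |F(t,G) - F_n(t)| is
    # attained at sample points, against both one-sided values of F_n.
    Fn_hi = np.arange(1, n + 1) / n
    Fn_lo = np.arange(0, n) / n

    def sup_dist(params):
        w = np.exp(params[:m]); w /= w.sum()   # weights on the simplex
        FG = (w * norm.cdf(x[:, None] - params[m:])).sum(axis=1)
        return max(np.abs(FG - Fn_hi).max(), np.abs(FG - Fn_lo).max())

    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):  # multistart: the objective is non-convex
        p0 = np.concatenate([rng.normal(size=m),
                             rng.uniform(*theta_init_range, size=m)])
        res = minimize(sup_dist, p0, method="Nelder-Mead",
                       options={"maxiter": 5000})
        if best is None or res.fun < best.fun:
            best = res
    w = np.exp(best.x[:m]); w /= w.sum()
    return w, best.x[m:], best.fun
\end{verbatim}
The sup over $x$ is computed as in the Kolmogorov--Smirnov statistic, and the multistart is needed because the objective is non-convex in the atoms.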
\subsection{Proof of the main Theorem~\ref{orders}}
In all this section, keep in mind the hypothesis of Theorem~\ref{orders}: the family $\left\{ F(x, \theta ), \theta \in \Theta \right\} $ is
$2m$-strongly identifiable and $F(x, \theta )$ is $2m$-differentiable with respect to
$\theta $ for all $x$, with
\begin{equation*}
F^{(2m)}(x,\theta _1) - F^{(2m)}(x,\theta _2) = o(\theta _1 - \theta _2)
\end{equation*}
uniformly in $x$.
Note first that proving \eqref{local} amounts to proving
\[\lim_{n\to \infty}\uparrow \inf_{\substack{G_1,G_2\in\Glm \\G_1\ne G_2\\ \W(G_1,G_0) \vee \W(G_2,G_0)\le 1/n} } \frac{\left\lVert F(x, G_1) - F(x, G_2) \right\rVert _{\infty}}{\W(G_1, G_2)^{2m - 2m_0 + 1}} > \delta.\]
From now on, we consider two sequences $(G_{1,n}), (G_{2,n})$ in
$\Glm$ such that for each $n\ge 1$:
\begin{itemize}
\item $G_{1,n}\ne G_{2,n}$,
\item $\W(G_{i,n},G_0)\le\frac{1}{n}$ ($i=1,2$),
\item \[ \inf_{\substack{G_i\in\Glm \\G_1\ne G_2\\ \W(G_i,G_0)\le \frac{1}{n}} } \!\!\! \frac{\left\| F(x, G_1) - F(x, G_2) \right\| _{\infty}}{\W(G_1, G_2)^{2m - 2m_0 + 1}}\ge \frac{\left\| F(x, G_{1,n}) - F(x, G_{2,n}) \right\| _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m - 2m_0 + 1}}-\frac{1}{n}.\]
\end{itemize}
Consequently, it's enough to prove that
\begin{equation}
\label{localn}
\liminf_{n\to\infty }\frac{\left\| F(x, G_{1,n}) - F(x, G_{2,n}) \right\| _{\infty}}{\W(G_{1,n}, G_{2,n})^{2m - 2m_0 + 1}}>\delta.
\end{equation}
Since $(G_{1,n}), (G_{2,n})$ are two sequences in $\Glm$ and $m$ is
finite, we may and do assume that $(G_{i,n})\subset \mathcal{G}
_{m_{i}}$ for some $m_{i} \le m$ and $i=1,2$. We can then write for
each $n$
\[G_{1,n} =\sum_{j=1}^{m_1}\pi_{1,j,n}\delta_{\theta_{1,j,n}}
\quad\text{and}\quad G_{2,n} =\sum_{j=m_1+1}^{m_1+m_2}\pi_{2,j,n}\delta_{\theta_{2,j,n}}\]
and define for
each $n$ a signed measure $G_n$ of total mass zero:
\begin{equation*}
G_n=G_{1,n}- G_{2,n} =\sum_{j=1}^{m_1+m_2}\pi_{j,n}\delta_{\theta_{j,n}}
\end{equation*}
with
\[(\pi_{j,n} ,\theta_{j,n})=
\begin{cases}
(\pi_{1,j,n},\theta_{1,j,n}) & \text{for $j\in\lb 1,m_1\rb$ }\\
(- \pi_{2,j,n}, \theta_{2,j,n}) & \text{for $j\in\lb m_1+1,m_1+m_2\rb$ }
\end{cases}.
\]
\subsubsection{Scaling sequences}
\label{sec:scale}
Set for short
\[J_o=\lb 1,m_1+m_2\rb.\]
Since $J_o$ is finite, up to selecting a subsequence of $G_n$, we may find a finite number of scaling sequences $\eps_{0,n},\eps_{1,n},\ldots,\eps_{\sm,n}$,
together with integers $\fs(j,k)$ and $\vs(J)$ in $\lb0,\sm\rb$ for any $j,k \in J_o$ and $J\subset J_o$,
such
that
\begin{align}
0\equiv \eps_{0,n} < \eps_{1,n} & < \cdots < \eps_{\sm,n} \equiv 1, & \text{with }\eps_{s,n} & = o\big(\eps_{s+1,n}\big) , \notag \\
\label{sjk}
\left\lvert \theta_{j,n} - \theta_{k,n}
\right\rvert & \asymp \eps_{\fs(j,k),n}, && \\
\label{sJ}
\left\lvert \sum_{j\in J}\pi_{j,n}
\right\rvert & \asymp \eps_{\vs(J),n}. &&
\end{align}
We also define the $\fs$-diameter of $J$ as
\begin{align*}
\fs(J) & = \sup_{j,k \in J} \fs(j,k).
\end{align*}
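For instance, if $m_1+m_2=3$ with $\theta_{1,n}=0$, $\theta_{2,n}=n^{-1}$ and $\theta_{3,n}=1$, one may take $\sm=2$ with $\eps_{1,n}=n^{-1}$ and $\eps_{2,n}=1$, so that $\fs(1,2)=1$, $\fs(1,3)=\fs(2,3)=2$, and the $\fs$-diameters are $\fs(\{1,2\})=1$ and $\fs(J_o)=2$.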
\subsubsection{Defining a tree for the key lemmas}
\label{sec:tree}
Note that the map $\fs(\cdot,\cdot)$ defined by \eqref{sjk} is an ultrametric on $J_o$ (but does not
separate points). Thus we may define a tree $\T$ whose vertices are indexed by the distinct ultrametric closed balls
$J=B_\fs(j,s)$ when $j$ ranges over $J_o$ and $s$ over
$\lb0,\sm\rb$.
Indeed, if $I$ and $J$ are two such balls, and $I \cap J\ne \emptyset$, then either $I\subset J$ or $J\subset I$.
So that, defining the set of descendants and the set of children of $J$ by
\begin{eqnarray*}
\mathrm{Desc}(J) &=& \{I\in\T : I\subsetneq J\}; \\
\mathrm{Child}(J) & =& \{I\in\D{J} : I\subset H\subsetneq J,H\in\T\Longrightarrow
H=I\},
\end{eqnarray*}
we get a tree $\T$ with root $J_o$, and where the parent of $J\ne J_o$ is given by
\[p(J)=K\iff J\in\C{K}.\]
\begin{lem}\label{scW} With the above notations, given the tree $\T$,
\begin{equation}
\label{scaleW}
\W\left(G_{1,n}, G_{2,n}\right) \asymp\max_{J\in \mathrm{Desc}(J_o)} \eps_{\vs(J),n} \eps_{\fs(p(J)),n}.
\end{equation}
\end{lem}
\begin{proof}
See Appendix~\ref{apscW}.
\end{proof}
Set now
\[F(x,G_n):=F(x,G_{1,n})-F(x,G_{2,n})\]
and for $J\subset J_o$,
\[F(x,J):=\sum_{j\in J}\pi_{j,n}F(x,\theta_{j,n}).\]
Note that $F(x,G_n)=F(x,J_o)$.
We now use Taylor expansions along the tree $\T$ to express the order of $F(x,G_n)$ in terms of the scaling functions $\eps_s$.
\begin{lem}\label{lemrec}
Let $J$ be a vertex of the tree $\T$ and set $d_J=\mathrm{card}(J)$. Pick $\theta_J:=\theta_{J,n}$ in the set $\{\theta_{j,n}:j\in J\}$. The subscript $n$ is dropped from the notation below. There is a vector $\eta_J=(\eta_{k,J})_{0\le k\le 2m}$ and a remainder $R(x,J)$ such that
\begin{equation}
\label{hyprec}
F(x,J) = \sum_{k=0}^{2m} \eta_{k,J} \eps_{\fs(J)}^k F^{(k)}(x, \theta _J) + R(x,J),
\end{equation}
where:
\begin{enumerate}[(i)]
\item \label{coeffsbornes}$\displaystyle\eta_{0,J}=\sum_{j\in
J}\pi_j$ and $|\eta_{k,J}|\preccurlyeq 1 $ for all $k\le 2m$;
\item \label{premierscoeffs} Taking subsequences if needed, there is a coefficient
$\eta_{k,J}$ of maximal order among the $d_J$ first
ones. That is, there is an integer $k(J)<d_J$ such that
\begin{equation*}
\|\eta_J \|:= \max_{k\le 2m} | \eta_{k,J} |
\asymp |\eta_{k(J),J}| ;
\end{equation*}
\item \label{borneinfcoeff} The norm $\|\eta_J \|$ is bounded from below (up to a constant) by
a quantity linked to the Wasserstein distance:
\[\|\eta_J \|\succcurlyeq \max\left(\eps _{\vs(J)},
\max_{I\in\mathrm{Desc}(J)}\eps _{\vs(I)} \left(\frac{\eps
_{\fs(p(I))}}{\eps_{\fs(J)}}\right) ^{d_J - 1}\right);\]
\item \label{remaind} The remainder term is negligible. Uniformly in $x$:
\[R(x,J) = o\left(\|\eta_J\|\, \varepsilon_{\fs(J)}^{2m}\right).\]
\end{enumerate}
\end{lem}
\begin{proof}
See Appendix~\ref{aplemrec}.
\end{proof}
\subsubsection{Concluding the proof}
Let us now consider the root $J_o$ of the tree $\T$. Distinguish two cases:
\begin{description}
\item[Case 1.] Assume that $\fs(J_o)<\sm$. We have $\eps_{\fs(J_o)}=o(1)$ and may apply Lemma~\ref{lemrec} directly to $J_o$:
\[F(x,G_n) = F(x,J_o)=\sum_{k=0}^{2m} \eta_{k,J_o} \eps _{\fs(J_o)}^k
F^{(k)}(x, \theta _{J_o}) + R(x,J_o) , \]
where at least one $\eta_{k,J_o}$ satisfies
\begin{eqnarray*}
|\eta_{k,J_o}|&\succcurlyeq&
\max_{I\in\D{J_o}}\eps_{\vs(I)}\left(\frac{\eps_{\fs(p(I))}}{\eps_{\fs(J_o)}}\right)^{d_{J_o}-1}\succcurlyeq \max_{I\in\D{J_o}}\eps_{\vs(I)}\left(\frac{\eps_{\fs(p(I))}}{\eps_{\fs(J_o)}}\right)^{2m-1} ,
\end{eqnarray*}
so that one of the coefficients of the derivatives satisfies
\[|\eta_{k,J_o}\eps _{\fs(J_o)}^k|\succcurlyeq \max_{I\in\D{J_o}}\eps_{\vs(I)}
\eps_{\fs(p(I))}^{2m-1}.\]
Thus, applying the lower bound \eqref{alpha_bound} with a single support point ($m=1$), and since $R(x,J_o)$ is
of smaller order, we get
\[\left\lVert F(x,G_n) \right\rVert _{\infty}\succcurlyeq \max_{I\in\D{J_o}}\eps_{\vs(I)}
\eps_{\fs(p(I))}^{2m-1}\succcurlyeq \W(G_{1,n},G_{2,n})^ {2m-1} , \]
where the last inequality comes from Lemma~\ref{scW}.
\item[Case 2.] Assume that $\fs(J_o)=\sm$. We split $G_n$ over the
first-generation children:
\begin{eqnarray*}
F(x,G_n) = F(x,J_o)&=&\sum_{I\in\C{J_o}}F(x,I)\\
&=& \sum_{I\in\C{J_o}}\left[\sum_{k=0}^{2m} \eta_{k,I} \eps _{\fs(I)}^kF^{(k)}(x, \theta _{I}) + R(x,I) \right].
\end{eqnarray*}
Moreover the $\theta _{I}$ for
$I\in\C{J_o}$ are $\eps$-separated for some $\eps >0$ (see \eqref{sep}), so that the
lower bound \eqref{alpha_bound} can be applied and yields, since the
$R(x,I)$'s are negligible:
\begin{equation*}
\left\| F(x,G_n) \right\| _{\infty}\succcurlyeq
\max_{I\in\C{J_o}}\max_{k\le 2m}|\eta_{k,I}\eps _{\fs(I)}^k|\ge
\max_{I\in\C{J_o}}\max_{k<d_I}|\eta_{k,I}\eps _{\fs(I)}^k|.
\end{equation*}
On the one hand, we have $\max_{k<d_I}|\eta_{k,I}\eps _{\fs(I)}^k|\ge |\eta_{0,I}|$ and since $|\eta_{0,I}|=|\sum_{j\in I}\pi_j|\asymp \eps_{\vs(I)}$, we deduce
\[\left\lVert F(x,G_n) \right\rVert _{\infty}\succcurlyeq
\max_{I\in\C{J_o}}\eps_{\vs(I)}.\]
On the other hand, we have $\max_{k<d_I}|\eta_{k,I}\eps _{\fs(I)}^k|\ge \max_{k<d_I}|\eta_{k,I}|\eps _{\fs(I)}^{d_I-1}$ so that from Lemma~\ref{lemrec} \eqref{premierscoeffs} and \eqref{borneinfcoeff} for $I$, we deduce further
\begin{eqnarray*}
\left\lVert F(x,G_n) \right\rVert _{\infty} &\succcurlyeq&
\max_{I\in\C{J_o}}\|\eta_I\|\eps _{\fs(I)}^{d_I-1}\\
&\succcurlyeq&
\max_{I\in\C{J_o}}\max_{H\in\D{I}}\eps_{\vs(H)}
\eps_{\fs(p(H))}^{d_I-1}.
\end{eqnarray*}
After recalling that $\eps_{\fs(J_o)}=1$ and setting $d_\star=\max_{I\in\C{J_o}}d_I$, we may combine these two lower bounds and get
\begin{eqnarray*}
\left\lVert F(x,G_n) \right\rVert _{\infty}&\succcurlyeq& \max_{I\in\C{J_o}}\max_{H\in\D{I}\cup\{I\}}\eps_{\vs(H)}
\eps_{\fs(p(H))}^{d_I-1}\\
&\succcurlyeq&
\max_{H\in\D{J_o}}\eps_{\vs(H)}\eps_{\fs(p(H))}^{d_\star-1}\\
&\succcurlyeq& \W(G_{1,n},G_{2,n})^ {d_\star-1} ,
\end{eqnarray*}
where the last inequality comes from Lemma~\ref{scW}. Since $G_{1,n}$ and $G_{2,n}$ converge to $G_0\in \Gm0$, the root $J_o$ (of cardinality $m_1+m_2$) has at least $m_0$ children with at least two elements. Hence the cardinality $d_\star$ of the biggest child is bounded by $m_1+m_2-2(m_0-1)$, and therefore
\[ \left\lVert F(x,G_n) \right\rVert _{\infty}\succcurlyeq \W(G_{1,n},G_{2,n})^ {m_1+m_2-2m_0+1}\succcurlyeq \W(G_{1,n},G_{2,n})^ {2m-2m_0+1}.\]
\end{description}
Finally, if $m_0$ is more than one, we are in the second case (where $\fs(J_o)=\sm$) and if $m_0$ is one, the two cases can occur. In either case, we have
\[\left\lVert F(x,G_n) \right\rVert _{\infty}\succcurlyeq \W(G_{1,n},G_{2,n})^ {2m-2m_0+1}\]
so that \eqref{localn} is proved.
\section{A class of $k$-strongly identifiable families}
\label{sec:class}
We expect strong identifiability to be rather generic, so that the above theory should often be meaningful. In particular, \citet[Theorem~3]{Chen} has proved that
location and scale families with smooth densities are $2$-strongly identifiable. The theorem and the proof straightforwardly generalise to our case. We merely state the result.
\begin{thm}
\label{thm_identifiability}
Let $k\ge 1$. Let $f$ be a probability density with respect to the Lebesgue measure. Assume that $f$ is $k-1$ times differentiable with
\[\lim_{x\to\pm \infty}f^{(p)}(x)=0\text{ for } p\in \lb 0,k-1\rb.\]
Set $F(x,\theta)=\int_{-\infty}^x f(y-\theta)dy$. Then the family $\{F(x,\theta),\theta\in\Theta\}$ is $k$-strongly identifiable. If $\Theta\subset (0,\infty)$, the result stays true with $F(x,\theta)=\frac{1}{\theta}\int_{-\infty}^x f\left(\frac{y}{\theta}\right)dy$.
\end{thm}
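For example, the standard Gaussian density $f(x)=\e^{-x^2/2}/\sqrt{2\pi}$ satisfies these assumptions for every $k$, since each derivative $f^{(p)}$ is a polynomial multiple of $f$ and hence vanishes at $\pm\infty$; the corresponding location family is therefore $k$-strongly identifiable for all $k\ge 1$, so that the results of Sections~\ref{sec:lowerbound} and~\ref{sec:upperbound} apply in particular to Gaussian location mixtures.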
\section*{\large\bf Acknowledgments}
A. Mord\`a has been supported by the OCEVU Labex (Grant No. ANR-11-LABX-0060)
and by the A*MIDEX project (Project No. ANR-11-IDEX-0001-02), funded by the
``{\it Investissements d'Avenir}'' French government program managed by the ANR.
\newpage
\renewcommand{\Large}{\large}
\section{Introduction}
Neutrino physics is at the forefront of current theoretical and
experimental research in astro, nuclear, and particle physics. The
presence of neutrinos, being chargeless particles, can only be
inferred by detecting the secondary particles they create when
colliding and interacting with matter. Nuclei are often used as
neutrino detectors, thus the interpretation of neutrino data heavily
relies on detailed and quantitative knowledge of the features of the
neutrino-nucleus interaction. At low and intermediate energies, the
neutrino-nucleus cross section is dominated by QE and single pion
production processes. Those processes are largely dominated by
mechanisms where the gauge boson ($W^{\pm}, Z^0$) inside the nuclear
medium is absorbed by one nucleon, or excites a $\Delta(1232)$
resonance which subsequently decays into a $N\pi$ pair, respectively.
There is a general consensus among theorists that a simple Fermi
Gas (FG) model, widely used in the analysis of neutrino oscillation
experiments, fails to provide a satisfactory description of the
measured cross sections, and inclusion of further nuclear effects is
needed~\cite{sakuda}. In the first part of the talk, I will focus on
the most relevant nuclear ingredients affecting QE inclusive and
semi-inclusive processes. Next, I will examine the structure of the
amplitude for neutrino-induced pion production off the nucleon, and the role
played by chiral symmetry.
\section{QE Inclusive and Semi-Inclusive Reactions}
The double differential cross section, with respect to the outgoing
lepton kinematical variables, for the process $\nu_l (k) +\, A_Z \to
l^- (k^\prime) + X $ is given in the Laboratory (LAB) frame
by\footnote{Extensions to antineutrino or NC induced processes are
straightforward. Details can be found in Refs.~\cite{ccjuan,ncjuan}.}
\begin{equation}
\frac{d^2\sigma_{\nu l}}{d\Omega(\hat{k^\prime})dE^\prime_l} =
\frac{|\vec{k}^\prime|}{|\vec{k}~|}\frac{G^2}{4\pi^2}
L_{\mu\sigma}W^{\mu\sigma} \label{eq:sec}
\end{equation}
with $\vec{k}$ and $\vec{k}^\prime~$ the LAB lepton momenta, $G$ the
Fermi constant and $L$ and $W$ the leptonic and hadronic tensors,
respectively. The hadronic tensor includes all sorts of non-leptonic
vertices and is determined by the $W^+-$boson selfenergy,
$\Pi^{\mu\rho}_W(q)$, in the nuclear medium. We follow here the
formalism of Ref.~\cite{GNO97}, and we evaluate the selfenergy of a
neutrino moving in infinite nuclear matter of density $\rho$. We
obtain,
\begin{eqnarray}
W^{\mu\sigma}_s (q) &\propto& \Theta(q^0)
\int \frac{d^3 r}{2\pi}~ {\rm Im}\left [ \Pi_W^{\mu\sigma}
+ \Pi_W^{\sigma\mu} \right ] (q;\rho(r))\label{eq:wmunus}\\
W^{\mu\sigma}_a (q) &\propto& \Theta(q^0)
\int \frac{d^3 r}{2\pi}~{\rm Re}\left [ \Pi_W^{\mu\sigma}
- \Pi_W^{\sigma\mu}\right] (q;\rho(r)) \label{eq:wmunua}
\end{eqnarray}
with $W^{\mu\sigma}= W^{\mu\sigma}_s + {\rm i} W^{\mu\sigma}_a$,
$q=k-k'$, and where we have used the Local Density Approximation
(LDA), which assumes a FG model for the nucleus to start
with\footnote{Large basis shell model schemes provide a very
accurate description of the nuclear ground state wave
functions~\cite{SM}, which is unnecessary when one is dealing with
inclusive processes and nuclear excitation energies above, let us
say, 50 MeV~\cite{capture}. Besides, the description of high-lying
excitations necessitates the use of large model spaces and this
often leads to computational difficulties, making the approach
applicable essentially only for neutrino energies in the range of
tens of MeV.}. The virtual $W$ gauge boson can be absorbed by one
nucleon, giving rise to a 1p1h nuclear excitation and leading to the QE contribution to
the nuclear response function. In this case, the $W-$selfenergy is
determined, besides the $W^\pm NN$ vertex, by the imaginary part of
the isospin-asymmetric Lindhard function. We work in asymmetric
nuclear matter, with different Fermi sea levels for protons
and for neutrons. Explicit expressions can be found
in~\cite{ccjuan}. In what follows, we will consider further improvements
on this simple framework:
\begin{itemize}
\item We enforce a \underline{correct energy balance} of the
different studied processes and consider the effect of the
\underline{Coulomb} field of the nucleus acting on the ejected
charged lepton.
\item \underline{RPA and SRC}: We take into account polarization
effects by substituting the particle-hole (1p1h) response by an RPA
response consisting of a series of ph and $\Delta$h excitations. We
use a Landau-Migdal ph-ph interaction~\cite{Sp77}: $V =
c_{0}\left\{ f_{0}+f_{0}^{\prime}\vec{\tau}_{1}\vec{\tau}_{2}+
g_{0}\vec{\sigma}_{1}\vec{\sigma}_{2}+g_{0}^{\prime}
\vec{\sigma}_{1}\vec{\sigma}_{2} \vec{\tau}_{1}\vec{\tau}_{2}
\right\}$. In the vector-isovector channel ($\vec{\sigma}
\vec{\sigma} \vec{\tau} \vec{\tau}$ operator) we use an
interaction~\cite{GNO97} with explicit $\pi-$meson (longitudinal)
and $\rho-$meson (transverse) exchanges, that also includes SRC and
$\Delta(1232)$ degrees of freedom. RPA effects are extremely
important, as confirmed by several groups~\cite{crpa}, and should be
definitely taken into account in any neutrino oscillation
analysis~\cite{nuint07}. As an example, we show in the left
panel of Fig.~\ref{fig:rpa-fsi} results for
$^{16}$O at intermediate energies~\cite{ccjuan}.
\item \underline{SF+FSI:} We take into account the modification of the
nucleon dispersion relation in the medium by using nucleon
propagators properly dressed with a realistic
self-energy~\cite{FO92}. Thus, we compute the imaginary part of the
Lindhard function (ph propagator) using realistic particle and hole
spectral functions (SF's). The effect is twofold: first, by using the
hole SF, we go beyond a simple FG of non-interacting nucleons, and we
include some interactions among the nucleons. Second, the particle SF
accounts
for the interaction of the ejected nucleon with the final nuclear
state; this is most commonly called Final State Interaction (FSI) in
the literature. We show some results in the left panel of
Fig.~\ref{fig:rpa-fsi}, taken from Ref.~\cite{ccjuan}. We find a
sizeable reduction of the strength at the QE peak, which is slightly
shifted, and an enhancement of the high energy transfer tail. For
integrated cross sections both effects partially compensate. We
find a qualitative and quantitative agreement with the results of
Benhar et al.~\cite{sakuda} and of the Giessen
group~\cite{Buss:2007ar}.
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[height=4.cm,width=6cm]{japon2.eps}\hspace{2cm}
\includegraphics[height=4.cm,width=6cm]{7a.eps}
\end{center}
\caption{\footnotesize Left: $\nu_e$ inclusive QE differential cross
sections in $^{16}$O as a function of the transferred energy, for a
fixed transferred momentum. We show results with and without RPA and
SRC and with (SF) and without (NOREL) SF+FSI effects. Right:
$^{40}Ar(\nu,\nu+p)$ cross section as a function of the kinetic
energy of the final proton. The dashed histogram shows results
without rescattering (PWIA) and the solid one has been obtained from
a MC cascade simulation. }\label{fig:rpa-fsi}
\end{figure}
We have estimated the theoretical uncertainties of our model by
propagating, via Monte Carlo (MC), the uncertainties of its different
inputs into differential and total cross sections~\cite{ccjuan-errors}. We
conclude that our approach provides QE $\nu(\bar \nu)$--nucleus cross
sections with relative errors of about 10-15\%, while uncertainties
affecting the ratios $\sigma(\mu)/\sigma(e)$ and
$\sigma(\bar\mu)/\sigma(\bar e)$ would certainly be smaller, not larger
than about 5\%, and mostly coming
from deficiencies of the local FG picture of the
nucleus~\cite{ccjuan-errors}.
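For concreteness, the following is a minimal sketch of such an
error-propagation scheme, assuming a generic model function and
independent Gaussian uncertainties on its inputs; the names and the toy
model are illustrative only and do not correspond to the actual inputs
of Ref.~\cite{ccjuan-errors}:
\begin{verbatim}
import numpy as np

def mc_propagate(model, central, errors, n_samples=10000, seed=0):
    # Sample the inputs from Gaussians and propagate them through the
    # model; the spread of the outputs estimates the theoretical
    # uncertainty of the predicted cross section.
    rng = np.random.default_rng(seed)
    pars = rng.normal(central, errors, size=(n_samples, len(central)))
    values = np.array([model(p) for p in pars])
    return values.mean(), values.std()

# Toy "cross section", quadratic in two couplings:
model = lambda p: p[0] ** 2 + 0.5 * p[0] * p[1]
mean, sigma = mc_propagate(model, central=[1.0, 1.2], errors=[0.1, 0.2])
print("relative error: %.1f%%" % (100 * sigma / mean))
\end{verbatim}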
Finally, in the QE region, we have also studied CC and NC nucleon
emission processes which play an important role in the analysis of
oscillation experiments. In particular, they constitute the unique
signal for NC neutrino driven reactions. We use a MC simulation method
to account for the rescattering of the outgoing nucleon~\cite{GNO97}.
The first step is the gauge boson ($W^{\pm}$ and $Z^0$ ) absorption in
the nucleus\footnote{Some calculations in the literature use the PWIA
and DWIA, with or without relativistic effects. The PWIA
constitutes a poor approximation, since it neglects all types of
interactions between the ejected nucleon and the residual nuclear
system. The DWIA describes the ejected nucleon as a solution of the
Dirac or Schr\"odinger equation with an optical potential obtained
by fitting elastic proton--nucleus scattering data. The imaginary
part accounts for the absorption into unobserved channels. This
scheme is inadequate for studying nucleon emission processes, where the
state of the final nucleus is totally unobserved, and thus all final
nuclear configurations, either in the discrete or on the continuum,
contribute. The distortion of the nucleon wave function by a
complex optical potential removes all events where the nucleons
collide inelastically with other nucleons. Thus, in DWIA
calculations, the nucleons that interact inelastically are lost, whereas
in the physical process they simply come off the nucleus with a
different energy, angle, and maybe charge, and they should
definitely be taken into account.}. Different distributions for both
NC and CC processes can be found in~\cite{ncjuan}; as an example, we
show here results for NC nucleon emission from argon (right panel of
Fig.~\ref{fig:rpa-fsi}). The rescattering of the outgoing nucleon
produces a depletion of the high energy side of the spectrum, but the
scattered nucleons clearly enhance the low energy region. Our results
compare well with those of Ref.~\cite{Buss:2007ar} obtained by means
of a transport model.
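To illustrate the spirit of such a simulation, the following toy sketch
(a drastic one-dimensional simplification with an assumed constant
density and cross section, not the actual code of Ref.~\cite{GNO97})
propagates an ejected nucleon through the nucleus, letting it share its
energy with struck nucleons, which are then propagated in turn:
\begin{verbatim}
import numpy as np

def toy_cascade(e_kin, radius=4.0, density=0.16, sigma=3.0, seed=1):
    # Lengths in fm, cross section in fm^2, energies in arbitrary
    # units. Each collision transfers a random energy fraction to a
    # struck nucleon, which then propagates as well.
    rng = np.random.default_rng(seed)
    dx = 0.05
    p_coll = density * sigma * dx   # collision probability per step
    active, emitted = [(e_kin, 0.0)], []
    while active:
        e, x = active.pop()
        while x < radius and e > 0:
            x += dx
            if rng.random() < p_coll:
                frac = rng.random()
                active.append((e * frac, x))  # struck nucleon
                e *= 1.0 - frac
        emitted.append(e)
    return emitted  # depleted at high energies, enhanced at low ones
\end{verbatim}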
\begin{figure}[th]
\includegraphics[width=6.7cm,height=4.5cm]{anl.eps}\hspace{1cm}
\includegraphics[width=6.7cm,height=4.5cm]{t2k.eps}
\caption{ Left: Flux averaged $\pi N$ invariant mass distribution of events
for the $\nu_\mu p \to \mu^- p \pi^+$ reaction. Dashed lines stand
for the contribution of the $\Delta$ pole term with $C_5^A(0)=1.2$
(GTR) and $M_{A\Delta}= 1.05$ GeV. Dashed--dotted and central solid
lines are obtained when the full model of Ref.~\cite{prd} is
considered with $C_5^A(0)=1.2,\, M_{A\Delta}= 1.05$ GeV
(dashed-dotted) and with the best fit parameters $C_5^A(0)=0.867,\,
M_{A\Delta}= 0.985$ GeV (solid). For this latter case, we also show
the 68\% CL bands.
Right: CC coherent pion production differential cross section.
}\label{fig:res5}
\end{figure}
\section{Chiral Symmetry and Neutrino Pion
Production off the Nucleon}
The neutrino pion production off the nucleon is traditionally
described in the literature by means of the weak excitation of the
$\Delta(1232)$ resonance and its subsequent decay into $N\pi$. Here,
we present results from a model~\cite{prd} that includes also some
background terms required by chiral symmetry. The contribution of
these terms is sizeable and leads to significant effects in total and
partially integrated pion production cross sections at intermediate
energies. We re-adjust the $C_5^A(q^2)$ form factor, which controls
the largest term of the $\Delta-$axial contribution, and find
corrections of the order of 30\% to the off-diagonal
Goldberger-Treiman relation (GTR), when the $\nu_\mu p \to \mu^-p\pi^+$ ANL
$q^2-$differential cross section data~\cite{anl} are
fitted (left panel of Fig.~\ref{fig:res5}). Thus, we find a
substantially smaller contribution of the $\Delta$ pole mechanism than
in other approaches~\cite{weak-pi}, which has an important effect on the
CC and NC nuclear coherent pion production cross sections
(Fig.~\ref{fig:res5}). We have also extended the model to describe two-pion
production processes near threshold~\cite{twopion}.
\section{More details about the Hilbert space and the representation basis}\label{appa}
In this work, we used the \emph{group element states} $\left\{\left|g\right\rangle\right\}_{g\in G}$ for the local (link) Hilbert spaces of the gauge field, and defined
the group transformations, right $\Theta_g\left|h\right\rangle = \left|hg^{-1}\right\rangle$ and left $\widetilde{\Theta}_g\left|h\right\rangle = \left|g^{-1}h\right\rangle$ on them.
One may also use
the \emph{representation basis} \cite{Zohar2015}, whose states $\left|jmn\right\rangle$ are labeled by an irreducible representation $j$ and two identifiers within the representation, $m$ and $n$,
corresponding to left and right degrees of freedom.
The transition from $\left|g\right\rangle$ to $\left|jmn\right\rangle$ is given by
\begin{equation}
\left\langle g | jmn \right\rangle = \sqrt{\frac{\text{dim}\left(j\right)}{\left|G\right|}} D^{j}_{mn}\left(g\right)
\label{changebasis}
\end{equation}
which is simply a generalization of Wigner's formula for the eigenfunctions of the isotropic rigid rotator (Wigner matrices) \cite{Rose1995,Edmonds1996}.
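For instance, for the Abelian group $\mathbb{Z}_N$ all the irreducible
representations are one dimensional, $D^{j}\left(q\right)=e^{2\pi i jq/N}$
with $j,q\in\left\{0,...,N-1\right\}$, and (\ref{changebasis}) reduces to a
discrete Fourier transform, $\left\langle q | j \right\rangle = e^{2\pi i jq/N}/\sqrt{N}$.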
To better understand the meaning of $m$ and $n$, let us use (\ref{changebasis}) to see how these states transform under the group:
\begin{equation}
\Theta_g \left|jmn\right\rangle = \left|jmn'\right\rangle D^{j}_{n'n}\left(g\right)\quad \quad\tilde{\Theta}_g\left|jmn\right\rangle = D^{j}_{mm'}\left(g\right)\left|jm'n\right\rangle
\end{equation}
One particular state in this basis is the singlet state - $\left|000\right\rangle$, corresponding to the trivial representation. It is invariant under any group transformation.
This is the only representation state we use in the main text, as it is used for the gauging procedure. Note that (\ref{changebasis}) implies that
\begin{equation}
\left\langle g |000\right\rangle = \left|G\right|^{-1/2}
\end{equation}
which we used in the main text.
We also introduced the \emph{group element operators} \cite{Zohar2015},
\begin{equation}
U^{j}_{mn} = \int dg D_{mn}^{j}\left(g\right)\left|g\right\rangle\left\langle g \right|.
\label{Udef}
\end{equation}
These are matrices of operators: the matrix indices, $m,n$, refer to a linear space called either group, color or gauge space, on which the group transformations act. Each such matrix element is an operator on the local
Hilbert space on the link.
It is clear from the definition that the different matrix elements of $U^{j}_{mn}$ commute, and hence one may define functions of these operators as if they were matrices of numbers.
Note that
\begin{equation}
\left|jmn\right\rangle=\sqrt{\text{dim}\left(j\right)}U^{j}_{mn}\left|000\right\rangle.
\end{equation}
In the main text, we stated that local gauge symmetry is simply invariance under the gauge transformations
\begin{equation}
\hat\Theta_g\left(\mathbf{x}\right) = \underset{k=1...d}{\prod}\left(\widetilde{\Theta}_g\left(\mathbf{x},k\right)\Theta^{\dagger}_g\left(\mathbf{x}-\hat{\mathbf{k}},k\right)\right)
\check{\theta}^{\dagger}_g\left(\mathbf{x}\right)
\end{equation}
involving a vertex and all the links starting and ending there.
A gauge invariant state $\left|\Psi\right\rangle$ satisfies $\hat\Theta_g\left(\mathbf{x}\right)\left|\Psi\right\rangle=\left|\Psi\right\rangle$ for each $\mathbf{x}\in\mathbb{Z}^d,g\in G$.
If $G$ is a Lie group, one may define its left and right generators, $L_a,R_a$ respectively, satisfying the group's algebra
\begin{equation}
\begin{aligned}
&\left[R_a,R_b\right]=if_{abc}R_c \\
&\left[L_a,L_b\right]=-if_{abc}L_c \\
&\left[R_a,L_b\right]=0
\end{aligned}
\end{equation}
as well as the matrix representation $j$ of the generators, $T^j_a$, with
\begin{equation}
\left[T^j_a,T^j_b\right]=if_{abc}T^j_c
\end{equation}
where $f_{abc}$ are the group's structure constants.
These can be used for expressing the transformation operators, as well as the representation matrices, using the group parameters $\phi_a\left(g\right)$:
\begin{equation}
\begin{aligned}
\Theta_g &= e^{i\phi_a\left(g\right) R_a} \\
\widetilde\Theta_g &= e^{i\phi_a\left(g\right) L_a}\\
D^{j}\left(g\right) &= e^{i \phi_a\left(g\right) T^j_a}
\end{aligned}
\end{equation}
Formally, one may also define operators $\hat\phi_a$,
such that
\begin{equation}
U^j_{mn} = \left(e^{i \hat\phi_a T^j_a}\right)_{mn}
\end{equation}
$\hat\phi_a$ play the role of the vector potential on a link, which is not a well-defined quantity on a lattice (where one uses the group elements instead of the algebra). Therefore,
the group element operator is the lattice analog of a Wilson line along a link.
As transformation generators, the $R,L$ operators satisfy
\begin{equation}
\begin{aligned}
\left[R_a,U^{j}_{mn}\right]&=U^{j}_{mn'}\left(T_a\right)^{j}_{n'n} \\
\left[L_a,U^{j}_{mn}\right]&=\left(T_a\right)^{j}_{mm'} U^{j}_{m'n}
\end{aligned}
\end{equation}
It is also possible to express the gauge transformation in this way, and define
\begin{equation}
\hat\Theta_g\left(\mathbf{x}\right) = e^{i \phi_a\left(g\right) G_a\left(\mathbf{x}\right)}
\end{equation}
with
\begin{equation}
G_a\left(\mathbf{x}\right) = \underset{k=1...d}{\sum}\left(L_a\left(\mathbf{x},k\right) - R_a\left(\mathbf{x-\hat{k}},k\right)\right)-Q_a\left(\mathbf{x}\right)
\end{equation}
where $Q_a\left(\mathbf{x}\right)$ are the fermionic charges (see, e.g. \cite{Zohar2015,Zohar2015b,Zohar2016a}). Then, gauge invariance (without static charges) implies
\begin{equation}
G_a\left(\mathbf{x}\right)\left|\Psi\right\rangle = 0 \quad \forall \mathbf{x},a
\end{equation}
This equation is
known as the Gauss law; it allows one to interpret $R_a,L_a$ as the (right and left) electric fields.
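For instance, in the Abelian case of a compact $U(1)$ theory the left and right generators coincide, $L=R\equiv E$, and the Gauss law takes its familiar lattice form,
\begin{equation}
\left(\underset{k=1...d}{\sum}\left(E\left(\mathbf{x},k\right)-E\left(\mathbf{x-\hat{k}},k\right)\right)-Q\left(\mathbf{x}\right)\right)\left|\Psi\right\rangle=0
\end{equation}
i.e., a lattice divergence of the electric field equal to the local charge.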
\section{Gauging in the representation basis and the truncation of the physical Hilbert spaces}\label{appb}
We mentioned in the main text that the fermionic construction imposes a truncation of the gauge field physical Hilbert space. Full details may be found in \cite{Zohar2015b,Zohar2016a}; here we briefly explain why.
In the gauging procedure, we defined the following unitary operators, that entangle the gauge field and virtual fermions on a link:
\begin{equation}
\mathcal{U}\left(\mathbf{x},k\right) = \int dg \left|g\left(\mathbf{x},k\right)\right\rangle\left\langle g\left(\mathbf{x},k\right)\right|\otimes \mathcal{U}_g\left(\mathbf{x},k\right)
\end{equation}
(see Fig. \ref{fig1}), leading us to the gauged state
\begin{equation}
\left|\Psi\right\rangle =\left|G\right|^{N_{\text{links}}/2} \underset{\mathbf{x},k}{\prod}\omega\left(\mathbf{x},k\right) \underset{\mathbf{x},k}{\prod}\mathcal{U}\left(\mathbf{x},k\right)\left|000\right\rangle_{\mathbf{x},k}
\underset{\mathbf{x}}{\prod} A\left(\mathbf{x}\right) \left|\Omega\right\rangle
\label{GS}
\end{equation}
The transformation $\mathcal{U}\left(\mathbf{x},k\right)$, when acting on the virtual creation operators of $A\left(\mathbf{x}\right)$, simply rotates them, within $A$, by the matrix $U$ ($\overline{U}$) on the link, for an even (odd)
$\mathbf{x}$:
$c_m^{j,\alpha\dagger}\left(\mathbf{x},+k\right) \rightarrow U^{j}_{mn}\left(\mathbf{x},k\right)c_n^{j,\alpha\dagger}\left(\mathbf{x},+k\right)$ for $\mathbf{x} \in e$ (or $\overline{U}^{j}_{mn}\left(\mathbf{x},k\right)c_n^{j,\alpha\dagger}\left(\mathbf{x},+k\right)$ for $\mathbf{x} \in o$).
In other words, the physical electric field is identified with a virtual electric field, defined by the virtual fermions. The action of $U^{j}_{mn}\left(\mathbf{x},k\right)c_n^{j,\alpha\dagger}\left(\mathbf{x},+k\right)$ (and multiple actions thereof, as such operators
appear in the exponential of $A$) on the product of fermionic vacuum and gauge field singlet, excite both the virtual and physical electric fields in a correlated way. However, the virtual fields are truncated, as they are created
from a finite set of fermionic operators, which truncates the physical Hilbert space on the link as well. The truncation is done in the representation basis: the physical gauge field states on a link are created from the singlet
$\left|000\right\rangle$ with products of $U$ ($\overline{U}$) matrix elements, accompanied by virtual fermionic operators, which due to the fermionic statistics, impose the truncation.
For example, in the $U(1)$ case of \cite{Zohar2015b},
there are two virtual fermionic modes on each edge, corresponding to the representations $j=\pm1$, that may lead together to the total representations $0,\pm1$ - virtual electric field configurations, with $0$ corresponding
to either no fermions or both being present (the fermionic statistics forces the creation of a singlet), and $\pm1$ to the presence of a single fermion. This truncates the physical Hilbert space on the link, making it three dimensional,
with electric field $0,\pm1$ (not differentiating between the two possible ways to obtain a virtual zero field). As $\mathbb{Z}_3$ is a subgroup of $U(1)$, the
PEPS is also invariant under it, and in general, one could use the same state $\left|\Psi\right\rangle$ for studying $U(1)$ models with an electric field truncation $\left|E\right|\leq\ell$ as well as $\mathbb{Z}_{2\ell+1}$.
The difference between the two cases will arise for the observables whose expectation values and correlations are computed - i.e., whether they respect the $U(1)$ symmetry or only that of the subgroup $\mathbb{Z}_{2\ell+1}$.
In the $SU(2)$ case of \cite{Zohar2016a}, once again there are two virtual modes per edge,
corresponding to the two spin half states. These may realize only the representations $0,1/2$ on the link (1 is prevented by the fermionic statistics), and the Hilbert space of the link is truncated to
$\left|jmn\right\rangle$ states with $j=0,1/2$ - a five dimensional space. In this case, however, we could not use the same state construction for studying a subgroup, as $SU(2)$ has no non-Abelian subgroup of order five.
\section{More on the transformation properties of gauged fermionic gaussian states}\label{appc}
In the main text, we wrote that the fermionic gaussian state $\left|\psi\left(\mathcal{G}\right)\right\rangle$ describes fermions coupled to a static background field $\mathcal{G}$. Let us see why.
Using Eqs. (\ref{transver},\ref{translink}), we obtain that
\begin{equation}
\underset{\mathbf{x}}{\prod}\check{\theta}^{\dagger}_{h\left(\mathbf{x}\right)}\left(\mathbf{x}\right)
\left|\psi\left(\mathcal{G}\right)\right\rangle = \left|\psi\left(\mathcal{G'}\right)\right\rangle
\end{equation}
where $\mathcal{G'} = \left\{g'\left(\mathbf{x},k\right) = h^{-1}\left(\mathbf{x}\right)g\left(\mathbf{x},k\right)h\left(\mathbf{x+\hat{k}}\right)\right\}$: under fermionic transformations with arbitrary, position-dependent group elements $\left\{h\left(\mathbf{x}\right)\right\}$,
the gauge field configuration transforms as $\mathcal{G} \rightarrow \mathcal{G'}$, and
$\left|\psi\left(\mathcal{G}\right)\right\rangle$ transforms, indeed, as a state with a background field configuration $\mathcal{G}$. Note that, as the group elements used as parameters for this transformation
are vertex-dependent, the transformation is local, and could be performed similarly on only a few vertices (or a single one), in all cases giving rise to a physically equivalent state.
This also implies the unsurprising result that the globally invariant state corresponds to a state without a background field - that is, one in which $\mathcal{G}$ is the identity element ($e$) everywhere:
\begin{equation}
\left|\psi_0\right\rangle = \left|\psi\left(e\right)\right\rangle
\end{equation}
The state $\left|\Psi\right\rangle$ is gauge invariant by construction, as shown in \cite{Zohar2015b,Zohar2016a}. Here, however, we shall give an alternative proof for that, using the group element states, and the physical interpretation of $\left|\psi\left(\mathcal{G}\right)\right\rangle$ presented above. Let us apply a local gauge transformation with a group element $h$ at the vertex $\mathbf{x}$ and use the transformation properties of $\left|\psi\left(\mathcal{G}\right)\right\rangle$ and $\left|\mathcal{G}\right\rangle$ :
\begin{widetext}
\begin{equation}
\hat\Theta_h\left(\mathbf{x}\right)\left|\Psi\right\rangle =
\int \mathcal{DG} \underset{k=1...d}{\prod}\left(\widetilde{\Theta}_h\left(\mathbf{x},k\right)\Theta^{\dagger}_h\left(\mathbf{x-e}_k,k\right)\right)\left|\mathcal{G}\right\rangle
\check{\theta}^{\dagger}_h\left(\mathbf{x}\right)\left|\psi\left(\mathcal{G}\right)\right\rangle
=\int \mathcal{DG} \left|\mathcal{G'}\right\rangle \left|\psi\left(\mathcal{G'}\right)\right\rangle =\int \mathcal{DG'} \left|\mathcal{G'}\right\rangle \left|\psi\left(\mathcal{G'}\right)\right\rangle= \left|\Psi\right\rangle
\end{equation}
\end{widetext}
where we have used the invariance of the integration measure under a unitary coordinate change - as the gauge transformation is.
\section{The gaussian formalism}\label{appd}
Fermionic gaussian states are fully characterized by their covariance matrix \cite{Bravyi05}. In the main text, we defined it for the physical fermions, but obviously it could be extended to the virtual modes as well. Besides that, the gaussian formalism becomes extremely simple when one, instead of using Dirac fermions, uses a Majorana formulation - i.e., for every fermionic mode $a_i$, one defines the two hermitian Majorana operators
\begin{equation}
\gamma_i^{(1)}=\left(a_i+a_i^{\dagger}\right) \quad ; \quad \gamma_i^{(2)}=i\left(a_i-a_i^{\dagger}\right)
\end{equation}
If we unite all the $2N$ Majorana modes of a system containing $N$ fermionic modes under $\left\{\gamma_a\right\}_{a=1}^{2N}$, we can write that they satisfy the algebra
\begin{equation}
\left\{\gamma_a,\gamma_b\right\} = 2 \delta_{ab}
\end{equation}
and define the covariance matrix for a gaussian state $\left|\phi\right\rangle$ in Majorana terms
\begin{equation}
\Gamma_{ab} = \frac{i}{2}\left\langle \left[\gamma_a,\gamma_b\right] \right\rangle
=\frac{i}{2} \frac{\left\langle \phi \right| \left[\gamma_a,\gamma_b\right] \left|\phi\right\rangle}{\left\langle \phi |\phi\right\rangle}
\end{equation}
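As a simple example, for a single mode one obtains
\begin{equation}
\Gamma_{\text{empty}} = \left(
\begin{array}{cc}
0 & 1 \\
-1 & 0 \\
\end{array}
\right)
\quad ; \quad
\Gamma_{\text{occupied}} = \left(
\begin{array}{cc}
0 & -1 \\
1 & 0 \\
\end{array}
\right)
\end{equation}
for the empty ($a\left|\phi\right\rangle=0$) and occupied ($a^{\dagger}\left|\phi\right\rangle=0$) states, respectively.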
To obtain the covariance matrix of the state $\left|\psi\left(\mathcal{G}\right)\right\rangle$, one can use a gaussian map \cite{Bravyi05,Kraus2010}. This is done as follows. Define the state
\begin{equation}
\left|A\right\rangle = \underset{\mathbf{x}}{\prod} A\left(\mathbf{x}\right) \left|\Omega\right\rangle
\end{equation}
and denote its density matrix by $\rho_A$. It is a gaussian product state, that does not introduce mixing among different vertices. Thus, its covariance matrix $M$ will be a direct sum of the covariance matrices of each vertex, $M\left(\mathbf{x}\right)$
\begin{equation}
M = \underset{\mathbf{x}}{\bigoplus}M\left(\mathbf{x}\right)
\end{equation}
and in the translationally invariant case, one will simply have that $M\left(\mathbf{x}\right) = M_0$.
We can thus express $\left|\psi_0\right\rangle$ as
\begin{equation}
\left|\psi_0\right\rangle = \underset{\mathbf{x},k}{\prod}\omega\left(\mathbf{x},k\right) \left|A\right\rangle
\end{equation}
If we denote the density matrix corresponding to the unnormalized operators $\omega\left(\mathbf{x},k\right)$ by $\rho_B$, we can write the density matrix of physical fermions corresponding to $\left|\psi_0\right\rangle$ as
\begin{equation}
\rho_{0} = \text{Tr}_{V}\left( \rho_B \rho_A\right)
\end{equation}
which involves a fermionic partial trace on the virtual modes, that has to be carefully defined \cite{Bravyi05}.
We reorder the covariance matrix $M$ such that it has the following form:
\begin{equation}
M = \left(
\begin{array}{cc}
M_A & M_B \\
-M^T_B & M_D \\
\end{array}
\right)
\end{equation}
where $M_A$ is a block that corresponds to correlations of physical fermions with themselves, $M_D$ corresponds to the same for virtual fermions, and $M_B$ is for mixed correlations. We also construct the covariance matrix
$\Gamma_{\text{in}}$ corresponding to $\rho_B$; its dimension is equal to that of $M_D$, as it only involves virtual fermions, and we order the matrix following the same ordering of the virtual modes as in $M$. Then, the covariance matrix
of the output physical state $\left|\psi_0\right\rangle$ is given by \cite{Bravyi05}
\begin{equation}
\Gamma_{\text{out}} = M_A + M_B\left(M_D - \Gamma_{\text{in}}\right)^{-1}M_B^T
\end{equation}
(this holds only if $\rho_B$ is pure, which is the case here). If the PEPS is translationally invariant, one can decompose everything into momentum blocks using a Fourier transform \cite{Kraus2010}, but we are interested
here in a more general case.
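As an illustration of how simple this formalism is in practice, the
following is a minimal sketch of the gaussian map, with generic numpy
arrays standing for the blocks of $M$ and for $\Gamma_{\text{in}}$; the
two-mode demo values are illustrative only:
\begin{verbatim}
import numpy as np

def gaussian_map(M_A, M_B, M_D, Gamma_in):
    # Covariance matrix of the output of a fermionic gaussian map:
    # Gamma_out = M_A + M_B (M_D - Gamma_in)^(-1) M_B^T
    return M_A + M_B @ np.linalg.solve(M_D - Gamma_in, M_B.T)

# Demo: one physical and one virtual mode, no mixing (M_B = 0),
# so the output simply reproduces the physical block M_A.
empty = np.array([[0.0, 1.0], [-1.0, 0.0]])
occupied = -empty
print(gaussian_map(empty, np.zeros((2, 2)), empty, occupied))
\end{verbatim}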
Now, we turn to the states $\left|\psi\left(\mathcal{G}\right)\right\rangle$, which are gaussian too and admit the same formalism. We have
\begin{equation}
\left|\psi\left(\mathcal{G}\right)\right\rangle = \underset{\mathbf{x},k}{\prod}\omega\left(\mathbf{x},k\right) \underset{\mathbf{x},k}{\prod}\mathcal{U}_{g\left(\mathbf{x},k\right)}\left(\mathbf{x},k\right) \left|A\right\rangle
\end{equation}
One can now interpret the gauging transformation $\mathcal{U}\left(\mathcal{G}\right)\equiv\underset{\mathbf{x},k}{\prod}\mathcal{U}_{g\left(\mathbf{x},k\right)}\left(\mathbf{x},k\right)$ as acting either to the right, on $\left|A\right\rangle$ (as in the main text), giving rise to the
state $\left|A\left(\mathcal{G}\right)\right\rangle = \mathcal{U}\left(\mathcal{G}\right)\left|A\right\rangle$, with the density matrix
\begin{equation}
\rho_A\left(\mathcal{G}\right) = \mathcal{U}\left(\mathcal{G}\right) \rho_A \mathcal{U}^{\dagger}\left(\mathcal{G}\right)
\end{equation}
or the other way around, on the projection operators,
giving rise to
\begin{equation}
\rho_B\left(\mathcal{G}\right) = \mathcal{U}^{\dagger}\left(\mathcal{G}\right) \rho_B \mathcal{U}\left(\mathcal{G}\right)
\end{equation}
Then, one obtains that the output state is
\begin{equation}
\rho\left(\mathcal{G}\right) = \text{Tr}_{V}\left( \rho_B \rho_A\left(\mathcal{G}\right)\right) = \text{Tr}_{V}\left( \rho_B\left(\mathcal{G}\right) \rho_A\right)
\end{equation}
The covariance matrix of the output gauged state will be
\begin{equation}
\begin{aligned}
\Gamma_{\text{out}}\left(\mathcal{G}\right) &= M_A\left(\mathcal{G}\right) + M_B\left(\mathcal{G}\right)\left(M_D\left(\mathcal{G}\right) - \Gamma_{\text{in}}\right)^{-1}M_B\left(\mathcal{G}\right)^T
\\&=M_A + M_B\left(M_D - \Gamma_{\text{in}}\left(\mathcal{G}\right)\right)^{-1}M_B^T
\end{aligned}
\end{equation}
where either $M$ or $\Gamma_{\text{in}}$ are transformed with respect to the gauge configuration $\mathcal{G}$. This is a very simple procedure: such transformations are mapped to orthogonal transformations on Majorana
covariance matrices \cite{Kraus2010,Zohar2015b}. Thus the covariance matrix elements may be calculated very easily using the gaussian formalism.
A crucial quantity for our method is $\left\langle \psi\left(\mathcal{G}\right) | \psi\left(\mathcal{G}\right)\right\rangle$. This can also be calculated very simply with the gaussian formalism:
\begin{equation}
\left\langle \psi\left(\mathcal{G}\right) | \psi\left(\mathcal{G}\right)\right\rangle \propto \left\langle A\left(\mathcal{G}\right) \right| \rho_B \left| A \left(\mathcal{G}\right) \right\rangle =
\left\langle A \right| \rho_B \left(\mathcal{G}\right) \left| A \right\rangle
\label{normcalc}
\end{equation}
where the proportionality arises because the $\omega$ operators are not normalized projectors; this is irrelevant for our purposes, as we are eventually interested in
$p\left(\mathcal{G}\right) = \frac{\left\langle\psi\left(\mathcal{G}\right)|\psi\left(\mathcal{G}\right)\right\rangle}{ \int \mathcal{D}\mathcal{G}' \left\langle\psi\left(\mathcal{G}'\right)|\psi\left(\mathcal{G}'\right)\right\rangle}$. Thus one does not have to worry about the normalization in (\ref{normcalc}), and obtains simply
\begin{equation}
\left\langle \psi\left(\mathcal{G}\right) | \psi\left(\mathcal{G}\right)\right\rangle = \text{Tr}\left( \rho_B \rho_A\left(\mathcal{G}\right)\right) = \text{Tr}\left( \rho_B\left(\mathcal{G}\right) \rho_A\right)
\label{nrmtrace}
\end{equation}
which once again could be calculated using the gaussian techniques of \cite{Bravyi05}.
In the case of pure gauge theories, the norm calculation simplifies even further, as it involves no physical fermions. Thus $M=M_D$, and therefore (\ref{nrmtrace}) simply corresponds to the overlap of two gaussian states involving the same modes, e.g. $\rho_B\left(\mathcal{G}\right)$ and $\rho_A $, if we choose to act with the gauge transformation on the bonds. This has a very simple formula involving the covariance matrices \cite{Bravyi05,Mazza2012a},
\begin{equation}
\text{Tr}\left( \rho_B\left(\mathcal{G}\right) \rho_A\right) = \sqrt{\text{det}\left(\frac{1-\Gamma_{\text{in}}\left(\mathcal{G}\right)M_D}{2}\right)}
\label{overlap}
\end{equation}
which we used in our numerical illustration, dealing with a pure gauge theory, described next.
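For completeness, a minimal numpy sketch of Eq. (\ref{overlap}),
assuming the two covariance matrices are given; the single-mode demo
values are illustrative:
\begin{verbatim}
import numpy as np

def pure_overlap(Gamma_in, M_D):
    # Overlap of two pure fermionic gaussian states from their
    # Majorana covariance matrices, Eq. (overlap).
    n = M_D.shape[0]
    return np.sqrt(np.linalg.det((np.eye(n) - Gamma_in @ M_D) / 2))

empty = np.array([[0.0, 1.0], [-1.0, 0.0]])
occupied = -empty
print(pure_overlap(empty, empty))     # identical states  -> 1.0
print(pure_overlap(empty, occupied))  # orthogonal states -> 0.0
\end{verbatim}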
\section{Using the method for further observables}\label{appe}
Gauge invariant operators - which could be used as physical observables in gauge theories - can be of several forms.
For example,
they may involve a matrix product of $U$,$U^{\dagger}$ operators along a path -
closed and traced (Wilson loop), or enclosed within the appropriate fermionic operators. Wilson loops were discussed in the main text. Here we shall comment on the other type - "meson" operators:
oriented strings of group element operators along an open path $C$, connecting fermionic operators on its edges, e.g.
\begin{equation}
\mathcal{M}\left(\mathbf{x},\mathbf{y},C\right)=\psi^{\dagger}_m\left(\mathbf{x}\right) \left(\underset{\left\{\mathbf{z},k\right\}\in C}{\prod} U\left(\mathbf{z},k\right) \right)_{mn} \psi_n\left(\mathbf{y}\right),
\end{equation}
where $C$ connects $\mathbf{x,y}$ and the $U$ matrices may carry a $\dagger$ depending on the orientation, as in the Wilson loop case.
Again, we use the fact that $\left|\mathcal{G}\right\rangle$ is an eigenstate of the gauge field part. The fermionic part may be expressed in terms of the covariance matrix, as
\begin{equation}
\left\langle \psi\left(\mathcal{G}\right)\right| \psi^{\dagger}_m \left(\mathbf{x}\right)\psi_n\left(\mathbf{y}\right)\left|\psi\left(\mathcal{G}\right)\right\rangle=
-i\mathcal{R}^{\mathcal{G}}_{nm}\left(\mathbf{y},\mathbf{x}\right)\left\langle\psi\left(\mathcal{G}\right)|\psi\left(\mathcal{G}\right)\right\rangle
\end{equation}
We define
\begin{equation}
F_{\mathcal{M} \left(\mathbf{x},\mathbf{y},C\right)}\left(\mathcal{G}\right)=-i\left(\underset{\left\{\mathbf{z},k\right\}\in C}{\prod}D\left(g\left(\mathbf{z},k\right)\right)\right)_{mn} \mathcal{R}^{\mathcal{G}}_{nm}\left(\mathbf{y},\mathbf{x}\right)
\end{equation}
and obtain
\begin{equation}
\left\langle \mathcal{M} \left(\mathbf{x},\mathbf{y},C\right) \right\rangle =
\int \mathcal{DG} F_{\mathcal{M} \left(\mathbf{x},\mathbf{y},C\right)}\left(\mathcal{G}\right) p\left(\mathcal{G}\right)
\end{equation}
allowing one to use Monte-Carlo efficiently as well.
Another class of gauge invariant operators consists of those that are diagonal in the representation basis.
They include, for example, local operators (on
a link) of the form
\begin{equation}
\underset{j}{\sum}f_j \Pi_j \equiv \underset{j}{\sum}f_j \left|jmn\right\rangle\left\langle jmn\right|
\end{equation}
where $f_j$ are some representation-dependent coefficients. A conventional choice for Lie groups is the Casimir
operator: for example, for $U(1)$, $f_j = j^2$, and for $SU(2)$, $f_j = j\left(j+1\right)$. This operator is then understood as the electric energy on the link (since it corresponds to the square of the electric field),
which allows one to write down the common \emph{Electric Hamiltonian}
\begin{equation}
H_E = \underset{\mathbf{x},k}{\sum}\underset{j}{\sum}f_j \Pi_j\left(\mathbf{x},k\right)
\end{equation}
One can also use the method presented in the main text for such operators. Consider a gauge field operator $O\left(\mathcal{L}\right)$, which is not diagonal in terms of group element states,
acting only on a finite set of neighboring links $\mathcal{L}$ (in the electric energy case, it acts on a single link). Then,
\begin{widetext}
\begin{equation}
\left\langle\mathcal{G'}\right|O\left(\mathcal{L}\right)\left|\mathcal{G}\right\rangle =
\underset{\left\{\mathbf{x},k\right\}\notin\mathcal{L}}{\prod}\delta\left(g\left(\mathbf{x},k\right),g'\left(\mathbf{x},k\right)\right)\underset{\left\{\mathbf{x},k\right\}\in\mathcal{L}}
{\prod}f_{O\left(\mathcal{L}\right)}\left(g\left(\mathbf{x},k\right),g'\left(\mathbf{x},k\right)\right)
\end{equation}
and if we define
\begin{equation}
F_{O\left(\mathcal{L}\right)}\left(\mathcal{G}\right)
=
\int \mathcal{DG'}
\left\langle\mathcal{G'}\right| O\left(\mathcal{L}\right) \left|\mathcal{G}\right\rangle \left\langle \psi\left(\mathcal{G'}\right) | \psi\left(\mathcal{G}\right)\right\rangle
/\left\langle\psi\left(\mathcal{G}\right)|\psi\left(\mathcal{G}\right)\right\rangle
\end{equation}
we obtain a Monte-Carlo applicable form for such observables too:
\begin{equation}
\left\langle O\left(\mathcal{L}\right) \right\rangle = \int \mathcal{DG} F_{O\left(\mathcal{L}\right)}\left(\mathcal{G}\right) p\left(\mathcal{G}\right)
\end{equation}
(as $O\left(\mathcal{L}\right)$ is local, the $\mathcal{DG'}$ integration is simple and involves only a few integration variables, and $\left\langle \psi\left(\mathcal{G'}\right) | \psi\left(\mathcal{G}\right)\right\rangle$
can be computed efficiently as well).
Out of these building blocks one could construct the Hamiltonian of a lattice gauge theory and therefore calculate its expectation value for GGPEPS.
A particular type of Wilson loop, $W\left(C\right)$, for the case in which $C$ is a single plaquette (unit square), is the \emph{plaquette operator},
\begin{equation}
P\left(\mathbf{x},k_1,k_2\right)=\text{Tr}\left(
U\left(\mathbf{x},k_1\right)
U\left(\mathbf{x+e}_1,k_2\right)
U^{\dagger}\left(\mathbf{x+e}_2,k_1\right)
U^{\dagger}\left(\mathbf{x},k_2\right)\right)
\end{equation}
\end{widetext}
which allows us to write down the \emph{Magnetic Hamiltonian},
\begin{equation}
H_B = \underset{\mathbf{x},k_1<k_2}{\sum}\left(P\left(\mathbf{x},k_1,k_2\right)+P^{\dagger}\left(\mathbf{x},k_1,k_2\right)\right)
\end{equation}
Altogether, we obtain the \emph{Kogut-Susskind Hamiltonian} for lattice pure-gauge theories \cite{KogutSusskind,KogutLattice},
\begin{equation}
H_{KS} = H_E + H_B
\end{equation}
The dynamical matter terms which could be added to these are either mass terms - local fermionic terms, whose expectation values do not even require Monte-Carlo integration, as they do not involve the gauge field - or gauge-matter interactions, which are mesonic operators along a single link.
\section{More details on the $\mathbb{Z}_3$ illustration}\label{appf}
The $\mathbb{Z}_3$ parametrization used by us for the illustration is taken from \cite{Zohar2015b}, where it parameterized $U(1)$ gauge invariant states, with translation and rotation invariance, in two space dimensions.
However, as $\mathbb{Z}_3$ is a subgroup of $U(1)$, and since the fermionic construction imposes a truncation of the link Hilbert spaces to three dimensions, one can use the same parametrization for $\mathbb{Z}_3$ as well.
Generally, the parametrization of \cite{Zohar2015b} included dynamical fermions, but for the current work we only needed the pure gauge case, which is what we shall describe here. Therefore the state involves no physical fermions, and the $A$ operators are only used for connecting the gauge field Hilbert spaces on the links in a gauge invariant way.
There are two virtual modes on each edge (and therefore the bond dimension is $4$): $c^{j\dagger}\left(\mathbf{x},\pm k\right)$, corresponding to single copies of the representations $j=\pm1$ (no need to use $\alpha$), and $k=1,2$ - altogether eight modes. The operator $A$ is constructed using the operators
\begin{widetext}
\begin{equation}
A = \text{exp}\left(\left(
\begin{array}{c}
c^{+\dagger}\left(\mathbf{x},-1\right) \\
c^{-\dagger}\left(\mathbf{x},+ 1\right) \\
c^{-\dagger}\left(\mathbf{x},+2\right) \\
c^{+\dagger}\left(\mathbf{x},-2\right) \\
\end{array}
\right)^T
\left(
\begin{array}{cccc}
0 & y & z/\sqrt{2} & z/\sqrt{2} \\
-y & 0 & -z/\sqrt{2} & z/\sqrt{2} \\
-z/\sqrt{2} & z/\sqrt{2} & 0 & y \\
-z/\sqrt{2} & -z/\sqrt{2} & -y & 0 \\
\end{array}
\right)
\left(
\begin{array}{c}
c^{-\dagger}\left(\mathbf{x},-1\right) \\
c^{+\dagger}\left(\mathbf{x},+ 1\right) \\
c^{+\dagger}\left(\mathbf{x},+2\right) \\
c^{-\dagger}\left(\mathbf{x},-2\right) \\
\end{array}
\right)\right)
\end{equation}
and
\begin{equation}
V\left(\mathbf{x},k\right) = \text{exp}\left(\left(\sigma_x\right)_{jj'}c^{j\dagger}\left(\mathbf{x},k\right)c^{j'\dagger}\left(\mathbf{x+e}_k,-k\right)\right)
\end{equation}
which satisfy the desired symmetry properties defined in the main text (Eqs. (\ref{transver},\ref{translink})).
One may see that the parameter $y$ connects horizontal virtual degrees of freedom to horizontal ones (and vertical to vertical), and is therefore responsible for creating straight flux lines. $z$, on the other hand,
connects horizontal degrees of freedom to vertical ones, and is therefore responsible for the corners.
For a detailed derivation of this result, the reader may refer to \cite{Zohar2015b}, where everything is explained and proven in detail. Since we slightly changed the notation relative to that reference, in order to
generalize to higher dimensions, let us briefly comment on the notation and convention changes.
First, we changed the signs on the labels of the virtual fermions on odd sites. This does not change the operator $A$ (exchanging "positive" and "negative" operators, in the notation of \cite{Zohar2015b}), but gives a
different form to the projection operators $\omega$: in \cite{Zohar2015b}, they connected modes from opposite edges of the links, labeled by the same sign, and here the signs are opposite. This also affects the gauging procedure, as originally it was done without staggering the gauge field, and here we stagger.
Another difference is in the names of the virtual modes on a given vertex $\mathbf{x}$. This is summarized in the table below.
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Notation in \cite{Zohar2015b} & Current notation, $\mathbf{x}$ even & Current notation, $\mathbf{x}$ odd \\
\hline
\hline
$\psi^{\dagger}\left(\mathbf{x}\right)$ & $\psi^{\dagger}\left(\mathbf{x}\right)$ & $\psi^{\dagger}\left(\mathbf{x}\right)$ \\
\hline
$l^{\dagger}_+\left(\mathbf{x}\right) $& $c^{+\dagger}\left(\mathbf{x},-1\right)$ & $c^{-\dagger}\left(\mathbf{x},-1\right)$ \\
\hline
$l^{\dagger}_-\left(\mathbf{x}\right)$ & $c^{-\dagger}\left(\mathbf{x},- 1\right)$ & $c^{+\dagger}\left(\mathbf{x},- 1\right)$ \\
\hline
$r^{\dagger}_+\left(\mathbf{x}\right) $& $c^{+\dagger}\left(\mathbf{x},+ 1\right)$ & $c^{-\dagger}\left(\mathbf{x},+ 1\right)$ \\
\hline
$r^{\dagger}_-\left(\mathbf{x}\right)$ & $c^{-\dagger}\left(\mathbf{x},+ 1\right)$ & $c^{+\dagger}\left(\mathbf{x},+ 1\right)$ \\
\hline
$u^{\dagger}_+\left(\mathbf{x}\right) $& $c^{+\dagger}\left(\mathbf{x},+ 2\right)$ & $c^{-\dagger}\left(\mathbf{x},+ 2\right)$ \\
\hline
$u^{\dagger}_-\left(\mathbf{x}\right)$ & $c^{-\dagger}\left(\mathbf{x},+ 2\right)$ & $c^{+\dagger}\left(\mathbf{x},+ 2\right)$ \\
\hline
$d^{\dagger}_+\left(\mathbf{x}\right) $& $c^{+\dagger}\left(\mathbf{x},- 2\right)$ & $c^{-\dagger}\left(\mathbf{x},- 2\right)$ \\
\hline
$d^{\dagger}_-\left(\mathbf{x}\right)$ & $c^{-\dagger}\left(\mathbf{x},- 2\right)$ & $c^{+\dagger}\left(\mathbf{x},- 2\right)$ \\
\hline
\end{tabular}
\end{center}
\end{widetext}
In the process of gauging, we simply put phases on the virtual fermions:
$c^{j\dagger}\left(\mathbf{x},+k\right) \rightarrow \underset{q=-1,0,1}{\sum}e^{\pm 2 \pi i (-1)^{x_1+x_2} j q /3}c^{j\dagger}\left(\mathbf{x},+k\right)\otimes\left|q\right\rangle\left\langle q \right|$
where $(-1)^{x_1+x_2}$ is due to the staggering; The $q$s are the variables which are summed
in the Monte-Carlo procedure. The relevant $U$ operators on the links are $U^{j=1} = U^{j=-1\dagger} = \sum_{q=-1}^{1}e^{2\pi i q/3}\left|q\right\rangle\left\langle q\right|$.
As it is a pure gauge theory, the probability function could be calculated through the overlap formula for two (virtual) fermionic gaussian states, as explained above (Eq. (\ref{overlap})).
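As an aside, a minimal sketch of the Monte-Carlo sampling itself could
look as follows - a single-site Metropolis update over the $q$
variables, with the weight $p\left(\mathcal{G}\right)$ supplied as a
black box, e.g. through Eq. (\ref{overlap}); all names are
illustrative:
\begin{verbatim}
import numpy as np

def metropolis_z3(n_links, weight, n_sweeps=1000, seed=0):
    # Sample Z_3 configurations q in {-1,0,1}^n_links with
    # probability proportional to weight(q) = <psi(G)|psi(G)>.
    rng = np.random.default_rng(seed)
    q = rng.integers(-1, 2, size=n_links)
    w = weight(q)
    samples = []
    for _ in range(n_sweeps):
        for link in range(n_links):
            proposal = q.copy()
            proposal[link] = rng.integers(-1, 2)
            w_new = weight(proposal)
            if w == 0 or rng.random() < w_new / w:
                q, w = proposal, w_new
        samples.append(q.copy())
    return samples  # estimate <F> by averaging F over the samples
\end{verbatim}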
\section{Introduction \label{sec:Intro}}
To study string transformations, given the success of finite-state
automata and the associated theory of regular languages, a natural
starting point is the model of finite-state transducers. A finite-state
transducer emits output symbols at every step, and given an input
string, the corresponding output string is the concatenation of all
the output symbols emitted by the machine during its execution. Such
transducers have been studied since the 1960s, and it has been known
that the transducers have very different properties compared to the
acceptors: \emph{two-way} transducers are strictly more expressive
than their one-way counterparts, and the post-image of a regular
language under a two-way transducer need not be a regular language
\cite{AHU69}. For the class of transformations computed by two-way
transducers, \cite{CJ77} establishes closure under composition, \cite{Gu80}
proves decidability of functional equivalence, and \cite{EH01} shows
that their expressiveness coincides with MSO-definable string-to-string
transformations of \cite{Cou92}. As a result, \cite{EH01} justifiably
dubbed this class as \emph{regular} string transformations. Recently,
an alternative characterization using one-way machines was found for
this class: \emph{streaming string transducers} \cite{AC10} (and
their more general and abstract counterpart of \emph{cost register
automata} \cite{CRA-LICS}) process the input string in a single left-to-right
pass, but use multiple write-only registers to store partially computed
output chunks that are updated and combined to compute the final answer.
There has been a resurgent interest in such transducers in the formal
methods community with applications to learning of string transformations
from examples \cite{Gul11}, sanitization of web addresses \cite{VHLMB12},
and algorithmic verification of list-processing programs \cite{SST-POPL}.
In the context of these applications, we wish to focus on regular
transformations, rather than the subclass of classical one-way transducers,
since the gap includes many natural transformations such as string
reversal and swapping of substrings, and since one-way transducers
are not closed under basic operations such as choice.
For our formal study, we focus on \emph{cost functions}, that is,
(partial) functions that map strings over a finite alphabet to values
from a monoid $\tuple{\mathbb{D},+,0}$. While the set of output strings with
concatenation is a typical example of such a monoid, cost functions
can also associate numerical values (or rewards) with sequences of
events, with possible application to \emph{quantitative} analysis
of systems \cite{CDH-10} (it is worth pointing out that the notion
of regular cost functions proposed by Colcombet is quite distinct
from ours \cite{Col09}). An example of such a numerical domain is
the set of integers with addition. In the case of a \emph{commutative}
monoid, regular functions have a simpler structure, and correspond
to \emph{unambiguous weighted automata} (note that weighted automata
are generally defined over a semiring, and are very extensively studied---see
\cite{DKV09} for a survey, but with no results directly relevant
to our purpose). In another interesting example of a numerical monoid, each value is
a cost-discount pair, and the (non-commutative) addition is the discounted
sum operation. The traditional use of discounting in systems theory
allows only discounting of \emph{future} events, and corresponds to
cost functions computed by classical one-way transducers, while regular
functions allow more general forms of discounting (for instance, discounting
of both past and future events).
A classical result in automata theory characterizes regular languages
using \emph{regular expressions}: regular languages are exactly the
sets that can be inductively generated from base languages (empty
set, empty string, and alphabet symbols) using the operations of union,
concatenation, and Kleene-{*}. Regular expressions provide a robust
foundation for specifying regular patterns in a \emph{declarative}
manner, and are widely used in practical applications. The goal of
this paper is to identify the appropriate base functions and combinators
over cost functions for an analogous algebraic and machine-independent
characterization of regularity.
We begin our study by defining base functions and combinators that
are the analogs of the classical operations used in regular expressions.
The base function $\const Ld$ maps strings $\sigma$ in the base
language $L$ to the constant value $d$, and is undefined when $\sigma\notin L$.
Given cost functions $f$ and $g$, the \emph{conditional choice}
combinator $\choice fg$ maps an input string $\sigma$ to $\funcapptrad f{\sigma}$,
if this value is defined, and to $\funcapptrad g{\sigma}$ otherwise;
the \emph{split sum} combinator $\splitsum fg$ maps an input string
$\sigma$ to $\funcapptrad f{\sigma_{1}}+\funcapptrad g{\sigma_{2}}$
if the string $\sigma$ can be split \emph{uniquely} into two parts
$\sigma_{1}$ and $\sigma_{2}$ such that both $\funcapptrad f{\sigma_{1}}$
and $\funcapptrad g{\sigma_{2}}$ are defined, and is undefined otherwise;
and the \emph{iterated sum} $\itersum f$ is defined so that if the
input string $\sigma$ can be split uniquely such that $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$
and each $\funcapptrad f{\sigma_{i}}$ is defined, then $\funcapptrad{\itersum f}{\sigma}$
is $\funcapptrad f{\sigma_{1}}+\funcapptrad f{\sigma_{2}}+\cdots+\funcapptrad f{\sigma_{k}}$,
and is undefined otherwise. The combinators conditional choice, split
sum, and iterated sum are the natural analogs of the operations of
union, concatenation, and Kleene-{*} over languages, respectively.
The uniqueness restrictions ensure that the input string is parsed
in an unambiguous manner while computing its cost, and thus, the result
of combining two (partial) functions remains a (partial) \emph{function}.
Our first result is that when the operation $+$ is commutative, regular
functions are exactly the functions that can be inductively generated
from base functions using the combinators of conditional choice, split
sum, and iterated sum. The proof is fairly straightforward, and builds
on the known properties of cost register automata, their connection
to unambiguous weighted automata in the case of commutative monoids,
and the classical translation from automata to regular expressions.
When the operation $+$ is not commutative, which is the case when
the output values are strings themselves and addition corresponds
to string concatenation, we need additional combinators to capture
regularity. First, in the non-commutative case, it is natural to introduce
symmetric \emph{left-additive} versions of split sum and iterated
sum. Given cost functions $f$ and $g$, the \emph{left-split sum}
$\lsplitsum fg$ maps an input string $\sigma$ to $\funcapptrad g{\sigma_{2}}+\funcapptrad f{\sigma_{1}}$
if the string $\sigma$ can be split uniquely into two parts $\sigma_{1}$
and $\sigma_{2}$ such that both $\funcapptrad f{\sigma_{1}}$ and
$\funcapptrad g{\sigma_{2}}$ are defined. The \emph{left-iterated
sum} is defined analogously, and in particular, the transformation
that maps an input string to its \emph{reverse} is simply the left-iterated
sum of the function that maps each symbol to itself. It is easy to
show that regular functions are closed under these left-additive combinators.
The \emph{sum} $\repsum fg$ of two functions $f$ and $g$ maps a
string $\sigma$ to $\funcapptrad f{\sigma}+\funcapptrad g{\sigma}$.
Though the sum combinator is not necessary for completeness in the
commutative case, it is natural for cost functions. For example, the
\emph{string copy} function that maps an input string $\sigma$ to
the output $\sigma\sigma$ is simply the sum of the identity function
over strings with itself. It is already known that regular functions
are closed under sum \cite{EH01,CRA-LICS}.
To motivate our final combinator, consider the string-transformation
$\autobox{\mathit{shuffle}}$ that maps a string of the form $a^{m_{1}}ba^{m_{2}}b\ldots a^{m_{k}}b$
to $a^{m_{2}}b^{m_{1}}a^{m_{3}}b^{m_{2}}\ldots a^{m_{k}}b^{m_{k-1}}$.
This function is definable using cost register automata, but we conjecture
that it cannot be constructed using the combinators discussed so far.
We introduce a new form of iterated sum: given a language $L$ and
a cost function $f$, if the input string $\sigma$ can be split uniquely
so that $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$ with each $\sigma_{i}\in L$,
then the \emph{chained sum} $\chainsum fL$ of $\sigma$ is $\funcapptrad f{\sigma_{1}\sigma_{2}}+\funcapptrad f{\sigma_{2}\sigma_{3}}+\cdots+\funcapptrad f{\sigma_{k-1}\sigma_{k}}$.
In other words, the input is (uniquely) divided into substrings belonging
to the language $L$, but instead of summing the values of $f$ on
each of these substrings, we sum the values of $f$ applied to blocks
of adjacent substrings in a chained fashion. The string-transformation
$\autobox{\mathit{shuffle}}$ now is simply chained sum where $L$ equals the regular
language $\kstar ab$, and $f$ maps $a^{i}ba^{j}b$ to $a^{j}b^{i}$
(such a function $f$ can be constructed using iterated sum and left-split
sum). It turns out that this new combinator can also be defined if
we allow \emph{function composition}: if $f$ is a function that maps
strings to strings and $g$ is a cost function, then the composed
function $\comp gf$ maps an input string $\sigma$ to $\funcapptrad g{\funcapptrad f{\sigma}}$.
Such rewriting is a natural operation, and regular functions are closed
under composition \cite{CJ77}.
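To illustrate the chained sum concretely: with $L=\kstar ab$ and this
$f$, the chained sum $\chainsum fL$ maps $a^{2}ba^{3}bab$ to
$\funcapptrad f{a^{2}ba^{3}b}+\funcapptrad f{a^{3}bab}=a^{3}b^{2}ab^{3}$,
which is exactly $\funcapptrad{\autobox{\mathit{shuffle}}}{a^{2}ba^{3}bab}$.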
The main technical result of the paper is that every regular function
can be inductively generated from base functions using the combinators
of conditional choice, sum, split sum, either chained sum or function
composition, and their left additive versions. The proof in section
\ref{sec:Noncomm} constructs the desired expressions corresponding
to executions of cost register automata. Such automata have multiple
registers, and at each step the registers are updated using \emph{copyless}
(or single-use) assignments. Register values can flow into one another
in a complex manner, and the proof relies on understanding the structure
of compositions of \emph{shapes} that capture these value-flows. The
proof provides insights into the power of the chained sum operation,
and also offers an alternative justification for the copyless restriction
for register updates in the machine-based characterization of regular
functions.
\section{Function Combinators \label{sec:Combinators}}
Let $\Sigma$ be a finite alphabet, and $\tuple{\mathbb{D},+,0}$ be a monoid.
Two natural monoids of interest are those of the integers $\tuple{\mathbb{Z},+,0}$
under addition, and of strings $\tuple{\kstar{\Gamma},\cdot,\epsilon}$
over some output alphabet $\Gamma$ under concatenation. By convention,
we treat $\bot$ as the undefined value, and express partial functions
$\func fAB$ as total functions $\func fA{B_{\bot}}$, where $B_{\bot}=\union B{\roset{\bot}}$.
We extend the semantics of the monoid $\mathbb{D}$ to $\mathbb{D}_{\bot}$ by defining
$d+\bot=\bot+d=\bot$, for all $d\in\mathbb{D}$. A \emph{cost function} is
a function $\arrow{\kstar{\Sigma}}{\mathbb{D}_{\bot}}$.
\subsection{Base functions \label{sub:Combinators:Base}}
For each language $L\subseteq\kstar{\Sigma}$ and $d\in\mathbb{D}$, we define
the \emph{constant function $\func{\const Ld}{\kstar{\Sigma}}{\mathbb{D}_{\bot}}$}
as
\begin{alignat*}{1}
\funcapptrad{\const Ld}{\sigma} & =\begin{cases}
d & \mbox{if }\sigma\in L,\mbox{ and}\\
\bot & \mbox{otherwise}.
\end{cases}
\end{alignat*}
The \emph{everywhere-undefined function $\func{\bot}{\kstar{\Sigma}}{\mathbb{D}_{\bot}}$}
is defined as $\funcapptrad{\bot}{\sigma}=\bot$. $\bot$ can
also be defined as the constant function $\const{\emptyset}0$.
\begin{example}
\label{ex:Combinators:ConstPoint} Let $\Sigma=\roset{a,b}$ in the
following examples. Then, the constant function $\func{\const aa}{\kstar{\Sigma}}{\kstar{\Sigma}}$ maps
$a$ to itself, and is undefined on all other strings. We will often
be interested in functions of the form $\const aa$: when the intent
is clear, we will use the shorthand $\trivconst a$.
\end{example}
By \emph{base functions}, we refer to the class of functions $\const Ld$,
where $L$ is a regular language.
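To make the semantics concrete, the following is a minimal Python
sketch of the base functions, representing the regular language $L$ by
a regular expression and the undefined value $\bot$ by \texttt{None};
all names are illustrative:
\begin{verbatim}
import re

BOT = None   # the undefined value

def const(L, d):
    # const_L^d: maps strings in the regular language L to d.
    return lambda s: d if re.fullmatch(L, s) else BOT

f = const("a", "a")      # the base function "a/a" from the example
assert f("a") == "a" and f("ab") is BOT
\end{verbatim}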
\subsection{Conditional choice and sum operators \label{sub:Combinators:CondSum}}
Let $\func{f,g}{\kstar{\Sigma}}{\mathbb{D}_{\bot}}$ be two functions. We
then define the \emph{conditional choice $\choice fg$} as
\begin{alignat*}{1}
\funcapptrad{\choice fg}{\sigma} & =\begin{cases}
\funcapptrad f{\sigma} & \mbox{if }\funcapptrad f{\sigma}\neq\bot,\mbox{ and}\\
\funcapptrad g{\sigma} & \mbox{otherwise}.
\end{cases}
\end{alignat*}
\begin{example}
\label{ex:Combinators:CondSum} The indicator function $\func{\indicatorfn L}{\kstar{\Sigma}}{\mathbb{Z}}$
is defined as $\funcapptrad{\indicatorfn L}{\sigma}=1$ if $\sigma\in L$
and $\funcapptrad{\indicatorfn L}{\sigma}=0$ otherwise. This function
can be expressed using the conditional choice operator as $\choice{\const L1}{\const{\kstar{\Sigma}}0}$.
\end{example}
The \emph{sum $\repsum fg$} is defined as $\funcapptrad{\repsum fg}{\sigma}=\funcapptrad f{\sigma}+\funcapptrad g{\sigma}$.
If there exist unique strings $\sigma_{1}$ and $\sigma_{2}$ such
that $\sigma=\sigma_{1}\sigma_{2}$, and $\funcapptrad f{\sigma_{1}}$
and $\funcapptrad g{\sigma_{2}}$ are both defined, then the \emph{split
sum $\funcapptrad{\splitsum fg}{\sigma}=\funcapptrad f{\sigma_{1}}+\funcapptrad g{\sigma_{2}}$}.
Otherwise, $\funcapptrad{\splitsum fg}{\sigma}=\bot$. Over non-commutative
monoids, this may be different from the\emph{ left-split sum $\lsplitsum fg$}:
if there exist unique strings $\sigma_{1}$ and $\sigma_{2}$, such
that $\sigma=\sigma_{1}\sigma_{2}$, and $\funcapptrad f{\sigma_{1}}$
and $\funcapptrad g{\sigma_{2}}$ are both defined, then $\funcapptrad{\lsplitsum fg}{\sigma}=\funcapptrad g{\sigma_{2}}+\funcapptrad f{\sigma_{1}}$.
Otherwise, $\funcapptrad{\lsplitsum fg}{\sigma}=\bot$.
Observe that $\vartriangleright$ is the analogue of union in regular expressions,
with the important difference being that $\vartriangleright$ is non-commutative.
Similarly, $\oplus$ is the analogue of the concatenation operator
of traditional regular expressions.
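Continuing the illustrative Python encoding above (with the monoid
operation passed in explicitly as \texttt{plus}), a sketch of these
combinators, including the uniqueness check of the split sum, could
look as follows:
\begin{verbatim}
import re

BOT = None
def const(L, d):
    return lambda s: d if re.fullmatch(L, s) else BOT

def choice(f, g):                       # conditional choice f > g
    return lambda s: f(s) if f(s) is not BOT else g(s)

def sum_(f, g, plus):                   # pointwise sum
    def h(s):
        a, b = f(s), g(s)
        return BOT if a is BOT or b is BOT else plus(a, b)
    return h

def splitsum(f, g, plus):               # split sum: unique 2-way split
    def h(s):
        hits = [(f(s[:i]), g(s[i:])) for i in range(len(s) + 1)
                if f(s[:i]) is not BOT and g(s[i:]) is not BOT]
        return plus(*hits[0]) if len(hits) == 1 else BOT
    return h

def lsplitsum(f, g, plus):              # left-split sum: summands swapped
    return splitsum(f, g, lambda a, b: plus(b, a))

# indicator function: const_L^1 > const_{Sigma*}^0, here with L = ab*
ind = choice(const("ab*", 1), const(".*", 0))
assert ind("abb") == 1 and ind("ba") == 0
# split sum: skip a prefix in a*, then echo a single b
assert splitsum(const("a*", ""), const("b", "b"),
                lambda x, y: x + y)("aab") == "b"
\end{verbatim}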
\subsection{Iteration \label{sub:Combinators:Iteration}}
The \emph{iterated sum $\itersum f$} of a cost function is defined
as follows. If there exist unique strings $\sigma_{1}$, $\sigma_{2}$,
\ldots{}, $\sigma_{k}$ such that $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$
and $\funcapptrad f{\sigma_{i}}$ is defined for each $\sigma_{i}$,
then $\funcapptrad{\itersum f}{\sigma}=\funcapptrad f{\sigma_{1}}+\funcapptrad f{\sigma_{2}}+\cdots+\funcapptrad f{\sigma_{k}}$.
Otherwise, $\funcapptrad{\itersum f}{\sigma}=\bot$. The \emph{left-iterated
sum $\litersum f$} is defined similarly: if there exist unique strings
$\sigma_{1}$, $\sigma_{2}$, \ldots{}, $\sigma_{k}$ such that $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$
and $\funcapptrad f{\sigma_{i}}$ is defined for each $\sigma_{i}$,
then $\funcapptrad{\litersum f}{\sigma}=\funcapptrad f{\sigma_{k}}+\funcapptrad f{\sigma_{k-1}}+\cdots+\funcapptrad f{\sigma_{1}}$.
Otherwise, $\funcapptrad{\litersum f}{\sigma}=\bot$. The \emph{reverse
combinator $\funrev f$} is defined as $\funcapptrad{\funrev f}{\sigma}=\funcapptrad f{\strrev{\sigma}}$.
Observe that the left-iterated sum and reverse combinators are interesting
in the case of non-commutative monoids, such as string concatenation.
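A sketch of the iterated sums in the same illustrative encoding
follows; the brute-force enumeration of factorizations is exponential
and is only meant to make the uniqueness requirement explicit:
\begin{verbatim}
from functools import reduce
import operator

BOT = None

def itersum(f, plus, zero):
    # f*: if s factors uniquely as s1...sk into nonempty f-defined
    # pieces, return f(s1)+...+f(sk); otherwise BOT. By convention,
    # the empty string has the empty factorization, with value zero.
    def facts(s):
        if s == "":
            return [[]]
        return [[s[:i]] + rest for i in range(1, len(s) + 1)
                if f(s[:i]) is not BOT for rest in facts(s[i:])]
    def h(s):
        fs = facts(s)
        if len(fs) != 1:
            return BOT
        return reduce(plus, (f(p) for p in fs[0]), zero)
    return h

def litersum(f, plus, zero):            # left-iterated sum
    return itersum(f, lambda a, b: plus(b, a), zero)

atom = lambda s: s if s in ("a", "b") else BOT   # (a/a) > (b/b)
ident = itersum(atom, operator.add, "")          # identity function
rev = litersum(atom, operator.add, "")           # string reversal
assert ident("abb") == "abb" and rev("abb") == "bba"
\end{verbatim}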
\begin{example}
\label{ex:Combinators:Iteration:SimpleNumbers} The function $\func{\strlenp{\cdot}a}{\kstar{\Sigma}}{\mathbb{Z}}$
counts the number of $a$-s in the input string. This is represented
by the function expression $\itersum{\left(\choice{\const a1}{\const b0}\right)}$.
The identity function $\func{\autobox{\mathit{id}}}{\kstar{\Sigma}}{\kstar{\Sigma}}$
is given by the function expression $\itersum{\left(\choice{\trivconst a}{\trivconst b}\right)}$.
The function $\autobox{\mathit{copy}}$ which maps an input $\sigma$ to $\strcat{\sigma}{\sigma}$
is then given by the expression $\repsum{\autobox{\mathit{id}}}{\autobox{\mathit{id}}}$. On the other
hand, the expression $\litersum{\left(\choice{\trivconst a}{\trivconst b}\right)}$
is the function which reverses its input: $\funcapptrad{\litersum{\left(\choice{\trivconst a}{\trivconst b}\right)}}{\sigma}=\strrev{\sigma}$
for all $\sigma$. This is also equivalent to the expression $\funrev{\autobox{\mathit{id}}}$.
\end{example}
\begin{example}
\label{ex:Combinators:Iteration:Coffee} Consider the situation of
a customer who frequents a coffee shop. Every cup of coffee he purchases
costs $\$2$, but if he fills out a survey, then all cups of coffee
purchased that month cost only $\$1$ (including cups already purchased).
Here $\Sigma=\roset{C,S,\#}$ denoting respectively the purchase of
a cup of coffee, completion of the survey, and the passage of a calendar
month. Then, the function expression $m=\choice{\left(\itersum{\const C2}\right)}{\left(\splitsum{\left(\itersum{\const C1}\right)}{\splitsum{\const S0}{\itersum{\left(\choice{\const C1}{\const S0}\right)}}}\right)}$
maps the purchases of a month to the customer's debt. The first sub-expression
-- $\itersum{\const C2}$ -- computes the amount provided no survey
is filled out and the second sub-expression -- $\splitsum{\left(\itersum{\const C1}\right)}{\splitsum{\const S0}{\itersum{\left(\choice{\const C1}{\const S0}\right)}}}$
-- is defined provided at least one survey is filled out, and in that
case, charges $\$1$ for each cup. The expression $\autobox{\mathit{coffee}}=\splitsum{\itersum{\left(\splitsum m{\const{\#}0}\right)}}m$
maps the entire purchase history of the customer to the amount he
needs to pay the store.
\end{example}
\begin{example}
\label{ex:Combinators:Iteration:Swap} Let $\Sigma=\roset{a,b,\#}$,
and consider the function $\autobox{\mathit{swap}}$ which maps strings of the form
$\strcat{\sigma}{\strcat{\#}{\tau}}$ where $\sigma,\tau\in\kstar{\roset{a,b}}$
to $\strcat{\tau}{\strcat{\#}{\sigma}}$. Such a function could be
used to transform names from the first-name-last-name format to the
last-name-first-name format. $\autobox{\mathit{swap}}$ can be expressed by the function
expression $\repsum{\left(\splitsum{\const{\kstar{\roset{a,b}}\#}{\epsilon}}{\itersum{\left(\choice{\trivconst a}{\trivconst b}\right)}}\right)}{\repsum{\const{\kstar{\Sigma}}{\#}}{\left(\splitsum{\itersum{\left(\choice{\trivconst a}{\trivconst b}\right)}}{\const{\#\kstar{\roset{a,b}}}{\epsilon}}\right)}}$.
The first subexpression skips the first part of the string -- $\const{\kstar{\roset{a,b}}\#}{\epsilon}$
-- and echoes the second part -- $\itersum{\left(\choice{\trivconst a}{\trivconst b}\right)}$.
The second subexpression $\const{\kstar{\Sigma}}{\#}$ inserts the
$\#$ in the middle. The third subexpression is similar to the first,
echoing the first part of the string and skipping the rest.\end{example}
\begin{example}
\label{ex:Combinators:Iteration:Strip} With $\Sigma=\roset{a,b,\#}$,
consider the function $\autobox{\mathit{strip}}$ which maps strings of the form $\strcat{\sigma_{1}}{\strcat{\#}{\strcat{\sigma_{2}}{\strcat[\ldots]{\#}{\sigma_{n}}}}}$
where $\sigma_{i}\in\kstar{\roset{a,b}}$ for each $i$ to $\strcat{\sigma_{1}}{\strcat{\#}{\strcat{\sigma_{2}}{\strcat[\ldots]{\#}{\sigma_{n-1}}}}}$.
This function could be used, for example, to locate the directory
of a file given its full path, or in processing website URLs. This
function is represented by the expression $\splitsum{\autobox{\mathit{id}}}{\const{\#\kstar{\roset{a,b}}}{\epsilon}}$.
\end{example}
From the appropriate definitions, we have:
\begin{prop}
\label{prop:Combinators:DoubleRev} Over all monoids $\tuple{\mathbb{D},+,0}$,
the following identity holds: $\funcapptrad{\litersum f}{\sigma}=\funcapptrad{\funrev{\left(\itersum{\left(\funrev f\right)}\right)}}{\sigma}$.\end{prop}
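This identity can be checked directly from the definitions: if $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$
is the unique decomposition with each $\funcapptrad f{\sigma_{i}}$
defined, then $\strrev{\sigma}=\strrev{\sigma_{k}}\strrev{\sigma_{k-1}}\ldots\strrev{\sigma_{1}}$
is the unique decomposition of $\strrev{\sigma}$ into strings on
which $\funrev f$ is defined, and
\[
\funcapptrad{\funrev{\left(\itersum{\left(\funrev f\right)}\right)}}{\sigma}=\funcapptrad{\itersum{\left(\funrev f\right)}}{\strrev{\sigma}}=\funcapptrad f{\sigma_{k}}+\funcapptrad f{\sigma_{k-1}}+\cdots+\funcapptrad f{\sigma_{1}}=\funcapptrad{\litersum f}{\sigma}.
\]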
\subsection{Chained sum \label{sub:Combinators:Chained-sum}}
Let $L\subseteq\kstar{\Sigma}$ be a language, and $f$ be a cost
function over $\kstar{\Sigma}$. If there exists a unique decomposition
$\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$ such that $k\geq2$
and for each $i$, $\sigma_{i}\in L$, then the \emph{chained sum}
$\funcapptrad{\chainsum fL}{\sigma}=\funcapptrad f{\sigma_{1}\sigma_{2}}+\funcapptrad f{\sigma_{2}\sigma_{3}}+\cdots+\funcapptrad f{\sigma_{k-1}\sigma_{k}}$.
Otherwise, $\funcapptrad{\chainsum fL}{\sigma}=\bot$. Similarly,
if there exists a unique decomposition $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$
such that $k\geq2$ and for each $i$, $\sigma_{i}\in L$, then the
\emph{left-chained sum} $\funcapptrad{\lchainsum fL}{\sigma}=\funcapptrad f{\sigma_{k-1}\sigma_{k}}+\funcapptrad f{\sigma_{k-2}\sigma_{k-1}}+\cdots+\funcapptrad f{\sigma_{1}\sigma_{2}}$.
Otherwise, $\funcapptrad{\lchainsum fL}{\sigma}=\bot$.
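For a minimal illustration over $\tuple{\mathbb{Z},+,0}$, take $\Sigma=\roset a$,
$L=\roset a$, and $f=\const{aa}1$: then $\funcapptrad{\chainsum fL}{a^{k}}=k-1$
for every $k\geq2$, one unit for each adjacent pair of $L$-blocks,
while $\funcapptrad{\chainsum fL}a=\funcapptrad{\chainsum fL}{\epsilon}=\bot$,
since these decompose into fewer than two blocks.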
\begin{example}
\label{ex:Combinators:Chained-sum:Shuffle} Let $\Sigma=\roset{a,b}$
and let $\func{\autobox{\mathit{shuffle}}}{\kstar{\Sigma}}{\kstar{\Sigma}}$ be
the following function: for $\sigma=a^{m_{1}}ba^{m_{2}}b\ldots a^{m_{k}}b$,
with $k\geq2$, $\funcapptrad{\autobox{\mathit{shuffle}}}{\sigma}=a^{m_{2}}b^{m_{1}}a^{m_{3}}b^{m_{2}}\ldots a^{m_{k}}b^{m_{k-1}}$,
and for all other $\sigma$, $\funcapptrad{\autobox{\mathit{shuffle}}}{\sigma}=\bot$.
See figure \ref{fig:Combinators:Chained-sum:Shuffle:Defn}.
\begin{figure*}
\begin{centering}
\subfloat[\label{fig:Combinators:Chained-sum:Shuffle:Defn} Definition of $\ensuremath{\funcapptrad{\autobox{\mathit{shuffle}}}{\sigma}}$.]{\begin{tikzpicture}[node distance=0.1]
\node (out) {$\funcapptrad{\autobox{\mathit{shuffle}}}{\sigma}$:};
\node [right=of out] (oa2) {$a^{m_2}$};
\node [right=of oa2] (ob1) {$b^{m_1}$};
\node [right=of ob1] (oa3) {$a^{m_3}$};
\node [right=of oa3] (ob2) {$b^{m_2}$};
\node [above=0.7 of out] (sigma) {$\sigma$:};
\node [above=0.7 of oa2] (a1) {$a^{m_1}$};
\node [above=0.7 of ob1] (b1) {$b$};
\node [above=0.7 of oa3] (a2) {$a^{m_2}$};
\node [above=0.7 of ob2] (b2) {$b$};
\node [right=of b2] (a3) {$a^{m_3}$};
\node [right=of a3] (b3) {$b$};
\node [right=of b3] (idots) {$\ldots$};
\node [right=of idots] (al) {$a^{m_{k - 1}}$};
\node [right=of al] (bl) {$b$};
\node [right=of bl] (ak) {$a^{m_k}$};
\node [right=of ak] (bk) {$b$};
\node [below=0.7 of idots] (odots) {$\ldots$};
\node [below=0.7 of al] (oak) {$a^{m_k}$};
\node [below=0.7 of bl] (obl) {$b^{m_{k - 1}}$};
\path [->] (a1) edge node {} (ob1);
\path [->] (a2) edge node {} (oa2);
\path [->] (a3) edge node {} (oa3);
\path [->] (a2) edge node {} (ob2);
\path [->] (ak) edge node {} (oak);
\path [->] (al) edge node {} (obl);
\end{tikzpicture}
}
\par\end{centering}
\begin{centering}
\subfloat[\label{fig:Combinators:Chained-sum:Shuffle:Expr} Each patch $P_{i}$
is a string of the form $\kstar ab$.]{\begin{tikzpicture}[node distance=0.1]
\node (sigma) {$\sigma$:};
\node [right=of sigma] (P1) {$P_{1}$};
\node [below=0.7 of P1] (P1a) {$P_{1}$};
\node [right=of P1a] (P1b) {$P_{1}$};
\node [right=of P1b] (P2a) {$P_{2}$};
\node [above=0.7 of P2a] (P2) {$P_{2}$};
\node [right=of P2a] (P2b) {$P_{2}$};
\node [right=of P2b] (P3a) {$P_{3}$};
\node [above=0.7 of P3a] (P3) {$P_{3}$};
\node [right=of P3a] (P3b) {$P_{3}$};
\node [right=of P3b] (Pda) {$\ldots$};
\node [above=0.7 of Pda] (Pd) {$\ldots$};
\node [right=of Pda] (Pla) {$P_{k - 1}$};
\node [above=0.7 of Pla] (Pl) {$P_{k - 1}$};
\node [right=of Pla] (Plb) {$P_{k - 1}$};
\node [right=of Plb] (Pka) {$P_{k}$};
\node [above=0.7 of Pka] (Pk) {$P_{k}$};
\node [right=of Pka] (Pkb) {$P_{k}$};
\path [->] (P1) edge node {} (P1a);
\path [->] (P1) edge node {} (P1b);
\path [->] (P2) edge node {} (P2a);
\path [->] (P2) edge node {} (P2b);
\path [->] (P3) edge node {} (P3a);
\path [->] (P3) edge node {} (P3b);
\path [->] (Pl) edge node {} (Pla);
\path [->] (Pl) edge node {} (Plb);
\path [->] (Pk) edge node {} (Pka);
\path [->] (Pk) edge node {} (Pkb);
\draw [decorate, decoration={brace, mirror}] (P1b.south west) -- coordinate[midway](P12) (P2a.south east);
\draw [decorate, decoration={brace, mirror}] (P2b.south west) -- coordinate[midway](P23) (P3a.south east);
\draw [decorate, decoration={brace, mirror}] (Plb.south west) -- coordinate[midway](Plk) (Pka.south east);
\node [below=0.005 of P12] (P12g) {};
\node [below=0.005 of P23] (P23g) {};
\node [below=0.005 of Plk] (Plkg) {};
\node [below=0.7 of P12] (fP12) {$\funcapptrad{f}{P_{1}, P_{2}}$};
\node [below=0.7 of P23] (fP23) {$\funcapptrad{f}{P_{2}, P_{3}}$};
\node [below=0.7 of Plk] (fPlk) {$\funcapptrad{f}{P_{k - 1}, P_{k}}$};
\path [->] (P12g) edge node {} (fP12);
\path [->] (P23g) edge node {} (fP23);
\path [->] (Plkg) edge node {} (fPlk);
\end{tikzpicture}
}
\par\end{centering}
\caption{\label{fig:Combinators:Chained-sum:Shuffle} Defining and expressing
$\ensuremath{\funcapptrad{\autobox{\mathit{shuffle}}}{\sigma}}$ using function
combinators.}
\end{figure*}
We first divide $\sigma$ into chunks of text $P_{i}$, each of the
form $\kstar ab$. Similarly the output may also be divided into patches,
$P_{i}^{\prime}$. Each input patch $P_{i}$ must be scanned twice:
first to produce the $a$-s of $P_{i-1}^{\prime}$, and then
again to produce the $b$-s of $P_{i}^{\prime}$. Let $L=\kstar ab$
be the language of these patches. It follows that $\autobox{\mathit{shuffle}}=\chainsum fL$,
where $f=\lsplitsum{\left(\splitsum{\itersum{\const ab}}{\const b{\epsilon}}\right)}{\left(\splitsum{\itersum{\const aa}}{\const b{\epsilon}}\right)}$.
\end{example}
The motivation behind the chained sum is two-fold: first, we believe
that $\autobox{\mathit{shuffle}}$ is inexpressible using the remaining operators,
and second, the operation naturally emerges as an idiom during the
proof of theorem \ref{thm:Noncomm}.
\subsection{Function composition \label{sub:Combinators:Composition}}
Let $\func f{\kstar{\Sigma}}{\kstar{\Gamma}_{\bot}}$ and $\func g{\kstar{\Gamma}}{\mathbb{D}}$
be two cost functions. The \emph{composition $\comp gf$} is defined
as $\funcapptrad{\comp gf}{\sigma}=\funcapptrad g{\funcapptrad f{\sigma}}$,
if $\funcapptrad f{\sigma}$ and $\funcapptrad g{\funcapptrad f{\sigma}}$
are defined, and $\funcapptrad{\comp gf}{\sigma}=\bot$ otherwise.
\begin{example}
\label{ex:Combinators:Composition:Shuffle} Composition is an alternative
to chained sum for expressive completeness. Let $\autobox{\mathit{copy}}_{L}=\repsum{\left(\splitsum{\itersum{\trivconst a}}{\trivconst b}\right)}{\left(\splitsum{\itersum{\trivconst a}}{\trivconst b}\right)}$
be the function which accepts strings from $L$ and repeats them twice.
The first step of the transformation is therefore the expression $\itersum{\autobox{\mathit{copy}}_{L}}$.
We then drop the first copy of $P_{1}$ and the last copy of $P_{k}$
-- this is achieved by the expression $\autobox{\mathit{drop}}=\splitsum{\const L{\epsilon}}{\splitsum{\autobox{\mathit{id}}}{\const L{\epsilon}}}$.
The function $\autobox{\mathit{ensurelen}}=\repsum{\autobox{\mathit{id}}}{\const{\kplus{\Sigma}}{\epsilon}}$
echoes its input, but is defined only on non-empty strings; applied
after $\autobox{\mathit{drop}}$, this ensures that the original input
contained at least two patches. The final step is to specify the function $f$
which examines pairs of adjacent patches, and first echoes the $a$-s
from the second patch, and then transforms the $a$-s from the first
patch into $b$-s. $f=\lsplitsum{\left(\splitsum{\itersum{\const ab}}{\const b{\epsilon}}\right)}{\left(\splitsum{\itersum{\const aa}}{\const b{\epsilon}}\right)}$.
Thus, $\autobox{\mathit{shuffle}}=\comp{\itersum f}{\comp{\autobox{\mathit{ensurelen}}}{\comp{\autobox{\mathit{drop}}}{\itersum{\autobox{\mathit{copy}}_{L}}}}}$.
\end{example}
Observe that the approach in example \ref{ex:Combinators:Composition:Shuffle}
can be used to express the chained sum operation itself in terms of
composition. Pick a symbol $@\notin\Sigma$, and extend $f$ to $\arrow{\kstar{\left(\union{\Sigma}{\roset @}\right)}}{\mathbb{D}}$
by defining $\funcapptrad f{\sigma}=\bot$ whenever $\sigma$ contains
an occurrence of $@$. Let $\autobox{\mathit{id}}$ be the identity function for strings
over $\Sigma$, and $\autobox{\mathit{copy}}_{L}$ be the function
which maps strings $\sigma\in L$ to $\sigma@\sigma@$, and is undefined
otherwise: $\autobox{\mathit{copy}}_{L}=\repsum{\left(\splitsum{\left(\repsum{\autobox{\mathit{id}}}{\const L{\epsilon}}\right)}{\const{\epsilon}@}\right)}{\left(\splitsum{\autobox{\mathit{id}}}{\const{\epsilon}@}\right)}$,
where the summand $\const L{\epsilon}$ restricts the domain to $L$.
Let $\autobox{\mathit{drop}}_{L}$ be $\splitsum{\const{L@}{\epsilon}}{\splitsum{\itersum{\left(\splitsum{\autobox{\mathit{id}}}{\splitsum{\const @{\epsilon}}{\splitsum{\autobox{\mathit{id}}}{\const @@}}}\right)}}{\const{L@}{\epsilon}}}$.
Therefore, given a string $\sigma$ uniquely decomposed as $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}$,
where for each $i$, $\sigma_{i}\in L$, $\comp{\autobox{\mathit{drop}}_{L}}{\itersum{\autobox{\mathit{copy}}_{L}}}$
maps it to $\strcat{\sigma_{1}}{\strcat{\sigma_{2}}{\strcat @{\strcat{\sigma_{2}}{\strcat{\sigma_{3}}{\strcat[\ldots]@{\strcat{\sigma_{k-1}}{\strcat{\sigma_{k}}@}}}}}}}$.
We then have the following:
\begin{prop}
\label{prop:Combinators:Composition:Comp} For each cost function
$f$, language $L\subseteq\kstar{\Sigma}$, and string $\sigma\in\kstar{\Sigma}$,
\begin{enumerate}
\item $\funcapptrad{\chainsum fL}{\sigma}=\funcapptrad{\comp{\itersum{\left(\splitsum f{\const @{\epsilon}}\right)}}{\comp{\autobox{\mathit{ensurelen}}}{\comp{\autobox{\mathit{drop}}_{L}}{\itersum{\autobox{\mathit{copy}}_{L}}}}}}{\sigma}$,
and
\item $\funcapptrad{\lchainsum fL}{\sigma}=\funcapptrad{\comp{\litersum{\left(\splitsum f{\const @{\epsilon}}\right)}}{\comp{\autobox{\mathit{ensurelen}}}{\comp{\autobox{\mathit{drop}}_{L}}{\itersum{\autobox{\mathit{copy}}_{L}}}}}}{\sigma}$.\end{enumerate}
\end{prop}
\section{Regular Functions are Closed under Combinators \label{sec:Combinators-to-RegFuns}}
As mentioned in the introduction, there are multiple equivalent definitions
of regular functions. In this paper, we will use the operational model
of copyless cost register automata ($\operatorname{\mbox{CCRA}}$) as the yardstick
for regularity. A $\operatorname{\mbox{CCRA}}$ is a finite state machine which makes
a single left-to-right pass over the input string. It maintains a
set of registers which are updated on each transition. Examples of
register updates include $v:=u+v+d$ and $v:=d+v$, where $d\in\mathbb{D}$
is a constant. The important restrictions are that transitions and
updates are test-free -- we do not permit conditions such as ``$q$
goes to $q^{\prime}$ on input $a$, provided $v\geq5$'' -- and
that the update expressions satisfy the copyless (or single-use) requirement.
$\operatorname{\mbox{CCRA}}$s are a generalization of streaming string transducers
to arbitrary monoids. The goal of this paper is to show that functions
expressible using the combinators introduced in section \ref{sec:Combinators}
are exactly the class of regular functions. In this section, we formally
define $\operatorname{\mbox{CCRA}}$s, and show that every function expression represents
a regular function.
\subsection{Cost register automata \label{sub:Combinators-to-RegFuns:CCRA}}
\begin{defn}
\label{defn:Combinators-to-RegFuns:Copyless} Let $V$ be a finite
set of registers. We call a function $\func fV{\kstar{\left(\union V{\mathbb{D}}\right)}}$
\emph{copyless} if the following two conditions hold:
\begin{enumerate}
\item For all registers $u,v\in V$, $v$ occurs at most once in $\funcapptrad fu$,
and
\item for all registers $u,v,w\in V$, if $u\neq w$ and $v$ occurs in
$\funcapptrad fu$, then $v$ does not occur in $\funcapptrad fw$.
\end{enumerate}
Similarly, a string $e\in\kstar{\left(\union V{\mathbb{D}}\right)}$ is copyless
if each register $v$ occurs at most once in $e$.
\end{defn}
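For instance, with $V=\roset{x,y}$ and $d\in\mathbb{D}$, the function given
by $\funcapptrad fx=xdy$ and $\funcapptrad fy=\epsilon$ is copyless.
By contrast, $\funcapptrad fx=xx$ violates the first condition, and
the pair of updates $\funcapptrad fx=xy$, $\funcapptrad fy=y$ violates
the second, since $y$ occurs in the expressions for both $x$ and $y$.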
\begin{defn}[Copyless $\operatorname{\mbox{CRA}}$ \cite{CRA-LICS}]
\label{defn:Combinators-to-RegFuns:CCRA} A \emph{$\operatorname{\mbox{CCRA}}$}
is a tuple $M=\tuple{Q,\Sigma,V,\delta,\mu,q_{0},F,\nu}$, where
\begin{enumerate}
\item $Q$ is a finite set of states,
\item $\Sigma$ is a finite input alphabet,
\item $V$ is a finite set of registers,
\item $\func{\delta}{\cart Q{\Sigma}}Q$ is the state transition function,
\item $\func{\mu}{\cart Q{\cart{\Sigma}V}}{\kstar{\left(\union V{\mathbb{D}}\right)}}$
is the register update function such that for all $q$ and $a$, the
partial application $\func{\funcapptrad{\mu}{q,a}}V{\kstar{\left(\union V{\mathbb{D}}\right)}}$ is
a copyless function over $V$,
\item $q_{0}\in Q$ is the initial state,
\item $F\subseteq Q$ is the set of final states, and
\item $\func{\nu}F{\kstar{\left(\union V{\mathbb{D}}\right)}}$ is the output function,
such that for all $q$, the output expression $\funcapptrad{\nu}q$
is copyless.\end{enumerate}
\end{defn}
The semantics of a $\operatorname{\mbox{CCRA}}$ $M$ is specified using configurations.
A \emph{configuration} is a tuple $\gamma=\tuple{q,\autobox{\mathit{val}}}$
where $q\in Q$ is the current state and $\func{\autobox{\mathit{val}}}V{\mathbb{D}}$
is the register valuation. The initial configuration is $\gamma_{0}=\tuple{q_{0},\autobox{\mathit{val}}_{0}}$,
where $\funcapptrad{\autobox{\mathit{val}}_{0}}v=0$, for all $v$.
For simplicity of notation, we first extend $\autobox{\mathit{val}}$
to $\arrow{\union V{\mathbb{D}}}{\mathbb{D}}$ by defining $\funcapptrad{\autobox{\mathit{val}}}d=d$,
for all $d\in\mathbb{D}$, and then further extend it to strings $\func{\autobox{\mathit{val}}}{\kstar{\left(\union V{\mathbb{D}}\right)}}{\mathbb{D}}$,
by defining $\funcapptrad{\autobox{\mathit{val}}}{v_{1}v_{2}\ldots v_{k}}=\funcapptrad{\autobox{\mathit{val}}}{v_{1}}+\funcapptrad{\autobox{\mathit{val}}}{v_{2}}+\cdots+\funcapptrad{\autobox{\mathit{val}}}{v_{k}}$.
If the machine is in the configuration $\gamma=\tuple{q,\autobox{\mathit{val}}}$,
then on reading the symbol $a$, it transitions to the configuration
$\gamma^{\prime}=\tuple{q^{\prime},\autobox{\mathit{val}}^{\prime}}$,
and we write $\gamma\to^{a}\gamma^{\prime}$, where $q^{\prime}=\funcapptrad{\delta}{q,a}$,
and for all $v$, $\funcapptrad{\autobox{\mathit{val}}^{\prime}}v=\funcapptrad{\autobox{\mathit{val}}}{\funcapptrad{\mu}{q,a,v}}$.
We now define the function \emph{$\func{\interp M}{\kstar{\Sigma}}{\mathbb{D}_{\bot}}$
computed by $M$}. On input $\sigma\in\kstar{\Sigma}$, say $\gamma_{0}\to^{\sigma}\tuple{q_{f},\autobox{\mathit{val}}_{f}}$.
If $q_{f}\in F$, then $\funcapptrad{\interp M}{\sigma}=\funcapptrad{\autobox{\mathit{val}}_{f}}{\funcapptrad{\nu}{q_{f}}}$.
Otherwise, $\funcapptrad{\interp M}{\sigma}=\bot$.
A cost function is \emph{regular} if it can be computed by a $\operatorname{\mbox{CCRA}}$.
A streaming string transducer is a $\operatorname{\mbox{CCRA}}$ where the range $\mathbb{D}$
is the set of strings $\kstar{\Gamma}$ over the output alphabet under
concatenation.
\begin{example}
\label{ex:Combinators-to-RegFuns:CCRA} We present an example of an
$\operatorname{\mbox{SST}}$ in figure \ref{fig:Combinators-to-RegFuns:CCRA}. The machine
computes the function $\autobox{\mathit{shuffle}}$ from example \ref{ex:Combinators:Chained-sum:Shuffle}.
It maintains $3$ registers $x$, $y$ and $z$, all initially holding
the value $\epsilon$. The register $x$ holds the current output.
On viewing each $a$ in the input string, the machine commits to appending
the symbol to its output. Depending on the suffix, this $a$ may also
be used to eventually produce a $b$ in the output. This provisional
value is stored in the register $z$. The register $y$ holds the
$b$-s produced by the previous run of $a$-s while the machine is
reading the next patch of $a$-s.
\begin{figure*}
\begin{centering}
\pgfmathwidth{"$\quoset{C}{x := x + 2}$"}
\def\pgfmathresult{\pgfmathresult}
\pgfmathparse{2.0 * \pgfmathresult}
\global\edef\pgfmathresult{\pgfmathresult}
\begin{tikzpicture}[node distance=\pgfmathresult pt]
\node [state, initial] (q0) {$q_0$};
\node [state, right=of q0] (q1) {$q_1$};
\node [state with output, right=of q1] (q2) {$q_2$ \nodepart{lower} $x$};
\path [->] (q0)
edge [loop above] node
{$\quoset a{\begin{array}{rcl} z & := & zb \end{array}}$}
(q0);
\path [->] (q0)
edge node [above]
{$\quoset b{\begin{array}{rcl} x & := & xy \\ y & := & z \\ z & := & \epsilon \end{array}}$}
(q1);
\path [->] (q1)
edge [loop above] node
{$\quoset a{\begin{array}{rcl} x & := & xa \\ z & := & zb \end{array}}$}
(q1);
\path [->] (q1)
edge [bend left] node [above]
{$\quoset b{\begin{array}{rcl} x & := & xy \\ y & := & z \\ z & := & \epsilon \end{array}}$}
(q2);
\path [->] (q2)
edge [bend left] node [below]
{$\quoset a{\begin{array}{rcl} x & := & xa \\ z & := & zb \end{array}}$}
(q1);
\path [->] (q2)
edge [loop above] node
{$\quoset b{\begin{array}{rcl} x & := & xy \\ y & := & z \\ z & := & \epsilon \end{array}}$}
(q2);
\end{tikzpicture}
\par\end{centering}
\caption{\label{fig:Combinators-to-RegFuns:CCRA} Streaming string transducer
computing $\autobox{\mathit{shuffle}}$. $q_{2}$ is the only accepting state. The
annotation ``$x$'' in state $q_{2}$ specifies the output function.
On each transition, registers whose updates are not specified are
left unchanged.}
\end{figure*}
\end{example}
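To make the register updates concrete, here is the run of this machine
on the input $abaab$ (so $m_{1}=1$ and $m_{2}=2$), writing configurations
as $\tuple{q,x,y,z}$:
\[
\tuple{q_{0},\epsilon,\epsilon,\epsilon}\to^{a}\tuple{q_{0},\epsilon,\epsilon,b}\to^{b}\tuple{q_{1},\epsilon,b,\epsilon}\to^{a}\tuple{q_{1},a,b,b}\to^{a}\tuple{q_{1},aa,b,bb}\to^{b}\tuple{q_{2},aab,bb,\epsilon}.
\]
The machine halts in the accepting state $q_{2}$ and outputs $x=aab=a^{m_{2}}b^{m_{1}}$,
as required.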
\subsection{Additive cost register automata \label{sub:Combinators-to-RegFuns:ACRA}}
We recall that when $\mathbb{D}$ is a commutative monoid, $\operatorname{\mbox{CCRA}}$s are
equivalent in expressiveness to the simpler model of additive cost
register automata ($\operatorname{\mbox{ACRA}}$). In theorem \ref{thm:Comm}, where we
show that regular functions over commutative monoids can be expressed
using the base functions over regular languages combined using the
choice, split sum and function iteration operators, we assume that
the regular function is specified as an $\operatorname{\mbox{ACRA}}$. These machines drop
the copyless restriction on register updates, but require that all
updates be of the form ``$u:=v+d$'', for some registers $u$ and
$v$ and some constant $d$.
\begin{defn}[Additive CRA]
\label{defn:Combinators-to-RegFuns:ACRA} An \emph{additive cost
register automaton ($\operatorname{\mbox{ACRA}}$)} is a tuple $M=\tuple{Q,\Sigma,V,\delta,\mu,q_{0},F,\nu}$,
where
\begin{enumerate}
\item $Q$ is a finite set of states,
\item $\Sigma$ is a finite input alphabet,
\item $V$ is a finite set of registers,
\item $\func{\delta}{\cart Q{\Sigma}}Q$ is the state transition function,
\item $\func{\mu}{\cart Q{\cart{\Sigma}V}}{\cart V{\mathbb{D}}}$ is the register
update function,
\item $q_{0}\in Q$ is the initial state,
\item $F\subseteq Q$ is the set of final states, and
\item $\func{\nu}F{\cart V{\mathbb{D}}}$ is the output function.
\end{enumerate}
\end{defn}
The semantics of $\operatorname{\mbox{ACRA}}$s are also specified using configurations.
The initial configuration $\gamma_{0}=\tuple{q_{0},\autobox{\mathit{val}}_{0}}$
maps all registers to $0$. If the machine is in a configuration $\gamma=\tuple{q,\autobox{\mathit{val}}}$,
and reads a symbol $a$, then it transitions to the configuration
$\gamma^{\prime}=\tuple{q^{\prime},\autobox{\mathit{val}}^{\prime}}$,
written as $\gamma\to^{a}\gamma^{\prime}$, where
\begin{enumerate}
\item $q^{\prime}=\funcapptrad{\delta}{q,a}$, and
\item for each register $u$, if $\funcapptrad{\mu}{q,a,u}=\tuple{v,d}$,
then $\funcapptrad{\autobox{\mathit{val}}^{\prime}}u=\funcapptrad{\autobox{\mathit{val}}}v+d$.
\end{enumerate}
We then define the function $\interp M$ computed by $M$ as follows.
On input $\sigma\in\kstar{\Sigma}$, if $\gamma_{0}\to^{\sigma}\tuple{q_{f},\autobox{\mathit{val}}_{f}}$,
and $q_{f}\in F$ with $\funcapptrad{\nu}{q_{f}}=\tuple{v,d}$, then $\funcapptrad{\interp M}{\sigma}=\funcapptrad{\autobox{\mathit{val}}_{f}}v+d$.
Otherwise, $\funcapptrad{\interp M}{\sigma}=\bot$.
\begin{example}
\label{ex:Combinators-to-RegFuns:ACRA} In figure \ref{fig:Combinators-to-RegFuns:ACRA},
we present an $\operatorname{\mbox{ACRA}}$ which computes the function $\autobox{\mathit{coffee}}$ described
in example \ref{ex:Combinators:Iteration:Coffee}. In the state $q_{\lnot S}$,
the value in register $x$ tracks how much the customer owes the establishment
if he does not fill out a survey before the end of the month, and
the value in register $y$ is the amount he should pay otherwise.
\begin{figure}
\begin{centering}
\pgfmathwidth{"$\quoset{C}{x := x + 2}$"}
\def\pgfmathresult{\pgfmathresult}
\pgfmathparse{2.0 * \pgfmathresult}
\global\edef\pgfmathresult{\pgfmathresult}
\begin{tikzpicture}[node distance=\pgfmathresult pt]
\node [state with output, initial] (qnS) {$q_{\lnot S}$ \nodepart{lower} $x$};
\node [state with output, right=of qnS] (qS) {$q_{S}$ \nodepart{lower} $x$};
\path [->] (qnS) edge [loop above] node {$\quoset C {\begin{array}{c} x := x + 2\\ y := y + 1\end{array}}$} (qnS);
\path [->] (qnS) edge [bend left, above] node {$\quoset S {x := y}$} (qS);
\path [->] (qnS) edge [loop below] node {$\quoset \# {y := x}$} (qnS);
\path [->] (qS) edge [loop above] node {$\quoset C {x := x + 1}$} (qS);
\path [->] (qS) edge [loop below] node {$S$} (qS);
\path [->] (qS) edge [bend left, below] node {$\quoset \# {y := x}$} (qnS);
\end{tikzpicture}
\par\end{centering}
\caption{\label{fig:Combinators-to-RegFuns:ACRA} $\operatorname{\mbox{ACRA}}$ computing $\autobox{\mathit{coffee}}$.}
\end{figure}
\end{example}
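For a concrete run, consider the purchase history $CCSC\#C$: two
cups, a survey, a third cup, the end of the month, and one cup in
the new month. Writing configurations as $\tuple{q,x,y}$:
\[
\tuple{q_{\lnot S},0,0}\to^{C}\tuple{q_{\lnot S},2,1}\to^{C}\tuple{q_{\lnot S},4,2}\to^{S}\tuple{q_{S},2,2}\to^{C}\tuple{q_{S},3,2}\to^{\#}\tuple{q_{\lnot S},3,3}\to^{C}\tuple{q_{\lnot S},5,4}.
\]
The output is $x=5$: three cups at the discounted rate of $\$1$
in the surveyed month, plus one cup at $\$2$ in the new, so-far-unsurveyed
month.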
\subsection{Regular look-ahead \label{sub:Combinators-to-RegFuns:RLA}}
An important property of regular functions is that they are closed
under \emph{regular look-ahead} \cite{SST-POPL}: a $\operatorname{\mbox{CCRA}}$ can
make transitions based not simply on the next symbol of the input,
but on regular properties of the as-yet-unseen suffix. To formalize
this, we introduce the notion of a \emph{look-ahead labelling}. Let
$\sigma=\strcat{\sigma_{1}}{\strcat[\ldots]{\sigma_{2}}{\sigma_{n}}}\in\kstar{\Sigma}$
be a string, and $A=\tuple{Q,\Sigma,\delta,q_{0}}$ be a DFA over
$\Sigma$. Starting in state $q_{0}$, and reading $\sigma$ in reverse,
say $A$ visits the sequence of states $q_{0}\to^{\sigma_{n}}q_{1}\to^{\sigma_{n-1}}q_{2}\to^{\sigma_{n-2}}\cdots\to^{\sigma_{1}}q_{n}$.
Then, the state of $A$ at position $i$, $q_{i}$ determines a regular
property of the suffix $\sigma_{n-i+1}\sigma_{n-i+2}\ldots\sigma_{n}$.
We term the string of states $q_{n}q_{n-1}\ldots q_{0}$ the labelling
of $\sigma$ by the \emph{look-ahead automaton $A$}.
\begin{prop}
\label{prop:Combinators-to-RegFuns:RLA-Closure} Let $A$ be a look-ahead
automaton over $\Sigma$, and let $M$ be a $\operatorname{\mbox{CCRA}}$ over labellings
in $\kstar Q$. Then, there is a $\operatorname{\mbox{CCRA}}$ machine $M^{\prime}$
over $\Sigma$, such that for every $\sigma\in\kstar{\Sigma}$, $\funcapptrad{\interp{M^{\prime}}}{\sigma}=\funcapptrad{\interp M}{\funcapptrad{\autobox{\mathit{lab}}}{\sigma}}$
where $\funcapptrad{\autobox{\mathit{lab}}}{\sigma}$ is the labelling
of $\sigma$ by $A$.\end{prop}
\subsection{From function expressions to cost register automata \label{sub:Combinators-to-RegFuns:Thm}}
\begin{thm}
\label{thm:Combinators-to-RegFuns:Thm} Every cost function expressible
using the base functions combined using the $\vartriangleright$, $+$,
$\oplus$, $\overleftarrow{\splitsumop}$, $\sum$, $\overleftarrow{\itersumop}$, input
reverse, composition, chained sum, and left-chained sum combinators
is regular.
\end{thm}
The proof is by induction on the structure of the function expression.
We now prove each case as a separate lemma; together, these lemmas
establish the theorem.
\begin{lem}
\label{lem:Combinators-to-RegFuns:Base} For all regular languages
$L\subseteq\kstar{\Sigma}$, and $d\in\mathbb{D}$, $\const Ld$ is a regular
function.\end{lem}
\begin{IEEEproof}
Consider the DFA $A=\tuple{Q,\Sigma,\delta,q_{0},F}$ accepting $L$,
and construct the machine $M=\tuple{Q,\Sigma,\emptyset,\delta,\mu,q_{0},F,\nu}$,
where $\funcapptrad{\nu}q=d$, for all $q\in F$. This machine has
the same state space as $A$, but does not maintain any registers.
In every final state, the machine outputs the constant $d\in\mathbb{D}$.
The domain of the register update function $\mu$ is empty, and so
we do not specify it. Clearly, $\funcapptrad{\interp M}{\sigma}=\funcapptrad{\const Ld}{\sigma}=d$
for each $\sigma\in L$ (and both equal $\bot$ otherwise), and it follows that $\const Ld$ is a regular function.\end{IEEEproof}
\begin{lem}
\label{lem:Combinators-to-RegFuns:ChoiceRepSum} Whenever $f$ and
$g$ are regular functions, $\choice fg$ and $\repsum fg$ are also
regular.\end{lem}
\begin{IEEEproof}
Let $f$ and $g$ be computed by the $\operatorname{\mbox{CCRA}}$s $M_{f}=(Q_{f},\Sigma,V_{f},\delta_{f},\mu_{f},q_{0f},F_{f},\nu_{f})$
and $M_{g}=(Q_{g},\Sigma,V_{g},\delta_{g},\mu_{g},q_{0g},F_{g},\nu_{g})$
respectively. We use the product construction to create the machines
$M_{\choice fg}$ and $M_{\repsum fg}$ that compute $\choice fg$
and $\repsum fg$ respectively. The idea is to run both machines in
parallel, and in the case of $M_{\choice fg}$, to produce the output
according to which machine is in an accepting state. In $M_{\repsum fg}$, we output
only if both machines are accepting, and then output the sum of the
outputs of both machines.
Assume, without loss of generality, that $\intersection{V_{f}}{V_{g}}=\emptyset$.
Define $M_{\choice fg}=(\cart{Q_{f}}{Q_{g}},\Sigma,\union{V_{f}}{V_{g}},\delta,\mu,(q_{0f},q_{0g}),F_{\choice fg},\nu_{\choice fg})$
and $M_{\repsum fg}=(\cart{Q_{f}}{Q_{g}},\Sigma,\union{V_{f}}{V_{g}},\delta,\mu,(q_{0f},q_{0g}),F_{\repsum fg},\nu_{\repsum fg})$,
where
\begin{enumerate}
\item for each $q_{1}$, $q_{2}$ and $a$, $\delta((q_{1},q_{2}),a)=(\delta_{f}(q_{1},a),\delta_{g}(q_{2},a))$,
\item if $v\in V_{f}$, then $\funcapptrad{\mu}{\tuple{q_{1},q_{2}},a,v}=\funcapptrad{\mu_{f}}{q_{1},a,v}$,
and otherwise, $\funcapptrad{\mu}{\tuple{q_{1},q_{2}},a,v}=\funcapptrad{\mu_{g}}{q_{2},a,v}$,
\item $F_{\choice fg}=\union{\cart{F_{f}}{Q_{g}}}{\cart{Q_{f}}{F_{g}}}$,
and $F_{\repsum fg}=\cart{F_{f}}{F_{g}}$,
\item for all $\tuple{q_{1},q_{2}}\in F_{\choice fg}$, if $q_{1}\in F_{f}$,
then $\funcapptrad{\nu_{\choice fg}}{q_{1},q_{2}}=\funcapptrad{\nu_{f}}{q_{1}}$,
and otherwise $\funcapptrad{\nu_{\choice fg}}{q_{1},q_{2}}=\funcapptrad{\nu_{g}}{q_{2}}$,
and
\item for all $\tuple{q_{1},q_{2}}\in F_{\repsum fg}$, $\funcapptrad{\nu_{\repsum fg}}{q_{1},q_{2}}=\funcapptrad{\nu_{f}}{q_{1}}+\funcapptrad{\nu_{g}}{q_{2}}$.
\end{enumerate}
Since the sets of registers are disjoint, observe that the register
updates and output functions just defined are copyless. It follows
that $M_{\choice fg}$ and $M_{\repsum fg}$ compute $\choice fg$
and $\repsum fg$ respectively.\end{IEEEproof}
\begin{lem}
\label{lem:Combinators-to-RegFuns:SplitSum} Whenever $f$ and $g$
are regular functions, $\splitsum fg$ and $\lsplitsum fg$ are also
regular.\end{lem}
\begin{IEEEproof}
Let $f$ and $g$ be computed by the $\operatorname{\mbox{CCRA}}$s $M_{f}=(Q_{f},\Sigma,V_{f},\delta_{f},\mu_{f},q_{0f},F_{f},\nu_{f})$
and $M_{g}=(Q_{g},\Sigma,V_{g},\delta_{g},\mu_{g},q_{0g},F_{g},\nu_{g})$
respectively. We recall that the domain $L\subseteq\kstar{\Sigma}$
over which a regular function is defined is a regular language. Let
$L_{f}$ and $L_{g}$ be the domains of $f$ and $g$ respectively.
The idea is to use regular lookahead and execute $M_{f}$ on the prefix
$\sigma_{1}\in L_{f}$, and when the lookahead automaton indicates
that the suffix $\sigma_{2}\in L_{g}$, we switch to executing $M_{g}$,
and combine the results in the output function.
Let $A_{1}$ be a lookahead automaton with state space $\union{\Sigma}{\roset{q_{01}}}$,
so that the state of $A_{1}$ indicates the next symbol of the input.
Let $A_{2}$ be a lookahead automaton which accepts strings $\sigma$
such that $\strrev{\sigma}\in L_{g}$, and let $F_{2}$ be the set
of its accepting states. The combined lookahead automaton is the product
$\cart{A_{1}}{A_{2}}$: the state $\tuple{a,q}$ of this product indicates
the next symbol of the input and, via whether $q\in F_{2}$, whether
the remaining suffix lies in $L_{g}$.
Let $A_{3}$ (with accepting states $F_{3}$) be a DFA, which on input
$\sigma$, determines whether $\sigma$ can be unambiguously split
as $\sigma=\sigma_{1}\sigma_{2}$, with $\sigma_{1}\in L_{f}$ and
$\sigma_{2}\in L_{g}$; let $Q_{3}$ denote its state space. Construct the machine $M=\brtuple{\cart{\p{\union{Q_{f}}{Q_{g}}}}{Q_{3}},\cart{Q_{1}}{Q_{2}},\union{V_{f}}{\union{V_{g}}{\roset{\autobox{\mathit{total}}}}},\delta,\mu,\brtuple{q_{0f},q_{03}},\cart{F_{g}}{F_{3}},\nu}$,
where $\delta$, $\mu$, and $\nu$ operate as follows:
\begin{enumerate}
\item In a state $\tuple{q_{1},q_{3}}\in\cart{Q_{f}}{Q_{3}}$, on reading
the input symbol $\tuple{a,q_{l}}$, where $q_{1}\notin F_{f}$ or
$q_{l}\notin F_{2}$, the machine transitions to $\brtuple{\funcapptrad{\delta_{f}}{q_{1},a},\funcapptrad{\delta_{3}}{q_{3},a}}$.
The registers of $M_{f}$ are updated according to $\mu_{f}$, and
the other registers are left unchanged.
\item In a state $\tuple{q_{1},q_{3}}\in\cart{Q_{f}}{Q_{3}}$, on reading
the input symbol $\tuple{a,q_{l}}$, where $q_{1}\in F_{f}$ and $q_{l}\in F_{2}$,
the machine transitions to $\brtuple{q_{0g},\funcapptrad{\delta_{3}}{q_{3},a}}$.
The machine stores the output of $M_{f}$ in the register $\autobox{\mathit{total}}$,
and the other registers are left unchanged.
\item In the state $\tuple{q_{2},q_{3}}\in\cart{Q_{g}}{Q_{3}}$, on reading
the input symbol $\tuple{a,q_{l}}$, the machine transitions to $\brtuple{\funcapptrad{\delta_{g}}{q_{2},a},\funcapptrad{\delta_{3}}{q_{3},a}}$.
The registers of $M_{g}$ are updated according to $\mu_{g}$, and
the other registers are left unchanged.
\item In the final state $\tuple{q_{2},q_{3f}}\in\cart{F_{g}}{F_{3}}$,
the machine outputs the value $\autobox{\mathit{total}}+\funcapptrad{\nu_{g}}{q_{2}}$.
\end{enumerate}
The machine $M$ just constructed computes the function $\splitsum fg$
using regular lookahead, and it follows that $\splitsum fg$ is regular.
Similarly, it can be shown that $\lsplitsum fg$ is also regular.
\end{IEEEproof}
Along similar lines, we have:
\begin{lem}
\label{lem:Combinators-to-RegFuns:IterSum} Whenever $f$ is a regular
function, $\itersum f$ and $\litersum f$ are also regular.\end{lem}
\begin{IEEEproof}
The main differences between this and the construction of lemma \ref{lem:Combinators-to-RegFuns:SplitSum}
are the following: the state space $Q$ of $M$ is defined as $Q=\cart{Q_{f}}{Q_{3}}$,
since there is only one $\operatorname{\mbox{CCRA}}$ $M_{f}$. The set of registers
is $V=\union{V_{f}}{\roset{\autobox{\mathit{total}}}}$, and the accepting
states $F=\cart{F_{f}}{F_{3}}$.
In a state $\tuple{q_{1},q_{3}}\in\cart{Q_{f}}{Q_{3}}$, on reading the
input symbol $\tuple{a,q_{l}}$, where $q_{1}\in F_{f}$, and $q_{l}\in F_{2}$,
the machine transitions back to $\tuple{q_{0f},\funcapptrad{\delta_{3}}{q_{3},a}}$.
The machine appends the output of $M_{f}$ to the right of the register
$\autobox{\mathit{total}}$, and all other registers are cleared to
$0$.
The machine thus constructed computes $\itersum f$. If the machine
were to append the output of $M_{f}$ to the left of $\autobox{\mathit{total}}$,
then it would compute $\litersum f$. Thus, both function expressions
are regular.
\end{IEEEproof}
The next lemma was first proved in \cite{CRA-LICS}. It can also be
seen as a consequence of lemma \ref{lem:Combinators-to-RegFuns:Comp},
because for all $\sigma$, $\funcapptrad{\funrev f}{\sigma}=\funcapptrad{\comp f{\autobox{\mathit{reverse}}}}{\sigma}$,
where $\autobox{\mathit{reverse}}=\litersum{\vartriangleright\ruset{\const aa}{a\in\Sigma}}$
is the function which reverses its input.
\begin{lem}
\label{lem:Combinators-to-RegFuns:FunRev} Whenever $f$ is a regular
function, so is $\funrev f$.
\end{lem}
\begin{lem}
\label{lem:Combinators-to-RegFuns:Comp} Whenever $\func f{\kstar{\Gamma}}{\mathbb{D}}$
and $\func g{\kstar{\Sigma}}{\kstar{\Gamma}}$ are regular functions,
$\comp fg$ is also a regular function.\end{lem}
\begin{IEEEproof}
Since $\operatorname{\mbox{SST}}$s are closed under composition, if $\func f{\kstar{\Gamma}}{\mathbb{D}}$
and $\func g{\kstar{\Sigma}}{\kstar{\Gamma}}$ are regular functions,
it follows that $\comp fg$ is also a regular function.\end{IEEEproof}
\begin{lem}
\label{lem:Combinators-to-RegFuns:ChainedSum} Whenever $f$ is a
regular function, and $L\subseteq\kstar{\Sigma}$ is a regular language,
$\chainsum fL$ and $\lchainsum fL$ are also regular functions.\end{lem}
\begin{IEEEproof}
From proposition \ref{prop:Combinators:Composition:Comp} and lemma
\ref{lem:Combinators-to-RegFuns:Comp}.
\end{IEEEproof}
This completes the proof of theorem \ref{thm:Combinators-to-RegFuns:Thm}.
\section{Completeness of Combinators for Commutative Monoids \label{sec:Comm}}
In this section, we show that if $\mathbb{D}$ is a commutative monoid, then
constant functions combined using the choice, split sum and iterated
sum operators are expressively equivalent to the class of regular functions.
Consider the $\operatorname{\mbox{ACRA}}$ $M$ shown in figure \ref{fig:Comm:ACRA}. The
idea is to view $M$ as a non-deterministic automaton $A$ over the
set of vertices $\cart QV$: for every path $\pi=q_{0}\to^{\sigma_{1}}q_{1}\to^{\sigma_{2}}\cdots\to^{\sigma_{n}}q_{n}$
through the $\operatorname{\mbox{ACRA}}$, there is a corresponding path through $A$,
$\pi_{A}=\tuple{q_{0},v_{0}}\to^{\sigma_{1}}\tuple{q_{1},v_{1}}\to^{\sigma_{2}}\cdots\to^{\sigma_{n}}\tuple{q_{n},v_{n}}$,
where $v_{n}$ is the register which is output in the final state
$q_{n}$, and at each position $i$, $v_{i}$ indicates the register
whose current value flows into the final value of $v_{n}$. Observe
that this NFA $A$ is unambiguous -- for every string $\sigma$ that
is accepted by $A$, there is a unique accepting path. Furthermore,
the final value of register $v_{n}$ is simply the sum of the increments
accumulated along each transition of this accepting path. Therefore,
if the label $\tuple{q,v}\to^{a_{d}}\tuple{q^{\prime},v^{\prime}}$
along each edge is also annotated with the increment value $d$, so
that the update expression reads $\funcapptrad{\mu}{q,a,v^{\prime}}=v+d$,
then the regular expression for the language accepted $A$ -- $\kstar{\left(a_{1}+b_{0}\right)}+\kstar{\left(a_{1}+b_{1}+e_{1}\right)}e_{1}\kstar{\left(a_{1}+b_{0}\right)}$
-- can be alternatively viewed as a function expression for $\interp M$
-- $\choice{\itersum{\p{\const b0\vartriangleright\const a1}}}{\p{\itersum{\p{\const b1\vartriangleright\const a1\vartriangleright\const e1}}\oplus\const e1\oplus\itersum{\p{\const b0\vartriangleright\const a1}}}}$.
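As a sanity check, on the input $bea$ the machine of figure \ref{fig:Comm:ACRA}
outputs $x=3$, counting the $b$ (which occurs before the final $e$),
the $e$, and the $a$. The second branch of the function expression
computes the same value:
\[
\funcapptrad{\itersum{\p{\const b1\vartriangleright\const a1\vartriangleright\const e1}}}b+\funcapptrad{\const e1}e+\funcapptrad{\itersum{\p{\const b0\vartriangleright\const a1}}}a=1+1+1=3.
\]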
\begin{thm}
\label{thm:Comm} If $\tuple{\mathbb{D},+,0}$ is a commutative monoid, then
every regular function $\func f{\kstar{\Sigma}}{\mathbb{D}}$ can be expressed
using the base functions combined with the choice, split sum and iterated
sum operators.
\end{thm}
\begin{IEEEproof}
We need to show that an arbitrary $\operatorname{\mbox{ACRA}}$ $M=\tuple{Q,\Sigma,V,\delta,\mu,q_{0},F,\nu}$
can be expressed using these combinators.
We construct an NFA $A$ with states $\cart QV$ and alphabet $\Gamma$,
the finite subset of $\cart{\Sigma}{\mathbb{D}}$ consisting of those
elements $\tuple{a,d}$ such that for some state $q\in Q$ and two
registers $v,v^{\prime}\in V$ there is an update $\funcapptrad{\mu}{q,a,v}=v^{\prime}+d$.
We will denote $\tuple{a,d}$ as $a_{d}$.
We define the transition relation as follows: $\tuple{q^{\prime},v^{\prime}}\in\funcapptrad{\delta^{\prime}}{\tuple{q,v},a_{d}}$
iff $\funcapptrad{\mu}{q,a,v^{\prime}}=v+d$ and $\funcapptrad{\delta}{q,a}=q^{\prime}$.
Assume without loss of generality that our output function takes values
in $V$. The start states of the NFA $A$ are all states in $\cart{\roset{q_{0}}}V$
and the final states are $\ruset{\tuple{q,v}}{\funcapptrad{\nu}q=v}$.
Consider any unambiguous regular expression of strings accepted by
the NFA $A$: interpret regular expression union $\cup$ as $\vartriangleright$,
regular expression concatenation $\cdot$ as $\oplus$, Kleene-{*}
as the iterated sum $\itersum f$ and input symbols $a_{d}$ as the
constant functions $\const ad$.
It can be shown by an inductive argument that the regular expression
corresponding to paths in our NFA from $\tuple{q,v}$ to $\tuple{q^{\prime},v^{\prime}}$,
when interpreted as a regular function $f$, is defined exactly on
those $\sigma\in\kstar{\Sigma}$ which label a path from
$q$ to $q^{\prime}$ with the effect that $v$ flows into $v^{\prime}$.
Moreover, the total effect of this path $\sigma$ is $v^{\prime}:=v+\funcapptrad f{\sigma}$
for all of these $\sigma$. It follows that a function expression
for $\interp M$ can be obtained as the choice $\vartriangleright$ of the unambiguous
regular expressions from the states in $\cart{\roset{q_{0}}}V$ to
the states in $\ruset{\tuple{q,v}}{\funcapptrad{\nu}q=v}$.\end{IEEEproof}
\begin{figure*}
\begin{centering}
\subfloat[\label{fig:Comm:ACRA}]{\begin{centering}
\begin{tikzpicture}
\node [state with output, initial] (q0) {$q_{0}$ \nodepart{lower} $x$};
\path [->] (q0)
edge [loop right] node [right]
{$\quoset a{\begin{array}{rcl}x & := & x + 1\\y & := & y + 1\end{array}}$}
(q0);
\path [->] (q0)
edge [loop above] node [above]
{$\quoset b{\begin{array}{rcl}x & := & x\\y & := & y + 1\end{array}}$}
(q0);
\path [->] (q0)
edge [loop below] node [below]
{$\quoset e{\begin{array}{rcl}x & := & y + 1\\y & := & y + 1\end{array}}$}
(q0);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Comm:NFA}]{\begin{centering}
\begin{tikzpicture}
\node [state, accepting] (q0x) {$\tuple{q_{0}, x}$};
\node [state, right=of q0x] (q0y) {$\tuple{q_{0}, y}$};
\path [->] (q0x) edge [loop above] node [above] {$a_{1}$} (q0x);
\path [->] (q0x) edge [loop left] node [left] {$b_{0}$} (q0x);
\path [->] (q0y) edge [loop above] node [above] {$a_{1}$} (q0y);
\path [->] (q0y) edge [loop right] node [right] {$b_{1}$} (q0y);
\path [->] (q0y) edge [loop below] node [below] {$e_{1}$} (q0y);
\path [->] (q0y) edge node [above] {$e_{1}$} (q0x);
\end{tikzpicture}
\par\end{centering}
}
\par\end{centering}
\caption{\label{fig:Comm} Translating an $\operatorname{\mbox{ACRA}}$ to the commutative calculus.
The machine operates over the alphabet $\Sigma=\roset{a,b,e}$, and
when given a string $\sigma=\sigma_{1}e\sigma_{2}e\ldots\sigma_{k}$,
where each $\sigma_{i}\in\kstar{\roset{a,b}}$, it counts the number
of $a$-s and $e$-s, but only counts those $b$-s which occur before
the final $e$. Figure \ref{fig:Comm:NFA} is the NFA that results
from the construction of theorem \ref{thm:Comm}. Both states in the
NFA are initial.}
\end{figure*}
\section{Completeness of Combinators for General Monoids \label{sec:Noncomm}}
In this section, we describe an algorithm to express every regular
function $\func f{\kstar{\Sigma}}{\mathbb{D}}$ as a function expression.
To simplify the presentation, we prove theorem \ref{thm:Noncomm}
only for the case of string transductions, i.e. where $\mathbb{D}=\kstar{\Gamma}$,
for some finite output alphabet $\Gamma$. Note that this is sufficient
to establish the theorem in its full generality: let $\Gamma_{\mathbb{D}}\subseteq\mathbb{D}$
be the (necessarily finite) set of all constants appearing in the
textual description of the given $\operatorname{\mbox{CCRA}}$ $M$. $M$ can be alternatively viewed as an
$\operatorname{\mbox{SST}}$ mapping input strings in $\kstar{\Sigma}$ to output strings
in $\kstar{\Gamma_{\mathbb{D}}}$. The restricted version of theorem \ref{thm:Noncomm}
can then be used to convert this $\operatorname{\mbox{SST}}$ to function expression form,
which when interpreted over the original domain $\mathbb{D}$ represents $\interp M$.
\begin{thm}
\label{thm:Noncomm} For an arbitrary finite alphabet $\Sigma$ and
monoid $\tuple{\mathbb{D},+,0}$, every regular function $\func f{\kstar{\Sigma}}{\mathbb{D}}$
can be expressed using the base functions combined with choice, sum,
split sum, iterated sum, chained sum, and their left-additive versions.
\end{thm}
\global\long\def\pareg#1#2#3{\funcapptrad{r^{\left(#1\right)}}{#2,#3}}
\global\long\def\paregv#1#2#3#4{\funcapptrad{\vector R_{#2}^{\left(#1\right)}}{#3,#4}}
\global\long\def\paregp#1#2#3#4{\funcapptrad{\vector R_{#2}^{\left(#1\right)}}{#3,#4}}
\global\long\def\shapecat#1#2{\strcat[\cdot]{#1}{#2}}
\global\long\def\support#1{\funcapptrad{\autobox{\mathit{supp}}}{#1}}
\subsection{From DFAs to regular expressions: A review \label{sub:Noncomm:DFA-Regex}}
The procedure to convert a $\operatorname{\mbox{CCRA}}$ into a function expression
is similar to the corresponding algorithm \cite{Sipser-Intro} that
transforms a DFA $A=\brtuple{Q,\Sigma,\delta,q_{0},F}$ into an equivalent
regular expression; we will also use this algorithm in our correctness
proof -- hence this review.
Let $Q=\rosetbr{q_{1},q_{2},\ldots,q_{n}}$. For each pair of states
$q,q^{\prime}\in Q$, and for $i\in\mathbb{N}$, $0\leq i\leq n$, $\pareg iq{q^{\prime}}$
is a regular expression for the set of strings $\sigma$ which take
$A$ from $q$ to $q^{\prime}$ while passing only through the intermediate
states $\roset{q_{1},q_{2},\ldots,q_{i}}$.
This can be inductively constructed as follows:
\begin{enumerate}
\item $\pareg 0q{q^{\prime}}=\ruset{a\in\union{\Sigma}{\roset{\epsilon}}}{q\to^{a}q^{\prime}}$.
\item $\pareg{i+1}q{q^{\prime}}=\pareg iq{q^{\prime}}+\pareg iq{q_{i+1}}\kstar{\pareg i{q_{i+1}}{q_{i+1}}}\pareg i{q_{i+1}}{q^{\prime}}$.
\end{enumerate}
The language $L$ accepted by $A$ is then given by the regular expression
$\sum_{q_{f}\in F}\pareg n{q_{0}}{q_{f}}$. Note that the regular
expression thus obtained is also unambiguous.
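As a small worked instance of this recursion, consider (for illustration)
the DFA over $\Sigma=\roset{a,b}$ with $Q=\roset{q_{1},q_{2}}$,
start state $q_{1}$, $F=\roset{q_{2}}$, and transitions $q_{1}\to^{a}q_{1}$,
$q_{1}\to^{b}q_{2}$, and $q_{2}\to^{b}q_{2}$. Then $\pareg 0{q_{1}}{q_{1}}=\epsilon+a$,
$\pareg 0{q_{1}}{q_{2}}=b$, $\pareg 0{q_{2}}{q_{2}}=\epsilon+b$,
and $\pareg 0{q_{2}}{q_{1}}=\emptyset$, so that
\[
\pareg 1{q_{1}}{q_{2}}=b+\left(\epsilon+a\right)\kstar{\left(\epsilon+a\right)}b\equiv\kstar ab,\qquad\pareg 1{q_{2}}{q_{2}}=\epsilon+b,
\]
and $\pareg 2{q_{1}}{q_{2}}=\pareg 1{q_{1}}{q_{2}}+\pareg 1{q_{1}}{q_{2}}\kstar{\pareg 1{q_{2}}{q_{2}}}\pareg 1{q_{2}}{q_{2}}\equiv\kstar ab\kstar b$,
which is indeed the language accepted.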
\subsection{A theory of shapes \label{sub:Noncomm:Shapes}}
In a $\operatorname{\mbox{CCRA}}$ $M$, the effect of processing a string $\sigma$
starting from a state $q$ can be summarized by the pair $\brtuple{\fabr{\delta}{q,\sigma},\fabr{\mu}{q,\sigma}}$
-- $\funcapptrad{\delta}{q,\sigma}$ is the state of the machine after
processing $\sigma$, and the partial application of the register
update function $\func{\fabr{\mu}{q,\sigma}}V{\kstar{\p{\union V{\Gamma}}}}$
expresses the final values of the registers in terms of their initial
ones.
Consider the expression $\funcapptrad{\mu}{q,\sigma,u}=aubcvd$, where
$u,v\in V$ are registers, and $a,b,c,d\in\kstar{\Gamma}$ are string
constants. Because of the associative property, any update expression
can be equivalently represented -- as in $\funcapptrad{\mu}{q,\sigma,u}=aub^{\prime}vd$
where $b^{\prime}=bc$ -- so that there is at most one string constant
between consecutive registers in this update expression. The summary
for $\sigma$ therefore contains the shape $\func{S_{\sigma}}V{\kstar V}$
indicating the sequence of registers in each update expression and,
for each register $v$ and each position $k$ from $1,2,\ldots,\strlen{\funcapptrad{S_{\sigma}}v}+1$,
a string $\gamma_{k}\in\kstar{\Gamma}$ indicating the $k^{\mbox{th}}$
string constant appearing in $\funcapptrad{\mu}{q,\sigma,v}$.
\begin{defn}[Shape of a path]
\label{defn:Noncomm:Shapes} A \emph{shape} $\func SV{\kstar V}$
is a copyless function over a finite set of registers $V$. Let $\pi=q_{1}\to^{\sigma_{1}}q_{2}\to^{\sigma_{2}}\to\cdots\to^{\sigma_{n}}q_{n+1}$
be a path through a $\operatorname{\mbox{CCRA}}$ $M$. The \emph{shape of the path
$\pi$} is the function $\func{S_{\pi}}V{\kstar V}$ such that for
all registers $v\in V$, $\funcapptrad{S_{\pi}}v$ is the string
projection onto $V$ of the register update expression $\funcapptrad{\mu}{q_{1},\sigma,v}$:
$\funcapptrad{S_{\pi}}v=\funcapptrad{\pi_{V}}{\funcapptrad{\mu}{q_{1},\sigma,v}}$.
\end{defn}
We refer to a string constant in the update expression as a \emph{patch}
in the corresponding shape. Because of the copyless restriction on
the register update function, the set of all shapes over $V$ is finite.
The following is an immediate consequence of the space of shapes being
finite:
\begin{prop}
\label{prop:Noncomm:Shapes:Regular} Let $q,q^{\prime}\in Q$ be two
states in a $\operatorname{\mbox{CCRA}}$ $M$, and $S$ be a shape. The set of all
strings from $q$ to $q^{\prime}$ in $M$ with shape $S$ is regular.\end{prop}
\begin{example}
It is helpful to visualize shapes as bipartite graphs (figure \ref{fig:Noncomm:Shapes}),
though this representation omits some important information about
the shape. Since the shape of a path indicates the pattern in which
register values flow during computation, an edge $u\to v$ can be
informally read as ``The value of $u$ flows into $v$''. Because
of the copyless restriction, every node on the left is connected to
at most one node on the right.
\begin{figure*}
\begin{centering}
\subfloat[\label{fig:Noncomm:Shapes:q1a} $q_{1}\to^{a}q_{1}$, $S_{\bot}$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (ox);
\path [->] (iy) edge (oy);
\path [->] (iz) edge (oz);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Noncomm:Shapes:q1b} $q_{1}\to^{b}q_{2}$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (ox);
\path [->] (iy) edge (ox);
\path [->] (iz) edge (oy);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Noncomm:Shapes:q1bb} $q_{1}\to^{b}q_{2}\to^{b}q_{2}$,
$S_{\top}$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (ox);
\path [->] (iy) edge (ox);
\path [->] (iz) edge (ox);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Noncomm:Shapes:Weird} Shape of the update $x:=yz$, $y:=x$,
$z:=\epsilon$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (oy);
\path [->] (iy) edge (ox);
\path [->] (iz) edge (ox);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Noncomm:Shapes:NoncommS1} Shape $S_{1}$ of the update
$x:=x$, $y:=yz$, $z:=\epsilon$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (ox);
\path [->] (iy) edge (oy);
\path [->] (iz) edge (oy);
\end{tikzpicture}
\par\end{centering}
} \hfill{} \subfloat[\label{fig:Noncomm:Shapes:NoncommS2} Shape $S_{2}$ of the update
$x:=xz$, $y:=y$, $z:=\epsilon$.]{\begin{centering}
\begin{tikzpicture}
\node (ix) {$x$};
\node [below=0.6 of ix] (iy) {$y$};
\node [below=0.6 of iy] (iz) {$z$};
\node [right=1 of ix] (ox) {$x$};
\node [right=1 of iy] (oy) {$y$};
\node [right=1 of iz] (oz) {$z$};
\path [->] (ix) edge (ox);
\path [->] (iy) edge (oy);
\path [->] (iz) edge (ox);
\end{tikzpicture}
\par\end{centering}
}
\par\end{centering}
\caption{\label{fig:Noncomm:Shapes} Visualizing shapes as bipartite graphs.
Figures \ref{fig:Noncomm:Shapes:q1a}-\ref{fig:Noncomm:Shapes:q1bb}
describe the shapes of some paths in the earlier $\operatorname{\mbox{SST}}$ example of
figure \ref{fig:Combinators-to-RegFuns:CCRA}.}
\end{figure*}
\end{example}
When two paths are concatenated, their shapes are combined. We define
the \emph{concatenation $\shapecat{S_{1}}{S_{2}}$} of two shapes
$S_{1}$ and $S_{2}$ as follows. For each register $v\in V$, let
$\funcapptrad{S_{2}}v=v_{1}v_{2}\ldots v_{k}$. Then $\funcapptrad{\shapecat{S_{1}}{S_{2}}}v=s_{1}s_{2}\ldots s_{k}$,
where $s_{i}=\funcapptrad{S_{1}}{v_{i}}$. By definition, therefore,
\begin{prop}
\label{prop:Noncomm:Shapes:Concat} Let $\pi_{1}$ and $\pi_{2}$
be two paths through a $\operatorname{\mbox{CCRA}}$ $M$ such that the final state
of $\pi_{1}$ is the same as the initial state of $\pi_{2}$. Then,
for all registers $v$, $\funcapptrad{S_{\pi_{1}\pi_{2}}}v=\funcapptrad{\shapecat{S_{\pi_{1}}}{S_{\pi_{2}}}}v$.
\end{prop}
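For instance, concatenating the shapes $S_{1}$ and $S_{2}$ of figures
\ref{fig:Noncomm:Shapes:NoncommS1} and \ref{fig:Noncomm:Shapes:NoncommS2}:
since $\funcapptrad{S_{2}}x=xz$, we get $\funcapptrad{\shapecat{S_{1}}{S_{2}}}x=\strcat{\funcapptrad{S_{1}}x}{\funcapptrad{S_{1}}z}=x$
(as $\funcapptrad{S_{1}}z=\epsilon$), and since $\funcapptrad{S_{2}}y=y$,
we get $\funcapptrad{\shapecat{S_{1}}{S_{2}}}y=\funcapptrad{S_{1}}y=yz$;
finally, $\funcapptrad{\shapecat{S_{1}}{S_{2}}}z=\epsilon$.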
\subsection{Proof outline \label{sub:Noncomm:Outline}}
To summarize the effect of a set of paths with the same shape, we
introduce the notion of an expression vector -- for a shape $S$,
an \emph{expression vector} $\vector A$ is a collection of function
expressions, such that for each register $v$, and for each patch
$k$ in $\funcapptrad Sv$, there is a corresponding function expression
$\func{\vector A_{v,k}}{\kstar{\Sigma}}{\kstar{\Gamma}}$. An expression
vector $\vector A$ \emph{summarizes a set of paths} $L$ with shape
$S$, if for each path $\pi\in L$ with initial state $q$, and input
string $\sigma$, and for each register $v$, in the update expression
$\funcapptrad{\mu}{q,\sigma,v}$, the constant value $\gamma_{k}\in\kstar{\Gamma}$
at position $k$ is given by $\funcapptrad{\vector A_{v,k}}{\sigma}$.
\begin{example}
Consider the loop $\kstar a$ at the state $q_{1}$ in the $\operatorname{\mbox{SST}}$
of figure \ref{fig:Combinators-to-RegFuns:CCRA}, and some concrete
string $a^{k}$. The effect of this string is to update $x:=xa^{k}$,
$y:=y$, and $z:=zb^{k}$. The shape of this set of paths is the identity
function $\funcapptrad Sv=v$, for all $v$. Define the expression
vector $\vector A$ as follows: $\vector A_{x,1}=\vector A_{y,1}=\vector A_{y,2}=\vector A_{z,1}=\const{\kstar{\Sigma}}{\epsilon}$,
$\vector A_{x,2}=\itersum{\const aa}$, and $\vector A_{z,2}=\itersum{\const ab}$.
Then $\vector A$ summarizes the set of paths $\kstar a$ at the state
$q_{1}$.
\end{example}
The outer loop of our algorithm is an iteration which proceeds in
lock-step with the DFA-to-regular expression translator. In step $i$,
for each pair of states $q,q^{\prime}\in Q$, and shape $S$, we maintain
an expression vector $\paregv iSq{q^{\prime}}$. The invariant maintained
is that $\paregv iSq{q^{\prime}}$ summarizes all paths $\sigma\in\pareg iq{q^{\prime}}$
with shape $S$.
After this iteration is complete, pick a state $q_{f}\in F$, a shape
$S$, and some register $v$. Construct the function expression $f_{S,v}=\repsum{\paregp n{S,v,1}{q_{0}}{q_{f}}}{\repsum{\paregp n{S,v,2}{q_{0}}{q_{f}}}{\repsum{\cdots}{\paregp n{S,v,\left|\funcapptrad Sv\right|+1}{q_{0}}{q_{f}}}}}$.
Because $v$ initially held the empty string $\epsilon$, it follows
that for each path $q_{0}\to^{\sigma}q_{f}$ with shape $S$, the
final value in the register $v$ is given by $\funcapptrad{f_{S,v}}{\sigma}$.
Substituting $f_{S,v}$ for each register $v$ occurring in the output
expression $\funcapptrad{\nu}{q_{f}}$, and combining over all $q_{f}\in F$
and shapes $S$ using the choice operator $\vartriangleright$, we will then have
constructed a function expression equivalent to the given $\operatorname{\mbox{CCRA}}$ $M$.
There are therefore two steps in this construction:
\begin{enumerate}
\item Construct $\paregv 0Sq{q^{\prime}}$, for each pair of states $q,q^{\prime}\in Q$,
and shape $S$.
\item For each $0\leq i<n$, given $\paregv jSq{q^{\prime}}$ for all $0\leq j\leq i$,
all shapes $S$, and all pairs of states $q,q^{\prime}\in Q$, construct
$\paregv{i+1}Sq{q^{\prime}}$.
\end{enumerate}
\subsection{Operations on expression vectors \label{sub:Noncomm:EVOps}}
In this subsection, we create a library of basic operations on expression
vectors, including concatenation and union.
\subsubsection{Restricting expression domains \label{sub:Noncomm:EVOps:Restrict}}
Given an expression vector $\vector A$ for a shape $S$, the domain
of the expression vector, written as $\domain{\vector A}$, is defined
as the language $\bigintersection{v,k}{\domain{\vector A_{v,k}}}$,
where $\domain{\vector A_{v,k}}$ is the domain of the component function
expressions. We would want to restrict the component expressions in
a vector so that they all have the same domain -- given a cost function
$\func f{\kstar{\Sigma}}{\kstar{\Gamma}}$ and a language $L\subseteq\kstar{\Sigma}$,
we define the \emph{restriction of $f$ to $L$} as $\restrict fL=\repsum f{\const L{\epsilon}}$.
This is equivalent to saying that $\funcapptrad{\restrict fL}{\sigma}=\funcapptrad f{\sigma}$,
if $\sigma\in L$, and $\funcapptrad{\restrict fL}{\sigma}=\bot$,
otherwise. We extend this to restrict expression vectors $\vector A$
to languages $L$, $\restrict{\vector A}L$, by defining $\left(\restrict{\vector A}L\right)_{v,k}$
as $\restrict{\vector A_{v,k}}L$.
\subsubsection{Shifting expressions \label{sub:Noncomm:EVOps:Shift}}
Given a cost function $f$ and a language $L$, the \emph{left-shifted
function $\lshift fL$} is the function which reads an input string
in $\domain f\cdot L$, and applies $f$ to the prefix and ignores
the suffix, provided the split is unique, i.e. $\lshift fL=\splitsum f{\const L{\epsilon}}$.
Similarly, the \emph{right-shifted function $\rshift fL=\splitsum{\const L{\epsilon}}f$}.
The shift operators can also be extended to expression vectors: $\lshift{\vector A}L$
is defined as $\left(\lshift{\vector A}L\right)_{v,k}=\lshift{\restrict{\vector A_{v,k}}{\domain{\vector A}}}L$,
and $\rshift{\vector A}L$ is defined as $\left(\rshift{\vector A}L\right)_{v,k}=\rshift{\restrict{\vector A_{v,k}}{\domain{\vector A}}}L$.
\subsubsection{Concatenation \label{sub:Noncomm:EVOps:Concat}}
Let $L$ be a set of paths with shape $S$, and $L^{\prime}$ be a
set of paths with shape $S^{\prime}$. Let the expression vectors
$\vector A$ and $\vector B$ summarize paths in $L$ and $L^{\prime}$
respectively. We now construct an expression vector $\vector A\cdot\vector B$
which summarizes unambiguous paths in $L\cdot L^{\prime}$.
Consider a path $\pi\in L\cdot L^{\prime}$ which can be unambiguously
decomposed as $\pi=\pi_{1}\pi_{2}$ with $\pi_{1}\in L$ and $\pi_{2}\in L^{\prime}$.
When applying $\vector A$ (resp. $\vector B$) to this path, we should
shift the expression vector to examine only $\pi_{1}$ (resp. $\pi_{2}$).
Thus, define $\vector A^{\prime}=\lshift{\vector A}{\domain{\vector B}}$,
and $\vector B^{\prime}=\rshift{\vector B}{\domain{\vector A}}$.
Pick a register $v$, and let $v:=f_{1}v_{1}f_{2}v_{2}\ldots v_{k}f_{k+1}$
be the update expression for $v$ in $\vector B^{\prime}$. For each
register $v_{i}$ in the right-hand side, let $v_{i}:=f_{i1}v_{i1}f_{i2}v_{i2}\ldots v_{ik_{i}}f_{ik_{i}+1}$
be the update expression for $v_{i}$ in $\vector A^{\prime}$. View
string concatenation as the function combinator $+$, and substitute
the expression for each $v_{i}$ in $\vector A^{\prime}$ into the
expression for $v$ in $\vector B^{\prime}$. Then, observe that $\left(\vector A\cdot\vector B\right)_{v,k}$
is the $k^{\mbox{th}}$ function expression in the string that results.
\subsubsection{Choice \label{sub:Noncomm:EVOps:Choice}}
Let $\vector A$ and $\vector B$ be expression vectors, both for
some shape $S$. Let $\vector A^{\prime}=\restrict{\vector A}{\domain{\vector A}}$
and $\vector B^{\prime}=\restrict{\vector B}{\domain{\vector B}}$.
Then, the choice $\choice{\vector A}{\vector B}$ is defined as the expression
vector for shape $S$ such that for each register $v$ and patch $k$,
$\left(\choice{\vector A}{\vector B}\right)_{v,k}=\choice{\vector A_{v,k}^{\prime}}{\vector B_{v,k}^{\prime}}$.
\begin{claim}
\label{clm:Noncomm:EVOps:Choice} If $L$ and $L^{\prime}$ are disjoint
sets of paths with the same shape $S$, such that $\vector A$ summarizes
paths in $L$ and $\vector B$ summarizes paths in $L^{\prime}$,
then $\choice{\vector A}{\vector B}$ summarizes paths in $\union L{L^{\prime}}$.
\end{claim}
The notation $\vartriangleright\rosetbr{f_{1},f_{2},\ldots,f_{k}}$ is shorthand
for the expression $\choice{f_{1}}{\choice{f_{2}}{\choice{\cdots}{f_{k}}}}$.
We ensure that when this notation is used, the functions have mutually
disjoint domains, so the order is immaterial.
\subsection{Constructing $\paregv 0Sq{q^{\prime}}$ \label{sub:Noncomm:Rs0}}
For each string $a\in\union{\Sigma}{\roset{\epsilon}}$, and each
pair of states $q,q^{\prime}\in Q$ such that $q\to^{a}q^{\prime}$,
if $S$ is the shape of the update expression of $q\to^{a}q^{\prime}$,
we define $\paregv aSq{q^{\prime}}$ as follows. For each register
$v$ and patch $k$ in $\funcapptrad Sv$, $\paregp a{S,v,k}q{q^{\prime}}=\const a{\gamma_{v,k}}$,
where $\gamma_{v,k}$ is the $k^{\mbox{th}}$ string constant appearing
in the update expression $\funcapptrad{\mu}{q,a,v}$. For all other
$a\in\union{\Sigma}{\roset{\epsilon}}$, $q,q^{\prime}\in Q$, and
shapes $S$, define $\paregv aSq{q^{\prime}}=\bot$. Finally, $\paregv 0Sq{q^{\prime}}=\mbox{ }\vartriangleright\rusetbr{\paregv aSq{q^{\prime}}}{a\in\union{\Sigma}{\roset{\epsilon}}}$.
By construction,
\begin{claim}
\label{clm:Noncomm:Rs0} For each pair of states $q,q^{\prime}\in Q$
and shape $S$, $\paregv 0Sq{q^{\prime}}$ summarizes all paths $\sigma\in\pareg 0q{q^{\prime}}$
from $q$ to $q^{\prime}$ with shape $S$.\end{claim}
\subsection{A total order over the registers \label{sub:Noncomm:RegisterOrder}}
During the iteration step of the construction, we have to provide
function expressions for $\funcapptrad{\vector R_{S}^{\left(i+1\right)}}{q,q^{\prime}}$
in terms of the candidate function expressions at step $i$. Register
values may flow in complicated ways: consider for example the shape
in figure \ref{fig:Noncomm:Shapes:Weird}. The construction of $\funcapptrad{\vector R_{S}^{\left(i+1\right)}}{q,q^{\prime}}$
is greatly simplified if we assume that the shapes under consideration
are idempotent under concatenation.
\begin{defn}
\label{defn:Noncomm:RegisterOrder} Let $V$ be a finite set of registers,
and $\preceq$ be a total order over $V$. We call a shape $S$ over
$V$ \emph{normalized} with respect to $\preceq$ if
\begin{enumerate}
\item for all $u,v\in V$, if $v$ occurs in $\funcapptrad Su$, then $u\preceq v$,
\item for all $u,v\in V$, if $v$ occurs in $\funcapptrad Su$, then $u$
itself occurs in $\funcapptrad Su$, and
\item for all $v\in V$, there exists $u\in V$ such that $v$ occurs in
$\funcapptrad Su$.
\end{enumerate}
A $\operatorname{\mbox{CCRA}}$ $M$ is normalized if the shape of each of its update
expressions is normalized with respect to $\preceq$.
\end{defn}
For example, the shapes in figures \ref{fig:Noncomm:Shapes:q1a},
\ref{fig:Noncomm:Shapes:q1bb}, \ref{fig:Noncomm:Shapes:NoncommS1},
and \ref{fig:Noncomm:Shapes:NoncommS2} are normalized, while \ref{fig:Noncomm:Shapes:q1b}
and \ref{fig:Noncomm:Shapes:Weird} are not. Informally, the first
condition requires that all registers in the $\operatorname{\mbox{CCRA}}$ flow upward,
and the second ensures that shapes are idempotent. Observe that if
the individual transitions in a path are normalized, then the whole
path is itself normalized.
\begin{prop}
\label{prop:Noncomm:RegisterOrder} For every $\operatorname{\mbox{CCRA}}$ $M$, there
is an equivalent normalized $\operatorname{\mbox{CCRA}}$ $M^{\prime}$.
\end{prop}
\begin{IEEEproof}
Let $M=\brtuple{Q,\Sigma,V,\delta,\mu,q_{0},F,\nu}$. Let $V^{\prime}=\rusetbr{x_{i}}{0\leq i\leq\left|V\right|}$
(so that $\left|V^{\prime}\right|=\left|V\right|+1$), and define
the register ordering as $x_{i}\preceq x_{j}$ iff $i\leq j$. $x_{0}$
is a sink register which accumulates all those register values which
are lost during computation. Let $Q^{\prime}$ be the set of all those
pairs $\brtuple{q,f}$, where $q\in Q$ is the current state, and
the permutation $\func fV{V^{\prime}\setminus\rosetbr{x_{0}}}$ is
the register renaming function. For simplicity, let us extend each
register renaming function $f$ to $\arrow{\union V{\Gamma}}{\union{V^{\prime}}{\Gamma}}$
by defining $\funcapptrad f{\gamma}=\gamma$, for $\gamma\in\Gamma$.
We further extend it to $\arrow{\kstar{\left(\union V{\Gamma}\right)}}{\kstar{\left(\union{V^{\prime}}{\Gamma}\right)}}$
by $\fabr f{v_{1}v_{2}\ldots v_{k}}=\funcapptrad f{v_{1}}\funcapptrad f{v_{2}}\ldots\funcapptrad f{v_{k}}$.
Let $F^{\prime}=\rusetbr{\brtuple{q,f}}{q\in F}$, and define the
output function $\nu^{\prime}$ as $\funcapptrad{\nu^{\prime}}{q,f}=\funcapptrad f{\funcapptrad{\nu}q}$.
For each state $\brtuple{q,f}\in Q^{\prime}$, and each symbol $a\in\Sigma$,
define $f^{\prime}$ as follows. For each register $v\in V$, if at
least one register occurs in $\fabr{\mu}{q,a,v}$, then $\funcapptrad{f^{\prime}}v=\min\rusetbr{\fabr fu}{u\mbox{ occurs in }\funcapptrad{\mu}{q,a,v}}$.
Observe that, because of the copyless restriction, for every pair
of distinct registers $u,v\in V$, $\funcapptrad{f^{\prime}}u\neq\funcapptrad{f^{\prime}}v$.
For all registers $v$ such that $\funcapptrad{f^{\prime}}v$ is still
undefined, define $\funcapptrad{f^{\prime}}v$ arbitrarily such that
$f^{\prime}$ is a permutation. Now $\fabr{\delta^{\prime}}{\brtuple{q,f},a}=\brtuple{\fabr{\delta}{q,a},f^{\prime}}$.
Define $\fabr{\mu^{\prime}}{\brtuple{q,f},a,x_{0}}=x_{0}+\funcapptrad f{v_{1}}+\funcapptrad f{v_{2}}+\cdots+\funcapptrad f{v_{k}}$,
where $\rosetbr{v_{1},v_{2},\ldots,v_{k}}$ is the set of registers
in $M$ whose value is lost during the transition. For all registers
$v\in V$, if $\fabr{\mu}{q,a,v}=v_{1}v_{2}\ldots v_{k}\in\kstar{\left(\union V{\Gamma}\right)}$,
define $\fabr{\mu^{\prime}}{\brtuple{q,f},a,\fabr{f^{\prime}}v}=\fabr f{v_{1}}+\fabr f{v_{2}}+\cdots+\fabr f{v_{k}}$.
For an arbitrary ordering $v_{1}\leq v_{2}\leq\cdots\leq v_{\left|V\right|}$
of the original registers $V$, define $\funcapptrad{f_{0}}{v_{i}}=x_{i}$.
It can be shown that the $\operatorname{\mbox{CCRA}}$ $M^{\prime}=\brtuple{Q^{\prime},\Sigma,V^{\prime},\delta^{\prime},\mu^{\prime},\brtuple{q_{0},f_{0}},F^{\prime},\nu^{\prime}}$
is equivalent to $M$, and that its transitions are normalized.\end{IEEEproof}
We will now assume that all $\operatorname{\mbox{CCRA}}$s and shapes under consideration
are normalized, and we elide this assumption in all definitions and
theorems.
\subsection{A partial order over shapes \label{sub:Noncomm:ShapeOrder}}
We now make the observation that some shapes cannot be used in the
construction of other shapes. Consider the shapes $S_{1}$ and $S_{\top}$
from figure \ref{fig:Noncomm:Shapes}. Let $\pi$ be a path through
the $\operatorname{\mbox{CCRA}}$ with shape $S_{1}$. Then, no sub-path of $\pi$
can have shape $S_{\top}$, because if such a sub-path were to exist,
then the value in register $y$ would be promoted to $x$, and the
registers $x$ and $y$ could then never be separated. We now create
a partial order $\sqsubseteq$ and an equivalence relation $\sim$
over the set $\mathbb{S}$ of upward-flowing shapes, which together capture
this notion of ``can appear as a subpath''.
\begin{defn}
\label{defn:Noncomm:ShapeOrder} If $S$ is a shape over the set of
registers $V$, then the support of $S$, $\support S=\ruset{v\in V}{v\mbox{ occurs in }\funcapptrad Sv}$.
If $S_{1}$ and $S_{2}$ are two shapes, then $S_{1}\sqsubset S_{2}$
iff $\support{S_{1}}\supset\support{S_{2}}$. We call two shapes $S_{1}$
and $S_{2}$ \emph{support-equal}, written as $S_{1}\sim S_{2}$,
if $\support{S_{1}}=\support{S_{2}}$.
\end{defn}
For example, the shape $S_{\bot}$ from figure \ref{fig:Noncomm:Shapes}
is the bottom element of $\sqsubseteq$, and $S_{\top}$ is the top element.
$S_{1}\sim S_{2}$, and both shapes are strictly sandwiched between
$S_{\bot}$ and $S_{\top}$. Note that support-equality is a finer
relation than incomparability%
\footnote{Note that incomparability with respect to $\sqsubseteq$ is not even an
equivalence relation over shapes.%
} with respect to $\sqsubseteq$. In an early attempt to create a partial
order over shapes, we considered formalizing the relation $R_{\autobox{\mathit{sp}}}$,
``can appear as the shape of a sub-path''. However, this approach
fails because $R_{\autobox{\mathit{sp}}}$ is not a partial order.
In particular, observe that $\shapecat{S_{1}}{S_{2}}=S_{1}$, and
$\shapecat{S_{2}}{S_{1}}=S_{2}$. Thus, both $\tuple{S_{1},S_{2}}$
and $\tuple{S_{2},S_{1}}$ occur in $R_{\autobox{\mathit{sp}}}$,
and they are not equal. The presence of ``crossing edges'' in the
visualization of $S_{2}$ is what complicates the construction, but
we could not find a syntactic transformation on $\operatorname{\mbox{CCRA}}$s that
would eliminate these crossings.
\begin{claim}
\label{clm:Noncomm:SubpathLeq} Let $\pi$ be a path through the $\operatorname{\mbox{CCRA}}$
$M$ with shape $S$, and $\pi^{\prime}$ be a subpath of $\pi$ with
shape $S^{\prime}$. If $S^{\prime}\not\sqsubset S$, then $S^{\prime}\sim S$.
\end{claim}
\begin{IEEEproof}
Assume otherwise, so $S^{\prime}\not\sim S$. Then, for some register
$v\in\support S$, $v\notin\support{S^{\prime}}$. The effect of the entire
path $\pi$ is to make the initial value of $v$ flow into itself,
but on the subpath $\pi^{\prime}$, $v$ is promoted to some upper
register $v^{\prime}$. Because of the normalization condition (definition
\ref{defn:Noncomm:RegisterOrder}), it follows that on the suffix,
the value in $v^{\prime}$, where $v^{\prime}\prec v$, cannot flow back into $v$, leading
to a contradiction.\end{IEEEproof}
\begin{claim}
\label{clm:Noncomm:SubpathPrefix} Let $\pi$ be a path through the
$\operatorname{\mbox{CCRA}}$ $M$ with shape $S$, and let $\pi^{\prime}$ be the
shortest prefix with shape $S^{\prime}$ such that $S^{\prime}\not\sqsubset S$.
Then $S^{\prime}=S$.
\end{claim}
\begin{IEEEproof}
Assume otherwise. From claim \ref{clm:Noncomm:SubpathLeq}, we know
that $S^{\prime}\sim S$.
\begin{casenv}
\item For some register $u\notin\support S$, and registers $v,w\in\support S$,
with $v\neq w$, $u\to v$ in $S$, and $u\to w$ in $S^{\prime}$.
Once $u$ has flowed into $w$, the ``superpath'' cannot remove
$u$ from $w$. It is thus a contradiction that $u$ flows into $v$
in $S$.
\item For some register $v\in\support S$, the order of registers in $\funcapptrad Sv$
and $\funcapptrad{S^{\prime}}v$ are different. For some registers
$u$ and $w$, $u$ occurs before $w$ in $\funcapptrad Sv$, and
$w$ occurs before $u$ in $\funcapptrad{S^{\prime}}v$. However,
once the values of $w$ and $u$ have been appended to $v$ in the
order $wu$, they cannot be separated to be recast in the order $uw$.
It is thus a contradiction that $u$ occurs before $w$ in $\funcapptrad Sv$.\end{casenv}
\end{IEEEproof}
\subsection{Kleene-{*} and revisiting states \label{sub:Noncomm:Kstar}}
\global\long\def\Lfirst#1{\funcapptrad{L_{\autobox{\mathit{first}}}}{#1}}
\global\long\def\evfirst#1{\vector A_{#1}}
\global\long\def\evlast#1{\vector B_{#1}}
\global\long\def\evinc#1{\vector C_{#1}}
At each step of the iteration, for each pair of states $q,q^{\prime}\in Q$,
and for each shape $S$, we construct a new expression vector $\paregv{i+1}Sq{q^{\prime}}$,
summarizing paths in $\pareg{i+1}q{q^{\prime}}$ with shape $S$.
Recall that, from the DFA-to-regex translator, $\pareg{i+1}q{q^{\prime}}=\pareg iq{q^{\prime}}+\pareg iq{q_{i+1}}\kstar{\pareg i{q_{i+1}}{q_{i+1}}}\pareg i{q_{i+1}}{q^{\prime}}$.
Let $\evlast S$ be an expression vector which summarizes paths in
$\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$ with shape $S$. We can then
write $\paregv{i+1}Sq{q^{\prime}}=\choice{\paregv iSq{q^{\prime}}}{\evinc S}$,
where $\evinc S=\mbox{ }\vartriangleright\rusetbr{\paregv i{S_{1}}q{q_{i+1}}\cdot\evlast{S_{2}}\cdot\paregv i{S_{3}}{q_{i+1}}{q^{\prime}}}{\shapecat{S_{1}}{\shapecat{S_{2}}{S_{3}}}=S}$.
Our goal is therefore to construct $\evlast S$, for each $S$. We
construct these expression vectors inductively, according to the partial
order $\sqsubseteq$. The remaining subsections are devoted to expressing
$\evlast S$.
\subsection{Decomposing loops \label{sub:Noncomm:Decomp}}
Consider any path $\sigma$ in $\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$
with shape $S$. From claims \ref{clm:Noncomm:SubpathLeq} and \ref{clm:Noncomm:SubpathPrefix},
we can unambiguously decompose $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}\sigma_{f}$,
where
\begin{enumerate}
\item each $\sigma_{j}\in\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$ is a self-loop
at $q_{i+1}$,
\item for each $j$, $1\leq j\leq k$, the shape $S_{j}$ of $\sigma_{j}$
is support-equal to $S$, i.e., $S_{j}\sim S$, and the shape $S_{f}$ of
$\sigma_{f}$ satisfies $S_{f}\sqsubset S$,
and
\item for each $j$, $1\leq j\leq k$, and for each proper prefix $\sigma_{\autobox{\mathit{pre}}}\in\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$
of $\sigma_{j}$, $S_{\autobox{\mathit{pre}}}\sqsubset S$.
\end{enumerate}
Let us call the split $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}\sigma_{f}$
the \emph{$S$-decomposition} of $\sigma$. See figure \ref{fig:Noncomm:Decomp}.
\begin{figure*}
\begin{centering}
\begin{tikzpicture}
\node [state] (q1) {$q_{i + 1}$};
\node [state, right=1.3 of q1] (q2) {$q_{i + 1}$};
\node [state, right=1.3 of q2] (q3) {$q_{i + 1}$};
\node [state, right=1.3 of q3] (q4) {$q_{i + 1}$};
\node [state, right=1.3 of q4] (q5) {$q_{i + 1}$};
\node [state, right=1.3 of q5] (q6) {$q_{i + 1}$};
\node [state, right=1.3 of q6] (q7) {$q_{i + 1}$};
\node [state, right=1.3 of q7] (q8) {$q_{i + 1}$};
\path [->] (q1) edge node [label=above:{$S_1 = S$}, label=below:{$\sigma_1$}] {} (q2);
\path [->] (q2) edge node [label=above:{$S_2 \sim S$}, label=below:{$\sigma_2$}] {} (q3);
\path [->] (q3) edge node [label=above:{$\cdots$}] {} (q4);
\path [->] (q4) edge node [label=above:{$S_j \sim S$}, label=below:{$\sigma_j$}] {} (q5);
\path [->] (q5) edge node [label=above:{$\cdots$}] {} (q6);
\path [->] (q6) edge node [label=above:{$S_k \sim S$}, label=below:{$\sigma_k$}] {} (q7);
\path [->] (q7) edge node [label=above:{$S_f \sqsubset S$}, label=below:{$\sigma_f$}] {} (q8);
\node [state, below=0.75 of q2] (qj0) {$q_{i + 1}$};
\node [state, below=0.75 of q5] (qji) {$q_{i + 1}$};
\node [state, below=0.75 of q7] (qjf) {$q_{i + 1}$};
\path [-, dashed] (q4) edge (qj0);
\path [-, dashed] (q5) edge (qjf);
\path [->] (qj0)
edge node [label=above:{$S_{\autobox{\mathit{pre}}} \sqsubset S$},
label=below:{$\sigma_{\autobox{\mathit{pre}}} \in \kstar{\pareg{i}{q_{i + 1}}{q_{i + 1}}}$}] {}
(qji);
\path [->] (qji)
edge node [label=above:{$S_{\autobox{\mathit{suff}}}$},
label=below:{$\sigma_{\autobox{\mathit{suff}}} \in \pareg{i}{q_{i + 1}}{q_{i + 1}}$}] {}
(qjf);
\end{tikzpicture}
\par\end{centering}
\caption{\label{fig:Noncomm:Decomp} Decomposing paths in $\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$
with shape $S$. $\sigma_{j}$ can be unambiguously written as $\sigma_{\autobox{\mathit{pre}}}\sigma_{\autobox{\mathit{suff}}}$,
with $\sigma_{\autobox{\mathit{suff}}}\in\pareg i{q_{i+1}}{q_{i+1}}$.}
\end{figure*}
Consider some shape $S^{\prime}\sim S$, and let $\Lfirst{S^{\prime}}$
be the set of all paths $\pi\in\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$
with shape $S^{\prime}$ such that no proper prefix $\pi_{\autobox{\mathit{pre}}}$
of $\pi$ has shape $S_{\autobox{\mathit{pre}}}\sim S$. We can
then unambiguously write $\pi=\pi_{\autobox{\mathit{pre}}}\pi_{\autobox{\mathit{last}}}$,
with $\pi_{\autobox{\mathit{pre}}}\in\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$,
$\pi_{\autobox{\mathit{last}}}\in\pareg i{q_{i+1}}{q_{i+1}}$, and
such that $S_{\autobox{\mathit{pre}}}\sqsubset S$. Define $\evfirst{S^{\prime}}=\mbox{ }\vartriangleright\rusetbr{\paregv{i+1}{S_{\autobox{\mathit{pre}}}}{q_{i+1}}{q_{i+1}}\cdot\paregv i{S_{\autobox{\mathit{post}}}}{q_{i+1}}{q_{i+1}}}{\shapecat{S_{\autobox{\mathit{pre}}}}{S_{\autobox{\mathit{post}}}}=S^{\prime}\mbox{ and }S_{\autobox{\mathit{pre}}}\sqsubset S}$.
\begin{claim}
\label{clm:Noncomm:Kstar1:Lfirst} For all shapes $S^{\prime}\sim S$,
the expression vector $\evfirst{S^{\prime}}$ summarizes all paths
in $\Lfirst{S^{\prime}}$.
\end{claim}
\subsection{Computing $\evlast S$ \label{sub:Noncomm:BS}}
We now construct the expression vector $\evlast S$. Consider a path
$\sigma$, and its $S$-decomposition $\sigma=\sigma_{1}\sigma_{2}\ldots\sigma_{k}\sigma_{f}$.
Given a register $v$, and a patch $1\leq k\leq\left|\funcapptrad Sv\right|+1$,
three cases may arise:
First, suppose $\funcapptrad Sv=\epsilon$, i.e., $v$ is reset during
the computation; in particular, $v$ was reset while processing $\sigma_{k}$. Any
registers flowing into it during this time were also reset by $\sigma_{k}$.
Thus, its value is determined entirely by $\sigma_{k}$ and
$\sigma_{f}$. First define $\vector F=\mbox{ }\vartriangleright\rusetbr{\evfirst{S_{1}}\cdot\evlast{S_{2}}}{\shapecat{S_{1}}{S_{2}}=S\mbox{ and }S_{2}\sqsubset S}$,
and let $L_{f}=\bigunion{S^{\prime}\sim S}{\Lfirst{S^{\prime}}}$.
Observe that $\funcapptrad{\mu}{q_{i+1},\sigma,v}=\funcapptrad{\mu}{q_{i+1},\sigma_{k}\sigma_{f},v}=\funcapptrad{\vector F_{v,1}}{\sigma_{k}\sigma_{f}}$,
and therefore define $\evlast{S,v,1}=\splitsum{\const{\kstar{L_{f}}}{\epsilon}}{\vector F_{v,1}}$.
Second, suppose $1<k<\left|\funcapptrad Sv\right|+1$, i.e., $k$ refers
to an internal patch in $\funcapptrad Sv$. Once the registers are
combined in some order, any changes can only be appends at the beginning
and end of the register value. The $k^{\mbox{th}}$ constant in $\funcapptrad{\mu}{q_{i+1},\sigma,v}$
is consequently determined by $\sigma_{1}$. Therefore, define $\evlast{S,v,k}=\splitsum{\evfirst{S,v,k}}{\const{\kstar{L_{f}}}{\epsilon}}$.
Finally, suppose $k=1$ or $k=\left|\funcapptrad Sv\right|+1$, i.e., $k$
is either the first or the last patch. First, we know that $v\in\support S$.
Also, we know that any registers which flow into $v$ have to be non-support
registers. See figure \ref{fig:Noncomm:BS}. Thus, the value being
appended to $v$ while processing $\sigma_{j}$ is determined entirely
by $\sigma_{j}$ and $\sigma_{j-1}$. The idea is to use chained sum
to compute this value.
\begin{figure}
\begin{centering}
\begin{tikzpicture}
\node [state] (q1) {$q_{i + 1}$};
\node [state, right=1.8 of q1] (q2) {$q_{i + 1}$};
\node [state, right=1.8 of q2] (q3) {$q_{i + 1}$};
\node [below=0.35 of q1] (x1) {$v$};
\node [below=0.35 of q2] (x2) {$v$};
\node [below=0.35 of q3] (x3) {$v$};
\node [below=0.35 of x1] (y1) {$w$};
\node [below=0.35 of x2] (y2) {$w$};
\path [->] (q1) edge node [label=above:{$S_j \sim S$}, label=below:{$\sigma_j$}] {} (q2);
\path [->] (q2) edge node [label=above:{$S_{j + 1} \sim S$}, label=below:{$\sigma_{j + 1}$}] {} (q3);
\path [->] (x1) edge (x2);
\path [->] (x2) edge (x3);
\path [->] (y1) edge (y2);
\path [->] (y2) edge (x3);
\path [-, dashed] (q1) edge (x1);
\path [-, dashed] (x1) edge (y1);
\path [-, dashed] (q2) edge (x2);
\path [-, dashed] (x2) edge (y2);
\path [-, dashed] (q3) edge (x3);
\end{tikzpicture}
\par\end{centering}
\caption{\label{fig:Noncomm:BS} For any path in $\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$,
inward flows into a (support) register $v$ have to be from non-support
registers.}
\end{figure}
We will now define $\evlast{S,v,k}$ for $k=\left|\funcapptrad Sv\right|+1$.
The case for $k=1$ is symmetric, and would involve reversing the
order of the operators, and replacing chained sum with the left-chained
sum.
We are determining the constant value appended to the end of $v$
while processing $\sigma$. We distinguish three phases of addition:
while processing $\sigma_{1}$, only the constant at the end of $\evfirst{S,v,k}$
is appended. While processing $\sigma_{j}$, $j>1$, \emph{both string
constants and registers} appearing after the occurrence of $v$ in
$\funcapptrad Sv$ are appended. Third, while processing $\sigma_{f}$,
both string constants and registers appearing after the occurrence
of $v$ in $\funcapptrad Sv$ are appended. The interesting part about
the second case is that this appending happens in a loop, and we therefore
need the lookback provided by the chained sum operator. Otherwise,
this case is similar to the simpler third case, where a value is appended
exactly once.
While processing $\sigma_{1}$, some symbols are appended to the $k^{\mbox{th}}$
position in $\funcapptrad Sv$. This is given by $f_{\autobox{\mathit{pre}}}=\splitsum{\evfirst{S,v,k}}{\const{\kstar{L_{f}}}{\epsilon}}$.
Similarly, while processing the suffix $\sigma_{f}$, some symbols
are appended. Say some register $u\to v$ in $S_{f}$. Then $u\notin\support{S_{f}}$,
and hence $u\notin\support S$ and $u\notin\support{S_{k}}$. Thus,
the value appended by $\sigma_{f}$ is determined by $\sigma_{k}\sigma_{f}$.
For each pair of shapes $S_{k}$ and $S_{f}$ such that $S_{k}\sim S$,
and $S_{f}\sqsubset S$, consider $\evfirst{S_{k}}^{\prime}=\lshift{\evfirst{S_{k}}}{\domain{\evlast{S_{f}}}}$,
and $\evlast{S_{f}}^{\prime}=\rshift{\evlast{S_{f}}}{\domain{\evfirst{S_{k}}}}$.
Consider the update expression $\evlast{S_{f},v}^{\prime}$: say this
is $v:=\sigma v\tau$, where $\sigma$ and $\tau$ are strings over
expressions and registers. For each register $u$ in $\tau$, substitute
the value $\evfirst{S_{k},u,1}^{\prime}$ -- since $u$ was reset
while processing $S_{k}$, this expression gives the contents of the
register $u$ -- and interpret string concatenation in $\tau$ as
the function combinator sum. Label this result as $f_{\autobox{\mathit{post}},S_{k},S_{f}}$.
Define $f_{\autobox{\mathit{post}}}=\splitsum{\const{\kstar{L_{f}}}{\epsilon}}{\vartriangleright\rusetbr{f_{\autobox{\mathit{post}},S_{k},S_{f}}}{S_{k}\sim S\mbox{ and }S_{f}\sqsubset S}}$.
Finally, consider the value appended while processing $\sigma_{j}$,
for $j>1$. This is similar to the case for $\sigma_{f}$: if $u\to v$
in $S_{j}$, then $u\notin\support{S_{j}}$ and $u\notin\support{S_{j-1}}$.
Thus, the value appended by $\sigma_{j}$ is determined by $\sigma_{j-1}\sigma_{j}$.
For each pair of shapes $S_{j-1}\sim S$ and $S_{j}\sim S$,
consider $\evfirst{S_{j-1}}^{\prime}=\lshift{\evfirst{S_{j-1}}}{\domain{\evfirst{S_{j}}}}$,
and $\evfirst{S_{j}}^{\prime}=\rshift{\evfirst{S_{j}}}{\domain{\evfirst{S_{j-1}}}}$.
Consider the update expression $\evfirst{S_{j},v,k}^{\prime}$. Let
this be $v:=\sigma v\tau$, where $\sigma$ and $\tau$ are strings
over expressions and registers. For each register $u$ in $\tau$,
substitute the value $\evfirst{S_{j-1},u,1}^{\prime}$ -- since $u$
was reset while processing $S_{j-1}$, this expression gives the contents
of the register $u$ -- and interpret string concatenation in $\tau$
as the function combinator sum. Label this result as $f_{S_{j-1},S_{j}}$.
Define $f=\sum(\vartriangleright\rusetbr{f_{S_{j-1},S_{j}}}{S_{j-1}\sim S\mbox{ and }S_{j}\sim S},L_{f})$.
Finally, define $\evlast{S,v,k}=\choice{\p{\repsum{f_{\autobox{\mathit{pre}}}}{f_{\autobox{\mathit{post}}}}}}{\p{\repsum{f_{\autobox{\mathit{pre}}}}{\repsum f{f_{\autobox{\mathit{post}}}}}}}$.
By construction, we have:
\begin{claim}
$\evlast S$ summarizes all paths in $\kstar{\pareg i{q_{i+1}}{q_{i+1}}}$
with shape $S$.
\end{claim}
This completes the proof of theorem \ref{thm:Noncomm}.
\section{Conclusion \label{sec:Conclusion}}
In this paper, we have characterized the class of regular functions
that map strings to values from a monoid using a set of function combinators.
We hope that these results provide additional evidence of the robust and
foundational nature of this class. The identification of the combinator
of chained sum, and its role in the proof of expressive completeness
of the combinators, should be of particular technical interest. There
are many avenues for future research. First, the question whether
all the combinators we have used are \emph{necessary} for capturing
all regular functions remains open (we conjecture that the set of
combinators is indeed minimal). Second, it is an open problem to develop
the notion of a congruence and a Myhill-Nerode-style characterization
for regular functions (see \cite{Boj13} for an attempt where the authors
give such a characterization, but succeed only after retaining the
``origin'' information that associates each output symbol with a
specific input position). Third, it would be worthwhile to find analogous
algebraic characterizations of regularity when the domain is, instead
of finite strings, infinite strings \cite{AFT12} or trees \cite{EM99,AD12}
and/or when the range is a semiring \cite{DKV09,CRA-LICS}. Finally,
on the practical side, we plan to develop a declarative language for
document processing based on the regular combinators identified in
this paper.
\bibliographystyle{plain}
\subsection*{Acknowledgment.}
This research was done while the author was visiting
Texas A{\&}M University in the summer of 2011
for the ``Workshop in Analysis and Probability'', supported by the NSF.
He gratefully acknowledges the kind hospitality and
stimulating environment provided by TAMU and the program organizers.
The author would like to thank Professor M.~Sapir for his beautiful
lectures on coarse embeddings during the workshop, in which he asked
if subexponential asymptotic dimension growth implies Yu's property $A$.
The author was partially supported by JSPS and Sumitomo Foundation.
\section{Introduction}
Automatic post-editing (APE) has actively been studied by researchers because it can reduce the effort required for editing machine-translated content and contribute to domain-specific translation \cite{isabelle2007domain, chatterjee2019findings, moon2021recent}. However, APE encounters a chronic problem concerning data generation \cite{negri2018escape, lee2021adaptation}. Generally, data for the APE task comprises the source sentence (SRC), the machine translation of the sentence (MT), and the corresponding post-edited sentence (PE), collectively known as an APE triplet. Generating these data requires an elaborate process that involves identifying errors in the sentence and providing suitable revisions. This results in the absence of appropriate training data for most language pairs and limits the acquisition of large datasets for this purpose \cite{chatterjee-EtAl2020WMT, moon2021empirical}.
To alleviate this problem, we develop and release a noise-based automatic data generation tool that can construct APE-triplet data from a parallel corpus for all language pairs with English as the target language. The data generation tool proposed in this study enables the application of several noising schemes, such as semantic- and morphemic-level noise, as well as adjustments to the noise ratio that determines the quality of the MT sentence. Using this tool, the end-user can generate high-quality APE triplets as per the intended objective and conduct data-centric APE research.
\section{Data Construction Process and Tool Implementation}
\paragraph{Process}
We developed an APE data generation tool that automatically constructs APE datasets from a given parallel corpus. The working of our tool is outlined in Figure \ref{fig:process} and described as follows. The source and target sentences in the parallel corpus are considered the SRC and PE of the APE triplet, respectively, and a noising scheme is applied to the target sentence for the generation of a pseudo-MT \cite{lee2020noising}. Noise is introduced by replacing certain tokens in the target sentence with others, using one of the four following noising schemes.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{figure.pdf}
\caption{Overview of data construction process. $T_i$ refers to the tokenized component of the target sentence, $P_i$ indicates the POS tag corresponding to token $T_i$, and $T_{i}^{j}$ refers to the replacement token generated from the $j^{th}$ noise category. Throughout this process, the end-user can arbitrarily set the noise category and noise ratio and thereby obtain personalized APE triplets.}
\label{fig:process}
\end{figure}
\begin{itemize}
\item {\textsc{Random}: ~~} The random noising scheme replaces tokens in the original target sentence in a random manner \cite{park2020neural}. In this scheme, no semantic or syntactic information is reflected, and the noise is applied simply by replacing existing tokens with others from the target side of the parallel corpus.
\item {\textsc{Semantic}: ~~ } In the semantic noising scheme, each token in the target sentence is replaced with the corresponding synonym retrieved from the WordNet database \cite{fellbaum2010wordnet}. As all the tokens are replaced with semantically identical words, the APE model can learn to correct instances of inappropriate word-use arising from subtle differences in context or formality.
\item{\textsc{Morphemic}: ~~ } In the morphemic noising scheme, certain tokens in the sentence are replaced using tokens with the same part-of-speech (POS) tag. The replacement token is extracted from the given parallel corpus.
\item{\textsc{Syntactic}: ~~ } The syntactic noising scheme implements phrase-level substitutions.
Prior to the noising process, phrase chunking is performed using begin, inside, outside (BIO) tagging, and the MT is created via replacement with an identically tagged phrase.
\end{itemize}
As these noising schemes are applied only to the target side, the data generation process is source-language agnostic. This makes the proposed tool applicable to all language pairs whose target language is English, with minimal human supervision. Furthermore, end-users can adjust the noise ratio that determines the number of tokens to be replaced (\emph{i.e.,} noised) in a sentence as desired. This enables flexible APE data construction, thereby facilitating data-centric APE research.
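To make the schemes concrete, the following minimal sketch illustrates how the random and semantic noising schemes could be realized in Python; the function names and the noise ratio in the usage line are illustrative only, and this is not the tool's actual source code. Synonym retrieval uses NLTK's WordNet interface.
\begin{verbatim}
import random
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

def random_noise(tokens, target_vocab, ratio):
    # RANDOM scheme: replace a `ratio` fraction of tokens with arbitrary
    # tokens drawn from the target side of the parallel corpus.
    out = list(tokens)
    for i in random.sample(range(len(out)), k=int(len(out) * ratio)):
        out[i] = random.choice(target_vocab)
    return out

def semantic_noise(tokens, ratio):
    # SEMANTIC scheme: replace tokens with WordNet synonyms where available.
    out = list(tokens)
    for i in random.sample(range(len(out)), k=int(len(out) * ratio)):
        synonyms = {l.name().replace('_', ' ')
                    for s in wordnet.synsets(out[i])
                    for l in s.lemmas()} - {out[i]}
        if synonyms:
            out[i] = random.choice(sorted(synonyms))
    return out

# pseudo_mt = semantic_noise(pe_tokens, ratio=0.15)
\end{verbatim}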
\paragraph{Tool Implementation}
To use our tool, end-users need to specify the intended noise category and noise ratio and provide a parallel corpus for the corresponding language pair. The proposed tool is distributed as a web application developed using the Flask framework \cite{grinberg2018flask}. For the implementation of the noising process, the Natural Language Toolkit (NLTK) \cite{bird2009natural} and the SENNA NLP toolkit\footnote{https://ronan.collobert.com/senna/license.html} are utilized. In particular, NLTK is used for POS tagging and WordNet retrieval in the morphemic and semantic noising schemes, whereas SENNA is utilized for BIO tagging in the syntactic noising scheme. The web application of the proposed tool is publicly available\footnote{\url{http://nlplab.iptime.org:9092/}}.
\section{Conclusion}
The tool proposed in this paper reduces the need for expert-level human supervision generally required for APE data generation, thereby facilitating APE research on many language pairs that have not been studied thus far. The personalization capability of the proposed APE data generation tool can enable data-centric APE research that derives optimal performance through high-quality data \cite{park2021should}.
\section*{Acknowledgment}
This research was supported by the MSIT(Ministry of Science and ICT), Korea, under the ITRC(Information Technology Research Center) support program(IITP-2018-0-01405) supervised by the IITP(Institute for Information \& Communications Technology Planning \& Evaluation) and Basic Science Research Program through the National Research Foundation of Korea(NRF) funded by the Ministry of Education(NRF-2021R1A6A1A03045425). Heuiseok Lim$^\dagger$ is a corresponding author.
\section{Introduction}
\label{sec_intro}
\input{intro}
\section{Preliminaries}
\label{sec_defs}
\input{defs}
\subsection{Delay Games}
\label{subsec_delaygames}
\input{delaygames}
\subsection{The Borel Hierarchy}
\label{subsec_borel}
\input{borel}
\section{Borel Determinacy of Delay Games w.r.t.\ Fixed Delay Functions}
\label{sec_shiftresults}
\input{results_shift}
\section{Omnipotent Strategies in Delay Games}
\label{sec_strategies}
\input{strat}
\subsection{Omnipotent Strategies for Player~I}
\label{subsec_straI}
\input{stratI}
\subsection{Omnipotent Strategies for Player~O}
\label{subsec_straO}
\input{stratO}
\section{Borel Determinacy of Delay Games with Omnipotent Strategies}
\label{sec_skipresults}
\input{results_skip}
\section{Decidability}
\label{sec_dec}
\input{decidability}
\section{Characterizing the Existence of Omnipotent Strategies}
\label{sec_char}
\input{charac}
\section{Conclusion}
\label{sec_conc}
\input{conc}
\bibliographystyle{splncs03}
\subsection{Scalar Division to the Nearest Binary Integer}
\label{ScDiv}
\vspace{-0.05in}
The value of the scheme parameter $t = \lfloor \frac{Q}{2} \rfloor$ is already published, and hence, it is known. From equations \ref{equ:AddDec} and \ref{equ:MulDec} in the scheme, if we denote $u = (c_0 + s \cdot c_1)$, then to decrypt the message $m$ correctly, all we need to do is to compute $m = \lceil \frac{u}{t} \rfloor$. The nearest binary integer equivalent of $\lceil \frac{u}{t} \rfloor$ can be computed by measuring the distance between $u$ and $t$ as $(\text{Absolute}(u - t) < \frac{t}{2})\ ?\ 1 : 0$. If this distance is larger than half of $t$, it indicates that $u$ and $t$ are far from each other, and thus, the nearest integer of the quotient $\frac{u}{t}$ must be $0$. But if this distance is less than half of $t$, then the nearest integer of the quotient $\frac{u}{t}$ must be $1$. Thus, we implement the scalar division hardware circuit as shown in Figure \ref{fig:submodules}, without using any hardware division circuit.
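A bit-accurate software model of this decision rule (a sketch for illustration only; the actual module is the hardware circuit of Figure \ref{fig:submodules}) is simply:
\begin{verbatim}
def div_round_to_bit(u, t):
    # Nearest binary integer of u/t without any division circuit:
    # if u lies within t/2 of t, the quotient rounds to 1, else to 0.
    # (t // 2 corresponds to a simple right shift of t in hardware.)
    return 1 if abs(u - t) < t // 2 else 0
\end{verbatim}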
\section{Performance Evaluation}
\label{Perf}
We evaluate the performance of all the design implementations through synthesis on a Xilinx Zynq-7000 family xc7z020clg400-1 FPGA. The tool used for synthesis is the ISE design suite 14.7, with all designs implemented in Verilog 2001. To generate synthesis results, the input coefficient bit width is taken to be $1200$ bits, and there are $40$ coprime moduli, $q$, having $30$ bits each. The degree of the polynomial is taken as $1024$.
\subsection{Hardware Cost and Latency}
We start by discussing the hardware cost and latency of the individual operations listed in Tables \ref{table:HC} and \ref{table:LF}. The hardware cost depends on the size of each coefficient (either $1200$ bits or $30$ bits) and the number of portions into which a $1200$-bit coefficient is divided. The latency computation factors in the number of portions, $k$, or the polynomial degree, $n$, as required by the implementation of a module. That is why, in Table \ref{table:LF}, the latencies are expressed in terms of $k$ or $n$.
\begin{table}[h!]
\caption{Hardware cost of the individual operations.}
\begin{center}
\begin{tabular}{ | l | c | c | c | c |}
\hline
Operation & LUT Slices & Registers & DSP & BRAM \\
\hline
\hline
Mod Operator (\%) & 798 & 0 & 0 & 0 \\
\hline
Barrett Reduction & 71 & 0 & 0 & 0 \\
\hline
Modified Barrett Reduction & 23 & 0 & 3 & 0 \\
\hline
\hline
RNS (serial) & 7592 & 90 & 0 & 1.5 \\
\hline
RNS (serial modified) & 145 & 56 & 3 & 1.5 \\
\hline
RNS (parallel) & 88353 & 1242 & 0 & 2 \\
\hline
RNS (parallel modified) & 133 & 86 & 3 & 2 \\
\hline
\hline
CRT & 3883 & 2408 & 20 & 6 \\
\hline
CRT (LUT-based) & 1274 & 301 & 4 & 6 \\
\hline
\hline
Modulo Inverse (Fermat's little) & 1889 & 120 & 14 & 1 \\
\hline
Modulo Inverse (Extended Euclidean) & 3993 & 154 & 3 & 1 \\
\hline
\hline
NTT & 6188 & 1291 & 0 & 3 \\
\hline
NTT-based Polynomial Multiplication & 8261 & 162 & 30 & 6 \\
\hline
Polynomial Addition & 1185 & 56 & 0 & 1 \\
\hline
Scalar Multiplication & 118 & 10 & 0 & 3 \\
\hline
Scalar Division to nearest integer & 672 & 14 & 0 & 3 \\
\hline
\hline
PowersOf2 & 113 & 20 & 0 & 1 \\
\hline
Inner Product & 8961 & 796 & 0 & 3 \\
\hline
Div\&Round & 0 & 30 & 0 & 1 \\
\hline
\end{tabular}
\label{table:HC}
\end{center}
\end{table}
The built-in Verilog $mod$ operator is very expensive and takes about $800$ LUTs to perform a $30$-bit modulo reduction. For the same bit width, the classic Barrett reduction reduces the hardware cost by almost $11$ times, and our proposed modified Barrett reduction reduces the cost by about $35$ times. Since modular reduction is performed very frequently and used by all the modules, using the modified Barrett reduction method substantially reduces the hardware resources required for implementing the entire homomorphic encryption scheme. Note that the latency of the modular reduction module is not shown, as the implementation comprises combinational logic only. We observe that the RNS parallel implementation utilizes about $12$ times more LUT slices when compared to the serial implementation, while the latency of the serial implementation is about $40$ times higher than that of the parallel one. The RNS serial and parallel modified implementations listed in the table use the modified Barrett reduction to perform the modulo reduction operation, instead of the built-in Verilog $mod$ operator. The hardware resource utilization is dramatically reduced by this modification; however, the latency remains the same.
\begin{table}[h!]
\caption{Latency and frequency of the individual operations.}
\begin{center}
\begin{tabular}{ | l | c | c |}
\hline
Operation & Latency (clock cycles) & Frequency (MHz) \\
\hline
RNS (serial) & 120$n$ & 260.5 \\
\hline
RNS (serial modified) & 120$n$ & 265.3 \\
\hline
RNS (parallel) & 3$n$ & 314.1 \\
\hline
RNS (parallel modified) & 3$n$ & 316.4 \\
\hline
\hline
CRT & 1404$n$ & 132.4 \\
\hline
CRT (LUT-based) & 156$n$ & 134.5 \\
\hline
\hline
Modulo Inverse (Fermat's little) & 3240$n$ & 113.7 \\
\hline
Modulo Inverse (Extended Euclidean) & 360$n$ & 117.4 \\
\hline
\hline
NTT & 10240$k$ & 218.1 \\
\hline
NTT-based Polynomial Multiplication & 20480$k$ & 121.9 \\
\hline
Polynomial Addition & 3072$k$ & 144.4 \\
\hline
Scalar Multiplication & 2048$k$ & 243.8 \\
\hline
Scalar Division to nearest integer & 2048$k$ & 129.5 \\
\hline
\hline
PowersOf2 & 163840 & 349.1 \\
\hline
Inner Product & 153600 & 124.3 \\
\hline
Div\&Round & $k$ & 204.3 \\
\hline
\end{tabular}
\label{table:LF}
\end{center}
\end{table}
The regular CRT implementation is about $3$ times more expensive than the LUT-based CRT. This difference arises because the regular CRT spends a lot of hardware resources on computing the multiplicative inverse, while the LUT-based CRT, with all its precomputations, not only requires fewer hardware resources but also performs computations in $9$ times fewer clock cycles. For computing multiplicative inverses, Fermat's little theorem facilitates an implementation with about half the hardware cost of the widely used extended Euclidean method. However, the extended Euclidean method performs computations about $9$ times faster. NTT-based polynomial multiplication cuts down the latency from $O(n^2)$ to $O(n\log n)$. The polynomial addition, scalar multiplication, and scalar division submodules avoid the usage of modular reduction, multiplication, and division operations, respectively. Hence, these submodules are implemented using minimal hardware resources and have a low latency. For the remaining submodules involved in relinearisation, because of all the optimizations in implementation, we observe a low hardware cost and latency.
\subsection{Hardware library vs Software library Speedup}
Table \ref{table:HET} lists the time, in clock cycles, for computing various homomorphic encryption operations. We represent time in clock cycles due to the frequency difference between the FPGA and a general-purpose CPU.
\begin{table}[h!]
\caption{Time required for homomorphic encryption operations.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Operation & Time (in clock cycles) \\
\hline
\hline
Homomorphic addition & $3072$ \\
\hline
Homomorphic multiplication & $71338$ \\
\hline
\hline
Relinearisation KeyGen (version 1) & $86698$ \\
\hline
Relinearisation (version 1) & $18432$ \\
\hline
\hline
Relinearisation KeyGen (version 2) & $72362$ \\
\hline
Relinearisation (version 2) & $112298$ \\
\hline
\hline
RNS + CRT & $23259$ \\
\hline
Encryption & $75434$ \\
\hline
Decryption (Degree-1) & $73386$ \\
\hline
Decryption (Degree-2) & $141653$ \\
\hline
\end{tabular}
\label{table:HET}
\end{center}
\end{table}
As seen in the table, a single HE addition is about $23\times$ faster than a HE multiplication. Moreover, if one has to choose between relinearisation versions $1$ and $2$, then version $1$ would be the unanimous choice, as it is about $6\times$ faster than version $2$, even though key generation takes almost the same time for both. It is worth noting that, although the table lists the relinearisation key generation time for both versions, these keys can be precomputed, and hence, the time required for key generation need not be included in the overall time required for HE operations. Based on the parameters that are used in the implementation, we can evaluate a circuit of depth $56$, with relinearisation performed after every multiplication operation. Therefore, using the data from Table \ref{table:HET}, we can compute the number of clock cycles required for this entire set of operations. We observe that when the circuit evaluation is done with relinearisation version $1$, the total number of cycles required is $5,031,984$, while evaluation done with relinearisation version $2$ takes $10,194,651$ cycles.
\begin{table}[h!]
\caption{Hardware speedup for homomorphic encryption operations.}
\begin{center}
\begin{tabular}{ | l| l | l| l| }
\hline
Operation & \multicolumn{1}{|p{3.4cm}|}{\centering Palisade library \\(Time in clock cycles)} & \multicolumn{1}{|p{3.5cm}|}{\centering Our Hardware library \\(Time in clock cycles)} & Speedup \\
\hline
\hline
Encryption & 119700000 & 75434 & $1500\times$ \\
\hline
Homomorphic mult. & 299729520 & 71338 & $4200\times$ \\
\hline
Homomorphic add. & 9070884 & 3072 & $2950\times$ \\
\hline
Decryption & 22400640 & 73386 & $300\times$ \\
\hline
\end{tabular}
\label{table:HWS}
\end{center}
\end{table}
Next, we present the speedup obtained by the hardware accelerator designed using the modules in our hardware library in comparison to its software counterpart. For this comparison, we recorded the number of clock cycles required for encryption, HE multiplication, HE addition, and decryption in the Palisade software library, using the same underlying scheme with an RNS implementation and the same parameters. Table \ref{table:HWS} lists the observed speedup. This evaluation assumes we utilize the maximum possible resources available on the FPGA that we used for our evaluation. There is scope to further enhance the speedup using more hardware resources.
\begin{table}[h!]
\caption{Hardware speedup for Logistic Regression prediction.}
\begin{center}
\begin{tabular}{ | l | c | }
\hline
Operation & Speedup \\
\hline
Logistic Regression prediction & $2650\times$ \\
\hline
\end{tabular}
\label{table:HWSLR}
\end{center}
\end{table}
Finally, we evaluate the time required to make a prediction using logistic regression, a common tool used in machine learning for binary classification problems. Making predictions using a logistic regression model with features $x_0,x_1,\ldots,x_n$ requires computing the logistic regression equation, $Y = \frac{e^{X}}{1 + e^{X}}$, where $X = \sum^{n}_{i=0}b_ix_i$. We first compute $X$ and then use the value in the Remez algorithm (an iterative minimax approximation algorithm \cite{FW1965}) equation, $Y(X) = -0.004X^3 + 0.197X + 0.5$, to compute the probability. The use of the Remez equation helps avoid the exponentiation and division required by the logistic regression equation. However, these equations require working over floating-point numbers, and the FV scheme does not support the encryption of floating-point numbers. Hence, we scale them to integers and perform fixed-point operations instead. We observe that our hardware accelerator requires $470,064$ clock cycles to perform one logistic regression prediction using homomorphic encryption, providing a speedup of around $2650\times$ over a software-based prediction, as shown in Table \ref{table:HWSLR}.
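As an illustration of this scaling step, the following sketch evaluates the Remez polynomial over integers only; the scale factor of $1000$ mirrors the three-decimal coefficients, the feature values are assumed to be already scaled to plain integers, and the names are illustrative rather than taken from our implementation:
\begin{verbatim}
def predict_scaled(b, x):
    # X = sum_i b_i * x_i over integer weights and features
    X = sum(bi * xi for bi, xi in zip(b, x))
    # Y(X) = -0.004 X^3 + 0.197 X + 0.5, with every coefficient
    # multiplied by 1000 so only integer arithmetic is required
    y_times_1000 = -4 * X**3 + 197 * X + 500
    return y_times_1000  # to be interpreted as Y * 1000
\end{verbatim}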
\subsection{GCD}
The largest positive integer that divides two or more given integers is known as the greatest common divisor, GCD, of those integers. Mathematically, $gcd(a,b)$, where $a$ and $b$ are not both zero, may be defined as the smallest positive integer $d$ that can be written as $d = a \cdot x + b \cdot y$, where $x$ and $y$ are integers. For our implementation purposes, we need to check whether two numbers are relatively prime by computing their GCD. We know that two numbers are coprime if their GCD equals $1$, which can also be written as $1 = a \cdot x + b \cdot y$.
\subsubsection{Classic Euclidean Method}
Algorithm \ref{alg:CEAD} shows how the GCD is computed using the classic Euclidean method (also known as the long division method). This algorithm computes the GCD by recursively dividing the larger number by the smaller number. After each division, the remainder replaces the smaller number involved in the division process. The time complexity of this algorithm is $O(\log(\min(a, b)))$.
\begin{algorithm}
\footnotesize
\SetCommentSty{\textit}
\nonl Let $a, b \in \mathbb{Z}_{>0}$ and $a \geq b$. \\
\nonl \DontPrintSemicolon \;
\nonl \textbf{gcd(a, b)} \\
\If{$a \% b == 0$}
{\textbf{Return} $b$ as $\langle gcd(a,b) \rangle$.}
\Else
{\textbf{Return} $gcd(b, a \% b)$.}
\caption{Classic Euclidean Algorithm by Division}
\label{alg:CEAD}
\end{algorithm}
Division operations are very expensive to implement on an FPGA, but we still implement this algorithm so as to evaluate its associated hardware cost and performance. The hardware implementation is shown in Figure \ref{fig:GCD}. We also implement this algorithm with our modified Barrett reduction to see its influence on the hardware cost; we discuss this later, in Section \ref{Perf}.
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=5.5in]{Figures/GCD.pdf}
\end{center}
\caption{\small Hardware implementation of different methods to compute GCD: \textit{Classic Euclidean method}, \textit{Subtraction method}, and \textit{Binary method}.}
\label{fig:GCD}
\end{figure*}
\subsubsection{Subtraction Method}
Another classic way of computing GCD is via recursive subtraction, wherein we recursively subtract the smaller integer from the larger integer as seen in Algorithm \ref{alg:CEAS}.
\begin{algorithm}
\footnotesize
\SetCommentSty{\textit}
\nonl Let $a, b \in \mathbb{Z}_{>0}$. \\
\nonl \DontPrintSemicolon \;
\nonl \textbf{gcd(a, b)} \\
\If{$a == b$}
{\textbf{Return} $a$ as $\langle gcd(a,b) \rangle$.}
\If{$a > b$}
{\textbf{Return} $gcd(a-b, b)$.}
\Else
{\textbf{Return} $gcd(a, b-a)$.}
\caption{Euclidean Algorithm by Subtraction}
\label{alg:CEAS}
\end{algorithm}
While the implementation of this algorithm is straightforward and reduces the hardware cost significantly, the time complexity is higher than that of the previous algorithm involving divisions. The number of steps is linear, and so the worst-case time complexity is $O(\max(a,b))$. The hardware circuit is also shown in Figure \ref{fig:GCD}.
\subsubsection{Binary Euclidean Method}
The binary Euclidean method is customized specifically for large integers and works best when the involved arithmetic operations cannot be done in constant time. Due to the binary representation, operations are performed in time linear in the length of that representation, even for very large integers. As shown in Algorithm \ref{alg:EB}, the GCD is computed with a divide-and-conquer approach using only subtraction, shift operations, and parity checking. The time complexity of this algorithm is $O(\log(a+b))$. The hardware implementation of this algorithm is shown in Figure \ref{fig:GCD}.
\begin{algorithm}
\footnotesize
\SetCommentSty{\textit}
\nonl Let $a, b \in \mathbb{Z}_{>0}$. \\
\nonl \DontPrintSemicolon \;
\nonl \textbf{gcd(a, b, res)} \\
\If{$a == b$}
{\textbf{Return} $res * a$ as $\langle gcd(a,b) \rangle$.}
\ElseIf{$a \% 2 == 0$ and $b \% 2 == 0$}
{\textbf{Return} $gcd(a/2, b/2, 2*res)$.}
\ElseIf{$a \% 2 == 0$}
{\textbf{Return} $gcd(a/2, b, res)$.}
\ElseIf{$b \% 2 == 0$}
{\textbf{Return} $gcd(a, b/2, res)$.}
\ElseIf{$a > b$}
{\textbf{Return} $gcd(a-b, b, res)$.}
\Else
{\textbf{Return} $gcd(a, b-a, res)$.}
\caption{Binary Euclidean Algorithm}
\label{alg:EB}
\end{algorithm}
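For reference, the following iterative software model of the binary Euclidean method (for positive inputs) uses only shifts, subtractions, and parity tests, which is precisely what makes it hardware-friendly; the coprimality wrapper reflects the check our implementation actually needs:
\begin{verbatim}
def gcd_binary(a, b):
    # assumes a, b >= 1; res accumulates the shared factors of 2
    res = 1
    while a != b:
        if a % 2 == 0 and b % 2 == 0:
            a, b, res = a >> 1, b >> 1, res << 1
        elif a % 2 == 0:
            a >>= 1
        elif b % 2 == 0:
            b >>= 1
        elif a > b:
            a -= b
        else:
            b -= a
    return res * a

def coprime(a, b):
    return gcd_binary(a, b) == 1
\end{verbatim}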
\section{Conclusion}
\label{con}
We presented a fast arithmetic hardware library with a focus on accelerating the key arithmetic operations involved in RLWE-based somewhat homomorphic encryption. For all of these operations, we include a hardware-cost-efficient serial implementation and a fast parallel implementation in the library. We also presented a modular and hierarchical implementation of a hardware accelerator using the modules of the proposed arithmetic library to demonstrate the speedup achievable in hardware. The parameterized design approach of the modules and the hardware accelerator provides the flexibility to extend the use of the modules to other schemes, such as BGV, and the accelerator to many applications, especially in the FPGA-centric cloud computing environment. Evaluation of the implementation shows that speedups of about $4200\times$ and $2950\times$ for evaluating homomorphic multiplication and addition, respectively, are achievable in hardware when compared to a software implementation.
As future work, we would like to optimize and implement the arithmetic operations involved in bootstrapping as well. The bootstrap operation is one of the key functions in achieving fully homomorphic encryption, but it remains very expensive to perform. Optimizing the bootstrap operation will render it more practical to use. We are also actively working on integrating other RLWE-based homomorphic encryption schemes, like BGV, into our library so as to leverage inherent advantages that these schemes offer. Once we have the required operations and schemes implemented, we will open-source the arithmetic library and FPGA design examples.
\subsection{Polynomial Multiplication using NTT}
\label{PolyMul}
Polynomial multiplication is the most frequently performed operation in homomorphic encryption and has the highest implementation complexity. Therefore, the latency of the polynomial multiplication module governs the efficiency of the entire implementation. Hence, it is critical to design an efficient polynomial multiplication module. A conventional approach to implementing a polynomial multiplier is to use the convolution method. However, this approach is expensive to implement in hardware, as it requires performing $O(n^2)$ multiplications for a degree-$n$ polynomial. This complexity can be reduced to $O(n \log_2 n)$ multiplications by using the NTT combined with the negative wrapped convolution to perform polynomial multiplication. We leverage the NTT-based multiplication algorithm proposed by Chen et al. \cite{CH2015} in our implementation. The steps involved in this algorithm are described in Algorithm \textbf{3}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo3.eps}
\end{center}
\label{alg:MUL}
\end{figure}
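Since Algorithm \textbf{3} is reproduced above only as a figure, the following sketch spells out the textbook negative wrapped convolution it builds on, for $R_q=\mathbb{Z}_q[x]/\langle x^n+1\rangle$. It assumes the \texttt{ntt}/\texttt{intt} reference routines sketched later in this section and a $2n^{th}$ root of unity $\psi$ with $\psi^2=\omega$ (the inverse of $\psi$ is obtained via Fermat's little theorem); details may differ from the exact formulation of Chen et al.:
\begin{verbatim}
def poly_mul_negacyclic(a, b, q, omega, psi):
    n = len(a)
    psi_inv = pow(psi, q - 2, q)  # psi^{-1} via Fermat's little theorem
    # pre-weighting by powers of psi turns the cyclic convolution computed
    # by the NTT into the negacyclic convolution needed modulo x^n + 1
    fa = ntt([ai * pow(psi, i, q) % q for i, ai in enumerate(a)], q, omega)
    fb = ntt([bi * pow(psi, i, q) % q for i, bi in enumerate(b)], q, omega)
    fc = [x * y % q for x, y in zip(fa, fb)]  # pointwise products
    c = intt(fc, q, omega)
    # post-weighting by powers of psi^{-1} undoes the pre-weighting
    return [ci * pow(psi_inv, i, q) % q for i, ci in enumerate(c)]
\end{verbatim}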
\subsubsection{Number Theoretic Transform}
\label{NTT}
The Number Theoretic Transform (NTT) is a generalization of the Fast Fourier Transform (FFT) over a finite ring $R_q = R/\langle q \rangle = \mathbb{Z}_q[x]/\langle f(x) \rangle$. The NTT is defined as follows:
\begin{equation} \label{eq: NTT}
X_i = \sum_{k = 0}^{n-1} x_k \cdot \omega^{ik}
\end{equation}
where $\omega$ is an $n^{th}$ root of unity in the corresponding field. For a ring ${R_q}$, where $q$ is a prime number, the $n^{th}$ root of unity $\omega$ must satisfy two conditions:
\begin{enumerate}
\item $\omega^{n}$ = 1 mod $q$, \vspace{-0.1in}
\item The period of $\omega^{i}$ for $i \in \{0, 1, 2, \cdots n-1\} $ is exactly $n$.
\end{enumerate}
One of the efficient ways to compute $\omega$ is by using the following approach:
\begin{enumerate}
\item First compute a primitive root $\alpha$ of $q$, which must satisfy: \vspace{-0.1in}
\begin{itemize}
\item $\alpha^{q-1}$ = 1 mod $q$
\item The period of $\alpha^{i}$ for $i \in \{0, 1, 2, \cdots q-1\} $ is exactly $q-1$.
\end{itemize}
\item And since $\omega^{n} \equiv \alpha^{q-1}$ mod $q$, we can compute:
\begin{equation*}
\omega = \alpha^{(q-1)/n} \text{ mod } q
\end{equation*}
\item As a final step, verify that this $\omega$ meets both the conditions mentioned above.
\end{enumerate}
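A direct software rendering of this procedure is given below (as a reference sketch; it assumes the prime factorization of $q-1$ is supplied, and the example uses the well-known NTT-friendly prime $q=12289$, for which $q-1=2^{12}\cdot 3$):
\begin{verbatim}
def is_primitive_root(alpha, q, prime_factors):
    # alpha is a primitive root mod the prime q iff
    # alpha^((q-1)/p) != 1 mod q for every prime factor p of q - 1
    return all(pow(alpha, (q - 1) // p, q) != 1 for p in prime_factors)

def find_omega(q, n, prime_factors):
    alpha = next(a for a in range(2, q)
                 if is_primitive_root(a, q, prime_factors))
    omega = pow(alpha, (q - 1) // n, q)
    # verify both conditions on the n-th root of unity
    assert pow(omega, n, q) == 1
    assert all(pow(omega, i, q) != 1 for i in range(1, n))
    return omega

omega = find_omega(12289, 1024, [2, 3])
\end{verbatim}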
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo4.eps}
\end{center}
\label{alg:NTT}
\end{figure}
Applying the inverse NTT (iNTT) is straightforward and can be performed using the existing NTT module by replacing $\omega$ with $\omega^{-1}$, where $\omega^{-1}$ = $\omega^{n-1}$ mod $q$. The iNTT computation also requires the inverse of $n$, which can be computed from $n^{-1} \cdot n$ = 1 mod $q$.
Although there exist hardware implementations of the NTT, they are quite expensive because of the way they compute the indices of the points and the corresponding $\omega^i$. Investing a large number of multiplications and divisions in these computations may not be an issue in a software implementation; however, it leads to higher resource consumption in the hardware counterpart. Therefore, in our implementation of the NTT algorithm, Algorithm \ref{alg:NTT}, we perform the index computation using only shift and xor operations. The benefit of doing so is that shift and xor are not only inexpensive to implement but also conveniently replace the large multiplication and division circuits. By leveraging this highly optimized NTT implementation, we implement the fast polynomial multiplication algorithm (Algorithm \textbf{3}) very efficiently. Figure \ref{fig:PolyMulNTT} shows a high-level circuit for polynomial multiplication and the operations within the NTT block.
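As a behavioral reference for this circuit (a software model, not the RTL), the following iterative radix-2 transform keeps all index bookkeeping to shifts and xors, in the spirit of Algorithm \ref{alg:NTT}; the inverse transform reuses it with $\omega^{-1}=\omega^{n-1}$ and a final scaling by $n^{-1}$, exactly as described above:
\begin{verbatim}
def ntt(a, q, omega):
    n = len(a)
    j = 0  # bit-reversal permutation, computed with shifts and xors only
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:  # log2(n) butterfly stages
        w_step = pow(omega, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k] = (u + v) % q
                a[k + length // 2] = (u - v) % q
                w = w * w_step % q
        length <<= 1
    return a

def intt(a, q, omega):
    n = len(a)
    n_inv = pow(n, q - 2, q)  # n^{-1} mod q via Fermat's little theorem
    return [x * n_inv % q for x in ntt(a, q, pow(omega, n - 1, q))]
\end{verbatim}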
\begin{figure}[ht]
\begin{center}
\includegraphics[width=5.3in]{Figures/PolyNTT.eps}
\end{center}
\caption{Polynomial Multiplication and Operation within NTT}
\label{fig:PolyMulNTT}
\end{figure}
\subsection{Modular reduction}
Modular reduction is not only at the core of many asymmetric cryptosystems, but it is also the most performed operation in encryption schemes based on RLWE. This is because, in RLWE, all the operations must be performed over large finite rings. The function of the modular reduction operation is to compute the remainder of an integer division. Mathematically, it is written as $r = a \bmod q$.
While the modular reduction operation sounds relatively simple, the division of two large integers is very costly. Moreover, the moduli in RLWE-based schemes are prime numbers and not powers of 2, which makes the operation non-trivial. Therefore, the hardware implementation of the modulo operation is quite expensive. For example, the built-in modulo operator ($\%$) in Verilog utilizes about $800$ LUTs for $30$-bit operands, and when many such modular operations are involved, the hardware cost quickly adds up. Hence, optimization of the modulo operation can lead to significant hardware cost reductions.
One well-known modular reduction optimization algorithm is Barrett reduction \cite{BP1986}. It is preferred over Montgomery reduction \cite{MM1985}, as it operates on the given integer directly, while Montgomery reduction requires numbers to be converted into and out of Montgomery form, which is expensive in itself. We discuss Barrett reduction next, and then propose some modifications to the existing Barrett reduction algorithm to reduce the hardware cost further.
\subsubsection{Barrett reduction}
The Barrett reduction algorithm was introduced by P. D. Barrett \cite{BP1986} to optimize the modular reduction operation by replacing divisions with multiplications, so as to avoid the slowness of long divisions. The key idea behind Barrett reduction is to precompute, for a given prime modulus $q$, a factor using one division; thereafter, the computations involve only multiplications, subtractions and shift operations, all of which are faster than division. The Barrett reduction algorithm is shown in Algorithm \textbf{1} and works as described below.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo1.eps}
\end{center}
\label{alg:BR}
\end{figure}
Since the modulus $q$ is known in advance and the factor $r$ depends only on this modulus, it can be precomputed and stored. The reduction function then only requires computing the remainder value, $t$. While computing the $t$ value, the division by $4^k$, a power of 2, can be performed using a right-shift operation. Hence, the entire computation reduces to just two multiplications, one right shift, and one subtraction. Furthermore, the computation is performed in one step and, thus, runs in constant time. The hardware implementation circuit is shown in Figure \ref{fig:MR}.
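For concreteness, a minimal software model of this computation is given below; the function name and modulus value are illustrative, and the single conditional subtraction corresponds to the $t < q$ check mentioned later:
\begin{verbatim}
def barrett_reduce(a, q, k, r):
    # Computes a mod q for 0 <= a < 4^k, where q fits in k bits and
    # r = floor(4^k / q) is precomputed once for the fixed modulus.
    t = a - ((a * r) >> (2 * k)) * q  # division by 4^k is a right shift
    if t >= q:                        # at most one correction is needed
        t -= q
    return t

q = 998244353                # an illustrative 30-bit prime modulus
k = q.bit_length()           # k = 30
r = (1 << (2 * k)) // q      # precomputed factor
assert barrett_reduce((q - 1) ** 2, q, k, r) == ((q - 1) ** 2) % q
\end{verbatim}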
For our hardware-based Barrett reduction implementation, we specifically included some additional optimizations. One such optimization is a careful bit-width analysis. Say that the modulus requires exactly $k$ bits; then the product $\lfloor \frac{ar}{4^k} \rfloor q$ fits in $2k$ bits. We also observed that the computed values $t$ do not need more than $k + 1$ bits. The advantage of this observation is that we can safely ignore the upper $k - 1$ bits of the product, which in turn reduces the size of the registers to $k + 1$ bits while performing computations.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/MR.eps}
\end{center}
\caption{Hardware implementation of Barrett reduction and Modified Barrett reduction.}
\label{fig:MR}
\end{figure}
\subsubsection{Modified Barrett reduction}
Hasenplaugh et al. \cite{HG2007} introduced an iterative folding method as a modification to the Barrett reduction method. This method not only reduces the number of required multiplications via an increased number of precomputations, but also reduces the bit width of the operations performed. We modify their proposed approach and propose Algorithm \textbf{2}, which computes modulo reduction in a single fold.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo2.eps}
\end{center}
\label{alg:MBR}
\end{figure}
When compared to Barrett reduction, the proposed algorithm precomputes $k$ with half the bit width and $r$ with one third the bit width. This significantly reduces the multiplication bit width of the actual computations performed once the coefficient integers are known. Moreover, we were able to eliminate the additional check, $t < q$, required in Barrett reduction. Hence, the result of the modulo reduction is available with minimal operations at minimal bit width. The modified Barrett reduction's implementation is shown in Figure \ref{fig:MR}.
\section{Arithmetic Hardware Library For HE}
\label{FPGA}
We will start with discussing the standalone modules first (i.e., RNS and CRT) along with the supplemental operations to these modules. Then, we will describe other arithmetic operations shared between all the core building blocks, along with their design implementations. All the operations are customized for hardware-based implementation, and we include both hardware-cost efficient serial and fast parallel implementations in the library. It is worth noting that, in all the algorithms, $Q$ will denote the large integer and $q_i$ or $q$ will denote a prime factor of $Q$.
\input{Sections/RNS}
\input{Sections/MR}
\input{Sections/NTT}
\input{Sections/PolyAdd}
\input{Sections/ScalarMul}
\input{Sections/ScalarDiv}
\input{Sections/CRT}
\input{Sections/MI}
\input{Sections/Gaussian}
\input{Sections/Relinearisation}
\section{Homomorphic Encryption}
\label{HE}
To present the hardware library, we first start by introducing the underlying RLWE-based homomorphic encryption scheme. For this purpose, we chose the Fan-Vercauteren (FV) \cite{FS2012} scheme, as it has more controlled noise growth while performing homomorphic operations when compared to approaches like the BGV (Brakerski-Gentry-Vaikuntanathan) scheme~\cite{BL2014}. Moreover, Costache et al. \cite{CE20} presented results showing the FV scheme outperforming BGV for large plaintext moduli.
\subsection{FV Scheme}
The FV scheme operates in the ring $R = \mathbb Z_Q[x]/ \left< f(x) \right>$, with $f(x) = \phi_d(x)$ the $d^{th}$ cyclotomic polynomial. The plaintext $m$ is chosen in the ring $R_t$ for some small $t$ and a ciphertext consists of only one element in the ring $R_Q$ for a large integer $Q$. The security of the scheme is governed by the degree of this polynomial $f(x)$ and the size of $Q$.
The secret key, $s_k$, is sampled from the ring $R_2$ or a Gaussian distribution, $\chi_k$. The public key, $a$, is sampled from the ring $R_Q$ and the error vector, $e$, is sampled from a second Gaussian distribution, $\chi_{err}$. The other public key, $b$, is computed as follows:
\begin{equation}
b = [-(a \cdot s + e)]_{R_Q}
\label{equ:KeyGen}
\end{equation}
Encryption of the plaintext message $m$ yields a pair of ciphertexts, where $r_0$ is sampled in the same way as the secret key and $r_1, r_2 \xleftarrow{} \chi_{err}$ are fresh error samples:
\begin{equation}
ct = ([(b \cdot r_0 + r_2 + t \cdot m)]_{R_Q}, [(a \cdot r_0 + r_1)]_{R_Q})
\label{equ:Enc}
\end{equation}
The homomorphic addition operation adds two such pairs of ciphertexts:
\begin{equation}
(c_0, c_1) = ([ct_1[0] + ct_2[0]]_{R_Q}, [ct_1[1] + ct_2[1]]_{R_Q})
\label{equ:Add}
\end{equation}
After the addition operation, decryption is done as:
\begin{equation}
m_1 + m_2 = \left[ \bigg \lfloor \frac{[c_0 + c_1 \cdot s_k]}{t} \bigg \rfloor \right]_{R_Q}
\label{equ:AddDec}
\end{equation}
The homomorphic multiplication operation multiplies two such pairs of ciphertexts using the following equations:
\begin{align}
\begin{split}
c_0 = [ct_1[0] \cdot ct_2[0]]_{R_Q} \\
c_1 = [ct_1[0] \cdot ct_2[1] + ct_1[1] \cdot ct_2[0]]_{R_Q} \\
c_2 = [ct_1[1] \cdot ct_2[1]]_{R_Q}
\label{equ:Mul}
\end{split}
\end{align}
Decryption, after the multiplication operation, using the secret key is computed as:
\begin{equation}
m_1 \cdot m_2 = \left[ \bigg \lfloor \frac{[c_0 + c_1 \cdot s_k + c_2 \cdot s^2_k]}{t} \bigg \rfloor \right]_{R_Q}
\label{equ:MulDec}
\end{equation}
Since the multiplication operation yields a degree-$2$ ciphertext, this ciphertext needs to be reduced back to degree $1$ before further multiplications can be performed. In the FV scheme, this is achieved by a relinearisation operation. The scheme facilitates two different approaches for performing relinearisation. Relinearisation version $1$ consists of generating the relinearisation keys, decomposing $c_2$ to limit the noise explosion, and then converting to a degree-$1$ ciphertext using the generated relinearisation keys and the decomposed $c_2$. The relinearisation keys are generated as follows:
\begin{equation}
rlk = [([-(a_i \cdot s + e_i) + T^i \cdot s^2]_{R_Q},a_i) : i \in [0..\ell]]
\label{equ:Relin1KeyGen}
\end{equation}
Here, $T$ is independent of $t$ and $\ell = \lfloor \log_T(Q) \rfloor$. Decomposition of $c_2$ involves rewriting $c_2$ in base $T$ and can be computed using the following equation:
\begin{equation}
c_2 = \sum_{i=0}^{\ell} T^i \cdot c_{2}^{(i)}
\label{equ:Decomp}
\end{equation}
Next the relinearisation operation can be performed as follows:
\begin{align}
\begin{split}
c_{0}^{'} = [c_0 + \sum_{i=0}^{\ell} rlk[i][0] \cdot c_{2}^{(i)}]_{R_Q} \\
c_{1}^{'} = [c_1 + \sum_{i=0}^{\ell} rlk[i][1] \cdot c_{2}^{(i)}]_{R_Q}
\label{equ:Relin1}
\end{split}
\end{align}
Since we have obtained a degree-$1$ ciphertext after the relinearisation operation, decryption can be performed without the $s_{k}^{2}$ term of equation \ref{equ:MulDec}. Therefore, the decryption operation simplifies to the equation:
\begin{equation}
m_1 \cdot m_2 = c_0^{'} + c_1^{'} \cdot s_k
\label{equ:RelinMulDec}
\end{equation}
Note that the choice of $T$ determines the size of the relinearisation keys and the noise growth during the relinearisation operation. The larger the value of $T$, the smaller the relinearisation keys, but the higher the noise introduced by relinearisation; the smaller the value of $T$, the larger the keys, with smaller noise introduction. The value of $T$ must therefore be picked in a balanced way.
Relinearisation version $2$ is a modified form of modulus switching and hence requires choosing a second modulus $p$ such that $p \geq Q^3$ for small enough error samples. Now, the relinearisation keys can be generated as follows:
\begin{equation}
rlk = ([-(a \cdot s + e) + p \cdot s^2]_{R_{p \cdot Q}}, a)
\label{equ:Relin2KeyGen}
\end{equation}
Here, $a \in R_{p \cdot Q}$ and $e \xleftarrow{} \chi'_{err}$. We can perform the relinearisation using the following computation:
\begin{align}
\begin{split}
c_{0}^{'} = c_0 + \left[ \bigg \lfloor \frac{c_{2} \cdot rlk[0]}{p} \bigg \rceil \right] _{R_Q} \\
c_{1}^{'} = c_1 + \left[ \bigg \lfloor \frac{c_{2} \cdot rlk[1]}{p} \bigg \rceil \right]_{R_Q}
\label{equ:Relin2}
\end{split}
\end{align}
Once the $c_2$ component is removed, we can perform decryption using equation \ref{equ:RelinMulDec}.
\subsection{Required Operations}
Our proposed arithmetic library includes highly optimized hardware-based implementations of the Residue Number System (RNS), Chinese Remainder Theorem (CRT), modulo inverse, fast polynomial multiplication using the Number Theoretic Transform (NTT), polynomial addition, modulo reduction, scalar multiplication, scalar division, Gaussian noise sampler and relinearisation operations. While implementing these operations for our arithmetic library, the design choices are highly motivated by the parameter selection. This is because an RLWE-based encryption scheme requires adding a small noise vector to obfuscate the plaintext message, as shown previously in equations \ref{equ:KeyGen} and \ref{equ:Enc}. While performing homomorphic addition and multiplication, the noise present in the ciphertexts gets doubled and squared, respectively. Due to this noise growth, the ring $R_Q$, along with the degree of the polynomial, needs to be large, so as to compute a circuit of a certain depth and still enable successful decryption of the result. Hence, the parameter $Q$ and the degree of the polynomial $f(x)$ need to be large. The operations on this large parameter set not only increase the hardware cost but also slow down the homomorphic encryption.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/RO_flat.eps}
\end{center}
\vspace{-0.15in}
\caption{Illustrative sequence of operations.}
\label{fig:RO}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=1.0\textwidth]{Figures/system_architecture.eps}
\end{center}
\vspace{-0.1in}
\caption{Core building blocks of RLWE-based Somewhat Homomorphic Encryption.}
\label{fig:SHE}
\end{figure}
We use the concepts of modular arithmetic to speed up HE computations. The underlying FV scheme does not restrict $Q$ to a prime number; instead, $Q$ can be the product of small primes. When working modulo a product of numbers, say $Q = q_1 \times q_2 \times \cdots \times q_k$, Residue Number System (RNS) helps reduce the coefficients in each of the $R_{q_i}$ and Chinese Remainder Theorem (CRT) lets us work in each modulus $q_i$ separately. Since the computational cost is proportional to the size of operands, this is faster than working in the full modulus $Q$. Moreover, breaking down the coefficients into smaller integers using RNS also limits the noise expansion. Figure \ref{fig:RO} illustrates the sequence of operations that will be required in HE using the FV scheme, for $Q = q_1 \times q_2 \times q_3 \times q_4$.
In our implementation of the arithmetic library, we consider $Q$ to be $1200$ bits and the degree of the polynomial, $n$, to be $1024$, with a $128$-bit security level. With these values of $Q$ and $n$, we can evaluate a binary circuit of depth $56$ using somewhat homomorphic encryption. The RNS module takes a $1200$-bit wide integer coefficient as input, performs $x_i = x \bmod q_i$, and thus breaks $x$ into $40$ small integers of $30$ bits each. This enables us to set up $40$ pipelines to perform $40$ operations in parallel, providing the required performance boost. Once the homomorphic addition or multiplication operation is done on these small integers, CRT can combine them to map back to the original $1200$-bit width. Note that the bit width selection for the small integers is a design decision one can make based on available resources. Our implementations take $q_i$ as a parameter, which facilitates different bit width selections.
Figure \ref{fig:SHE} shows all the core building blocks required to perform somewhat homomorphic encryption (SHE) addition and multiplication operations using FV scheme. The client-side building blocks include key generation, relinearisation (versions $1$ and $2$) key generation, encryption, decryption (for both degree-1 and degree-2 ciphertexts), residue number system (RNS) along with modular reduction, and Chinese remainder theorem (CRT) along with modulo inversion. The cloud provider has blocks to perform homomorphic addition, homomorphic multiplication, and relinearisation (versions $1$ and $2$). While RNS and CRT are standalone modules, the rest of the main blocks share the following submodules: polynomial multiplication, polynomial addition, modular reduction, scalar multiplication, scalar division to nearest binary integer, noise sampler, and true random number generator. Certain operations like decompositions, powers-of-$2$ computations, divide and round operations are specifically required for the purpose of relinearisation, which we will discuss in detail in Section \ref{relin}.
\subsection{Polynomial Addition}
\label{PolyAdd}
Polynomial addition is the second most frequently used operation after polynomial multiplication. The schematic of the hardware implementation for polynomial addition is shown in Figure \ref{fig:submodules}. The implementation performs a component-wise addition on the coefficients of the polynomials. Note that the results are reduced modulo either the small modulus $q$ or the large modulus $Q$, depending on which main module is utilizing this submodule to perform polynomial addition.
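Functionally, the submodule computes the following per coefficient (a minimal sketch with an illustrative function name; the hardware version instantiates one adder per coefficient lane):
\begin{verbatim}
def poly_add(a, b, q):
    # Component-wise addition of polynomial coefficients, reduced mod q.
    return [(x + y) % q for x, y in zip(a, b)]
\end{verbatim}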
\subsection{Relinearisation}
\label{relin}
We discuss relinearisation version $1$ implementation details first. In version $1$, the key generation will reuse most of the existing submodules except for the powers-of-2 computation. This operation is indicated by the PowersOf2 submodule in Figure \ref{fig:Relin}. The values of $T^i$ are not precomputed and stored to reduce the memory overhead. Instead, we take the vector $s^2$ and perform a left shift operation on all of the elements of this vector. The first set of left shift operations should be by $0$ bits to indicate $2^0$ multiplication, then by $1$ bit for $2^1$ multiplication, and so on until $2^\ell$ multiplications are performed.
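A compact software model of this shift-based computation (for $T = 2$; the function name is ours, and \texttt{s2} holds the coefficients of $s^2$) is:
\begin{verbatim}
def powers_of_2(s2, ell, q):
    # For T = 2, T^i * s^2 is a left shift of every coefficient by
    # i bits, i = 0..ell, reduced mod q; no T^i values are stored.
    return [[(c << i) % q for c in s2] for i in range(ell + 1)]
\end{verbatim}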
\begin{figure}[ht]
\begin{center}
\includegraphics[width=1\columnwidth]{Figures/Relin.eps}
\end{center}
\caption{PowersOf2, Inner Product, and Div\&Round submodules.}
\label{fig:Relin}
\end{figure}
Note that we generate the relinearisation keys in $R_{q_1}, \ldots, R_{q_k}$ rather than $R_Q$. This facilitates the routing of the relinearisation keys correctly to the corresponding $q_i$ operation pipeline without the need to perform modular reductions. Although we perform $k$ times more operations, these operations are significantly faster. Additionally, the output of this submodule is arranged in such a fashion that the elements having the same index, from key $rlk[0]$, are treated as a single output. A similar output format holds true for $rlk[1]$ as well. This helps in faster indexing of the relinearisation keys while computing the inner product with $c_2$. The schematic of PowersOf2 submodule implementation is as shown in Figure \ref{fig:Relin}.
In the relinearisation version $1$ module in Figure \ref{fig:Relin}, the decomposition submodule's task is to represent the ciphertext $c_2$ at the bit level, i.e., to convert the coefficients from $R_q$ to $R_T$ with $T = 2$ here. Since bit-level operations can be performed readily in hardware, this operation becomes trivial. Therefore, we do not specifically provide an implementation of this submodule, and it is shown for completeness in Figure \ref{fig:SHE}. Next, we describe the implementation of the inner product submodule. To avoid performing actual multiplication operations, we leverage our scalar multiplication module (Section \ref{ScMul}) within this submodule, since $c_2$ is binary. Hence, a conditional operator does the work of multiplying elements of $c_2$ and the relinearisation keys. We then only need adders to compute the summation and finish the inner product. Implementation of this submodule is shown in Figure \ref{fig:Relin}.
We will explore the relinearisation version $2$ submodules now. For key generation, we pick the largest $q_i$ from the moduli set, compute $q_{i}^3$ and then the immediate next power of two is set as the value of $p$. This $p$ is the scaling factor. Since we choose a power of $2$ as $p$, we can simply perform shift left operations to emulate the multiplication of $p$ with $s^2$ while generating the relinearisation keys. Note that $rlk[1]$ or $a$ is sampled from $R_{p \cdot q_i}$. Additionally, to maintain the required security, the error samples need to be generated from a different noise sampler. Hence, a second instance of the noise sampler is used here with the required parameter settings. The rest of the submodules are as previously discussed.
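As a sketch of this selection step (the function name is ours, for illustration):
\begin{verbatim}
def scaling_factor(q_max):
    # p is the smallest power of two with p >= q_max^3, so that
    # multiplication and division by p reduce to shift operations.
    return 1 << (q_max ** 3 - 1).bit_length()
\end{verbatim}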
While performing the relinearisation operation in version $2$, the ciphertext needs to be scaled down. This task is accomplished by the Div\&Round submodule shown in Figure \ref{fig:Relin}. As the scale factor $p$ is a power of $2$, division operations can be avoided and shift-right operations performed instead. Most other existing implementations (both software and hardware) precompute $\frac{1}{p}$, round it down, and perform multiplication operations instead of division. There are two disadvantages to this approach. First, rounding leads to loss of precision, generating approximate results and magnifying the errors in decryption as the levels of operations increase. Second, even though the expensive division operations are avoided, multiplications are still costly, requiring large multipliers which are not only expensive but also lead to a lower operating frequency. Figure \ref{fig:Relin} shows the Div\&Round submodule circuit.
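Since $p$ is a power of two, the divide-and-round operation reduces to adding half of $p$ (a single bit) and shifting right, as in the following sketch:
\begin{verbatim}
def div_round_pow2(x, log2_p):
    # Round-to-nearest division by p = 2^log2_p for non-negative x:
    # add p/2, then shift right; no divider or multiplier is needed.
    return (x + (1 << (log2_p - 1))) >> log2_p
\end{verbatim}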
\subsection{Modulo Inverse}
A modulo inverse or multiplicative inverse is the main computation involved in CRT. Moreover, while working with homomorphic encryption, due to large parameters, the amount of storage available can be a concern. Thus, instead of using LUT-based CRT, we may need to use a regular CRT implementation with the modulo inverse computed on the fly.
The multiplicative inverse of $a \pmod q$ exists if and only if $a$ and $q$ are relatively prime (i.e., if $gcd(a, q) = 1$). Given two integers $a$ and $q$, the modulo inverse is defined by an integer $p$ such that
\begin{equation}
a \cdot p \equiv 1 \pmod q
\label{eq:MI}
\end{equation}
Here, the value of $p$ should be in $\{0, 1, 2, \ldots, q-1\}$, i.e., in the range of integers modulo $q$. In our case, we need to compute the multiplicative inverse between pairwise moduli, $q_i$. There are two primary algorithms used in computing a modulo inverse. We discuss both algorithms in detail next.
\subsubsection{Fermat’s Little Theorem}
Fermat's little theorem \cite{VE2016} is typically used to simplify the process of modular exponentiation. But since we know $q$ is prime, we can also use Fermat's little theorem to find the modulo inverse. According to this theorem, we can rewrite equation \ref{eq:MI} as follows:
\begin{equation}
a^{q-1} \equiv 1 \pmod q
\label{eq:FL1}
\end{equation}
If we multiply both sides of this equation with $a^{-1}$, we get
\begin{equation}
a^{-1} \equiv a^{q-2} \pmod q
\label{eq:FL2}
\end{equation}
Equation \ref{eq:FL2} is the only computation carried out under this theorem to get the value of $a^{-1}$, and this is what is shown in Algorithm \textbf{6}. The algorithm has a time complexity of $O(\log^2 q)$. For our hardware-based implementation, we precompute the power factors from $1$ to $q-2$ to save computation cost. This not only speeds up the computation but also significantly reduces the hardware cost. The hardware implementation is shown in Figure \ref{fig:MI}.
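In software form, the whole computation is a single modular exponentiation (a sketch; the hardware version replaces \texttt{pow} with the precomputed power factors, and the test values are illustrative):
\begin{verbatim}
def mod_inverse_fermat(a, q):
    # Modulo inverse via Fermat's little theorem: a^(q-2) mod q,
    # valid for prime q and a not divisible by q.
    assert a % q != 0
    return pow(a, q - 2, q)

q, a = 998244353, 12345
assert (a * mod_inverse_fermat(a, q)) % q == 1
\end{verbatim}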
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo6.eps}
\end{center}
\label{alg:FLT}
\end{figure}
\subsubsection{Extended Euclidean Algorithm}
The extended Euclidean algorithm \cite{VE2016} is an extension of the classic Euclidean algorithm used for finding the greatest common divisor (GCD). According to this algorithm, if $a$ and $q$ are relatively prime, there exist integers $x$ and $y$ such that $ax + qy = 1$, and such integers may be found using the Euclidean algorithm. Considering this equation modulo $q$, it follows that $ax \equiv 1 \pmod q$; i.e., $x = a^{-1} \pmod q$. The algorithm used for our implementation is shown in Algorithm \textbf{7}.
The algorithm works as follows. Given two integers $0 < a < q$, using the classic
Euclidean algorithm equations, one can compute $gcd(a,q) = r_j$, where $r_j$ is the remainder. In the classic
Euclidean algorithm, we start by dividing $q$ by $a$ (integer division with remainder), then repeatedly divide the previous divisor by the previous remainder until there is no remainder. The last remainder we divided by is the greatest common divisor. To avoid division operations, the classic Euclidean algorithm equations can be rewritten as follows:
\vspace{-0.1in}
\begin{align*}
r_1 &= q - a \cdot x_1, \\
r_2 &= a - r_1 \cdot x_2, \\
r_3 &= r_1 - r_2 \cdot x_3, \\
&\vdots\\
r_j &= r_{j-2} - r_{j-1} \cdot x_j
\end{align*}
Then, in the last of these equations, $r_j = r_{j-2} - r_{j-1} \cdot x_j$ , replace $r_{j-1}$ with its expression in terms of $r_{j-3}$ and $r_{j-2}$ from the equation immediately above it. Continue this process
successively, replacing $r_{j-2},r_{j-3},\ldots,$ until we obtain the final equation $r_j = ax + qy$,
with $x$ and $y$ integers. In our special case, $\gcd(a,q) = 1$, the integer equation reads $1 = ax + qy$; therefore $1 \equiv ax \pmod q$, so that the residue of $x$ is the multiplicative inverse of $a \pmod q$. The time complexity of this algorithm is $O(\log(\min(a,q)))$.
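The simplified iterative structure can be modeled in software as follows (a sketch mirroring the back-substitution above, tracking only the coefficient of $a$; the function name is ours):
\begin{verbatim}
def mod_inverse_eea(a, q):
    # Iterative extended Euclidean algorithm: returns x with
    # a*x = 1 (mod q), assuming gcd(a, q) = 1.
    old_r, r = a, q
    old_x, x = 1, 0
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_x, x = x, old_x - quotient * x
    assert old_r == 1     # inverse exists only when gcd(a, q) = 1
    return old_x % q
\end{verbatim}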
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo7.eps}
\end{center}
\label{alg:EEA}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/MI.eps}
\end{center}
\caption{Hardware implementation of Fermat's little and extended Euclidean theorem.}
\label{fig:MI}
\end{figure}
The hardware implementation is shown in Figure \ref{fig:MI}. The algorithm is simplified by removing unnecessary variables and computations to make it more suitable for hardware implementation. The implementation is done in an iterative fashion, so that the input parameters gradually decrease while the GCD of the parameters remains unchanged.
\section{Introduction}
\label{Intro}
As the internet becomes easily accessible, almost all electronic devices collect enormous amounts of private and sensitive data from routine activities. These electronic devices may be as small as wearable electronics \cite{PE2003}, like a smart watch collecting personal health information or a cell phone collecting location information, or may be as large as an IoT-based smart home \cite{BD2011} \cite{KI2016} collecting routine information like room temperature, door status (open or closed), smart meter reading, and other such details. These electronic devices have limited power, storage, and compute capabilities and often need external support to process the collected information. Cloud computing \cite{MN2011} \cite{HC2008} provides a convenient means not only to store the collected information but also to apply various compute functions to this stored data. The processed information can be used easily for various purposes, like machine learning predictions.
As cloud computing services become readily available and affordable, many industrial sectors have begun to use cloud services instead of setting up their own infrastructure. Sectors like automotive production, education, finance, banking, health care, manufacturing, and many more leverage cloud services for some or all of their storage and computing needs. This, in turn, leads to a lot of sensitive data being stored in the cloud. Hence, while cloud computing provides the convenience of shared resources and compute capabilities for individuals and business owners alike, it brings its own challenges in maintaining data privacy \cite{PC2010} \cite{KC2011}. A cloud owner has access to all of the private data pertaining to the clients and can also observe the computations being carried out on this private data. Moreover, cloud services are shared among many clients; therefore, even if a client assumes that the cloud service provider is honest and will ensure that there is no data breach in their environment, the chances of data leakage remain high due to the shared storage space or compute nodes on the cloud.
Homomorphic encryption \cite{GF2009} is a ground-breaking technique to enable secure private cloud storage and computation services. Homomorphic encryption allows evaluating functions on encrypted data to generate an encrypted result. This result, when decrypted, matches the result of the same operations performed on the unencrypted data. Thus, a data owner can encrypt the data and then send it to cloud for processing. The cloud running the homomorphic encryption based services will perform computations on the encrypted data and send the results back to the data owner. The data owner, having access to the private key, performs the decryption and obtains the result. The cloud does not have access to the private key or the plain data, and, hence, the security concerns related to private data processing on the cloud can be mitigated. An illustrative scenario is shown in Figure \ref{fig:cloud}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\columnwidth]{Figures/cloud.eps}
\end{center}
\vspace{-0.1in}
\caption{Third-party cloud service provider with Homomorphic Encryption.}
\label{fig:cloud}
\vspace{-0.05in}
\end{figure}
The idea of homomorphic encryption was first proposed in 1978 by Rivest et al. \cite{RD1978}. In 2009, Gentry's seminal work \cite{GF2009} provided a framework to make fully homomorphic encryption feasible, and almost a decade's work has now made it practical \cite{NC2011}. While homomorphic encryption has become realistic, it still remains several magnitudes too slow, making it expensive and resource intensive. There are no existing homomorphic encryption schemes with performance levels that would allow large-scale practical usage. Substantial efforts have been put forward to develop full-fledged software libraries for homomorphic encryption. Such libraries include SEAL \cite{CS2017}, Palisade \cite{PL2019}, cuHE \cite{DC2015}, HElib \cite{HS2014}, NFLLib \cite{AN2016}, Lattigo \cite{LT2019}, and HEAAN \cite{KH2018}. All of these libraries are based on the RLWE-based encryption scheme, and they generally implement Brakerski-Gentry-Vaikuntanathan (BGV) \cite{BL2014}, Fan-Vercauteren (FV) \cite{FS2012}, and Cheon-Kim-Kim-Song (CKKS) \cite{CH2017} homomorphic encryption schemes with very similar parameters.
Although the software implementations are impressive, they are still incapable of reaching the required performance, as they are limited by the underlying hardware. For example, Gentry et al. \cite{GC2012}, in their homomorphic evaluation of an AES circuit, reported approximately $48$ hours of execution time on an Intel Xeon CPU running at $2.0$GHz. Even their parallel SIMD-style implementation took around $40$ minutes per block to evaluate $54$ AES blocks. A modern Intel Xeon CPU takes about $20$ns to perform a regular AES encryption block; hence, it is evident that homomorphic evaluation of an AES block is about $1.2\times10^{11}$ times slower than a regular evaluation. Similarly, logistic regression, a popular machine learning tool, is often used to make predictions using clients' private data in the cloud. A software-based homomorphic logistic regression prediction takes about $1.6$ hours, while a regular logistic prediction takes about $95$ns.
If homomorphic encryption's full potential and power can be unleashed by realizing the required performance levels, it will make cloud computing more reliable via enhanced trust of service providers and their mechanisms for protecting users’ data. Hence, there is a need to accelerate the homomorphic encryption operation directly on the hardware to achieve maximum throughput with a low latency. With this in mind, we propose an arithmetic hardware library that includes the major arithmetic operations involved in homomorphic encryption. A hardware accelerator designed using the modules from this library can reduce the computational time for HE operations. To lower the power usage and improve performance, new cloud architectures integrate FPGAs to offload and accelerate compute tasks such as deep learning, encryption, and video conversion. The FPGA-based design and optimization approach introduced in this work fits into this class of FPGA-equipped cloud architectures.
The key contributions of the work are as follows:
\begin{itemize}
\item A fast and hardware-cost-efficient arithmetic library to individually accelerate all operations within homomorphic encryption. Speedups of $4200\times$ and $2950\times$ are observed for homomorphic multiplication and addition, respectively.
\item An open-source, FPGA-board-agnostic, parameterized design implementation of the modules, providing the flexibility to adjust parameters so as to meet the desired security level, hardware cost and multiplication depth.
\item A modular and hierarchical implementation of a hardware accelerator using the modules of the proposed arithmetic library to demonstrate the speedup achievable in hardware.
\end{itemize}
The rest of the paper is organized as follows. In Section \ref{HE}, we briefly present the underlying scheme and discuss the required arithmetic operations. Section \ref{FPGA} introduces these arithmetic operations and their efficient implementation. In Section \ref{Perf}, we evaluate the associated hardware cost and latency and then conclude the paper in Section \ref{con} along with future work.
\subsection{Chinese Remainder Theorem}
The Chinese remainder theorem, CRT \cite{KM2007}, states that if we know the residues of an integer $a$ modulo two primes $q_1$ and $q_2$, it is possible to reconstruct $<a>_{q_1q_2}$ as follows. Let $<a>_{q_1} = a_1$ and $<a>_{q_2} = a_2$; then the value of $a \pmod Q$, where $Q=q_1\cdot q_2$, can be found by
\begin{equation}
a = <q_1t_1a_2 + q_2t_2a_1>_Q
\label{eq:CRT}
\end{equation}
where $t_1$ is the multiplicative inverse of $q_1 \pmod q_2$ and $t_2$ is the multiplicative inverse of $q_2 \pmod q_1$. This is feasible as the inverses $t_1$ and $t_2$ always exist, since $q_1$ and $q_2$ are coprime. Mathematically, $<a>_{q_1q_2}$ can also be represented by a set of congruent equations as follows:
\begin{align}
\begin{split}
a \equiv a_1 \pmod {q_1} \\
a \equiv a_2 \pmod {q_2}
\end{split}
\end{align}
Using the CRT, we combine all the small integers back into one large integer, so as to generate the required final result. A naive approach to implement CRT would compute pairwise multiplicative inverse or modulo inverse for two given moduli and then use equation \ref{eq:CRT} to merge values of $a_1$ and $a_2$ to get $a$. This process can be carried out recursively until the final coefficient value is obtained.
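A minimal software model of one merge step of equation \ref{eq:CRT} is given below; in the LUT-based hardware discussed next, the two inverses are precomputed rather than evaluated at runtime (the function name and test values are illustrative, and Python $\geq$ 3.8 is assumed for \texttt{pow} with exponent $-1$):
\begin{verbatim}
def crt_pair(a1, a2, q1, q2):
    # Merge a1 = a mod q1 and a2 = a mod q2 into a mod (q1*q2),
    # per equation (eq:CRT); t1, t2 are the LUT entries in hardware.
    t1 = pow(q1, -1, q2)
    t2 = pow(q2, -1, q1)
    return (q1 * t1 * a2 + q2 * t2 * a1) % (q1 * q2)

q1, q2, a = 97, 101, 5000
assert crt_pair(a % q1, a % q2, q1, q2) == a
\end{verbatim}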
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.99\columnwidth]{Figures/algo5.eps}
\end{center}
\label{alg:CRTLuT}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\columnwidth]{Figures/CRT_LUT.eps}
\end{center}
\caption{Hardware implementation of CRT.}
\label{fig:CRT}
\end{figure}
The problem with this approach is that computing the multiplicative inverse at runtime increases latency. A better approach is therefore to precompute the modulo inverse values, since all the moduli are known in advance. The actual computation then reduces to a single step, which can be performed in one clock cycle. The precomputation and computation steps involved are shown in Algorithm \textbf{5}. We call this approach LUT-based, since the precomputed values are stored in LUTs; its hardware implementation circuit is shown in Figure \ref{fig:CRT}. The hardware cost of LUT-based CRT can be further optimized by breaking the single-step multiplication and addition into several steps, which would enable the reuse of multipliers and adders across these steps. We leave this as future work for now.
\subsection{Scalar Multiplication}
\label{ScMul}
Since the message space is binary, a conditional assignment operator can be used to implement the scalar multiplication operation. As shown in Figure \ref{fig:submodules}, $m$, the plaintext message, is an $n$-bit vector. Thus, computing $tm$ essentially requires choosing $t$ or $0$ according to each bit of $m$. Since we avoid performing actual multiplication operations, hardware cost is greatly reduced.
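A one-line software model of this multiplier-free operation (function name ours) is:
\begin{verbatim}
def scalar_mul_binary(m_bits, t, q):
    # Each message coefficient is 0 or 1, so t*m is a per-bit mux
    # (conditional assignment) rather than a multiplication.
    return [t % q if bit else 0 for bit in m_bits]
\end{verbatim}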
\begin{figure}[ht]
\centering
\includegraphics[width=1\columnwidth]{Figures/Operations.eps}
\caption{Polynomial Addition, Scalar Multiplication, and Scalar Division submodules.}
\label{fig:submodules}
\end{figure}
\subsection{Gaussian Noise Sampler}
\label{GNS}
The security of the RLWE-based encryption scheme is governed by small error samples generated from a Gaussian distribution. Hence, a Gaussian noise sampler lies at the core of maintaining the required security level. However, it is critical to select a sampling algorithm with high sampling efficiency and throughput, so that the key generation and encryption operations at the client side remain efficient. We leverage the Ziggurat-based Gaussian noise sampler implementation of \cite{AP2020}. Due to space constraints we do not present the implementation details here; interested readers can refer to the original paper.
\subsection{Residue Number System}
A residue number system, RNS \cite{GR1959} \cite{SR1986} \cite{SR1967}, is a mathematical way of representing an integer by its value modulo a set of $k$ integers $\{q_1, q_2, q_3, \ldots, q_k\}$, called the moduli, which generally should be pairwise coprime. An integer $x$ can be represented in the residue number system by the set of its remainders $\{x_1, x_2, x_3, \ldots, x_k\}$ under Euclidean division by the respective moduli. That is, $x_i = x \bmod q_i$ and $0 \leq x_{i} < q_{i}$ for every $i$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.85\columnwidth]{Figures/RNS.eps}
\end{center}
\caption{Serial and parallel implementation of RNS.}
\label{fig:RNS}
\end{figure}
The serial implementation of RNS is shown in Figure \ref{fig:RNS}. Each $1200$-bit coefficient is modulo reduced by a $q_i$ and stored in the respective BRAM. For $k$ moduli, it takes $k$ cycles to perform all the computations. When modulo reductions are performed in parallel, all the computations can be performed in a single clock cycle instead. The parallel implementation is shown in Figure \ref{fig:RNS}. Since the $mod$ operation is the key operation in RNS, we next optimize the modulo reduction operation. This will allow us to reduce the hardware cost substantially.
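Functionally, the RNS front end computes the following (a sketch with small illustrative moduli and a function name of ours; the actual design uses forty $30$-bit primes for the $1200$-bit $Q$):
\begin{verbatim}
def rns_decompose(x, moduli):
    # One residue per modulus: the serial hardware does one reduction
    # per cycle, the parallel version all k reductions in one cycle.
    return [x % q for q in moduli]

moduli = [97, 101, 103]   # illustrative pairwise-coprime moduli
residues = rns_decompose(123456, moduli)
\end{verbatim}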
\section{Introduction}
\label{sec:introduction}
\noindent The recent measurements of the Cosmic Microwave Background (CMB) anisotropies provided by the \emph{Planck} satellite mission (see \cite{Ade:2015xua, Ade:2015lrj}, for example) have given a wonderful confirmation of the standard $\Lambda$CDM cosmological model. However, when the base model is extended and other cosmological parameters are let free to vary, a few ``anomalies'' appear in the parameter values that, even if their significance is only at the level of two standard deviations, deserve further investigation.
First of all, the parameter $A_{L}$, that measures the amplitude of the lensing signal
in the CMB angular spectra \cite{Calabrese:2008rt}, has been found larger than the standard value with
$A_{L}=1.22\pm0.10$ at $\limit{68}$ ($A_{L}=1$ being the expected value in $\Lambda$CDM)
from \emph{Planck} temperature and polarization angular spectra \cite{Ade:2015xua}. A value of $A_{L}$ larger than one is difficult
to accommodate in $\Lambda$CDM, and several solutions have been proposed, such as modified gravity \cite{edmg,Huang:2015srv}, neutrino anisotropies \cite{Gerbino:2013ova}, and compensated isocurvature perturbations \cite{cip}.
Combining \emph{Planck} with data from the Atacama Cosmology Telescope (ACT) and the South Pole Telescope (SPT)
to better constrain the foregrounds, Couchot et al. \cite{couchot} found consistency with $A_L=1$. However, the compatibility of the CMB datasets used is unclear.
More recently Addison et al. \cite{addison} have found that including the $A_{L}$ parameter solves the tension
between \emph{Planck} and WMAP9 on the value of the derived cosmological parameters.
As shown in \cite{Ade:2015xua}, the $A_{L}$ anomaly persists when the \emph{Planck} data is combined with Baryonic Acoustic Oscillation surveys (BAO), it is enhanced when the CFHTLenS {shear lensing survey} is included, but it practically disappears when
CMB lensing from \emph{Planck} trispectrum observations are considered.
The $A_{L}$ anomaly is also still present in a $12$-parameter extended $\Lambda$CDM analysis of the
\emph{Planck} dataset (see \cite{ems}), showing no significant correlation with extra parameters such as the dark energy equation of state $w$,
the neutrino mass, and the neutrino effective number $N_\mathrm{eff}$.
Second, the \emph{Planck} dataset prefers a positively curved universe, again at about
two standard deviations with $\Omega_k = -0.040\pm0.020$ at $\limit{68}$.
This ``anomaly'' is not due to an increased parameter volume effect but, as stated in
\cite{Ade:2015lrj}, curvature provides a genuinely better fit to the data, with an improvement of $\Delta \chi^2 \sim 6$. When BAO data is included, however, {the curvature of the universe} is
again compatible with zero with the stringent constraint
$\Omega_k=-0.000\pm0.005$ at $\limit{95}$.
The fact that both the $A_{L}$ and $\Omega_k$ anomalies disappear when
reliable external datasets are included suggests that their origin might be
a systematic or that they are produced by a different physical effect
than lensing or curvature.
In this respect it is interesting to note that a third parameter is constrained to anomalous values from the \emph{Planck} data.
The spectral index $n_{\rm s}$ of primordial scalar perturbations is often assumed to be independent of scale. However, since
some small scale-dependence is expected,\footnote{\emph{E.g.}, we expect a running of the tilt $n_{\rm s}$ of order $(1-n_{\rm s})^2$ in slow-roll inflation.}
we can expand the dimensionless scalar power spectrum $\Delta^2_\zeta(k) = k^3P_\zeta(k)/2\pi^2$ as
\begin{equation}
\label{eq:Delta_of_k}
\Delta^2_\zeta(k) = A_\mathrm{s}\(\frac{k}{k_\star}\)^{n_{\rm s}-1 + \frac{\alpha_\mathrm{s}}{2}\log\frac{k}{k_\star} + \frac{\beta_\mathrm{s}}{6}\(\log\frac{k}{k_\star}\)^2}\,\,,
\end{equation}
where $\alpha_\mathrm{s}$ is the running of the spectral index, $\beta_\mathrm{s}$ is the running of the running, and $k_\star = 0.05\,\mathrm{Mpc}^{-1}$.
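For reference, the following Python sketch evaluates equation \ref{eq:Delta_of_k} for illustrative \emph{Planck}-like parameter values (cf. Tab.~\ref{tab:results}; the amplitude is an assumed round number, and the function name is ours):
\begin{verbatim}
import numpy as np

def delta2_zeta(k, A_s, n_s, alpha_s, beta_s, k_star=0.05):
    # Dimensionless scalar power spectrum with running alpha_s and
    # running of the running beta_s; k and k_star in Mpc^-1.
    lnk = np.log(k / k_star)
    exponent = (n_s - 1.0) + 0.5 * alpha_s * lnk \
               + (beta_s / 6.0) * lnk ** 2
    return A_s * (k / k_star) ** exponent

k = np.logspace(-4, 0, 100)                   # Mpc^-1
spec = delta2_zeta(k, A_s=2.2e-9, n_s=0.958,  # illustrative values
                   alpha_s=0.011, beta_s=0.027)
\end{verbatim}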
The \emph{Planck} temperature and polarization data analysis presented in \cite{Ade:2015lrj}, while providing a small indication for a {\it positive} running different from zero ($\alpha_\mathrm{s}=0.009\pm0.010$ at $\limit{68}$), also suggests the presence of a running of the running at the level of two standard deviations
($\beta_\mathrm{s}=0.025\pm0.013$ at $\limit{68}$). The inclusion of a running of the running improves the fit to the \emph{Planck}
temperature and polarization data by $\Delta\chi^2\sim 5$ \textcolor{black}{with respect to the $\Lambda$CDM model. We therefore do not expect this anomaly to be due to the increased parameter volume; it could be a hint of possible new physics beyond the standard model}. A discussion of the impact of this anomaly on inflationary models
has been presented in \cite{menabetas, Huang:2015cke}.
Given this result, it is timely to discuss the possible correlations between these three anomalies, $\beta_\mathrm{s}$, $A_{L}$ and $\Omega_k$, and to see, for example, whether one of them vanishes when a second one is considered at the same time in the analysis.
Moreover \textcolor{black}{(related to the above points), it is necessary to investigate in more detail how the inclusion of $\beta_\mathrm{s}$ helps give a better fit to the data, and} to test whether the indication for the running of the running survives when additional datasets such as BAO or lensing (CMB and shear) are considered. This is the goal of this paper.
We structure the discussion as follows. In the next section we will describe the analysis method and
the cosmological datasets used. In \sect{results} we present our results
and discuss possible correlations between $\beta_\mathrm{s}$, $A_{L}$
and $\Omega_k$. We also investigate the possibility that a running of the running affects current
and future measurements of CMB spectral distortions, comparing our results with those of \cite{Powell:2012xz}. Finally, in \sect{conclusions} we derive our conclusions.
\section{Method}
\label{sec:method}
\noindent We perform a Monte Carlo Markov Chain (MCMC) analysis of the most recent cosmological
datasets using the publicly available code \texttt{cosmomc}~\cite{Lewis:2002ah, Lewis:2013hha}.
We consider the $6$ parameters of the standard $\Lambda$CDM model, \emph{i.e.}
the baryon $\omega_\mathrm{b}\equiv\Omega_\mathrm{b} h^2$ and cold dark matter $\omega_\mathrm{c}\equiv\Omega_\mathrm{c} h^2$ energy densities,
the angular size of the horizon at the last scattering surface $\theta_\mathrm{MC}$, the
optical depth $\tau$, the amplitude of primordial scalar perturbations $\log (10^{10}A_\mathrm{s})$ and the scalar spectral index $n_{\rm s}$.
We extend this scenario by including the running of the scalar spectral
index $\alpha_\mathrm{s}$ and the running of the running $\beta_\mathrm{s}$. We fix the
pivot scale at $k_\star=0.05\,\mathrm{Mpc}^{-1}$. This is our baseline cosmological model, that we will call ``base'' in the following.
Moreover, as discussed in the introduction, we also consider separate variations in the lensing amplitude $A_{L}$, in the curvature density $\Omega_k$ and in the sum of neutrino masses $\sum m_\nu$.
The main dataset we consider, to which we refer as ``\emph{Planck}'', is based on CMB temperature and polarization anisotropies. We analyze the temperature and polarization \emph{Planck} likelihood \cite{Aghanim:2015xee}: more precisely, we make use of the $TT$, $TE$, $EE$ high-$\ell$ likelihood together with the $TEB$ pixel-based low-$\ell$ likelihood. The additional datasets we consider are the following:
\begin{itemize}[leftmargin=*]
\item {\emph{Planck} measurements of the lensing potential power spectrum $C^{\phi\phi}_\ell$ \cite{Ade:2015zua};}
\item weak gravitational lensing data of the CFHTLenS survey \cite{Heymans:2012gg, Erben:2012zw}, taking only wavenumbers
$k\leq 1.5 h\,\mathrm{Mpc}^{-1}$\cite{Ade:2015xua, Kitching:2014dtq};
\item {Baryon Acoustic Oscillations (BAO): the surveys included are 6dFGS \cite{Beutler:2011hx}, SDSS-MGS \cite{Ross:2014qpa}, BOSS LOWZ \cite{Anderson:2013zyy} and CMASS-DR11 \cite{Anderson:2013zyy}. This dataset will help to break geometrical degeneracies when we let $\Omega_k$ free to vary.}
\end{itemize}
\begin{table*}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
$\mathrm{base}$ \vertsp \emph{Planck} \vertsp + lensing \vertsp + WL \vertsp + BAO \\
\hline
\morehorsp
$\Omega_\mathrm{b}h^2$ \vertsp ${\siround{0.02216}{5}}\pm{\siround{0.00017}{5}}$ \vertsp ${\siround{0.02215}{5}}\pm{\siround{0.00017}{5}}$ \vertsp ${\siround{0.02221}{5}}\pm{\siround{0.00017}{5}}$ \vertsp ${\siround{0.02224}{5}}\pm{\siround{0.00015}{5}}$ \\
\morehorsp
$\Omega_\mathrm{c}h^2$ \vertsp ${\siround{0.1207}{4}}\pm{\siround{0.0015}{4}}$ \vertsp ${\siround{0.1199}{4}}\pm{\siround{0.0015}{4}}$ \vertsp ${\siround{0.1197}{4}}\pm{\siround{0.0014}{4}}$ \vertsp ${\siround{0.1196}{4}}\pm{\siround{0.0011}{4}}$ \\
\morehorsp
$100\theta_\mathrm{MC}$ \vertsp ${\siround{1.0407}{5}}\pm{\siround{0.00032}{5}}$ \vertsp ${\siround{1.0408}{5}}\pm{\siround{0.00032}{5}}$ \vertsp ${\siround{1.04078}{5}}\pm{\siround{0.00032}{5}}$ \vertsp ${\siround{1.04082}{5}}^{+\siround{0.00029}{5}}_{-\siround{0.0003}{5}}$ \\
\morehorsp
$\tau$ \vertsp ${\siround{0.091}{3}}\pm{\siround{0.019}{3}}$ \vertsp ${\siround{0.064}{3}}\pm{\siround{0.014}{3}}$ \vertsp ${\siround{0.086}{3}}\pm{\siround{0.019}{3}}$ \vertsp ${\siround{0.096}{3}}\pm{\siround{0.018}{3}}$ \\
\morehorsp
$H_0$ \vertsp ${\siround{66.88}{2}}\pm{\siround{0.68}{2}}$ \vertsp ${\siround{67.16}{2}}\pm{\siround{0.67}{2}}$ \vertsp ${\siround{67.29}{2}}^{+\siround{0.66}{2}}_{-\siround{0.65}{2}}$ \vertsp ${\siround{67.36}{2}}^{+\siround{0.49}{2}}_{-\siround{0.48}{2}}$ \\
\morehorsp
$\log(10^{10} A_\mathrm{s})$ \vertsp ${\siround{3.118}{3}}\pm{\siround{0.037}{3}}$ \vertsp ${\siround{3.061}{3}}\pm{\siround{0.026}{3}}$ \vertsp ${\siround{3.104}{3}}^{+\siround{0.038}{3}}_{-\siround{0.037}{3}}$ \vertsp ${\siround{3.125}{3}}\pm{\siround{0.036}{3}}$ \\
\morehorsp
$n_\mathrm{s}$ \vertsp ${\siround{0.9582}{4}}^{+\siround{0.0055}{4}}_{-\siround{0.0054}{4}}$ \vertsp ${\siround{0.9607}{4}}\pm{\siround{0.0054}{4}}$ \vertsp ${\siround{0.9608}{4}}\pm{\siround{0.0055}{4}}$ \vertsp ${\siround{0.9613}{4}}^{+\siround{0.0046}{4}}_{-\siround{0.0047}{4}}$ \\
\morehorsp
$\alpha_\mathrm{s}$ \vertsp ${\siround{0.011}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.012}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.012}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.01}{3}}\pm{\siround{0.01}{3}}$ \\
\morehorsp
$\beta_\mathrm{s}$ \vertsp ${\siround{0.027}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.022}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.026}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.025}{3}}\pm{\siround{0.013}{3}}$ \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\limit{68}$ bounds
on $\Omega_\mathrm{b}h^2$, $\Omega_\mathrm{c}h^2$, $100\theta_\mathrm{MC}$, $\tau$, $H_0$, $\log(10^{10} A_\mathrm{s})$, $n_\mathrm{s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$, for the listed datasets: the model is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$, $k_\star = 0.05\,\mathrm{Mpc}^{-1}$.}}
\label{tab:results}
\end{center}
\end{table*}
\section{Results}
\label{sec:results}
\noindent In Tab.~\ref{tab:results} we present the constraints on $n_{\rm s}$, $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$
from the \emph{Planck} 2015 temperature and polarization data and in combination
with BAO, cosmic shear and CMB lensing.
As we can see, the \emph{Planck} dataset alone provides an indication for $\beta_\mathrm{s}>0$ at more
than two standard deviations with $\beta_\mathrm{s}=0.027\pm0.013$ at $\limit{68}$.
It is interesting to investigate the impact of the inclusion of $\alpha_\mathrm{s}$ and
$\beta_\mathrm{s}$ on the remaining $6$ parameters of the $\Lambda$CDM model.
Comparing our results with those reported in Table $3$ of \cite{Ade:2015lrj},
we see that there are no major shifts on the parameters. The
largest shifts are present for the scalar spectral index $n_{\rm s}$, that is $\sim 0.9$
standard deviations {\it lower} when $\beta_\mathrm{s}$ is included, and for
the reionization optical depth $\tau$ that is $\sim 0.9$
standard deviations {\it higher} with respect to the standard $\Lambda$CDM
scenario. A similar shift is also present for the root-mean-square density fluctuations on scales of $8\,h^{-1}\,\mathrm{Mpc}$ (the derived parameter $\sigma_8$), which is higher by about one standard deviation when $\beta_\mathrm{s}$ is considered.
In Fig.~\ref{fig:tau_sigma8-v-nrunrun} we plot the probability contours at $\limit{68}$ and $\limit{95}$ for several combinations of datasets in the $\text{$\beta_\mathrm{s}$ -- $\sigma_8$}$ and $\text{$\beta_\mathrm{s}$ -- $\tau$}$ planes, respectively. Clearly, a new determination of $\tau$ from future large-scale polarization data, such as that expected from the Planck HFI experiment, could have an impact on the value of $\beta_\mathrm{s}$.
On the other hand, this one sigma shift in $\tau$ with respect to $\Lambda$CDM shows that a
large-scale measurement of CMB polarization does not fully provide a direct determination
of $\tau$ but that some model dependence is present.
Moreover, as expected, there is a strong correlation between $\alpha_\mathrm{s}$ and
$\beta_\mathrm{s}$. Because of this correlation, the running $\alpha_\mathrm{s}$ is constrained to be positive,
with $\alpha_\mathrm{s}>0$ at more than $\limit{68}$ when $\beta_\mathrm{s}$ is considered.
This is a $\sim 1.3$ standard deviations shift on $\alpha_\mathrm{s}$ if we compare this result
with the value obtained using the same dataset but fixing $\beta_\mathrm{s}=0$ in Table
$5$ of \cite{Ade:2015lrj}.
In Fig.~\ref{fig:ns_nrun-v-nrunrun} we plot the two dimensional likelihood constraints
in the $\text{$n_{\rm s}$ -- $\beta_\mathrm{s}$}$ and $\text{$\alpha_\mathrm{s}$ -- $\beta_\mathrm{s}$}$ planes respectively.
As we can see, a correlation between the parameters is clearly present. \textcolor{black}{However, when $\alpha_\mathrm{s}$ and possibly higher derivatives of the scalar tilt are left free to vary, the constraints
will depend on the choice of the pivot scale $k_\star$ \cite{Cortes:2007ak}. We have therefore considered two additional values of $k_\star$, \emph{i.e.} $k_\star = 0.01\,\mathrm{Mpc}^{-1}$ and $k_\star = 0.002\,\mathrm{Mpc}^{-1}$: the resulting plots are shown in \sect{dependence_pivot} (where we also present a simple argument to explain the stability of $\sigma_{\beta_\mathrm{s}}$ under change of $k_\star$), while \tab{pivots} shows the $\limit{68}$ constraints on $n_{\rm s}$, $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ (``base'' model, \emph{Planck} $TT$, $TE$, $EE$ + lowP dataset\footnote{A study of the impact of $k_\star$ when also $A_L$, $\sum m_\nu$ and $\Omega_K$ are varied is left to future work.}). From \tab{pivots} we see that, while the $1\sigma$ indication for $\alpha_\mathrm{s} > 0$ disappears if we change $k_\star$ (becoming a $\sim2\sigma$ evidence for negative running), $\beta_\mathrm{s}$ remains larger than $0$ at $\sim 2\sigma$.\footnote{We also note a $\sim1\sigma$ indication of blue tilt when $k_\star$ is $0.002\,\mathrm{Mpc}^{-1}$.}
We therefore conclude that the preference for blue $\beta_\mathrm{s}$ is stable under variation of $k_\star$: by studying the improvement in $\chi^2$ with respect to the $\Lambda\mathrm{CDM}$ and $\Lambda\mathrm{CDM} + \alpha_\mathrm{s}$ models, we can understand its origin.}
\begin{table}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
$\mathrm{base}$ \vertsp $k_\star = 0.01\,\mathrm{Mpc}^{-1}$ \vertsp $k_\star = 0.002\,\mathrm{Mpc}^{-1}$ \\
\hline
\morehorsp
$n_\mathrm{s}$ \vertsp ${\siround{0.9758}{4}}^{+\siround{0.0117}{4}}_{-\siround{0.0116}{4}}$ \vertsp ${\siround{1.0632}{4}}^{+\siround{0.0466}{4}}_{-\siround{0.0459}{4}}$ \\
\morehorsp
$\alpha_\mathrm{s}$ \vertsp ${\siround{-0.032}{3}}\pm{\siround{0.015}{3}}$ \vertsp ${\siround{-0.076}{3}}\pm{\siround{0.035}{3}}$ \\
\morehorsp
$\beta_\mathrm{s}$ \vertsp ${\siround{0.027}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.027}{3}}\pm{\siround{0.013}{3}}$ \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\limit{68}$ constraints on $n_\mathrm{s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$, for the listed pivot scales: the model is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$, and the dataset is \emph{Planck} ($TT$, $TE$, $EE$ + lowP).}}
\label{tab:pivots}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{nrunrun_sigma8.pdf}
&\includegraphics[width=\columnwidth]{nrunrun_tau.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Constraints at $\limit{68}$ and $\limit{95}$ in the $\text{$\beta_\mathrm{s}$ -- $\sigma_8$}$ plane (left panel)
and in the $\text{$\beta_\mathrm{s}$ -- $\tau$}$ plane (right panel).}}
\label{fig:tau_sigma8-v-nrunrun}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{ns_nrunrun.pdf}
&\includegraphics[width=\columnwidth]{nrun_nrunrun.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Likelihood constraints
in the $\text{$n_{\rm s}$ -- $\beta_\mathrm{s}$}$ (left panel) and $\text{$\alpha_\mathrm{s}$ -- $\beta_\mathrm{s}$}$ (right panel) planes for
different combination of datasets, as discussed in the text.}}
\label{fig:ns_nrun-v-nrunrun}
\end{figure*}
\begin{figure*}[!hbt]
\includegraphics[width=0.48\textwidth]{aps143.pdf}
\caption{\footnotesize{Shift in the amplitude of unresolved foreground point sources
at $143$ GHz between the $\Lambda$CDM case and the case when variation in
$\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ are considered. The dataset used is \emph{Planck} temperature and
polarization angular spectra.}}
\label{fig:aps143}
\end{figure*}
The \emph{Planck} likelihood consists essentially of three terms: a low-$\ell$ ($\ell=2\div29$) TEB likelihood based on the \emph{Planck} LFI $70$ GHz channel full-mission dataset, a high-$\ell$ likelihood based on the \emph{Planck} HFI $100$ GHz, $143$ GHz and $217$ GHz channel half-mission datasets and, finally, an additional $\chi^2$ term that comes from the external priors assumed on the foregrounds (see \cite{Aghanim:2015xee}).
By looking at the mean $\chi^2_\mathrm{eff}$ values of these three terms we can better understand where (low $\ell$, high $\ell$, foregrounds) the indication for $\beta_\mathrm{s}$ comes from.
Comparing with the $\chi^2$ values obtained under standard $\Lambda$CDM with
$\alpha_\mathrm{s}=0$ and $\beta_\mathrm{s}=0$, we have found that while the high-$\ell$ likelihood remains
unchanged, there is an improvement in the low-$\ell$ likelihood of $\Delta \chi^2_\mathrm{eff} \sim 2.5$
and in the foregrounds term with $\Delta \chi^2_\mathrm{eff} \sim 1$. The inclusion of $\beta_\mathrm{s}$ provides
therefore a better fit to the low-$\ell$ part of the CMB spectrum and to the foregrounds
prior. While the better fit to the low-$\ell$ part of the CMB spectrum can be easily explained
by the low quadrupole $TT$ anomaly and by the dip at $\ell \sim 20-30$, the change due to foregrounds
is somewhat unexpected since, in general, foregrounds do not correlate with cosmological
parameters. We have found a significant correlation between $\beta_\mathrm{s}$ and the point source amplitude
at $143$ GHz, $A^{PS}_{143}$. The posterior of $A^{PS}_{143}$ indeed shifts by half a sigma towards lower values with respect to the standard $\Lambda$CDM case (see \fig{aps143}), from $A^{PS}_{143}=43\pm 8$ to
$A^{PS}_{143}=39\pm8$ at $\limit{68}$. This shift could also explain the small difference between the
constraints reported here and those reported in \cite{Ade:2015lrj}, that uses
the \emph{Pliklite} likelihood code where foregrounds are marginalized at their $\Lambda$CDM values.
\textcolor{black}{Before proceeding, we stress that using a likelihood ratio test \cite{Liddle:2004nh} it is easy to see that, for a $\Delta\chi^2_\mathrm{eff}\sim 3.5$ (as the one we find here), there still is a $\sim 17\%$ probability that the $\Lambda\mathrm{CDM}$ model is the correct one.\footnote{\textcolor{black}{Using the fact that $2\log(\mathcal{L}_1/\mathcal{L}_2)$ is distributed as a $\chi^2$ with $\mathrm{d.o.f.} = \mathrm{d.o.f.}_1 - \mathrm{d.o.f.}_2$.}} Given the \emph{Planck} $TT$, $TE$, $EE$ + lowP dataset, this is the significance with which the $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$ model is preferred over the $\Lambda\mathrm{CDM}$ one.}
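This number can be checked directly from the $\chi^2$ survival function with two degrees of freedom (the two extra parameters $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$); a minimal sketch:
\begin{verbatim}
from scipy.stats import chi2

# 2*log(L1/L2) ~ chi^2 with dof equal to the number of extra
# parameters; here Delta chi^2_eff ~ 3.5 and dof = 2:
p_value = chi2.sf(3.5, df=2)  # ~0.17, the ~17% quoted in the text
\end{verbatim}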
Going back to \tab{results}, we can see that the indication for $\beta_\mathrm{s}>0$ is slightly weakened, but still
present, when external datasets are considered.
Adding CMB lensing gives $\beta_\mathrm{s}=0.022\pm0.013$, \ie reducing the tension to about $1.7$ standard
deviations, while the inclusion of
weak lensing and BAO data does not
lead to an appreciable decrease in the statistical significance of
$\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$.
In Tab.~\ref{tab:results-mnu} we report similar constraints but including also variations in the
neutrino mass absolute scale $\sum m_{\nu}$. The constraints obtained from the \emph{Planck} 2015
data release on the neutrino masses are indeed very strong, especially when combined with
BAO data, ruling out the possibility of a direct detection from current and future
beta and double beta decay experiments (see, \emph{e.g.}, \cite{gerbino}). Since
\emph{Planck} data show a preference for $\beta_\mathrm{s}>0$, it is clearly interesting to investigate whether the inclusion of running has
some impact on the cosmological constraints on $\sum m_{\nu}$.
Comparing the results of Tab.~\ref{tab:results-mnu} with those in \cite{Ade:2015lrj}, which were obtained
assuming $\alpha_\mathrm{s}=\beta_\mathrm{s}=0$, we see that the constraints on $\sum m_{\nu}$ are only
slightly weakened, moving from $\sum m_{\nu}<0.490\,\mathrm{eV}$ to $\sum m_{\nu}<0.530\,\mathrm{eV}$
at $\limit{95}$ for the \emph{Planck} dataset alone and from $\sum m_{\nu}<0.590\,\mathrm{eV}$ to
$\sum m_{\nu}<0.644\,\mathrm{eV}$ at $\limit{95}$ when also lensing is considered.
The constraints on $\sum m_{\nu}$ including the WL and BAO datasets are essentially unaffected by $\beta_\mathrm{s}$. We can therefore conclude that there is no
significant correlation between $\beta_\mathrm{s}$ and $\sum m_{\nu}$.
In Fig.~\ref{fig:mnu} we plot the posterior distributions for $\sum m_{\nu}$, while in Fig.~\ref{fig:nrunrun_v_mnu} we plot the probability contours at $\limit{68}$ and
$\limit{95}$ in the $\text{$\beta_\mathrm{s}$ -- $\sum m_\nu$}$ plane for the several combinations of datasets.
\begin{table*}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
$\mathrm{base} + \sum m_\nu$ \vertsp \emph{Planck} \vertsp + lensing \vertsp + WL \vertsp + BAO \\
\hline
\morehorsp
$\Omega_\mathrm{b}h^2$ \vertsp ${\siround{0.02213}{5}}\pm{\siround{0.00018}{5}}$ \vertsp ${\siround{0.02207}{5}}\pm{\siround{0.00019}{5}}$ \vertsp ${\siround{0.02219}{5}}\pm{\siround{0.00018}{5}}$ \vertsp ${\siround{0.02224}{5}}\pm{\siround{0.00015}{5}}$ \\
\morehorsp
$\Omega_\mathrm{c}h^2$ \vertsp ${\siround{0.1208}{4}}\pm{\siround{0.0016}{4}}$ \vertsp ${\siround{0.1206}{4}}\pm{\siround{0.0016}{4}}$ \vertsp ${\siround{0.1199}{4}}\pm{\siround{0.0015}{4}}$ \vertsp ${\siround{0.1196}{4}}\pm{\siround{0.0011}{4}}$ \\
\morehorsp
$100\theta_\mathrm{MC}$ \vertsp ${\siround{1.04062}{5}}^{+\siround{0.00033}{5}}_{-\siround{0.00034}{5}}$ \vertsp ${\siround{1.0406}{5}}\pm{\siround{0.00035}{5}}$ \vertsp ${\siround{1.04072}{5}}\pm{\siround{0.00033}{5}}$ \vertsp ${\siround{1.04082}{5}}\pm{\siround{0.0003}{5}}$ \\
\morehorsp
$\tau$ \vertsp ${\siround{0.095}{3}}^{+\siround{0.019}{3}}_{-\siround{0.02}{3}}$ \vertsp ${\siround{0.08}{3}}\pm{\siround{0.019}{3}}$ \vertsp ${\siround{0.088}{3}}\pm{\siround{0.02}{3}}$ \vertsp ${\siround{0.095}{3}}^{+\siround{0.02}{3}}_{-\siround{0.019}{3}}$ \\
\morehorsp
$\big(\sum m_\nu\big)/\mathrm{eV}$ \vertsp $< \siround{0.53}{3}$ \vertsp $< \siround{0.644}{3}$ \vertsp $< \siround{0.437}{3}$ \vertsp $< \siround{0.159}{3}$ \\
\morehorsp
$H_0$ \vertsp ${\siround{65.76}{2}}^{+\siround{2.12}{2}}_{-\siround{0.99}{2}}$ \vertsp ${\siround{64.76}{2}}^{+\siround{2.49}{2}}_{-\siround{1.7}{2}}$ \vertsp ${\siround{66.46}{2}}^{+\siround{1.76}{2}}_{-\siround{0.91}{2}}$ \vertsp ${\siround{67.38}{2}}\pm{\siround{0.56}{2}}$ \\
\morehorsp
$\log(10^{10} A_\mathrm{s})$ \vertsp ${\siround{3.127}{3}}^{+\siround{0.038}{3}}_{-\siround{0.039}{3}}$ \vertsp ${\siround{3.093}{3}}^{+\siround{0.037}{3}}_{-\siround{0.036}{3}}$ \vertsp ${\siround{3.109}{3}}\pm{\siround{0.038}{3}}$ \vertsp ${\siround{3.124}{3}}^{+\siround{0.037}{3}}_{-\siround{0.038}{3}}$ \\
\morehorsp
$n_\mathrm{s}$ \vertsp ${\siround{0.9576}{4}}^{+\siround{0.0056}{4}}_{-\siround{0.0057}{4}}$ \vertsp ${\siround{0.9583}{4}}\pm{\siround{0.0057}{4}}$ \vertsp ${\siround{0.9601}{4}}^{+\siround{0.0055}{4}}_{-\siround{0.0054}{4}}$ \vertsp ${\siround{0.9612}{4}}^{+\siround{0.0047}{4}}_{-\siround{0.0048}{4}}$ \\
\morehorsp
$\alpha_\mathrm{s}$ \vertsp ${\siround{0.011}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.011}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.012}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.01}{3}}^{+\siround{0.01}{3}}_{-\siround{0.011}{3}}$ \\
\morehorsp
$\beta_\mathrm{s}$ \vertsp ${\siround{0.028}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.023}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.026}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.025}{3}}\pm{\siround{0.013}{3}}$ \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\limit{68}$ bounds and $\limit{95}$ upper limits on $\Omega_\mathrm{b}h^2$, $\Omega_\mathrm{c}h^2$, $100\theta_\mathrm{MC}$, $\tau$, $\sum m_\nu$, $H_0$, $\log(10^{10} A_\mathrm{s})$, $n_\mathrm{s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$, for the listed datasets: the model is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s} + \sum m_\nu$, $k_\star = 0.05\,\mathrm{Mpc}^{-1}$.}}
\label{tab:results-mnu}
\end{center}
\end{table*}
\begin{table*}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
$\mathrm{base} + A_L$ \vertsp \emph{Planck} \vertsp + lensing \vertsp + WL \vertsp + BAO \\
\hline
\morehorsp
$\Omega_\mathrm{b}h^2$ \vertsp ${\siround{0.02227}{5}}\pm{\siround{0.00019}{5}}$ \vertsp ${\siround{0.02214}{5}}\pm{\siround{0.00018}{5}}$ \vertsp ${\siround{0.02235}{5}}\pm{\siround{0.00019}{5}}$ \vertsp ${\siround{0.02232}{5}}\pm{\siround{0.00016}{5}}$ \\
\morehorsp
$\Omega_\mathrm{c}h^2$ \vertsp ${\siround{0.1196}{4}}\pm{\siround{0.0017}{4}}$ \vertsp ${\siround{0.1202}{4}}\pm{\siround{0.0017}{4}}$ \vertsp ${\siround{0.1185}{4}}\pm{\siround{0.0016}{4}}$ \vertsp ${\siround{0.119}{4}}\pm{\siround{0.0011}{4}}$ \\
\morehorsp
$100\theta_\mathrm{MC}$ \vertsp ${\siround{1.04081}{5}}\pm{\siround{0.00033}{5}}$ \vertsp ${\siround{1.04076}{5}}\pm{\siround{0.00033}{5}}$ \vertsp ${\siround{1.04093}{5}}\pm{\siround{0.00033}{5}}$ \vertsp ${\siround{1.04089}{5}}\pm{\siround{0.0003}{5}}$ \\
\morehorsp
$\tau$ \vertsp ${\siround{0.07}{3}}\pm{\siround{0.025}{3}}$ \vertsp ${\siround{0.07}{3}}\pm{\siround{0.025}{3}}$ \vertsp $< \siround{0.095}{3}$ \vertsp ${\siround{0.07}{3}}^{+\siround{0.024}{3}}_{-\siround{0.026}{3}}$ \\
\morehorsp
$A_L$ \vertsp ${\siround{1.106}{3}}^{+\siround{0.079}{3}}_{-\siround{0.09}{3}}$ \vertsp ${\siround{0.984}{3}}^{+\siround{0.058}{3}}_{-\siround{0.064}{3}}$ \vertsp ${\siround{1.157}{3}}^{+\siround{0.077}{3}}_{-\siround{0.086}{3}}$ \vertsp ${\siround{1.118}{3}}^{+\siround{0.075}{3}}_{-\siround{0.084}{3}}$ \\
\morehorsp
$H_0$ \vertsp ${\siround{67.38}{2}}\pm{\siround{0.77}{2}}$ \vertsp ${\siround{67.04}{2}}^{+\siround{0.75}{2}}_{-\siround{0.76}{2}}$ \vertsp ${\siround{67.88}{2}}\pm{\siround{0.73}{2}}$ \vertsp ${\siround{67.64}{2}}^{+\siround{0.52}{2}}_{-\siround{0.53}{2}}$ \\
\morehorsp
$\log(10^{10} A_\mathrm{s})$ \vertsp ${\siround{3.073}{3}}^{+\siround{0.05}{3}}_{-\siround{0.051}{3}}$ \vertsp ${\siround{3.074}{3}}^{+\siround{0.05}{3}}_{-\siround{0.051}{3}}$ \vertsp ${\siround{3.044}{3}}^{+\siround{0.044}{3}}_{-\siround{0.051}{3}}$ \vertsp ${\siround{3.072}{3}}\pm{\siround{0.049}{3}}$ \\
\morehorsp
$n_\mathrm{s}$ \vertsp ${\siround{0.9621}{4}}\pm{\siround{0.0062}{4}}$ \vertsp ${\siround{0.9597}{4}}\pm{\siround{0.0061}{4}}$ \vertsp ${\siround{0.9652}{4}}^{+\siround{0.0059}{4}}_{-\siround{0.006}{4}}$ \vertsp ${\siround{0.9637}{4}}\pm{\siround{0.0049}{4}}$ \\
\morehorsp
$\alpha_\mathrm{s}$ \vertsp ${\siround{0.01}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.012}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.01}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.009}{3}}\pm{\siround{0.01}{3}}$ \\
\morehorsp
$\beta_\mathrm{s}$ \vertsp ${\siround{0.021}{3}}\pm{\siround{0.014}{3}}$ \vertsp ${\siround{0.024}{3}}\pm{\siround{0.014}{3}}$ \vertsp ${\siround{0.018}{3}}\pm{\siround{0.013}{3}}$ \vertsp ${\siround{0.019}{3}}\pm{\siround{0.013}{3}}$ \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\limit{68}$ bounds and $\limit{95}$ upper limits on $\Omega_\mathrm{b}h^2$, $\Omega_\mathrm{c}h^2$, $100\theta_\mathrm{MC}$, $\tau$, $A_L$, $H_0$, $\log(10^{10} A_\mathrm{s})$, $n_\mathrm{s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$, for the listed datasets: the model is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s} + A_L$, $k_\star = 0.05\,\mathrm{Mpc}^{-1}$.}}
\label{tab:results-alens}
\end{center}
\end{table*}
\begin{table*}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
$\mathrm{base} + \Omega_K$ \vertsp \emph{Planck} \vertsp + lensing \vertsp + WL \vertsp + BAO \\
\hline
\morehorsp
$\Omega_\mathrm{b}h^2$ \vertsp ${\siround{0.0223}{5}}\pm{\siround{0.00019}{5}}$ \vertsp ${\siround{0.02213}{5}}\pm{\siround{0.00018}{5}}$ \vertsp ${\siround{0.02214}{5}}\pm{\siround{0.00019}{5}}$ \vertsp ${\siround{0.02218}{5}}\pm{\siround{0.00018}{5}}$ \\
\morehorsp
$\Omega_\mathrm{c}h^2$ \vertsp ${\siround{0.1192}{4}}^{+\siround{0.0017}{4}}_{-\siround{0.0018}{4}}$ \vertsp ${\siround{0.1204}{4}}\pm{\siround{0.0017}{4}}$ \vertsp ${\siround{0.1206}{4}}\pm{\siround{0.0017}{4}}$ \vertsp ${\siround{0.1205}{4}}\pm{\siround{0.0016}{4}}$ \\
\morehorsp
$100\theta_\mathrm{MC}$ \vertsp ${\siround{1.04086}{5}}\pm{\siround{0.00034}{5}}$ \vertsp ${\siround{1.04074}{5}}\pm{\siround{0.00033}{5}}$ \vertsp ${\siround{1.04068}{5}}^{+\siround{0.00035}{5}}_{-\siround{0.00034}{5}}$ \vertsp ${\siround{1.04072}{5}}^{+\siround{0.00033}{5}}_{-\siround{0.00034}{5}}$ \\
\morehorsp
$\tau$ \vertsp ${\siround{0.062}{3}}^{+\siround{0.024}{3}}_{-\siround{0.028}{3}}$ \vertsp ${\siround{0.076}{3}}\pm{\siround{0.026}{3}}$ \vertsp ${\siround{0.099}{3}}^{+\siround{0.023}{3}}_{-\siround{0.024}{3}}$ \vertsp ${\siround{0.094}{3}}\pm{\siround{0.018}{3}}$ \\
\morehorsp
$\Omega_K$ \vertsp ${\siround{-0.0302}{4}}^{+\siround{0.025}{4}}_{-\siround{0.0173}{4}}$ \vertsp ${\siround{0.0045}{4}}^{+\siround{0.0096}{4}}_{-\siround{0.0076}{4}}$ \vertsp ${\siround{0.0082}{4}}^{+\siround{0.0091}{4}}_{-\siround{0.0071}{4}}$ \vertsp ${\siround{0.0015}{4}}\pm{\siround{0.0021}{4}}$ \\
\morehorsp
$H_0$ \vertsp ${\siround{57.75}{2}}^{+\siround{4.81}{2}}_{-\siround{6.34}{2}}$ \vertsp ${\siround{69.71}{2}}^{+\siround{4.11}{2}}_{-\siround{4.62}{2}}$ \vertsp ${\siround{71.7}{2}}^{+\siround{3.91}{2}}_{-\siround{5.02}{2}}$ \vertsp ${\siround{67.72}{2}}^{+\siround{0.71}{2}}_{-\siround{0.72}{2}}$ \\
\morehorsp
$\sigma_8$ \vertsp ${\siround{0.799}{3}}^{+\siround{0.033}{3}}_{-\siround{0.036}{3}}$ \vertsp ${\siround{0.837}{3}}\pm{\siround{0.029}{3}}$ \vertsp ${\siround{0.86}{3}}^{+\siround{0.026}{3}}_{-\siround{0.027}{3}}$ \vertsp ${\siround{0.85}{3}}\pm{\siround{0.016}{3}}$ \\
\morehorsp
$\log(10^{10} A_\mathrm{s})$ \vertsp ${\siround{3.057}{3}}^{+\siround{0.048}{3}}_{-\siround{0.058}{3}}$ \vertsp ${\siround{3.087}{3}}\pm{\siround{0.052}{3}}$ \vertsp ${\siround{3.133}{3}}^{+\siround{0.047}{3}}_{-\siround{0.049}{3}}$ \vertsp ${\siround{3.124}{3}}^{+\siround{0.036}{3}}_{-\siround{0.037}{3}}$ \\
\morehorsp
$n_\mathrm{s}$ \vertsp ${\siround{0.9642}{4}}^{+\siround{0.0064}{4}}_{-\siround{0.0065}{4}}$ \vertsp ${\siround{0.9589}{4}}^{+\siround{0.0064}{4}}_{-\siround{0.0063}{4}}$ \vertsp ${\siround{0.9574}{4}}\pm{\siround{0.0063}{4}}$ \vertsp ${\siround{0.9587}{4}}\pm{\siround{0.0057}{4}}$ \\
\morehorsp
$\alpha_\mathrm{s}$ \vertsp ${\siround{0.008}{3}}^{+\siround{0.01}{3}}_{-\siround{0.011}{3}}$ \vertsp ${\siround{0.013}{3}}\pm{\siround{0.01}{3}}$ \vertsp ${\siround{0.014}{3}}\pm{\siround{0.011}{3}}$ \vertsp ${\siround{0.011}{3}}\pm{\siround{0.01}{3}}$ \\
\morehorsp
$\beta_\mathrm{s}$ \vertsp ${\siround{0.013}{3}}\pm{\siround{0.014}{3}}$ \vertsp ${\siround{0.027}{3}}^{+\siround{0.015}{3}}_{-\siround{0.017}{3}}$ \vertsp ${\siround{0.035}{3}}^{+\siround{0.015}{3}}_{-\siround{0.017}{3}}$ \vertsp ${\siround{0.027}{3}}\pm{\siround{0.014}{3}}$ \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\limit{68}$ bounds on $\Omega_\mathrm{b}h^2$, $\Omega_\mathrm{c}h^2$, $100\theta_\mathrm{MC}$, $\tau$, $\Omega_K$, $H_0$, $\sigma_8$, $\log(10^{10} A_\mathrm{s})$, $n_\mathrm{s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$, for the listed datasets: the model is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s} + \Omega_K$, $k_\star = 0.05\,\mathrm{Mpc}^{-1}$.}}
\label{tab:results-omegak}
\end{center}
\end{table*}
\begin{figure}[!hbt]
\includegraphics[width=0.48\textwidth]{mnu_v2.pdf}
\caption{\footnotesize{One-dimensional posterior distributions for the sum of neutrino masses $\sum m_\nu$, for the indicated datasets. The model considered is $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s} + \sum m_\nu$.}}
\label{fig:mnu}
\end{figure}
\begin{figure}[!hbt]
\includegraphics[width=0.48\textwidth]{nrunrun_mnu.pdf}
\caption{\footnotesize{Two-dimensional posteriors in the $\text{$\beta_\mathrm{s}$ -- $\sum m_\nu$}$ plane, for the indicated datasets. We see that there is no correlation between $\sum m_\nu$ and $\beta_\mathrm{s}$.}}
\label{fig:nrunrun_v_mnu}
\end{figure}
In Tab.~\ref{tab:results-alens} we report the constraints from the same datasets, but now also letting
the lensing amplitude $A_{L}$ vary freely. As discussed in the introduction,
\emph{Planck} data also suggest a value $A_{L}>1$, and it is therefore
interesting to check whether there is a correlation with $\beta_\mathrm{s}$.
As we can see, there is a correlation between the two parameters, but it is not extremely significant.
Even with a lower statistical significance, at about $\sim\text{$1.2$ -- $1.5$}$ standard deviations for $A_{L}$ and $\beta_\mathrm{s}$ respectively (which could also be explained by the increased volume of parameter space), the data seem
to suggest the presence of {\it both} anomalies.
When the CMB lensing data are included, $A_{L}$ goes back to its standard value while
the indication for $\beta_\mathrm{s}$ increases. When the WL shear data are included the
$A_{L}$ anomaly is present while the indication for $\beta_\mathrm{s}$ is weakened.
We also consider variations in the curvature of the universe, reporting the constraints
in Tab.~\ref{tab:results-omegak}.
As we can see, in this case too there is a correlation between $\beta_\mathrm{s}$ and
$\Omega_k$, but it is not significant enough to completely cancel any indication
for these anomalies from \emph{Planck} data. Indeed, when $\Omega_k$ is allowed to vary,
we still have a preference for $\Omega_k<0$ and $\beta_\mathrm{s}>0$ at more than one standard
deviation. More interestingly, when external datasets are included, the indication
for a positive curvature simply vanishes, while we get $\beta_\mathrm{s}>0$ slightly below $\limit{95}$.
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{nrunrun_Alens.pdf}
&\includegraphics[width=\columnwidth]{nrunrun_omegak.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Constraints at $\limit{68}$ and $\limit{95}$ in the $\text{$\beta_\mathrm{s}$ -- $A_{L}$}$ plane (left panel)
and in the $\text{$\beta_\mathrm{s}$ -- $\Omega_k$}$ plane (right panel).}}
\label{fig:alens_omegak-v-nrunrun}
\end{figure*}
In Fig. \ref{fig:alens_omegak-v-nrunrun} we show the constraints at $\limit{68}$ and $\limit{95}$ in the $\text{$\beta_\mathrm{s}$ -- $A_{L}$}$ plane (left panel)
and in the $\text{$\beta_\mathrm{s}$ -- $\Omega_k$}$ plane (right panel).
\textcolor{black}{We conclude this section by looking at what are the improvements (or non-improvements) in $\chi^2$ over our base model $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$ when additional parameters ($A_L$, $\sum m_\nu$ and $\Omega_K$) are considered: the tables (Tabs.~\ref{tab:chi2-pl}, \ref{tab:chi2-pl+lens}, \ref{tab:chi2-pl+wl} and \ref{tab:chi2-pl+bao}) containing all the $\Delta\chi^2$ (which we define by $\chi^2_\mathrm{base} - \chi^2_\text{base + ext.}$) are collected in \sect{chi2}. When considering the ${} + A_L$ extension, we see that an improvement $\Delta\chi^2\sim 1.5$ ($\Delta\chi^2\sim 6$) is obtained for the \emph{Planck} $TT$, $TE$, $EE$ + lowP + BAO dataset (\emph{Planck} $TT$, $TE$, $EE$ + lowP + WL dataset), while the addition of CMB lensing data to \emph{Planck} temperature and polarization data leads to $\Delta\chi^2\sim-1.5$, mainly driven by a worse fit to the foregrounds.}
\textcolor{black}{When $\sum m_\nu$ or $\Omega_K$ are left free to vary, we see that the fit to the data is in general worse: only when adding $\Omega_K$ to the \emph{Planck} $TT$, $TE$, $EE$ + lowP + WL dataset do we get a $\Delta\chi^2\sim 2$ improvement.}
\section{Present and Future Constraints from $\mu$-Distortions}
\label{sec:distortions}
\noindent \textcolor{black}{CMB $\mu$-type spectral distortions \cite{Zeldovich:1969ff, Sunyaev:1970er} from the dissipation of acoustic waves at redshifts between $z = \num{2d6}\equiv z_\mathrm{dC}$ and $z = \num{5d4}\equiv z_{\mu\text{-}y}$ offer a window on the primordial power spectrum at very small scales, ranging from $50$ to $\num{d4}$ $\mathrm{Mpc}^{-1}$ (for most recent works on this topic see \cite{Dent:2012ne, Chluba:2012we, Khatri:2013dha, Clesse:2014pna, Enqvist:2015njy, Cabass:2016giw, Chluba:2016bvg} and references therein). The impact of a PIXIE-like mission on the constraints on the running $\alpha_\mathrm{s}$ has been recently analyzed in \cite{Cabass:2016giw}, while \cite{Khatri:2013dha, Chluba:2016bvg} also investigated the variety of signals (and corresponding forecasts) that are expected in the $\Lambda$CDM model (not limited to a $\mu$-type distortion).}
In this section, we briefly investigate the constraining power of $\mu$-distortions on $\beta_\mathrm{s}$, given the \emph{Planck} constraints on $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ of \sect{results}. We
compute the contribution to the $\mu$-monopole from Silk damping of acoustic waves in the photon-baryon plasma \cite{Silk:1967kq, Peebles:1970ag, Kaiser:1983abc, Hu:1992dc, Chluba:2012gq}, using the expression for the distortion visibility function presented in \cite{Khatri:2013dha}.\footnote{This is related to the method called ``Method II'' in \cite{Chluba:2016bvg}, the difference being the visibility function $J_\mathrm{bb}(z)$ used: $J_\mathrm{bb}(z)$ is approximated to $\exp(-(z/z_\mathrm{dC})^{5/2})$ in the ``Method II'' of \cite{Chluba:2016bvg}, while \cite{Khatri:2013dha} derives a fitting formula to take into account the dependence of $J_\mathrm{bb}(z)$ on cosmological parameters. At the large values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ allowed by \emph{Planck}, we do not expect this difference to be very relevant for our final result.} To understand the relationship between the $\mu$ amplitude and the parameters of the primordial power spectrum, one can compute the (integrated) fractional energy that is dissipated by the acoustic waves $\delta_\gamma$ between $z = \num{2d6}$ and $z = \num{5d4}$: this energy feeds back into the background and generates $\mu$-distortions according to (see also \cite{Pajer:2012vz, Pajer:2013oca})
\begin{equation}
\label{eq:mu-silk}
\mu(\vec{x})
\approx\frac{1.4}{4}\braket{\delta_\gamma^2(z,\vec{x})}_p\Big|^{z_\mathrm{dC}}_{z_{\mu\text{-}y}}
\,\,,
\end{equation}
where $\braket{\dots}_p$ indicates the average over a period of oscillation, $\delta_\gamma$ being sourced by the primordial curvature perturbation $\zeta$. The diffusion damping scale $k_\textup{D}$, which controls the dissipation of $\delta_\gamma$, is given by \cite{Silk:1967kq, Peebles:1970ag, Kaiser:1983abc}
\begin{equation}
\label{eq:diff-dist}
k_\textup{D}^{-2}(z) = \int^{+\infty}_z\mathrm{d} z'\,\frac{1+z'}{H n_e\sigma_\textup{T}}\bigg[\frac{R^2 + \frac{16}{15}(1+R)}{6(1+R)^2}\bigg]\,\,.
\end{equation}
The observed $\mu$-distortion monopole is basically the ensemble average of $\mu(\vec{x})$ at $z = \num{5d4}$: by averaging \eq{mu-silk}, then, one sees that it is equal to the log-integral of the primordial power spectrum multiplied by a window function
\begin{equation}
\label{eq:mu-window}
W_\mu(k) = 2.3\,e^{-2k^2/k^2_D}\Big|^{z_\mathrm{dC}}_{z_{\mu\text{-}y}}\,\,,
\end{equation}
which localizes the integral between $50\,\mathrm{Mpc}^{-1}$ and $\num{d4}\,\mathrm{Mpc}^{-1}$.
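As a rough numerical illustration (ours, not the pipeline used for the results below), one can estimate the $\mu$-monopole by integrating the primordial spectrum of \eq{Delta_of_k} against $W_\mu(k)$ in $\log k$. The sketch assumes the radiation-era approximation $k_\mathrm{D}(z)\approx\num{4d-6}\,(1+z)^{3/2}\,\mathrm{Mpc}^{-1}$ and illustrative parameter values, and reproduces the orders of magnitude quoted later in this section:
\begin{verbatim}
# Sketch: mu-distortion amplitude as the log-integral of the primordial
# spectrum times the window W_mu(k).  Assumptions (ours): radiation-era
# damping scale k_D(z) ~ 4e-6 (1+z)^{3/2} Mpc^-1 and illustrative
# values of (A_s, n_s); this is not the exact visibility treatment.
import numpy as np

k_star = 0.05                        # pivot [Mpc^-1]
A_s, n_s = 2.2e-9, 0.965             # approximate Planck values

def kD(z):                           # diffusion damping scale [Mpc^-1]
    return 4.0e-6 * (1.0 + z)**1.5

def window(k):                       # W_mu(k) evaluated between z_dC, z_mu-y
    return 2.3 * (np.exp(-2*k**2/kD(2e6)**2)
                  - np.exp(-2*k**2/kD(5e4)**2))

def mu(alpha_s, beta_s):
    lnk = np.linspace(np.log(1.0), np.log(5e4), 4000)
    k = np.exp(lnk)
    L = np.log(k / k_star)
    Delta2 = A_s * (k/k_star)**(n_s - 1 + alpha_s*L/2 + beta_s*L**2/6)
    return np.sum(Delta2 * window(k)) * (lnk[1] - lnk[0])

print(mu(0.0, 0.0))      # ~2e-8: same order as the LCDM prediction
print(mu(0.011, 0.027))  # orders of magnitude larger, as for the
                         # running best fit discussed below
\end{verbatim}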
\tab{results-mu} shows how, already with the current limit on the $\mu$-distortion amplitude from the FIRAS instrument on the COBE satellite, namely $\mu = (1 \pm 4)\times 10^{-5}$ at $\limit{68}$ \cite{Fixsen:1996nj}, we can get a $28\%$ improvement in the $\limit{95}$ upper limits on $\alpha_\mathrm{s}$, and a $33\%$ improvement in those on $\beta_\mathrm{s}$ (we also stress that, in the case of $\beta_\mathrm{s}$ fixed to zero, including FIRAS does not result in any improvement on the bounds for $\alpha_\mathrm{s}$). In \fig{theo_bound}, we also report a {forecast for PIXIE}, whose expected error on $\mu$ is $\num{d-8}$ \cite{Kogut:2011xw}.\footnote{In \cite{Chluba:2013pya} it was shown that, when also $r$-distortions are considered, the expected error should be larger (about $\sigma_\mu = \num{1.4d-8}$): however at the large values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ allowed by \emph{Planck},
the forecasts of \tab{results-mu} are not significantly affected. \textcolor{black}{$r$-distortions are the residual distortions that encode the information on the transition between the $\mu$-era (when distortions are of the $\mu$-type) and the $y$-era (when the CMB is not in kinetic equilibrium and energy injections result in distortions of the $y$-type). We refer to \cite{Khatri:2012tw, Chluba:2013vsa} for a study of these residual distortions, and to \cite{Khatri:2013dha,Chluba:2013pya} for a study of their constraining power on cosmological parameters.}} Besides, we see that:
\begin{itemize}[leftmargin=*]
\item for the best-fit values of cosmological parameters in the $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$ model, which lead to $\mu = \num{1.09d-6}$, PIXIE will be able to detect spectral distortions from Silk damping at extremely high significance (\fig{theo_bound}). Moreover, {a statistically significant detection of $\beta_\mathrm{s}$ is expected},
along with a {sizable shrinking} of the {available parameter space} (\fig{theo_bound}). As we discuss later, any detection of such values of $\mu$-distortions will rule out single-field slow-roll inflation, if we assume that all the generated distortions are due to Silk damping and not to other mechanisms like, for example, decaying Dark Matter particles;\footnote{We did not investigate, in this work, whether it could be possible to have models of multi-field inflation (or models where the slow-roll assumption is relaxed \cite{Destri:2008fj}) that can predict such values for the $\mu$-distortion amplitude. \textcolor{black}{We refer to \cite{Clesse:2014pna} for an analysis of some multi-field scenarios.}}
\item for a fiducial value of $\mu$ corresponding to the $\Lambda\mathrm{CDM}$ best-fit,
\ie $\mu = \num{1.57d-8}$ \cite{Cabass:2016giw}, we get an $84\%$ improvement in the $\limit{95}$ upper limits on $\alpha_\mathrm{s}$, and an $83\%$ improvement in those on $\beta_\mathrm{s}$. More precisely, values of $\beta_\mathrm{s}$ larger than $0.02$ will be excluded at $\sim 5\sigma$.
\end{itemize}
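The post-processing with a Gaussian likelihood mentioned in the caption of \tab{results-mu} amounts to importance re-weighting of the Markov chains; here is a minimal sketch of the idea (the array names are ours, standing in for the actual chain machinery):
\begin{verbatim}
# Sketch: post-process Markov chains with a Gaussian mu-likelihood by
# importance re-weighting.  Assumes each sample carries a predicted mu
# (mu_pred) and an MCMC weight (w); both arrays are placeholders.
import numpy as np

def reweight(w, mu_pred, mu_obs, sigma_mu):
    """Multiply chain weights by a Gaussian likelihood in mu."""
    return w * np.exp(-0.5 * ((mu_pred - mu_obs) / sigma_mu)**2)

# FIRAS:  mu = (1.0 +/- 4.0) x 10^-5
# PIXIE:  mu = (1.57 +/- 1.00) x 10^-8
# w_new = reweight(w, mu_pred, 1.0e-5, 4.0e-5)
# np.average(beta_s_samples, weights=w_new)  # re-weighted posterior mean
\end{verbatim}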
\begin{table}[!hbt]
\begin{center}
\begin{tabular}{lccc}
\toprule
\horsp
base \vertsp $\alpha_\mathrm{s}$ \vertsp $\beta_\mathrm{s}$ \vertsp $\mu$ \\
\hline
\morehorsp
\emph{Planck} \vertsp $0.011\pm0.021$ \vertsp $0.027\pm0.027$ \vertsp $/$ \\
\horsp
+ FIRAS \vertsp $0.006^{+0.017}_{-0.018}$ \vertsp $0.020^{+0.016}_{-0.019}$ \vertsp $(0.77^{+3.10}_{-0.77})\times\num{d-6}$ \\
\horsp
+ PIXIE \vertsp $-0.007^{+0.012}_{-0.013}$ \vertsp $0.001^{+0.008}_{-0.009}$ \vertsp $(1.59^{+1.75}_{-1.52})\times\num{d-8}$ \\
\botrule
\end{tabular}
\caption{\footnotesize{$\limit{95}$ bounds on $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ from the \emph{Planck} ($TT$, $TE$, $EE$ + lowP), \emph{Planck} + FIRAS and \emph{Planck} + PIXIE datasets, for the $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$ (\ie ``base'') model. The results have been obtained by post-processing with a Gaussian likelihood the Markov chains considering $\mu =(1.0\pm4.0)\times10^{-5}$ \cite{Fixsen:1996nj} for FIRAS, and $\mu = (1.57\pm1.00)\times\num{d-8}$ for PIXIE. See the main text for a discussion of the bounds on the $\mu$-amplitude.}}
\label{tab:results-mu}
\end{center}
\end{table}
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{nrun_v_nrunrun-work_in_progress-full-no_50_sigma.pdf}
&\includegraphics[width=\columnwidth]{nrun_v_nrunrun-work_in_progress-theo_bound-full.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Left panel: $\limit{68}$ and $\limit{95}$ contours in the $\alpha_\mathrm{s}$ -- $\beta_\mathrm{s}$ plane, for the \emph{Planck} (blue) and \emph{Planck} + FIRAS (green) datasets (base model). The red regions represent the $2\sigma$ and $5\sigma$ limits from PIXIE around the \emph{Planck} best-fit for the $\Lambda\mathrm{CDM}$ model, \ie $\mu = \num{1.57d-8}$ \cite{Cabass:2016giw}. Right panel: same as left panel, with the red contours representing the $\limit{68}$ and $\limit{95}$ limits from PIXIE, obtained by post-processing the Markov chains with a Gaussian likelihood $\mu = (1.57\pm1.00)\times\num{d-8}$. The grey region represents the values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ that lead to a slow-roll parameter $\epsilon(k)$, computed via the Taylor expansion of \eq{epsilon_SR}, less than zero before or at $k = \num{2d4}\,\mathrm{Mpc}^{-1}$.}}
\label{fig:theo_bound}
\end{figure*}
We conclude this section with a comment on the validity of a Taylor expansion (in $\log k/k_\star$) of the power spectrum down to scales probed by spectral distortions. We can estimate the terms in the expansion of $n_\mathrm{s}(k)$ by choosing $k= 10^4\,\mathrm{Mpc}^{-1}$, corresponding to $k_\mathrm{D}$ at $z= z_\mathrm{dC}$: for values of $\beta_\mathrm{s}$ of order $0.06$ (which are still allowed at $\limit{95}$, as shown in \fig{theo_bound}), the term $\frac{\beta_\mathrm{s}}{6}(\log k/k_\star)^2$ in Eq. (\ref{eq:Delta_of_k}) becomes of order $1$ (indeed $\log(k/k_\star)\simeq 12$, so $\frac{\beta_\mathrm{s}}{6}\log^2(k/k_\star)\simeq 1.5$). For this reason, \tab{results-mu} does not report the
limits on $\mu$ coming from the current \emph{Planck} constraints on the scale dependence of the spectrum. When existing limits on $\mu$ from FIRAS are instead added, an extrapolation of $\Delta_\zeta^2(k)$ at the scales probed by $\mu$-distortions starts to become meaningful, and when also PIXIE is included in our forecast around the $\Lambda\mathrm{CDM}$ prediction, the upper bounds on $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ are lowered enough that a perturbative expansion becomes viable,
making our forecast valid.
\section{Large $\beta_\mathrm{s}$ and Slow-Roll Inflation}
\label{sec:slow_roll_inflation}
\noindent In this section we discuss briefly the implications that values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ of order $\num{d-2}$ have for slow-roll inflation. We can compute the running of the slow-roll parameter $\epsilon$ in terms of $n_{\rm s}$, $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ by means of the simple slow-roll relations
\begin{subequations}
\label{eq:slow_roll_rels}
\begin{align}
&N - N_\star = -\log\frac{k}{k_\star}\,\,, \label{eq:slow_roll_rels-1} \\
&1 - n_{\rm s} = 2\epsilon-\frac{1}{\epsilon}\frac{\mathrm{d}\epsilon}{\mathrm{d} N}
\,\,, \label{eq:slow_roll_rels-2}
\end{align}
\end{subequations}
where $N$ is the number of e-foldings from the end of inflation, decreasing as time increases (\ie $H\mathrm{d} t=-\mathrm{d} N$), and \eq{slow_roll_rels-1} holds if we neglect the time derivative of the inflaton speed of sound $c_\mathrm{s}$. The running of $\epsilon$ up to third order in $N$ is then given by
\begin{equation}
\label{eq:epsilon_SR}
\epsilon(N) = \epsilon(N_\star) + \sum_{i = 1}^3\frac{\epsilon^{(i)}}{i!}(N-N_\star)^i\,\,,
\end{equation}
where the coefficients $\epsilon^{(i)}$ are given in \sect{appendix-slow_roll}.
At scales around $k_\star$, $n_{\rm s}$ dominates, so that $\epsilon$ is increasing and a red spectrum is obtained. However, in the presence of positive $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$, at small scales $\epsilon$ becomes smaller, until it reaches zero at $k\approx 39.7\,\mathrm{Mpc}^{-1}$ for $\alpha_\mathrm{s} = 0.01$ and $\beta_\mathrm{s} = 0.02$ (taking $\epsilon_\star = 0.002$, \ie the maximum value allowed by current bounds on $r$, when the inflaton speed of sound
$c_\mathrm{s}$ is fixed to $1$). If we impose that $\epsilon$ stays positive down to $k\approx\num{2d4}\,\mathrm{Mpc}^{-1}$, which is of the same order of magnitude of the maximum $k$ probed by $\mu$-distortions (see \sect{distortions}),
we can obtain a theoretical bound on $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$. We show this bound in \fig{theo_bound}: this plot tells us that a large part of the contours from \emph{Planck} + FIRAS and \emph{Planck} + PIXIE cannot be interpreted in the context of slow-roll inflation extrapolated to $\mu$-distortion scales, because $\epsilon$ becomes negative before reaching $k\approx k_\mathrm{D}(z_\mathrm{dC})$.\footnote{We point out that it is possible to obtain large positive $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ in slow-roll inflation when modulations of the potential are considered \cite{Kobayashi:2010pz}. However, we will not consider such models in this work.}
A similar discussion was presented in \cite{Powell:2012xz}: by means of a slow-roll reconstruction of the inflaton potential \cite{Liddle:1994dx, Kinney:2002qn}, it was shown that if $\beta_\mathrm{s}$ is controlled only by leading-order terms in the slow-roll expansion (see \sect{appendix-slow_roll}), it is not possible to find a single-field inflation model that fits the posteriors from \emph{Planck}.
These kinds of bounds tell us that the Taylor expansion is not suitable for extrapolating the inflationary spectrum far away from the CMB window, in the presence of the values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ that are currently allowed by \emph{Planck}, since $\epsilon$ becomes zero already $\sim 7$ e-folds after the horizon exit of $k_\star$. To avoid this problem, one could consider a series expansion that takes into account the theoretical bounds on $\epsilon$, \ie $\epsilon(N = 0) = 1$ and $0 < \epsilon < 1$:
the Taylor series does not respect these requirements, so it does \textit{not} in general represent a possible power spectrum from inflation, over the whole range of scales. Only when the values of the phenomenological parameters describing the scale dependence of the spectrum are small,
the Taylor expansion can be a good approximation of a realistic power spectrum over a range of scales much larger than those probed by the CMB.
Another possibility is to consider bounds on the primordial power spectrum coming from observables that lie outside the CMB scales, but are still at small enough $k$ that the Taylor series applies. These would be complementary to spectral distortions, which are basically sensitive only to scales around $740\,\mathrm{Mpc}^{-1}$ \cite{Chluba:2012we, Chluba:2013pya}, opening the possibility of multi-wavelength constraints on the scale dependence of the spectrum.
In this regard,
observations of the Ly-$\alpha$ forest could be very powerful
(the forest constrains wavenumbers $k\approx 1h\,\mathrm{Mpc}^{-1}$).\footnote{Even if modeling the ionization state and thermodynamic properties of the intergalactic medium to convert flux measurements into a density power spectrum is very challenging (see \cite{Zaldarriaga:2000mz} and \cite{Adshead:2010mc} for a discussion).} In \cite{Palanque-Delabrouille:2015pga}, an analysis of the one-dimensional Ly-$\alpha$ forest power spectrum measured in \cite{Palanque-Delabrouille:2013gaa} was carried out, showing that it also provides small-scale constraints on the tilt $n_{\rm s}$ and the running $\alpha_\mathrm{s}$: more precisely, for a $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \sum m_{\nu}$ model, a detection at approximately $3\sigma$ of $\alpha_\mathrm{s}$ ($\alpha_\mathrm{s} = -0.0135^{+0.0046}_{-0.0050}$ at $\limit{68}$) is obtained. It would be interesting to carry out this analysis including the running of the running, to see whether the bounds on $\beta_\mathrm{s}$ are also lowered.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented new constraints on the running of the running of the scalar spectral index $\beta_\mathrm{s}$ and
discussed in more detail
the $2\sigma$ indication for $\beta_\mathrm{s} > 0$ that comes from the analysis of CMB anisotropies data from the \emph{Planck} satellite.
We have extended previous analyses by considering simultaneous variations in the lensing amplitude parameter
$A_{L}$ and the curvature of the universe $\Omega_k$.
We have found that, while a correlation does exist between these parameters, \emph{Planck} data still hint at non-standard
values in the extended $\Lambda\mathrm{CDM} +\alpha_\mathrm{s}+\beta_\mathrm{s}+A_L$ and $\Lambda\mathrm{CDM} +\alpha_\mathrm{s}+\beta_\mathrm{s}+\Omega_k$ models,
only partially suggesting a common origin of their anomalous signals in the low CMB quadrupole.
We have found that the \emph{Planck} constraints on neutrino masses $\sum m_{\nu}$ are essentially stable
under the inclusion of $\beta_\mathrm{s}$.
We have shown how future measurements of CMB $\mu$-type spectral distortions from the dissipation of acoustic waves, such as those expected by PIXIE, could
severely constrain both the running and the running of the running. More precisely, we have found that an improvement of $\sim 80\%$ over the \emph{Planck} bounds is expected. Finally, we discussed the conditions under which the phenomenological expansion of the primordial power spectrum in \eq{Delta_of_k} can be extended to scales much shorter than those probed by CMB anisotropies and can provide a good approximation to the predictions of inflationary models.
\section*{Acknowledgments}
\noindent We would like to thank Jens Chluba and Takeshi Kobayashi for careful reading of the manuscript and useful comments. We would also like to thank the referee for providing useful comments. GC and AM are supported by the research grant Theoretical Astroparticle Physics number 2012CPPYP7 under the program PRIN 2012 funded by MIUR and by TASP, iniziativa specifica INFN. EP is supported by the Delta-ITP consortium, a program of the Netherlands organization for scientific research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work has been done within the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02. EDV acknowledges the support of the European Research Council via the Grant number 267117 (DARK, P.I. Joseph Silk).
\section{Appendix}
\label{sec:appendix}
\subsection{Dependence on the choice of pivot scale}
\label{sec:dependence_pivot}
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{ns_v_nrunrun-piv_01-planck_only-referee.pdf}
&\includegraphics[width=\columnwidth]{nrun_v_nrunrun-piv_01-planck_only-referee.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Likelihood constraints
in the $\text{$n_{\rm s}$ -- $\beta_\mathrm{s}$}$ (left panel) and $\text{$\alpha_\mathrm{s}$ -- $\beta_\mathrm{s}$}$ (right panel) planes for
\emph{Planck} ($TT$, $TE$, $EE$ + lowP), at a pivot $k_\star = 0.01\,\mathrm{Mpc}^{-1}$.}}
\label{fig:pivot-0_01}
\end{figure*}
\begin{figure*}
\begin{center}
\begin{tabular}{c c}
\includegraphics[width=\columnwidth]{ns_v_nrunrun-piv_002-planck_only-referee.pdf}
&\includegraphics[width=\columnwidth]{nrun_v_nrunrun-piv_002-planck_only-referee.pdf}
\end{tabular}
\end{center}
\caption{\footnotesize{Likelihood constraints
in the $\text{$n_{\rm s}$ -- $\beta_\mathrm{s}$}$ (left panel) and $\text{$\alpha_\mathrm{s}$ -- $\beta_\mathrm{s}$}$ (right panel) planes for
\emph{Planck} ($TT$, $TE$, $EE$ + lowP), at a pivot $k_\star = 0.002\,\mathrm{Mpc}^{-1}$.}}
\label{fig:pivot-0_002}
\end{figure*}
\noindent \textcolor{black}{When including derivatives of the scalar spectral index as free parameters, one can expect that the constraints on them will depend on the choice of pivot scale $k_\star$ \cite{Cortes:2007ak}: for example, for \emph{Planck} the pivot $k_\star = 0.05\,\mathrm{Mpc}^{-1}$ is chosen to decorrelate $n_{\rm s}$ and $\alpha_\mathrm{s}$. For this reason, we considered two additional values of $k_\star$ in the analysis of the ``base'' ($\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$) model with \emph{Planck} ($TT$, $TE$, $EE$ + lowP) data: $k_\star = 0.01\,\mathrm{Mpc}^{-1}$ and $k_\star = 0.002\,\mathrm{Mpc}^{-1}$. We report the results in Figs.~\ref{fig:pivot-0_01}, \ref{fig:pivot-0_002} and \tab{pivots}: we see that at $k_\star = 0.01\,\mathrm{Mpc}^{-1}$ the tilt and $\beta_\mathrm{s}$ decorrelate, while the degeneracy between $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ goes from positive to negative. For $k_\star = 0.002\,\mathrm{Mpc}^{-1}$, instead, we see that $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ are still negatively correlated, while the degeneracy between $n_{\rm s}$ and $\beta_\mathrm{s}$ becomes positive. However we see from \tab{pivots} that, while changing the pivot cancels the $1\sigma$ indication for $\alpha_\mathrm{s} > 0$, the $2\sigma$ preference for $\beta_\mathrm{s} > 0$ remains in both cases.}
\textcolor{black}{We can understand why the marginalized error on $\beta_\mathrm{s}$ does not change if we change the pivot scale $k_\star$ with a simple Fisher analysis. For a log-likelihood for $\vec{n}\equiv(n_{\rm s},\alpha_\mathrm{s},\beta_\mathrm{s})$ (marginalized over all parameters except $n_{\rm s}$, $\alpha_\mathrm{s}$, $\beta_\mathrm{s}$) given by}
\begin{equation}
\label{eq:likelihood_n}
\mathcal{L}|_{k_\star^{(0)}}\propto (\vec{n}-\vec{n}_0)^T\cdot F_{k_\star^{(0)}}\cdot(\vec{n}-\vec{n}_0)\,\,,
\end{equation}
\textcolor{black}{with inverse covariance matrix $F_{k_\star^{(0)}}$, a change of pivot will result in}
\begin{equation}
\label{eq:likelihood_n_transformed}
\mathcal{L}|_{k_\star}\propto (M\cdot\vec{n}-\vec{n}_0)^T\cdot F_{k_\star^{(0)}}\cdot(M\cdot\vec{n}-\vec{n}_0)\,\,,
\end{equation}
\textcolor{black}{where $M$ is given by the scale dependence of $\vec{n}$, \emph{i.e.}}
\begin{equation}
\label{eq:M_matrix}
\begin{split}
\vec{n}_{k_\star} &= M\cdot\vec{n}_{k_\star^{(0)}} \\
&=
\begin{pmatrix}
1 & \log\frac{k_\star}{k_\star^{(0)}} & \frac{1}{2}\log^2\frac{k_\star}{k_\star^{(0)}}\\
0 & 1 & \log\frac{k_\star}{k_\star^{(0)}} \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
n_{\rm s}(k_\star^{(0)}) \\
\alpha_\mathrm{s}(k_\star^{(0)}) \\
\beta_\mathrm{s}(k_\star^{(0)})
\end{pmatrix}
\,\,,
\end{split}
\end{equation}
\textcolor{black}{and it is straightforward to verify that it has unit determinant. For a Gaussian likelihood, we can forget about $\vec{n}_0$ (we can just call $\vec{n}_0 = M\cdot\vec{m}_0$ and do a translation), so that all information will be coming from the transformed inverse covariance, \emph{i.e.}}
\begin{equation}
\label{eq:transformed_F}
F_{k_\star} = M^T\cdot F_{k_\star^{(0)}}\cdot M\,\,.
\end{equation}
\textcolor{black}{Since $M$ has unit determinant, the ``figure of merit'' $\text{f.o.m.}\propto \sqrt{\det F_{k_\star}}$ (which is basically the inverse of the volume of the $\limit{68}$ ellipsoid) will not change if we change the pivot. What will indeed change are the marginalized and non-marginalized $1\sigma$ errors on the parameters: however, it is straightforward to show with linear algebra that the marginalized error on the running of the running, which is given by}
\begin{equation}
\label{eq:one_sigma_beta}
\sigma(\beta_\mathrm{s}(k_\star)) = \sqrt{\Big(F_{k_\star}^{-1}\Big)_{33}}\,\,,
\end{equation}
\textcolor{black}{does not change under the transformation of \eq{M_matrix}.}
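This invariance is easy to check numerically; below is a minimal sketch (ours), with an arbitrary positive-definite matrix standing in for the marginalized Fisher matrix:
\begin{verbatim}
# Sketch: sigma(beta_s) = sqrt((F^-1)_33) is invariant under the
# unit-determinant pivot change F -> M^T F M of Eq. (M_matrix).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
F = A @ A.T + 3*np.eye(3)            # stand-in inverse covariance

L = np.log(0.05 / 0.002)             # log(k_star / k_star^(0))
M = np.array([[1.0, L, L**2/2],
              [0.0, 1.0, L],
              [0.0, 0.0, 1.0]])      # det M = 1

F_new = M.T @ F @ M
s_old = np.sqrt(np.linalg.inv(F)[2, 2])
s_new = np.sqrt(np.linalg.inv(F_new)[2, 2])
print(np.isclose(s_old, s_new))      # True: sigma(beta_s) is unchanged
\end{verbatim}
The reason is that $M^{-1}$ is again upper unitriangular, so its third row is $(0,0,1)$ and the $(3,3)$ entry of the inverse covariance is untouched.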
\textcolor{black}{This simple picture does not explain why the mean values of $n_{\rm s}$ and $\alpha_\mathrm{s}$ change. We ascribe this to the presence of the additional parameter $A_\mathrm{s}$: under the transformation of \eq{M_matrix} it will not change linearly, so the Gaussian approximation will not hold. The data will still constrain $A_\mathrm{s}$ well enough, so that $\sigma(A_\mathrm{s})$ will not contribute to the errors on the parameters, but the position of the peak of the transformed likelihood will change.}
\subsection{$\Delta\chi^2$: base model vs. extensions}
\label{sec:chi2}
\noindent\textcolor{black}{In this appendix we collect the full $\Delta\chi^2$ tables: we refer to \sect{results} for a discussion of the various improvements and non-improvements in $\chi^2$ for the different choices of datasets and parameters that have been considered. In all the tables below, $\Delta\chi^2$ stands for $\chi^2_\mathrm{base} - \chi^2_\text{base + ext.}$, both obtained via MCMC sampling of the likelihood.}
\begin{table}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
\vertsp vs. ${} + A_L$ \vertsp vs. ${} + \sum m_\nu$ \vertsp vs. ${} + \Omega_K$ \\
\hline
\morehorsp
$\Delta\chi^2_\mathrm{plik}$ \vertsp \siround{2.1}{1} \vertsp \siround{-1.8}{1} \vertsp \siround{2.4}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{lowP}$ \vertsp \siround{-0.9}{1} \vertsp \siround{-0.6}{1} \vertsp \siround{-1.3}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{prior}$ \vertsp \siround{-1.}{1} \vertsp \siround{0.1}{1} \vertsp \siround{-1.9}{1} \\
\hline
\morehorsp
$\Delta\chi^2$ \vertsp \siround{0.2}{1} \vertsp \siround{-2.3}{1} \vertsp \siround{-0.7}{1} \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\chi^2$ comparison between the base $\Lambda\mathrm{CDM} + \alpha_\mathrm{s} + \beta_\mathrm{s}$ model and the other extensions considered in the main text, for the \emph{Planck} $TT$, $TE$, $EE$ + lowP dataset. The last line contains the overall $\Delta\chi^2$ for all the likelihoods included in the analysis.}}
\label{tab:chi2-pl}
\end{center}
\end{table}
\begin{table}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
\vertsp vs. ${} + A_L$ \vertsp vs. ${} + \sum m_\nu$ \vertsp vs. ${} + \Omega_K$ \\
\hline
\morehorsp
$\Delta\chi^2_\mathrm{plik}$ \vertsp \siround{1.9}{1} \vertsp \siround{-0.5}{1} \vertsp \siround{1.5}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{lowP}$ \vertsp \siround{-0.5}{1} \vertsp \siround{0.1}{1} \vertsp \siround{0.}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{prior}$ \vertsp \siround{-3.8}{1} \vertsp \siround{-2.7}{1} \vertsp \siround{-0.1}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{lensing}$ \vertsp \siround{0.9}{1} \vertsp \siround{1.5}{1} \vertsp \siround{-1.3}{1} \\
\hline
\morehorsp
$\Delta\chi^2$ \vertsp \siround{-1.6}{1} \vertsp \siround{-1.6}{1} \vertsp \siround{0.1}{1} \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{Same as \tab{chi2-pl}, but with the addition of CMB lensing data.}}
\label{tab:chi2-pl+lens}
\end{center}
\end{table}
\begin{table}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
\vertsp vs. ${} + A_L$ \vertsp vs. ${} + \sum m_\nu$ \vertsp vs. ${} + \Omega_K$ \\
\hline
\morehorsp
$\Delta\chi^2_\mathrm{plik}$ \vertsp \siround{3.}{1} \vertsp \siround{-4.3}{1} \vertsp \siround{-1.6}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{lowP}$ \vertsp \siround{-0.3}{1} \vertsp \siround{0.8}{1} \vertsp \siround{-0.3}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{prior}$ \vertsp \siround{0.8}{1} \vertsp \siround{2.9}{1} \vertsp \siround{0.1}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{CFHTLenS}$ \vertsp \siround{2.3}{1} \vertsp \siround{-0.6}{1} \vertsp \siround{3.8}{1} \\
\hline
\morehorsp
$\Delta\chi^2$ \vertsp \siround{5.9}{1} \vertsp \siround{-1.3}{1} \vertsp \siround{2.}{1} \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{Same as \tab{chi2-pl}: the dataset is \emph{Planck} $TT$, $TE$, $EE$ + lowP + WL.}}
\label{tab:chi2-pl+wl}
\end{center}
\end{table}
\begin{table}[!hbtp]
\begin{center}
\begin{tabular}{lcccc}
\toprule
\horsp
\vertsp vs. ${} + A_L$ \vertsp vs. ${} + \sum m_\nu$ \vertsp vs. ${} + \Omega_K$ \\
\hline
\morehorsp
$\Delta\chi^2_\mathrm{plik}$ \vertsp \siround{0.7}{1} \vertsp \siround{1.}{1} \vertsp \siround{-2.7}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{lowP}$ \vertsp \siround{-1.8}{1} \vertsp \siround{-1.2}{1} \vertsp \siround{-1.}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{prior}$ \vertsp \siround{1.4}{1} \vertsp \siround{0.3}{1} \vertsp \siround{0.8}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{6DF}$ \vertsp \siround{0.1}{1} \vertsp \siround{0.}{1} \vertsp \siround{0.1}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{MGS}$ \vertsp \siround{-0.8}{1} \vertsp \siround{0.}{1} \vertsp \siround{-0.8}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{DR11CMASS}$ \vertsp \siround{0.9}{1} \vertsp \siround{0.1}{1} \vertsp \siround{1.1}{1} \\
\morehorsp
$\Delta\chi^2_\mathrm{DR11LOWZ}$ \vertsp \siround{1.1}{1} \vertsp \siround{0.1}{1} \vertsp \siround{1.1}{1} \\
\hline
\morehorsp
$\Delta\chi^2$ \vertsp \siround{1.5}{1} \vertsp \siround{0.3}{1} \vertsp \siround{-1.4}{1} \\
\hline
\bottomrule
\end{tabular}
\caption{\footnotesize{$\Delta\chi^2$ for the \emph{Planck} $TT$, $TE$, $EE$ + lowP + BAO dataset.}}
\label{tab:chi2-pl+bao}
\end{center}
\end{table}
\subsection{Derivation of slow-roll expansion for $\epsilon$}
\label{sec:appendix-slow_roll}
\noindent Starting from \eq{slow_roll_rels-2}, differentiating it w.r.t. $N$ and then using \eq{slow_roll_rels-1}, one can find the coefficients $\epsilon^{(i)}$ of a Taylor expansion of $\epsilon(N)$ in terms of the parameters describing the scale dependence of the primordial spectrum $\Delta^2_\zeta(k)$. More precisely, one finds (calling $\epsilon_\star\equiv\epsilon(N_\star)$)
\begin{subequations}
\label{eq:epsilon_coefficients}
\begin{align}
&\epsilon^{(1)} = (n_{\rm s} - 1)\epsilon_\star + 2\epsilon_\star^2\,\,, \label{eq:epsilon_coefficients-1} \\
&\epsilon^{(2)} = -\alpha_\mathrm{s}\epsilon_\star + 4\epsilon_\star\epsilon^{(1)} + (n_{\rm s}-1)\epsilon^{(1)}\,\,, \label{eq:epsilon_coefficients-2} \\
&\epsilon^{(3)} = \beta_\mathrm{s}\epsilon_\star - 2\alpha_\mathrm{s}\epsilon^{(1)} \nonumber \\
&\hphantom{\epsilon^{(3)} =} + (n_{\rm s}-1)\{-\alpha_\mathrm{s}\epsilon_\star + 4\epsilon_\star\epsilon^{(1)} + (n_{\rm s}-1) \epsilon^{(1)}\} \nonumber \\
&\hphantom{\epsilon^{(3)} =} + 4\{\epsilon_\star[-\alpha_\mathrm{s}\epsilon_\star + 4\epsilon_\star\epsilon^{(1)}+(n_{\rm s}-1)\epsilon^{(1)}] + (\epsilon^{(1)})^2\}\,\,. \label{eq:epsilon_coefficients-3}
\end{align}
\end{subequations}
By plugging in the values of $\alpha_\mathrm{s}$ and $\beta_\mathrm{s}$ allowed by \emph{Planck}, one can extrapolate $\epsilon$ at scales different from $k_\star$. See \sect{slow_roll_inflation} for a discussion.
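As a numerical illustration, the sketch below (ours) evaluates \eq{epsilon_SR} with these coefficients, assuming $n_{\rm s}\simeq 0.96$, $\epsilon_\star = 0.002$, $\alpha_\mathrm{s}=0.01$ and $\beta_\mathrm{s}=0.02$ (the values used in \sect{slow_roll_inflation}), and locates the scale at which $\epsilon$ crosses zero:
\begin{verbatim}
# Sketch: extrapolate epsilon(N) with the coefficients of
# Eq. (epsilon_coefficients).  Assumed inputs: eps_* = 0.002,
# n_s ~ 0.96, alpha_s = 0.01, beta_s = 0.02.
import numpy as np

eps, ns1 = 0.002, 0.96 - 1.0         # eps_*  and  (n_s - 1)
a_s, b_s = 0.01, 0.02

e1 = ns1*eps + 2*eps**2
e2 = -a_s*eps + 4*eps*e1 + ns1*e1
e3 = b_s*eps - 2*a_s*e1 + ns1*e2 + 4*(eps*e2 + e1**2)

def eps_of(lnk):                     # N - N_* = -log(k/k_*)
    dN = -lnk
    return eps + e1*dN + e2*dN**2/2 + e3*dN**3/6

lnk = np.linspace(0.0, 10.0, 100001) # grid in log(k/k_*)
i = np.argmax(eps_of(lnk) < 0)       # first zero crossing
print(0.05*np.exp(lnk[i]))           # ~40 Mpc^-1, close to the value
                                     # quoted in the slow-roll section
\end{verbatim}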
\chapter*{\contentsname
\@mkboth{Contents}{Contents}}
\thispagestyle{plain}
\vspace*{-10.6em}
\@starttoc{toc}
\if@restonecol\twocolumn\fi
}
\makeatother
\input xy
\xyoption{all}
\firstpage{337}
\lastpage{415}
\begin{document}
\pagenumbering{roman}
\isbn{978-1-60198-664-1}
\DOI{10.1561/0400000055}
\abstract{Many graph properties (e.g., connectedness, containing a complete
\hbox{subgraph}) are known to be difficult to check. In a decision-tree model, the cost
of an algorithm is measured by the number of edges in the graph that it queries. R. Karp
conjectured in the early 1970s that all monotone graph properties are evasive---that is,
any algorithm which computes a monotone graph property must check all edges in the worst
case. This conjecture is unproven, but a lot of progress has been made. Starting with the
work of Kahn, Saks, and Sturtevant in 1984, topological methods have been applied to
prove partial results on the Karp conjecture. This text is a tutorial on these
topological \hbox{methods}. I give a fully self-contained account of the central proofs
from the paper of Kahn, Saks, and Sturtevant, with no prior knowledge of topology
assumed. I also briefly survey some of the more recent results on \hbox{evasiveness}.}
\articletitle{Evasiveness of Graph Properties and Topological Fixed-Point Theorems}
\authorname1{Carl A. Miller}
\affiliation1{University of Michigan}
\author1address2ndline{Department of Electrical Engineering and Computer Science, 2260 Hayward St.}
\author1city{Ann Arbor}
\author1zip{MI 48109-2121}
\author1country{USA}
\author1email{[email protected]}
\journal{tcs}
\volume{7}
\issue{4}
\copyrightowner{C. A. Miller}
\pubyear{2011}
\maketitle
\setcounter{page}{1}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{conj}[theorem]{Conjecture}
\def\mb{\mathbb}
\def\mf{\mathbf}
\def\im{\textnormal{im }}
\def\ker{\textnormal{ker }}
\def\sign{\textnormal{sign }}
\def\Tr{\textnormal{Tr}}
\def\mod{\textnormal{mod }}
\def\bar{\textnormal{bar}}
\def\weight{\textnormal{weight}}
\def\Perm{\textnormal{Perm}}
\def\Pow{\textnormal{Pow}}
\def\Sym{\textnormal{Sym}}
\def\coker{\textnormal{coker }}
\chapter{Introduction}\label{chap1}
Let $V$ be a finite set of size $n$, and let $\mathbf{G} ( V )$ denote the set of
undirected graphs on $V$. For our purposes, a \textbf{graph property} is simply a
function
\begin{eqnarray}
f \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
which is such that whenever two graphs $Z$ and $Z'$ are isomorphic, $f ( Z ) = f ( Z' )$.
A graph $Z$ ``has property $f$'' if $f ( Z ) = 1$.
We can measure the cost of an algorithm for computing $f$ by counting the number of
edge-queries that it makes. We assume that these edge-queries are adaptive (i.e., the
choice of query may depend on the outcomes of previous queries). An algorithm for $f$ can
thus be represented by a binary decision-tree (see Figure~\ref{dectreefigure}). The
\textbf{decision-tree complexity of $f$}, which we denote by $D ( f )$, is the least
possible depth for a decision-tree that computes $f$. In other words, $D ( f )$ is the
number of edge-queries that an optimal algorithm for $f$ has to make in the worst case.
\begin{figure}[!t]
\centerline{\includegraphics{f1-1}}
\fcaption{A binary decision tree.\label{dectreefigure}}
\end{figure}
Some graph properties are difficult to compute. For example, let $h( Z ) = 1$ if and
only if $Z$ contains a cycle. Suppose that an algorithm for $h$ makes queries to an
adversary whose goal is to maximize cost. The adversary can adaptively construct a graph
$Y$ to foil the \hbox{algorithm}:~each time a pair $( i, j ) \in V \times V$ is queried,
the adversary answers ``yes,'' unless the inclusion of that edge would necessarily make
the graph $Y$ have a cycle, in which case he answers ``no.'' After $\binom{n}{2} - 1$
edge-queries by the algorithm have been made, the known edges will form a tree on the
elements of $V$. The algorithm at this point will have no choice but to query the last
unknown edge to determine whether or not a cycle exists. We conclude from this argument
that $h$ is a graph property that has the maximal decision-tree complexity
$\binom{n}{2}$. Such properties are called \textbf{evasive}.
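The adversary in this argument is easy to implement; the sketch below (the names are ours) uses union--find to decide whether a queried pair would close a cycle:
\begin{verbatim}
# Sketch of the adversary for the property "contains a cycle":
# answer "yes" to an edge query unless that edge would close a
# cycle, in which case answer "no".  Union-find tracks the forest
# of revealed edges.
class Adversary:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):                   # root of v's component
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def query(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:                     # (a, b) would close a cycle
            return False                 # "no": edge absent
        self.parent[ra] = rb             # merge the two components
        return True                      # "yes": edge present
\end{verbatim}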
A graph property is \textbf{monotone} if it is either always preserved by the addition of
edges (monotone-increasing) or always preserved by the deletion of edges
(monotone-decreasing). In 1973 the following conjecture was made \cite{rosenberg1973}.
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{conj}[The Karp Conjecture]
All nontrivial monotone graph properties are evasive.
\end{conj}}
\noindent To date, this conjecture is unproven and no counterexamples are known. However
in 1984, a seminal paper was published by Kahn et~al.~\cite{kss1984} which proved the conjecture in some cases. This
paper showed that evasiveness can be established through the use of topological
fixed-point theorems. It has been followed by many more papers which exploited its
method to prove better results.
This text is a tutorial on the topological method of~\cite{kss1984}. My goal is to
provide background on the problem and to take the reader through all of the necessary
proofs. Let us begin with some history.
\section{Background}
Research on the decision-tree complexity of graph properties---including properties for
both directed and undirected graphs---dates back at least to the early 1970s
\cite{bbl1974,bollobas1976,hr1972,ht1974,kirkpatrick1974,mw1975,rosenberg1973}. Proofs
were given in early papers that certain specific graph properties are evasive (e.g.,
connectedness, containment of a complete subgraph of fixed size), and that other
properties at least have decision-tree complexity $\Omega (n^2)$. Although it was known
that there are graph properties whose decision-tree complexity is not $\Omega (n^2)$ (see
Example~18 in \cite{bbl1974}), Aanderaa and Rosenberg conjectured that all
\textbf{monotone} graph properties have decision-tree complexity $\Omega (n^2)$
\cite{rosenberg1973}. This conjecture was proved by Rivest and Vuillemin \cite{rv1976}
who showed that all monotone graph properties satisfy $D (f) \geq n^2/16$. Kleitman and
Kwiatkowski \cite{kk1980} improved this bound to $D (f) \geq n^2/9$.
Underlying some of these proofs is the insight that if a graph property $f$ has
nonmaximal decision-tree complexity, then the collection of graphs that satisfy $f$ has
some special structure. For example, if $f$ is not evasive, then in the set of graphs
satisfying $f$ there must be an equal number of graphs having an odd number of edges and
an even number of edges. Rivest and Vuillemin \cite{rv1976} used the fact that if $f$ has
decision-tree complexity $\binom{n}{2} - k$, then the weight enumerator of $f$ (i.e., the
polynomial $\sum_j c_j t^j$, where $c_j$ is the number of $f$-graphs containing~$j$
edges) must be divisible by $(1 + t)^k$.
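The divisibility fact is easy to verify by brute force on small examples; the sketch below (ours) checks it for the toy function $f(x_1, x_2, x_3) = x_1$, which is not a graph property but has decision-tree depth $1 = 3 - 2$, so its weight enumerator should be divisible by $(1+t)^2$:
\begin{verbatim}
# Sketch: Rivest--Vuillemin divisibility on a toy boolean function.
# f(x1,x2,x3) = x1 has decision-tree depth 3 - k with k = 2, so its
# weight enumerator should be divisible by (1 + t)^2.
from itertools import product
from sympy import symbols, div

t = symbols('t')
f = lambda x: x[0]                      # toy (non-graph) function

W = 0
for x in product([0, 1], repeat=3):     # all 2^3 inputs
    if f(x):
        W += t**sum(x)                  # weight = number of 1s

q, r = div(W.expand(), (1 + t)**2, t)   # polynomial division
print(W.expand(), "remainder:", r)      # W = t(1+t)^2, remainder 0
\end{verbatim}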
A topological method for the evasiveness problem was introduced in~\cite{kss1984}.
Suppose that $h$ is a monotone-increasing graph property on a vertex set $\{ 0, 1,
\ldots, n-1 \}$. Let $T$ be the collection of all graphs that do \textit{not} satisfy
$h$. The set $T$ has the property that if $G$ is in $T$, then all of its subgraphs are
in $T$. This is a close analogy to the property which defines simplicial complexes in
topology. Let $\{ x_{ab} \mid 0 \leq a < b < n\}$ be a labeled collection of linearly
independent vectors in some vector space $\mathbb{R}^N$. Each graph in $T$ determines a
simplex in $\mathbb{R}^N$: one takes the convex hull of the vectors $x_{ab}$
corresponding to the edges $\{ a, b \}$ that are in the graph. The union of these hulls
forms a simplicial complex, $\Gamma_h$. The complex for ``connectedness'' on
four vertices (represented in three dimensions) is shown in
Figure~\ref{connectednessfigure}.
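For small $n$ the complex $\Gamma_h$ can be generated by brute force. Here is a sketch (ours) for $h = {}$``connectedness'' on four vertices, with each face stored abstractly as a set of edges:
\begin{verbatim}
# Sketch: build the abstract simplicial complex Gamma_h whose faces
# are the graphs on 4 labeled vertices that are NOT connected
# (h = "connectedness").
from itertools import combinations

n = 4
edges = list(combinations(range(n), 2))   # the 6 possible edges

def connected(edge_set):
    """Depth-first search from vertex 0 over the given edges."""
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for a, b in edge_set:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return len(seen) == n

faces = set()
for r in range(len(edges) + 1):
    for subset in combinations(edges, r):
        if not connected(subset):
            faces.add(frozenset(subset))

print(len(faces))   # 26: of the 2^6 = 64 graphs, 38 are connected
\end{verbatim}
Closure under taking subsets is automatic here: removing edges from a non-connected graph cannot make it connected.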
\begin{figure}[!t]
\centerline{\includegraphics{f1-2}}
\fcaption{The simplicial complex for ``connectedness'' on four
vertices.\label{connectednessfigure}}
\end{figure}
A fundamental insight of \cite{kss1984} is that nonevasiveness can be translated to a
topological condition. If $h$ is not evasive, then $\Gamma_h$ has a certain topological
property called \textbf{collapsibility}. This property, which we will define formally
later in this text, essentially means that $\Gamma_h$ can be folded into itself and
contracted to a single point. This property implies the even--odd weight-balance
condition mentioned above, but it is stronger. In particular, it allows for the
application of topological fixed-point theorems.
The following theorem is attributed to R.~Oliver.
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Oliver \cite{oliver1975}]\label{fptquote}
Let $\Gamma$ be a collapsible simplicial complex. Let $G$ be a finite group which
satisfies the following condition:
\begin{itemize}
\item[(*)] There is a normal subgroup $G' \subseteq G$, whose
size is a power of a prime, such that $G / G'$ is cyclic.
\end{itemize}
Then, any action of $G$ on $\Gamma$ has a fixed point.
\end{theorem}
When $\Gamma = \Gamma_h$, the fixed points of $G$ correspond to graphs, and this theorem
essentially forces the existence of certain graphs that do not satisfy $h$. This theorem
is the basis for the following result of \cite{kss1984}:
\begin{theorem}[Kahn et~al.~\cite{kss1984}]\label{kss1quote}
Let $f$ be a monotone graph property on graphs of size $p^k$, where $p$ is prime. If $f$
is not evasive, then it must be trivial.
\end{theorem}
\noindent The proof of this theorem essentially proceeds by demonstrating an appropriate
group action $G$ on the set of graphs of order $p^k$ such that the only $G$-invariant
graphs are the empty graph and the complete graph.
Thus evasiveness is known for all values of $n$ that are prime powers. What about other
values of $n$? One could hope that if the decision-tree complexity is always
$\binom{p}{2}$ when the vertex set has size $p$, then the quantity $\binom{p}{2}$ is a
lower bound for the cases $p+1$, $p+2$, and so forth. Unfortunately there is no known
way to show this. However, all is not lost. The following general theorem is also proved
in \cite{kss1984}.
\begin{theorem}[Kahn et~al. \cite{kss1984}]\label{kss2quote}
Let $f$ be a nontrivial monotone graph property of order $n$. Then,
\begin{eqnarray}
D ( f ) \geq \frac{n^2}{4} - o ( n^2 ).
\end{eqnarray}
\end{theorem}}
\noindent The paper \cite{kss1984} was then followed by several other papers on
evasiveness by other authors who used the topological approach to prove new results on
evasiveness \cite{bbkk2010,cks2002,king1990,kt2010,triesch1994,triesch1996,yao1988}. Some
of these papers found new group actions $G \circlearrowleft \Gamma_h$ to exploit in the
nonprime cases.
The target results of this exposition are Theorems~\ref{kss1quote} and \ref{kss2quote}, and a theorem by Yao on evasiveness of bipartite graphs
\cite{yao1988}. Now let us summarize what we need to do in order to get there.
\section{Outline of Text}
My goal in this exposition is to give a reader who does not know \hbox{algebraic}
topology a complete tutorial on topological proofs of evasiveness. Therefore, a fair
amount of space will be devoted to building up concepts from algebraic topology. I have
tended to be economical in my \hbox{discussions} and to develop concepts only on an
as-needed basis. Readers who wish to learn more algebraic topology after this exposition
may want to consult good references such as \cite{hatcher2002,munkres}.
We begin, in \textit{\nameref{basicconceptschapter}}, by formalizing the class of
simplicial complexes and its relation to the class of graph properties. While
we have presented a simplicial complex in this introduction as a subset of
$\mathbb{R}^n$, it can also be defined simply as a collection of finite sets. (This is
the notion of an \textbf{abstract simplicial complex}.) Although the definition in terms
of subsets of $\mathbb{R}^n$ is helpful for intuition, the definition in terms of finite
sets is the one we will use in all proofs.
A critical construction in this monograph is the set of \textbf{homology
groups} of a simplicial complex. These groups are algebraic objects which measure the
shape of the complex, and also~--- crucially for our purposes~--- help us understand the
behavior of the complex under automorphisms. \textit{\nameref{chaincomplexchapter}}
defines homology groups and provides some of the standard theory for them.
In \textit{\nameref{fptchapter}} we prove some topological results. The first is the
Lefschetz fixed-point theorem. One way to state this theorem is to say that any
automorphism of a collapsible simplicial complex has a fixed point. However, we instead
prove a theorem which applies to the more general class of
\textbf{$\mathbb{F}_p$-acyclic} complexes. A simplicial complex is
$\mathbb{F}_p$-acyclic if its homology groups (over $\mathbb{F}_p$) are trivial. When a
simplicial complex is $\mathbb{F}_p$-acyclic it behaves much like a collapsible complex
(and in particular, any automorphism has a fixed point). Finally, we prove a version of
Theorem~\ref{fptquote}. The proof of the theorem depends on finding a tower of subgroups
\begin{eqnarray}
\{ 0 \} = G_0 \subset G_1 \subset G_2 \subset \cdots \subset G_n = G,
\end{eqnarray}
where each quotient $G_i / G_{i-1}$ is cyclic, and performing an inductive argument.
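As a concrete illustration (our own, chosen only to exhibit the shape of the argument):
the group $G = S_3$ satisfies condition (*) with $G' = A_3$, and a suitable tower is
\begin{eqnarray}
\{ 1 \} = G_0 \subset A_3 \subset S_3 = G_2,
\end{eqnarray}
whose successive quotients are cyclic of orders $3$ and $2$.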
\textit{\nameref{resultschapter}} proves \hbox{Theorem}~\ref{kss1quote}, a
\hbox{bipartite} result of Yao \cite{yao1988}, and Theorem~\ref{kss2quote}. We
conclude with an informal discussion of a few of the more recent results
on decision-tree complexity of graph properties
\cite{bbkk2010,cks2002,king1990,kt2010,triesch1994,triesch1996}.
My primary sources for this exposition were
\cite{duandko,kss1984,munkres,smith1941,yao1988}. A particular debt is owed to Du
and Ko~\cite{duandko}, which was my first introduction to the subject.
\section{Related Topics}
I will briefly mention two alternative lines of research that are related to the one I
cover here. One can change the measure of complexity that one is using to measure graph
properties, and this leads to new problems requiring different methods. A natural
variant is the \textbf{randomized decision-tree complexity.} Suppose that in our
decision-tree model, our algorithm is permitted to make random choices at each step about
which edges to check. We define the cost of the algorithm on a particular input graph to
be the \textit{expected} number of edge queries, and the cost of the algorithm as a whole
to be the maximum of this quantity over all input graphs. The minimum of this quantity
over all algorithms is the randomized decision-tree complexity, $R ( f )$.
There is a line of research studying the randomized decision-tree complexity of monotone
graph properties
\cite{ck2007,fkw2002,groger1992,hajnal1991,king1991,odonnel2005,yao1991}. While it is
easy to see that $R ( f )$ can be less than $\binom{n}{2}$, there are graph properties
for which $R ( f )$ is provably $\Omega ( n^2 )$ (such as the ``emptiness
property''---the property that the graph contains no edges). It is conjectured that $R (
f )$ is always $\Omega ( n^2 )$ for monotone graph properties, just as in the
deterministic model. The best proved lower bound \cite{ck2007, hajnal1991} is $\Omega (
n^{4/3} \left( \log n \right)^{1/3})$.
Another variant of decision-tree complexity is \textbf{bounded-error quantum query
complexity}. A quantum query algorithm for a graph property uses a quantum ``oracle'' in
its computation. The oracle accepts a quantum state which is a superposition of
edge-queries to a graph, and it returns a quantum state which encodes the answers to
those queries. The algorithm is permitted to use this oracle along with arbitrary
quantum operations to determine its result. The algorithm is permitted to make errors,
but the likelihood of an error must be below a fixed bound on all inputs.
(See~\cite{bcwz1999}.)
In the quantum case it is clear that a lower bound of $\Omega ( n^2 )$ does not hold:
Grover's algorithm~\cite{ambainis2004} can search a space of size $N$ in time $\Theta (
\sqrt{N} )$ using an oracle model. With a modified version of Grover's algorithm, one
can compute the emptiness property in time $\Theta ( n )$. There are a number of other
monotone properties for which the quantum query complexity is known to be $o ( n^2 )$
(see \cite{ck2010} for a good summary on this topic). It is conjectured that all monotone
graph properties have quantum query complexity $\Omega ( n )$. The best proved lower
bound is $\Omega ( n^{2/3} )$, from an unpublished result attributed to Santha and Yao
\hbox{(see \cite{syz2004})}.
\section{Further Reading}
Other expositions about topological proofs of evasiveness can be found in \cite{duandko}
(in the context of computational complexity theory) and \cite{kozlov2008} (in the context
of algebraic topology), and also in Lov\'{a}sz's lecture notes \cite{ly2002}. A reader who
wishes to learn more about algebraic topology can consult \cite{munkres}, or, for a more
advanced treatment, \cite{hatcher2002}. For the particular subject of the topology of
complexes arising from graphs, there is an extensive treatment \cite{jonsson2008}, which
builds further on many of the concepts that I will discuss here. And finally, for
readers who generally enjoy reading about applications of topology to problems in
discrete mathematics, the excellent book \cite{matousek2008} contains more material of
the same flavor. It involves applications of a different topological result (the
Borsuk--Ulam theorem) to some problems in elementary mathematics.
\chapter{Basic Concepts}\label{basicconceptschapter}
\section{Graph Properties}\label{graphpropsection}
This part of the text covers some preliminary material. We begin by formalizing some
basic terminology for finite graphs.
For our purposes, a \textbf{finite graph} is an ordered pair of sets $(V, E)$, in
which $V$ (the \textbf{vertex set}) is a finite set, and $E$ (the \textbf{edge set}) is
a set of $2$-element subsets of $V$. For example, the pair
\begin{eqnarray}\label{graphoforder4}
\left( \left\{ 0, 1, 2, 3 \right\} , \left\{ \{ 0, 1 \} ,
\{ 0, 2 \} , \{ 1, 2 \}, \{ 2, 3 \} \right\} \right)
\end{eqnarray}
is a finite graph with four vertices, diagrammed in Figure~\ref{4graphfigure}.
\begin{figure}
\centerline{\includegraphics{f2-1}}
\fcaption{A graph on four vertices.\label{4graphfigure}}
\end{figure}
An \textbf{isomorphism} between two finite graphs is a one-to-one correspondence between
the vertices of the two graphs which matches up their edges. In precise terms, if $G =
(V, E)$ and $G' = ( V' , E' )$ are two graphs, then an isomorphism between $G$ and $G'$
is a bijective function $f : V \to V'$ which is such that the set
\begin{eqnarray}
\left\{ \{ f ( v ) , f ( w ) \} \mid
\{ v , w \} \in E \right\}
\end{eqnarray}
is equal to $E'$. For example, the graph in Figure~\ref{4graphfigure} is
isomorphic to the graph in Figure~\ref{4graphaltfigure} under the map $f \colon \{ 0, 1,
2, 3 \} \to \{ 0, 1, 2, 3 \}$ defined~by
\begin{eqnarray}
f( 0 ) = 1 &\quad & f ( 1 ) = 2 \\
f( 2 ) = 3 &\quad & f ( 3 ) = 0.
\end{eqnarray}
\begin{figure}[!b]
\centerline{\includegraphics{f2-2}}
\fcaption{A graph that is isomorphic to the graph in
Figure~\ref{4graphfigure}.\label{4graphaltfigure}}
\end{figure}
We can now formalize the notion of a graph property. Briefly
stated, a graph property is a function on graphs which is
compatible with graph isomorphisms.
Let
$V_0$ be a finite set, and let $\mathbf{G} \left( V_0 \right)$
denote the set of all graphs that have $V_0$ as their vertex set.
Then a function
\begin{eqnarray}
h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1 \}
\end{eqnarray}
is a \textbf{graph property} (over $V_0$) if all pairs $(G, G')$ of isomorphic graphs
in $\mathbf{G} \left( V_0 \right)$ satisfy $h ( G ) =
h ( G' )$.
\begin{figure}[!b]
\centerline{\includegraphics{f2-3}}
\fcaption{Two graphs of size $3$.\label{twographsfigure}}
\end{figure}
For example, consider the graphs in Figure~\ref{twographsfigure}, which are
members of $\mathbf{G} \left( \{ 0, 1, 2 \} \right)$. Then the function
\begin{eqnarray}
h_1 \colon \mathbf{G} \left( \{ 0, 1, 2 \} \right) \to \{ 0, 1 \}
\end{eqnarray}
defined by
\begin{eqnarray}
h_1 ( G ) = \left\{ \begin{array}{@{}l@{\quad}l@{}} 1 & \textnormal{if } G = G_1 \\
0 & \textnormal{if } G \neq G_1
\end{array} \right.
\end{eqnarray}
is a graph property. However, the function $h_2$ defined by
\begin{eqnarray}
h_2 ( G ) = \left\{ \begin{array}{@{}l@{\quad}l@{}} 1 & \textnormal{ if } G = G_2 \\
0 & \textnormal{ if } G \neq G_2
\end{array} \right.
\end{eqnarray}
is \textit{not} a graph property, since there exist graphs in $\mathbf{G} \left( \{ 0, 1, 2
\} \right)$ which are isomorphic to $G_2$ but not equal to $G_2$.
If $G, G' \in \mathbf{G} \left( V_0 \right)$ are graphs such that the edge set of $G'$ is
a subset of the edge set of $G$, then we say that $G'$ is a \textbf{subgraph} of $G$.
Note that this relationship gives us a partial ordering on the set $\mathbf{G} \left( V_0
\right)$. Let us say that a function $h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1
\}$ is \textbf{monotone increasing} if it respects this ordering. In other words, $h$ is
monotone increasing if it satisfies $h ( G' ) \leq h ( G )$ for all pairs $(G', G)$ such
that $G'$ is a subgraph of $G$. Likewise, we say that the function $h$ is
\textbf{monotone decreasing} if it satisfies $h ( G' ) \geq h ( G)$ whenever $G'$ is a
subgraph of $G$.
If $h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1 \}$ is a function, then a
\textbf{decision tree} for $h$ is a step-by-step procedure for computing the
value of $h$. An example is the decision tree in Figure~\ref{decisiontreeexample}, which
computes the value of the function $h_2$ defined above. The diagram in
Figure~\ref{decisiontreeexample} describes an algorithm for computing $h_2$. Each node
in the tree specifies an ``edge-query'', and each branch in the tree specifies how the
algorithm responds to the results of the edge query. For example, suppose that we wish to
apply the algorithm to compute the value of $h_2$ on the graph $G_1$ (from
(\ref{graphsoforder3}), above). The algorithm would first query the edge $\{ 0, 1 \}$,
and it would find that this edge \textit{is} contained in $G_1$. It would then follow
the ``Y'' branch from $\{ 0, 1 \}$, and query the edge $\{ 1, 2 \}$. It would then
follow the ``Y'' branch from $\{ 1, 2 \}$, and determine that the value of $h_2$ is zero.
\begin{figure}[!t]
\centerline{\includegraphics{f2-4}}
\fcaption{A decision tree for the graph property $h_2$.\label{decisiontreeexample}}
\end{figure}
The \textbf{decision-tree complexity} of a function $h \colon \mathbf{G} \left( V_0
\right) \to \{ 0, 1 \}$ is the smallest possible depth for a decision-tree which
correctly computes~$h$. We denote this quantity by $D(h)$. For example, the depth of the
decision-tree in Figure~\ref{decisiontreeexample} is $3$. It can be shown that any
decision-tree that computes $h_2$ must have depth at least $3$: follow the path along
which every query is answered as it would be for $G_2$. If that path queried only two of
the three possible edges, then the two graphs consistent with those answers would differ
only in the remaining edge; one of them is $G_2$ and the other is not, so the tree could
not answer correctly on both. Therefore, $D ( h_2 ) = 3$.
It is easy to prove that for any function $h \colon \mathbf{G} \left( V_0 \right) \to \{
0, 1 \}$, the inequality
\begin{eqnarray}
D ( h ) \leq \binom{ \left| V_0 \right|}{2}
\end{eqnarray}
is satisfied. If the function $h$ satisfies
\begin{eqnarray}
D( h ) = \binom{ \left| V_0 \right|}{2}\!,
\end{eqnarray}
then we will say that the function $h$ is {\bf evasive}. Evasive functions are the
functions that are the most difficult to compute via a decision-tree.\footnote{The
concepts of ``decision-tree complexity'' and ``evasiveness'' can be defined for any
Boolean function. See \cite{duandko} for a more detailed
treatment.}
\section{Simplicial Complexes}\label{simplicialcomplexsection}
Now we give a brief introduction to the notion of a simplicial complex. We draw
on~\cite{munkres} for definitions and terminology.
There are at least two natural ways of defining simplicial complexes---one is as a
collection of finite sets, and another is as a collection of subsets of $\mathbb{R}^n$.
The first definition is the easiest to work with (and it will be the one we use the most
in this monograph). But the second definition is also important because
it provides some indispensable geometric intuition. We will begin by building up the
second definition.
\begin{definition}
Let $N$ and $n$ be positive integers, with $n \leq N$. Let $\mf{v}_0, \mf{v}_1, \ldots,
\mf{v}_n \in \mathbb{R}^N$ be vectors satisfying the condition that
\begin{eqnarray}
\{ ( \mf{v}_1 - \mf{v}_0 ) , ( \mf{v}_2 - \mf{v}_0 ), ( \mf{v}_3 - \mf{v}_0 ),
\ldots , ( \mf{v}_n - \mf{v}_0 ) \}
\end{eqnarray}
is a linearly independent set. Then the \textbf{$n$-simplex spanned by $\{ \mf{v}_0,
\mf{v}_1, \ldots, \mf{v}_n \}$} is the set
\begin{eqnarray}
\left\{ \sum_{i=0}^n c_i \mf{v}_i \mid \textnormal{ $0 \leq c_i \leq 1$ for all $i$, and
$\sum_{i=0}^n c_i = 1$ } \right\}\!.
\end{eqnarray}
\end{definition}
When we refer to an ``$n$-simplex'', we simply mean a set which can be defined in the
above form. Note that a $1$-simplex is simply a line segment. A $2$-simplex is a solid
triangle, and a $3$-simplex is a solid tetrahedron.
\begin{definition}
Let $N$ and $n$ be positive integers. Let $v_0, \ldots, v_n \in \mb{R}^N$ be vectors
which span an $n$-simplex~$V$. Then the \textbf{faces} of~$V$ are the simplices in
$\mb{R}^N$ that are spanned by nonempty subsets of $\{ v_0, v_1, \ldots , v_n \}$.
\end{definition}
So, for example, the $2$-simplex in $\mathbb{R}^3$ shown in Figure~\ref{2simplexfigure}
has seven faces (including itself): three of dimension zero,
three of dimension one, and one of dimension two. In
general, an $n$-simplex has $\binom{n+1}{k+1}$ $k$-dimensional faces.
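To make the count concrete, take $n = 3$: a solid tetrahedron has $\binom{4}{1} = 4$
vertices, $\binom{4}{2} = 6$ edges, $\binom{4}{3} = 4$ two-dimensional faces, and
$\binom{4}{4} = 1$ three-dimensional face (itself), for
\begin{eqnarray}
4 + 6 + 4 + 1 = 15
\end{eqnarray}
faces in total.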
\begin{figure}[!b]
\centerline{\includegraphics{f2-5}}
\fcaption{A $2$-simplex.\label{2simplexfigure}}
\end{figure}
\begin{definition}
Let $N$ be a positive integer. A \textbf{simplicial complex} in $\mathbb{R}^N$ is a set
$S$ of simplices in $\mathbb{R}^N$ which satisfies the following two conditions.
\begin{enumerate}
\item If $V$ is a simplex that is contained in $S$, then all faces of $V$ are also
contained in $S$.
\item If $V$ and $W$ are simplices in $S$ such that $V \cap W \neq \emptyset$,
then $V \cap W$ is a face of both $V$ and $W$.
\end{enumerate}
\end{definition}
An example of a simplicial complex in $\mathbb{R}^2$ is shown in
Figure~\ref{2dimcomplexfig}.
\begin{figure}[!t]
\centerline{\includegraphics{f2-6}}
\fcaption{A simplicial complex in $\mathbb{R}^2$.\label{2dimcomplexfig}}
\end{figure}
Now, as mentioned earlier, there is another definition of simplicial complexes which
simply describes them as collections of finite sets. Following \cite{munkres}, we will
use the term ``abstract simplicial complex'' to distinguish this definition from the
previous one.
\begin{definition}
An \textbf{abstract simplicial complex} is a set $\Delta$ of finite
nonempty sets which satisfies the following condition:
\begin{itemize}
\item If a set $Q$ is an element of $\Delta$, then all nonempty subsets
of $Q$ must also be elements of $\Delta$.
\end{itemize}
\end{definition}
Given a simplicial complex $S$ in $\mathbb{R}^N$, one can obtain an abstract simplicial
complex as follows. Let $T$ be the set of all points in $\mathbb{R}^N$ which occur as
$0$-simplices in $S$. Let $\Delta_S$ be the set of all subsets $T' \subseteq T$ which
span simplices that are in $S$. Then, $\Delta_S$ is an abstract simplicial complex. (In
a sense, $\Delta_S$ records the ``gluing information'' for the simplicial complex~$S$.)
It is also easy to perform a reverse construction. Suppose that $\Delta$ is an abstract
simplicial complex. Let
\begin{eqnarray}
U = \bigcup_{Q \in \Delta} Q
\end{eqnarray}
be the union of all of the sets that are contained in $\Delta$. Let $N = \left| U
\right|$. Simply choose a set $V \subseteq \mathbb{R}^N$ consisting of $N$ linearly
independent vectors, and choose a one-to-one map $r \colon U \to V$. Every set in
$\Delta$ determines a simplex in $\mathbb{R}^N$ (via $r$), and the collection of all of
these simplices is a simplicial complex.
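As a small illustration of this construction (an example of our own): take
\begin{eqnarray}
\Delta = \left\{ \{ a \} , \{ b \} , \{ c \} , \{ a, b \} \right\}\!,
\end{eqnarray}
so that $U = \{ a, b, c \}$ and $N = 3$. Choosing $r ( a ) = \mf{e}_1$, $r ( b ) =
\mf{e}_2$, and $r ( c ) = \mf{e}_3$ (the standard basis vectors of $\mathbb{R}^3$)
produces a simplicial complex consisting of three $0$-simplices together with the line
segment joining $\mf{e}_1$ to $\mf{e}_2$.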
We define some terminology for abstract simplicial complexes.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex. Then,
\begin{itemize}
\item A \textbf{simplex in $\Delta$} is simply an element
of $\Delta$. The \textbf{dimension} of a simplex $Q \in \Delta$, denoted $\dim ( Q )$,
is the \hbox{quantity} \hbox{$(\left| Q \right| - 1)$}. An \textbf{$n$-simplex} in
$\Delta$ is an element of $\Delta$ of \hbox{dimension}~$n$.
\item If $Q, Q' \in \Delta$
and $Q' \subseteq Q$, then we say that $Q'$ is a \textbf{face} of $Q$.
\item The \textbf{vertex set of $\Delta$} is the set
\begin{eqnarray}
\bigcup_{Q \in \Delta} Q.
\end{eqnarray}
Elements of this set are called \textbf{vertices of $\Delta$}.
\end{itemize}
\end{definition}
Here is an initial example of how abstract simplicial complexes arise. Let $F$ be a
finite set. Let $\mathcal{P} ( F )$ denote the power set of $F$. Let $t \colon
\mathcal{P} ( F ) \to \{ 0, 1 \}$ be a function which is ``monotone increasing,'' in the
sense that any pair of sets $(A, B)$ such that $A \subseteq B \subseteq F$ satisfies $t (
A ) \leq t ( B) $. Then, the set
\begin{eqnarray}
\left\{ C \mid \emptyset \subset C \subseteq F \textnormal{ and } t ( C ) = 0 \right\}
\end{eqnarray}
is an abstract simplicial complex.
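For a tiny instance of this (our own): take $F = \{ 1, 2, 3 \}$ and let $t ( C ) = 1$
exactly when $3 \in C$. Then $t$ is monotone increasing, and the resulting abstract
simplicial complex is
\begin{eqnarray}
\left\{ \{ 1 \} , \{ 2 \} , \{ 1, 2 \} \right\}\!,
\end{eqnarray}
the collection of nonempty subsets of $F$ that avoid the element $3$.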
Thus, a monotone increasing function on a power set determines an abstract simplicial
complex. This connection is the basis for what we will discuss next.
\section{Monotone Graph Properties}\label{graphpropertysimplicial}
Now we will establish a relationship between monotone graph properties and simplicial
complexes. We also introduce a topological concept (``collapsibility'') which has an
important role in this relationship.
Let $V_0$ be a finite set. Using notation from \textit{\nameref{graphpropsection}}, let
$\mf{G} ( V_0 )$ denote the set of all graphs that have vertex set $V_0$. The elements of
$\mf{G} ( V_0 )$ are thus pairs of the form $(V_0 , E)$, where $E$ can be any subset of
the set
\begin{eqnarray}\label{thesetofalledges}
\left\{ \{ v, w \} \mid v , w \in V_0 , \ v \neq w \right\}.
\end{eqnarray}
Let $h \colon \mf{G} ( V_0 ) \to \{ 0, 1 \}$ be a monotone increasing function. Then
the \textbf{abstract simplicial complex associated with $h$}, denoted
$\Delta_h$, is the set of all nonempty subsets $E$ of set (\ref{thesetofalledges}) such
that
\begin{eqnarray}
h \left( ( V_0, E ) \right) = 0.
\end{eqnarray}
\begin{example}
Consider the set $\mathbf{G} \left( \{ 0, 1, 2, 3 \} \right)$ of graphs on the vertex set
$\{0, 1, 2, 3 \}$. Define functions
\begin{eqnarray}
h_1 \colon \mathbf{G} \left( \{ 0, 1, 2, 3 \} \right) \to \{ 0, 1 \}, \\
h_2 \colon \mathbf{G} \left( \{ 0, 1, 2, 3 \} \right) \to \{ 0, 1 \}
\end{eqnarray}
by
\begin{eqnarray}
h_1 (G) & = & \left\{ \begin{array}{@{}l@{\quad}l@{}}
1 & \textnormal{if $G$ has at least three edges,} \\
0 & \textnormal{otherwise} \end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}
h_2 (G) & = & \left\{ \begin{array}{@{}l@{\quad}l@{}}
1 & \textnormal{if vertex ``$2$'' has at least one edge in $G$, } \\
0 & \textnormal{otherwise.} \end{array}
\right.
\end{eqnarray}
Then the simplicial complexes for $h_1$ and $h_2$ are shown in
Figures~\ref{graphprop1fig} and \ref{graphprop2fig}.\footnote{Note: Ignore the apparent
intersections in the interior of the diagram for $h_1$. Imagine that the lines in the
diagram only intersect at the labeled points $\{ 0, 1 \}, \{ 0, 2 \} , \{ 1, 2 \} , \{ 0,
3 \} , \{ 2 , 3\}$, and $\{ 1, 3\}$. (To really draw this diagram accurately, we would
need three dimensions.)}
\end{example}
\begin{figure}
\centerline{\includegraphics{f2-7}}
\fcaption{The simplicial complex of $h_1$.\label{graphprop1fig}}
\end{figure}
\begin{figure}
\centerline{\includegraphics{f2-8}}
\fcaption{The simplicial complex of $h_2$.\label{graphprop2fig}}
\end{figure}
Thus we have a way of associating with any monotone-increasing graph
function
\begin{eqnarray}
h \colon \mathbf{G} ( V_0 ) \to \{ 0, 1 \}
\end{eqnarray}
an abstract simplicial complex $\Delta_h$. The simplices of $\Delta_h$ correspond to
graphs on $V_0$. The vertices of $\Delta_h$ correspond to \textit{edges} (not vertices!)
of graphs on $V_0$.
The association $[ h \mapsto \Delta_h ]$ is useful because it allows us to reinterpret
statements about graph functions in terms of simplicial complexes. What we will do now
is to prove a theorem (for later use) which exploits this association. The theorem
relates a condition on graph functions (``evasiveness,'' from
\textit{\nameref{graphpropsection}}) to a condition on simplicial complexes
(``collapsibility'').
We begin with some definitions.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex, and let \hbox{$\alpha \in \Delta$} be a
simplex. Then $\alpha$ is a \textbf{maximal} simplex if it is not contained in any other
simplex in $\Delta$.
\end{definition}
\begin{definition}
Let $\Delta$ be an abstract simplicial complex, and let \hbox{$\beta \in \Delta$} be a
simplex. Then $\beta$ is called a \textbf{free face} of $\Delta$ if it is
\hbox{nonmaximal} and it is contained in only one maximal simplex in $\Delta$. If $\beta$
is a free face and $\alpha$ is the unique maximal simplex that contains it, then we will
say that \textbf{$\beta$ is a free face of $\alpha$}.
\end{definition}
\begin{definition}
An \textbf{elementary collapse} of an abstract simplicial complex is the operation of
choosing a single free face from the complex and deleting the face along with all the
faces that contain it.
\end{definition}
Here is an example of an elementary collapse: if
\begin{eqnarray}
\Sigma_1 = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 0, 2 \} , \{ 1, 2 \} , \{
0, 1, 2 \} \right\}\!,
\end{eqnarray}
then $\{0, 1\}$ is a free face of $\{0, 1, 2\}$ in $\Sigma_1$. By deleting the simplices
$\{0,1\}$ and $\{0, 1, 2\}$, we obtain the complex
\begin{eqnarray}
\Sigma_2 = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 2 \} , \{ 1, 2 \} \right\}\!.
\end{eqnarray}
The complex $\Sigma_2$ is an elementary collapse of the complex $\Sigma_1$. See
\hbox{Figure}~\ref{elementarycollapsefig}.
\begin{figure}[!b]
\centerline{\includegraphics{f2-9}}
\fcaption{An elementary collapse.\label{elementarycollapsefig}}
\end{figure}
The previous example is an instance of what we will call a \textbf{primitive} elementary
collapse. An elementary collapse is primitive if the free face that is deleted has
dimension one less than the maximal simplex in which it is contained. In such a case,
the maximal simplex and the free face itself are the only two simplices that are deleted.
(Not all elementary collapses are primitive. An example of a nonprimitive elementary
collapse would be deleting all of the simplices $\{0\}$, $\{0,1\}$, $\{0,2\}$, and $\{0,
1, 2\}$ from $\Sigma_1$.)
\begin{definition}
Let $\Delta$ be an abstract simplicial complex. Then $\Delta$ is
\textbf{collapsible} if there exists a sequence of elementary collapses
\begin{eqnarray}
\Delta , \Delta_1, \Delta_2 , \Delta_3 , \ldots, \Delta_n
\end{eqnarray}
such that $\left| \Delta_n \right| = 1$.
\end{definition}
In other words, $\Delta$ is collapsible if there exists a sequence of elementary
collapses which reduce $\Delta$ to a single $0$-simplex.
The abstract simplicial complexes $\Sigma_1$ and $\Sigma_2$ defined above are both
collapsible. An example of an abstract simplicial complex that is not collapsible is the
following:
\begin{eqnarray}
\Sigma = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 0, 2 \} , \{ 1, 2 \}
\right\}\!.
\end{eqnarray}
(This simplicial complex has no free faces, and therefore cannot be collapsed.)
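As an aside, there is a quick numerical way to certify noncollapsibility in examples
like this one. An elementary collapse deletes exactly the simplices $\gamma$ with $\beta
\subseteq \gamma \subseteq \alpha$, and these can be paired off as $\gamma
\leftrightarrow \gamma \cup \{ v \}$ for a fixed vertex $v \in \alpha \smallsetminus
\beta$, the two members of each pair having adjacent dimensions. Hence the alternating
count
\begin{eqnarray}
\chi ( \Delta ) = \sum_{n \geq 0} ( -1 )^n \left( \textnormal{number of $n$-simplices
in } \Delta \right)
\end{eqnarray}
is unchanged by elementary collapses. A single $0$-simplex has $\chi = 1$, whereas $\chi
( \Sigma ) = 3 - 3 = 0$, so no sequence of collapses can reduce $\Sigma$ to a point.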
The following theorem asserts that the simplicial complexes associated with
certain monotone-increasing graph functions are collapsible. The theorem uses the
concept of ``evasiveness'' from \textit{\nameref{graphpropsection}}.
\begin{theorem}\label{collapsibilitytheorem}
Let $V_0$ be a finite set. Let
\begin{eqnarray}
h \colon \mathbf{G} \left( V_0 \right) \to \{ 0 , 1 \}
\end{eqnarray}
be a monotone-increasing function which is not evasive. If the complex $\Delta_h$ is not
empty, then it is collapsible.
\end{theorem}
\begin{proof}
The theorem has an elegant visual proof. Essentially, what we do is to construct a
decision-tree for $h$ and then read off a collapsing-procedure for $\Delta_h$ from the
decision-tree.\footnote{Thanks to Yaoyun Shi, who suggested the nice visualization that
appears in this proof.}
\begin{figure}
\centerline{\includegraphics{f2-10}}
\fcaption{A decision tree.\label{adecisiontreefig}}
\end{figure}
Let $n = |V_0|$. Since we have assumed that the function $h$ is not evasive, there must
exist a decision tree of depth smaller than \hbox{$n(n-1)/2$} which decides $h$. Let $T$
be such a tree. (See Figure~\ref{adecisiontreefig}.) By modifying $T$ if necessary, we
can produce another decision-tree $T'$ which decides $h$ and which satisfies the
following conditions. (See Figure~\ref{decisiontreeTprimefig}.)
\begin{itemize}
\item The paths in $T'$ do not have repeated edges. (That is,
no edge $\{ i , j \}$ appears more than once on any path in $T'$.)
\item Every path in $T'$ has length exactly $[n(n-1)/2 - 1]$.
\end{itemize}
\begin{figure}
\centerline{\includegraphics{f2-11}}
\fcaption{A decision tree of uniform height.\label{decisiontreeTprimefig}}
\end{figure}
We can define a natural total ordering on the leaves of tree $T'$. The ordering is
defined by asserting that for any parent-node in the tree, all leaves that can be reached
through the ``Y'' branch of the node are smaller than all the leaves that can be reached
through the ``N'' branch of the node. Since any two leaves share a common ancestor, this
rule gives a total ordering.
For any leaf of tree $T'$, there are exactly two graphs which would cause
the leaf to be reached during computation. Thus there is a
one-to-two correspondence between leaves of $T'$ and graphs
on $V_0$. An example is shown in Figure~\ref{decisiontreedepth2fig}. Note that each leaf
is labeled either with a ``$1$'' or a ``$0$'', depending on the value taken by the
function $h$ at the corresponding graphs. The simplicial complex $\Delta_h$ is composed
of the graphs that appear at the ``$0$''-leaves of the tree.
\begin{figure}[!t]
\centerline{\includegraphics{f2-12}}
\fcaption{A decision tree of height $2$ for graphs of size
$3$.\label{decisiontreedepth2fig}}
\end{figure}
The ordering of the leaves of $T'$ provides a recipe for collapsing $\Delta_h$. Simply
find the smallest (i.e., leftmost) ``$0$''-leaf that appears in tree $T'$. This leaf
corresponds to a pair of simplices $\gamma_1, \gamma_2 \in \Delta_h$ with $\gamma_1
\subset \gamma_2$. From the ordering of the leaves, we can deduce that $\gamma_1$ and
$\gamma_2$ are not contained in any simplices in $\Delta_h$ other than themselves. Thus
$\gamma_1$ is a free face of $\Delta_h$. We can therefore perform an elementary
collapse: let
\begin{eqnarray}
\Delta_1 = \Delta_h \smallsetminus \{ \gamma_1 , \gamma_2 \}.
\end{eqnarray}
Now find the second smallest $0$-leaf that appears in $T'$. This leaf corresponds to
another pair of simplices $\gamma'_1, \gamma'_2 \in \Delta_h$ which are not contained in
any other simplices in $\Delta_h$, except possibly $\gamma_1$ or $\gamma_2$. Perform
another elementary collapse:
\begin{eqnarray}
\Delta_2 = \Delta_1 \smallsetminus \{ \gamma'_1 , \gamma'_2 \}.
\end{eqnarray}
Continuing in this manner, we can obtain a sequence of elementary collapses
\begin{eqnarray}
\Delta_h, \Delta_1 , \Delta_2 , \Delta_3 , \ldots , \Delta_n
\end{eqnarray}
such that $\left| \Delta_n \right| = 1$. Therefore, $\Delta_h$ is collapsible.
\end{proof}
\section{Group Actions on Simplicial Complexes}\label{groupactionsection}
Now we define the notion of a \textbf{simplicial isomorphism} between abstract simplicial
complexes. This is a case of the more general notion of a simplicial map (see
\cite{munkres}).
\begin{definition}
Let $\Delta$ and $\Delta'$ be abstract simplicial complexes. A simplicial isomorphism
from $\Delta$ to $\Delta'$ is a bijective map
\begin{eqnarray}
f \colon \Delta \to \Delta'
\end{eqnarray}
which is such that for any $Q_1, Q_2 \in \Delta$,
\begin{eqnarray}
Q_1 \subseteq Q_2 \hskip0.2in \Longleftrightarrow \hskip0.2in
f ( Q_1 ) \subseteq f (Q_2 ).
\end{eqnarray}
\end{definition}
In other words, a simplicial isomorphism between two abstract complexes $\Delta$,
$\Delta'$ is a one-to-one matching $f$ between the simplices of~$\Delta$ and~$\Delta'$
which respects inclusion. We note the following assertions, which can be proven easily
from this definition:
\begin{itemize}
\item If $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then
$f$ respects dimension (i.e., if $Q \in \Delta$ is an $n$-simplex, then
$f(Q)$ must be an $n$-simplex).
\item If $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then there is an associated map of
vertex sets
\begin{eqnarray}
\hat{f} \colon \bigcup_{Q \in \Delta} Q \to
\bigcup_{Q' \in \Delta'} Q'
\end{eqnarray}
defined by $f ( \{ v \} ) = \{ \hat{f} ( v ) \}$. (Let us call this the \textbf{vertex
map} of $f$.) The map $\hat{f}$ uniquely determines $f$.
\end{itemize}
Let $\Delta$ be an abstract simplicial complex. A simplicial automorphism of $\Delta$ can
be specified either as an inclusion preserving permutation of the elements of $\Delta$,
or simply as a permutation
\begin{eqnarray}
b \colon \bigcup_{Q \in \Delta} Q \to \bigcup_{Q \in \Delta} Q
\end{eqnarray}
of the vertex set of $\Delta$ satisfying
\begin{eqnarray}
Q \in \Delta \Longrightarrow b ( Q ) \in \Delta.
\end{eqnarray}
When we speak of a \textbf{group action} $G \circlearrowleft \Delta$, we mean an action
of a group~$G$ on $\Delta$ by simplicial automorphisms.
In \textit{\nameref{fptchapter}} we will be concerned with determining the ``fixed
points'' of a group action on an abstract simplicial complex. As we will see, describing
this set requires some care. One could simply take the set $\Delta^G$ of $G$-invariant
simplices. But this set is not always a subcomplex of $\Delta$. Consider the
two-dimensional complex $\Sigma$ in Figure~\ref{groupactionfig}, which
consists of the sets $\{ 0, 1, 2 \}$ and $\{ 0, 2, 3 \}$ and all of their proper nonempty
subsets. If we let $f \colon \Sigma \to \Sigma$ be the simplicial automorphism which
transposes $\{ 1 \}$ and $\{3 \}$ and leaves $\{ 0 \}$ and $\{ 2 \}$ fixed, then
$\Sigma^f$ is a subcomplex of $\Sigma$. However, if we let $h \colon \Sigma \to \Sigma$
be the simplicial automorphism which transposes $\{ 0 \}$ and $\{ 2 \}$ and leaves $\{ 1
\}$ and $\{ 3 \}$ fixed, then $\Sigma^h$ is not a subcomplex of $\Sigma$, since it
contains the set $\{ 0, 2 \}$ but does not contain its subsets $\{ 0 \}$ and $\{ 2 \}$.
\begin{figure}[!b]
\centerline{\includegraphics[scale=1.03]{f2-13}}
\fcaption{The complex $\Sigma$.\label{groupactionfig}}
\end{figure}
It is helpful to look at group actions on abstract simplicial \hbox{complexes} in terms
of the geometric representation introduced in
\textit{\nameref{simplicialcomplexsection}}. Let $\mf{e}_0, \mf{e}_1, \ldots, \mf{e}_n$
be the standard basis vectors in $\mathbb{R}^{n+1}$. These vectors span an $n$-simplex
\begin{eqnarray}
\delta = \left\{ \sum_{i=0}^n c_i \mf{e}_i \mid 0 \leq c_i \leq 1 , \sum_{i=0}^n c_i = 1
\right\}\!.
\end{eqnarray}
If $f \colon \{ 0, 1, \ldots, n \} \to \{ 0, 1, \ldots, n \}$ is a permutation with
orbits $B_1, \ldots, B_m \subseteq \{ 0, 1, \ldots, n \}$, then $f$ induces a bijective
map on $\delta$. The invariant set $\delta^f$ consists of those linear combinations
$\sum c_i \mf{e}_i$ satisfying the condition that $c_i = c_j$ whenever $i$ and $j$ lie in
the same orbit. The set $\delta^f$ is an $(m-1)$-simplex which is spanned by the vectors
\begin{eqnarray}
\left\{ \frac{ \sum_{i \in B_k } \mf{e}_i }{\left| B_k \right|} \mid k = 1, 2, \ldots, m
\right\}\!.
\end{eqnarray}
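To make this concrete, take $n = 2$ and let $f$ be the permutation that transposes $0$
and $1$ and fixes $2$, so that the orbits are $B_1 = \{ 0, 1 \}$ and $B_2 = \{ 2 \}$.
The invariant set $\delta^f$ is the $1$-simplex spanned by
\begin{eqnarray}
\frac{ \mf{e}_0 + \mf{e}_1 }{2} \quad \textnormal{and} \quad \mf{e}_2,
\end{eqnarray}
that is, the segment joining the midpoint of the edge between $\mf{e}_0$ and $\mf{e}_1$
to the opposite vertex of $\delta$.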
This motivates the following definition.
\enlargethispage{4pt}
\begin{definition}
Let $\Delta$ be a finite abstract simplicial complex with vertex set $V$, and let $G
\circlearrowleft \Delta$ be a group action. Let $A_1, \ldots, A_m \subseteq V$ denote
the orbits of the action of $G$ on $V$. Then, let $\Delta^{[G]}$ denote the set of all
subsets $T \subseteq \{ A_1, \ldots , A_m \}$ satisfying
\begin{eqnarray}
\bigcup_{S \in T} S \in \Delta.
\end{eqnarray}
\end{definition}
\noindent It is easy to see that the set $\Delta^{[G]}$ is always a simplicial complex.
In the case of the complex $\Sigma$ from Figure~\ref{groupactionfig}, if we let $H$ be
the group generated by the automorphism $h$ which transposes $\{ 0 \}$ and $\{ 2 \}$, the
complex $\Sigma^{[H]}$ is one-dimensional and consists of
three zero-simplices and two one-simplices. (See
Figure~\ref{groupaction2fig}.) The vertices of $\Sigma^{[H]}$ are the orbits $\{ 1 \}$,
$\{ 3 \}$, and $\{ 0, 2 \}$.
\begin{figure}[!b]
\centerline{\includegraphics{f2-14}}
\fcaption{The complex $\Sigma^{[H]}$.\label{groupaction2fig}}
\vspace*{-3pt}
\end{figure}
This complex $\Delta^{[G]}$ will be important in \textit{\nameref{fptchapter}}.
\chapter{Chain Complexes}\label{chaincomplexchapter}
\vspace*{-12pt}
In this part of the text we will introduce some algebraic objects which are crucial for
measuring the behavior of simplicial complexes. The \hbox{central} objects of concern
are \textbf{chain complexes} and \textbf{homology groups}. We will define these objects
and develop some important tools for dealing with them.
\section{Definition of Chain Complexes}\label{chaincomplexsection}
A \textbf{complex of abelian groups} is a sequence of abelian groups
\begin{eqnarray}
Z_0 , Z_1, Z_2, \ldots
\end{eqnarray}
together with group homomorphisms $d_i \colon Z_i \to Z_{i-1}$ for each $i > 0$,
\hbox{satisfying} the condition
\begin{eqnarray}
d_{i-1} \circ d_i = 0
\end{eqnarray}
(or equivalently, $\im d_i \subseteq \ker d_{i-1}$). The groups $Z_i$ and the maps $d_i$
are often expressed in a diagram like so:
\begin{eqnarray}
\xymatrix{\cdots \ar[r] & Z_3 \ar[r]^{d_3}
& Z_2 \ar[r]^{d_2}
& Z_1 \ar[r]^{d_1}
& Z_0}
\end{eqnarray}
We abbreviate the complex as $Z_\bullet$.
A chain complex is a particular complex of abelian groups that is obtained from a
simplicial complex. The definition of chain complex that we will use requires first
choosing a total ordering of the vertices of the abstract simplicial complex in question.
If the vertices of the abstract simplicial complex happen to be elements of a totally
ordered set (such as the set of integers), then our choice is already made for us.
Otherwise, it is necessary before applying our definition to specify what ordering of
vertices we are using. The particular choice of ordering is not terribly important, but
it must be made consistently.
We introduce some new notation which takes this ordering issue into account.
\begin{notation}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are all elements of $V$. For any sequence of distinct elements $v_0, v_1,
\ldots, v_n \in V$ such that
\begin{eqnarray}
\left\{ v_0, \ldots , v_n \right\} \in \Delta
\end{eqnarray}
and
\begin{eqnarray}
v_0 < v_1 < v_2 < \ldots < v_n,
\end{eqnarray}
let
\begin{eqnarray}
[v_0, v_1, \ldots , v_n]
\end{eqnarray}
denote the $n$-simplex $\left\{ v_0, \ldots , v_n \right\}$ in $\Delta$.
\end{notation}
\noindent This notation allows us to cleanly handle the ordering on the vertices of an
abstract simplicial complex. Note that if we say, ``$[ v_0, v_1, \ldots , v_n ]$ is a
simplex in $\Delta$'', we are implying both that $\{ v_0 , \ldots , v_n \}$ is an
element of $\Delta$ \textit{and} that the sequence $v_0, v_1, \ldots, v_n$ is in
ascending order.
Now we will define the sequence of groups which make up a chain complex.
\begin{definition}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Let $n$ be a nonnegative integer. Then, the
\textbf{$n$th chain group of $\Delta$ over $\mathbb{R}$}, denoted $K_n ( \Delta, \mathbb{R}
)$, is the set of all formal $\mathbb{R}$-linear combinations of $n$-simplices in
$\Delta$.
\end{definition}
\begin{example}\label{triangleexample}
Let $\Sigma$ be the simplicial complex
\begin{eqnarray}
\Sigma = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \},
\{ 1, 2 \}, \{ 0, 2 \} \right\}.
\end{eqnarray}
Then, $\Sigma$ has three zero-simplices ($[0]$, $[1]$, and $[2]$) and three one-simplices
($[0,1]$, $[1,2]$, and $[0,2]$). The chain group $K_0 \left( \Sigma, \mathbb{R} \right)$
is a three-dimensional real vector space, and its elements can be expressed
in the form
\begin{eqnarray}
r_1 [ 0 ] + r_2 [ 1 ] + r_3 [2],
\end{eqnarray}
where $r_1$, $r_2$, and $r_3$ denote real numbers. The chain group $K_1 \left( \Sigma,
\mathbb{R} \right)$ is a three-dimensional real vector space, and its
elements can be expressed in the form
\begin{eqnarray}
r_4 [0, 1] + r_5 [1, 2] + r_6 [0, 2 ],
\end{eqnarray}
where $r_4$, $r_5$, and $r_6$ denote real numbers.
\end{example}
In general, if $\Delta$ is an abstract simplicial complex, then $K_n \left( \Delta,
\mathbb{R} \right)$ is a real vector space whose dimension is equal to the number of
$n$-simplices in $\Delta$. (If $\Delta$ has no $n$-simplices, then $K_n \left( \Delta
, \mathbb{R} \right)$ is a zero vector space.)
\begin{definition}\label{boundarymapdef}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Let $n$ be a positive integer. Then the
\textbf{boundary map} on the $n$th chain group of $\Delta$ (over $\mathbb{R}$) is the
unique $\mathbb{R}$-linear homomorphism
\begin{eqnarray}
d_n \colon K_n \left( \Delta , \mathbb{R} \right)
\to K_{n-1} \left( \Delta , \mathbb{R} \right)
\end{eqnarray}
defined by the equations
\begin{eqnarray}
\label{boundarymapdefeqn}
d_n \left( [ v_0, v_1, \ldots, v_n ] \right) =
\sum_{i = 0}^n (-1)^i [ v_0, v_1, \ldots, v_{i-1} , v_{i+1} ,
\ldots, v_n ]
\end{eqnarray}
(where $[v_0, v_1, \ldots, v_n]$ can be taken to be any $n$-simplex in $\Delta$).
\end{definition}
\begin{example}\label{solidtriangleexample}
Let
\begin{eqnarray}
\Sigma' = \left\{ \{ 0 \} , \{ 1 \}, \{ 2 \} , \{ 0, 1 \} , \{ 1, 2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray}
Then the boundary map
\begin{eqnarray}
d_2 \colon K_2 \left( \Sigma' , \mathbb{R} \right)
\to K_1 \left( \Sigma' , \mathbb{R} \right)
\end{eqnarray}
is defined by the equation
\begin{eqnarray}
d_2 \left( [ 0, 1, 2 ] \right) & = & [1, 2] - [0, 2] + [0, 1].
\end{eqnarray}
The boundary map
\begin{eqnarray}
d_1 \colon K_1 \left( \Sigma' , \mathbb{R} \right)
\to K_0 \left( \Sigma' , \mathbb{R} \right)
\end{eqnarray}
is defined by the equations
\begin{eqnarray}
d_1 \left( [ 0, 1 ] \right) & = & [0] - [1] \\
d_1 \left( [ 0, 2 ] \right) & = & [0] - [2] \\
d_1 \left( [ 1, 2 ] \right) & = & [1] - [2].
\end{eqnarray}
\end{example}
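Before proving in general that consecutive boundary maps compose to zero, it is worth
verifying this on the example above:
\begin{eqnarray}
d_1 ( d_2 ( [ 0, 1, 2 ] ) ) & = & d_1 ( [1, 2] ) - d_1 ( [0, 2] ) + d_1 ( [0, 1] ) \\
& = & ( [1] - [2] ) - ( [0] - [2] ) + ( [0] - [1] ) \\
& = & 0.
\end{eqnarray}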
Note that in equation~(\ref{boundarymapdefeqn}), the simplices that appear on the right
side are precisely the $(n-1)$-simplex faces of the simplex $[v_0, v_1, \ldots, v_n]$.
Geometrically, if $U \subseteq \mathbb{R}^N$ is an $n$-simplex, then the codimension-$1$
faces of $U$ make up the boundary of the set $U$. This gives us an idea of
why $d_n$ is called a ``boundary'' map.
\begin{proposition}\label{doublezeroprop}
Let $\Delta$ be an abstract simplicial complex whose vertices are totally ordered. Let
$n$ be an integer such that $n \geq 2$. Then the map
\begin{eqnarray}
d_{n-1} \circ d_{n} \colon K_n \left( \Delta, \mathbb{R} \right)
\to K_{n-2} \left( \Delta, \mathbb{R} \right)
\end{eqnarray}
is the zero map.
\end{proposition}
\begin{proof}
Let $Q = [v_0, v_1, \ldots , v_n]$ be an $n$-simplex in $\Delta$. Then,
applying Definition~\ref{boundarymapdef} twice, we find
\begin{eqnarray*}
&&d_{n-1} \left( d_n \left( Q \right) \right)\\
& &\qquad = \sum_{i=0}^n d_{n-1} \left( (-1)^i [ v_0, \ldots, v_{i-1} , v_{i+1} ,
\ldots , v_n ] \right) \\
&&\qquad = \sum_{i=0}^n \left( \sum_{j=0}^{i-1} (-1)^{i+j} [v_0, \ldots , v_{j-1} ,
v_{j+1} , \ldots ,
v_{i-1}, v_{i+1}, \ldots , v_n ] \right. \\
&&\qquad \quad +\! \left. \sum_{j=i+1}^n (-1)^{i+j-1} [v_0, \ldots , v_{i-1}, v_{i+1} ,
\ldots , v_{j-1}, v_{j+1}, \ldots, v_n ]\!\!\right)\!.
\end{eqnarray*}
All terms in this double-summation cancel, and thus we find that
\begin{eqnarray}
d_{n-1} ( d_n ( Q ) ) = 0.
\end{eqnarray}
Therefore by linearity, $d_{n-1} \circ d_n$ is the zero map.
\end{proof}
\noindent If $\Delta$ is an abstract simplicial complex with ordered vertices, then the
\textbf{chain complex of $\Delta$ over $\mathbb{R}$} is the set of $\mathbb{R}$-chain
groups of $\Delta$ together with their boundary maps:
\begin{eqnarray}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{R} \right)
\ar[r]^{d_2} & K_1 \left( \Delta , \mathbb{R} \right) \ar[r]^{d_1} &
K_0 \left( \Delta , \mathbb{R} \right) \ar[r]^{d_0} & 0}
\end{eqnarray}
For any $n$, the \textbf{$n$th homology group} of $\Delta$ is defined by
\begin{eqnarray}
H_n \left( \Delta , \mathbb{R} \right) = (\ker d_n)/(\im d_{n+1}).
\end{eqnarray}
As an example, consider the complex $\Sigma$ from Example~\ref{triangleexample}. The
kernel of $d_0$ is
the entire space $K_0 ( \Sigma , \mathbb{R} )$, while the image of $d_1$ is the set of
all linear combinations $r_1 [0 ] + r_2 [1 ] + r_3 [2]$ which are such that \hbox{$r_1 +
r_2 + r_3 = 0$}. The quotient $H_0 ( \Sigma , \mathbb{R} ) = \ker d_0 / \im d_1$ is a
one-dimensional real \hbox{vector} space. The homology group $H_1 ( \Sigma , \mathbb{R}
) = \ker d_1 / \{ 0 \}$ is also a one-dimensional real vector space, spanned by the
element $[0, 1] - [0, 2] + [1, 2]$. All other homology groups of $\Sigma$ are
zero-dimensional.
As we will see in \textit{\nameref{picturingsection}}, the homology groups are
interesting because they supply structural information about the complex $\Delta$. As an
initial example, the reader is invited to prove the \hbox{following} fact as an
exercise:\vadjust{\pagebreak} for any finite abstract simplicial \hbox{complex}~$\Delta$,
the dimension of $H_0 ( \Delta , \mathbb{R} )$ is equal to the number of connected
components of $\Delta$.
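(As a data point for this exercise: the complex $\left\{ \{ 0 \} , \{ 1 \} , \{ 2 \} ,
\{ 3 \} , \{ 0, 1 \} , \{ 2, 3 \} \right\}$, two disjoint segments, has $\ker d_0$
four-dimensional and $\im d_1$ two-dimensional, spanned by $[0] - [1]$ and $[2] - [3]$;
hence $H_0$ is two-dimensional, matching the two connected components.)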
Although we defined chain groups using $\mathbb{R}$ (the set of real numbers), it is
possible to define them using other algebraic structures in place of $\mathbb{R}$. Here
is a definition for chain groups over $\mathbb{F}_p$. Proposition~\ref{doublezeroprop}
and the definition of homology groups carry over immediately to this case.
\begin{definition}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Then $K_n \left( \Delta , \mathbb{F}_p \right)$
denotes the vector space of formal $\mathbb{F}_p$-linear combinations of $n$-simplices
in $\Delta$. For each $n \geq 1$, the map
\begin{eqnarray}
d_n \colon K_n \left( \Delta , \mathbb{F}_p \right) \to
K_{n-1} \left( \Delta , \mathbb{F}_p \right)
\end{eqnarray}
is the unique $\mathbb{F}_p$-linear map defined by
\begin{eqnarray}
d_n \left( [ v_0, v_1, \ldots, v_n ] \right) = \sum_{i = 0}^n (-1)^i [ v_0, v_1, \ldots,
v_{i-1} , v_{i+1} , \ldots , v_n].\\[-13pt]\nn
\end{eqnarray}
\end{definition}
For the rest of this exposition we will be focusing on homology groups with coefficients
in $\mathbb{F}_p$, since these will eventually be the basis for our proofs of fixed-point
theorems. Much of what we will do in this text with $\mathbb{F}_p$-homology could be done
just as well with $\mathbb{R}$-homology, but there will be a key result
(Proposition~\ref{acyclicitypreservation}) which depends critically on the fact that we
are using coefficients in $\mathbb{F}_p$.
\section{Chain Complexes and Simplicial Isomorphisms}\label{chainmapsection}
Suppose that
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & I_{n+1} \ar[r]^{d_{n+1}}
& I_n \ar[r]^{d_n} & I_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
and
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & J_{n+1} \ar[r]^{d_{n+1}}
& J_n \ar[r]^{d_n} & J_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
are two complexes of abelian groups. A \textbf{map of complexes} $F \colon I_\bullet \to
J_\bullet$ is a family of homomorphisms
\begin{eqnarray}
F_n \colon I_n \to J_n
\end{eqnarray}
such that
\begin{eqnarray}
d_n \circ F_n = F_{n-1} \circ d_n.
\end{eqnarray}
Note that, as a consequence of this rule, the map $F_n$ must send the kernel of $d_n^I$
to the kernel of $d_n^J$. Moreover, the family $F$ induces maps on homology groups
\begin{eqnarray}
H_n ( I_\bullet ) \to H_n ( J_\bullet )
\end{eqnarray}
for every $n$.
Let $p$ be a prime. We are going to define the maps of chain complexes that are
associated with simplicial isomorphisms. Some care must be taken in this
definition. Let $f \colon \Delta \to \Delta'$ be a simplicial isomorphism. An obvious way
to map $K_n \left( \Delta , \mathbb{F}_p \right)$ to $K_n \left( \Delta' , \mathbb{F}_p
\right)$ would be to naively apply $f$ like so: $\sum c_i Q_i \mapsto \sum c_i f ( Q_i
)$. However, this definition does not necessarily give a map of complexes, because
it is not necessarily compatible with the maps $d_i$. The reader will recall that the
definition of $d_i$ depends on the ordering of the vertices of the simplicial complex in
question. The map $f$ may not be compatible with the ordering of the vertices of $\Delta$
and $\Delta'$. In our definition of the maps $K_n \left( \Delta , \mathbb{F}_p \right)
\to K_n \left( \Delta' , \mathbb{F}_p \right)$, we need to take this ordering issue into
account.
Note that for any bijection $g \colon S_1 \to S_2$ between two totally ordered sets $S_1$
and $S_2$, there is a unique permutation $\alpha \colon S_2 \to S_2$ which makes the
composition $\alpha \circ g$ an order-preserving map. Let us say that the \textbf{sign}
of the map $g$ is the sign of its associated permutation $\alpha$.\footnote{See
\cite{lang}, pp.~30--31 for a definition of the sign of a permutation. Briefly: if
$\sigma : X \to X$ is a permutation of a finite set $X$, then we can write $\sigma =
\tau_1 \circ \tau_2 \circ \ldots \circ \tau_m$ for some $m$, where each of the maps
$\tau_i \colon X \to X$ is a permutation which transposes two elements. The sign of
$\sigma$ is $(-1)^m$.}
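To illustrate the definition with a toy case of our own: let $S_1 = \{ 0, 1 \}$ and
$S_2 = \{ 0, 2 \}$, and let $g$ be the bijection with $g ( 0 ) = 2$ and $g ( 1 ) = 0$.
The unique permutation $\alpha \colon S_2 \to S_2$ that makes $\alpha \circ g$
order-preserving is the transposition of $0$ and $2$, and therefore $\sign ( g ) = -1$.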
\begin{definition}\label{chainmapdef}
Suppose that $\Delta$ and $\Delta'$ are abstract simplicial complexes whose vertex
sets are totally ordered. Suppose that $f \colon \Delta \to \Delta'$ is a simplicial
isomorphism and that $\hat{f}$ is its vertex map. Let $p$ be a prime, and let $n$ be a
nonnegative integer. The \textbf{$n$th chain map associated with $f$} (over
$\mathbb{F}_p$) is the unique $\mathbb{F}_p$-linear map
\begin{eqnarray}
F_n \colon K_n ( \Delta , \mathbb{F}_p ) \to K_n ( \Delta' , \mathbb{F}_p )
\end{eqnarray}
given by
\begin{eqnarray}
Q & \mapsto & \big( \sign (\hat{f}_{\mid Q}) \big) f ( Q )
\end{eqnarray}
for all $Q \in \Delta$. Here, $( \sign ( \hat{f}_{\mid Q} ) )$ denotes the sign of the
bijection $( \hat{f} )_{\mid Q} \colon Q \to f ( Q )$.
\end{definition}
Let $\Sigma'$ be the complex from Example~\ref{solidtriangleexample}, and let $g \colon
\Sigma' \to \Sigma'$ be the automorphism given by the permutation $[0 \mapsto 1, 1
\mapsto 2, 2 \mapsto 0]$. Then the chain maps $G_n$ associated with $g$ are
as shown below.
\begin{eqnarray*}
\begin{array}{ccccc}
G_0 ( [0] ) = [1] && G_1 ([0,1]) = [1,2] && \\
G_0 ( [1] ) = [2] && G_1 ([1,2]) = -[0,2] && G_2 ( [0, 1, 2] ) = [0, 1, 2] \\
G_0 ( [2] ) = [0] & & G_1 ( [0,2]) = -[0,1] &&
\end{array}
\end{eqnarray*}
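One can check directly on this example that the maps $G_n$ commute with the boundary
maps, as the next proposition asserts in general. For instance,
\begin{eqnarray}
d_1 ( G_1 ( [0,1] ) ) & = & d_1 ( [1,2] ) \ = \ [1] - [2], \\
G_0 ( d_1 ( [0,1] ) ) & = & G_0 ( [0] - [1] ) \ = \ [1] - [2].
\end{eqnarray}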
\begin{proposition}
The chain maps $F_n$ of Definition~\ref{chainmapdef} determine a map of complexes,
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left ( \Delta' ,
\mathbb{F}_p \right)\!.
\end{eqnarray}
\end{proposition}
\begin{proof}
It suffices to show that for any $n > 0$, and any $n$-simplex $Q \in \Delta$,
\begin{eqnarray}
d_n ( F_n ( Q ) ) = F_{n-1} ( d_n ( Q ) ).
\end{eqnarray}
Let $n$ be a positive integer, and let $Q \in \Delta$ be an $n$-simplex. Write the
simplices $Q$ and $f(Q)$ as
\begin{eqnarray}
Q = [v_0, v_1, \ldots , v_n ], \qquad f(Q) = [w_0, w_1, \ldots, w_n ].
\end{eqnarray}
(Here, as usual, we assume that the sequences $v_0, \ldots , v_n$ and $w_0, \ldots , w_n$
are in ascending order.) The elements $d_n ( F_n ( Q ) )$ and $F_{n-1} ( d_n ( Q ) )$ are
linear combinations of faces of the simplex $[w_0, \ldots , w_n]$. We need simply to
show that the coefficients in the expressions for $d_n ( F_n ( Q ) )$ and $F_{n-1} ( d_n
( Q ) )$ are the same.
Suppose that the face
\begin{eqnarray}
[v_0, v_1, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n]
\end{eqnarray}
of $Q$ maps to the face
\begin{eqnarray}
[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]
\end{eqnarray}
under $f$. Then, by applying the definitions of $d_n$ and $F_n$ we find that the
coefficient of $[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in $d_n ( F_n ( Q
) )$ is
\begin{eqnarray}\label{coeff1}
(-1)^j \big( \sign \hat{f}_{\mid Q} \big),
\end{eqnarray}
whereas the coefficient of $[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in
$F_{n-1} ( d_n ( Q ) )$~is
\begin{eqnarray}\label{coeff2}
\big(\sign \hat{f}_{\mid \{ v_0, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n \}} \big)
(-1)^i.
\end{eqnarray}
It is a fact (easily proven from the definition of sign) that
\begin{eqnarray}
\big(\sign \hat{f}_{\mid \{ v_0, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n \}} \big) =
(-1)^{j-i} \big(\sign \hat{f}_{\mid Q} \big).
\end{eqnarray}
Therefore quantities~(\ref{coeff1}) and (\ref{coeff2}) are equal. So the coefficients of
$[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in $d_n ( F_n ( Q ) )$ and
$F_{n-1} ( d_n ( Q ) )$ are the same. This reasoning can be repeated to show that all of
the coefficients in $d_n ( F_n ( Q ) )$ and $F_{n-1} ( d_n ( Q ) )$ are the same.
\end{proof}
We have proven that if $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then
there is an induced chain map (in fact, an isomorphism),
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta' ,
\mathbb{F}_p \right)\!.
\end{eqnarray}
This chain map induces vector space isomorphisms
\begin{eqnarray}
H_n \left( \Delta , \mathbb{F}_p \right)
\to H_n \left( \Delta' , \mathbb{F}_p \right)
\end{eqnarray}
for every $n \geq 0$. (We may denote these maps using the same symbol,~$F$.)
\section{Picturing Homology Groups}\label{picturingsection}
Before continuing any further with our technical discussion of chain complexes, let us
take a moment to explore some geometric interpretations for the concepts introduced so
far. For convenience, we will assume in the following discussion that $p$ is a prime
greater than or equal to~$5$.
Consider the two-dimensional simplicial complex $\Gamma$ shown in
Figure~\ref{trianglefig1}. If $[v_0, v_1]$ is a $1$-simplex (so that $v_0 < v_1$ in the
chosen vertex ordering), then let us represent the chain
element $[v_0, v_1 ] \in K_1 ( \Gamma , \mathbb{F}_p )$ by drawing an arrow from $v_0$ to
$v_1$, and let us represent the negation $- [v_0, v_1 ] \in K_1 ( \Gamma , \mathbb{F}_p
)$ by drawing an arrow from $v_1$ to $v_0$. We can likewise use double-headed arrows to
represent\vadjust{\pagebreak} the elements $2 [v_0, v_1]$ and $-2 [v_0, v_1]$. Sums of
such elements can be represented as collections of arrows. In this way we can draw
some of the elements of $K_1 ( \Gamma , \mathbb{F}_p )$ as diagrams like the one in
Figure~\ref{trianglefig1}.
\begin{figure}[!b]
\centerline{\includegraphics{f3-1}}
\fcaption{A complex $\Gamma$ and a chain element $a \in K_1 ( \Gamma , \mathbb{F}_p
)$.\label{trianglefig1}}
\end{figure}
\begin{figure}[!b]
\centerline{\includegraphics{f3-2}}
\fcaption{Two elements $x,y \in K_1 ( \Gamma , \mathbb{F}_p )$ which are contained in the
same coset of $H_1 ( \Gamma , \mathbb{F}_p )$.\label{trianglefig3}}
\end{figure}
An element $c \in K_1 ( \Gamma , \mathbb{F}_p )$ that is represented in this way will
satisfy $dc = 0$ if and only if for every vertex $v$ of $\Gamma$, the total multiplicity
of incoming arrows at $v$ is the same, mod $p$, as the total multiplicity of the outgoing
arrows at $v$. The element $a$ represented in Figure~\ref{trianglefig1} is such a case.
Each element $c \in K_1 ( \Gamma , \mathbb{F}_p )$ satisfying $dc = 0$ represents an
element of the quotient $H_1 ( \Gamma , \mathbb{F}_p ) = \ker d_1 / \im d_2$, and thus
we can use this geometric interpretation to understand $H_1 ( \Gamma , \mathbb{F}_p )$.
Note that, although there are many diagrams that we could draw which satisfy the
balanced-multiplicity condition mentioned above, it will often occur that two diagrams
represent the same element of $H_1 ( \Gamma , \mathbb{F}_p )$. Figure~\ref{trianglefig3}
gives an example. In fact, any two elements $u, v \in \ker d_1$ will lie in the same
coset of $H_1 ( \Gamma , \mathbb{F}_p )$ if and only if the amount of flow around the
missing center triangle of $\Gamma$ is the same mod $p$ for both $u$ and $v$. This makes
it easy to express the structure of $H_1 ( \Gamma , \mathbb{F}_p )$: if we let $\alpha
\in H_1 ( \Gamma , \mathbb{F}_p )$ be the coset containing the element $y$ from
Figure~\ref{trianglefig3}, then $H_1 ( \Gamma , \mathbb{F}_p )$ is a one-dimensional
$\mathbb{F}_p$-vector space that is spanned by $\alpha$.
Meanwhile, it is easy to see that $\ker d_2 = \{ 0 \}$ and hence \hbox{$H_2 ( \Gamma ,
\mathbb{F}_p ) = \{ 0 \}$}. We thus have the following:
\begin{eqnarray}
H_0 ( \Gamma , \mathbb{F}_p ) & \cong & \mathbb{F}_p \\
H_1 ( \Gamma , \mathbb{F}_p ) & \cong & \mathbb{F}_p \\
H_i ( \Gamma , \mathbb{F}_p ) & \cong & \{ 0 \} \hskip0.2in
\textnormal{for all } i \geq 2.
\end{eqnarray}
This kind of reasoning can be used to describe the homology groups of any finite
simplicial complex $\Pi$ that is contained in $\mathbb{R}^2$. The dimension of $H_1 ( \Pi
, \mathbb{F}_p )$ for such a complex is always equal to the number of holes enclosed by
$\Pi$.
\begin{figure}[!b]
\centerline{\includegraphics[scale=0.99]{f3-3}}
\fcaption{The complex $\Gamma'$ and the effect of three different
automorphisms.\label{2trianglefig}}
\vspace*{-3pt}
\end{figure}
Such visualizations are also useful for understanding the behavior of homology groups
under automorphisms. Figure~\ref{2trianglefig} shows an example of a simplicial complex
$\Gamma'$ for which $H_1 ( \Gamma' , \mathbb{F}_p ) \cong \mathbb{F}_p^2$. Any
automorphism of $\Gamma'$ induces a linear automorphism of $H_1 ( \Gamma' , \mathbb{F}_p
)$. The figure describes a few such automorphisms in terms of two chosen basis elements
$\lambda , \beta \in H_1 ( \Gamma' , \mathbb{F}_p )$.
\begin{figure}[!t]
\centerline{\includegraphics[scale=0.95]{f3-4}}
\fcaption{The complex $\Lambda$ and the effect of three different
automorphisms.\label{torusfigure}}
\vspace*{-3pt}
\end{figure}
To observe nontrivial automorphisms of higher homology groups, we need to consider
simplicial complexes in three-dimensional space. Figure~\ref{torusfigure} shows a
simplicial complex $\Lambda$ in $\mathbb{R}^3$ which has the shape of a torus. Let $z
\in K_2 ( \Lambda , \mathbb{F}_p )$ be a linear combination of\vadjust{\pagebreak} all
the $2$-simplices in $\Lambda$ in which the coefficient of the simplex $[v_0, v_1, v_2]$
in $z$ is $(+1)$ if the vertices $v_0$, $v_1$, and $v_2$ appear in clockwise order on the
surface of the torus, and $(-1)$ if they appear in counterclockwise order. When
Definition~\ref{boundarymapdef} is applied to compute $d z$, all terms cancel and we find
that $d z = 0$. The element $z$ determines a coset $\delta \in H_2 ( \Lambda ,
\mathbb{F}_p)$, which spans the one-dimensional space $H_2 ( \Lambda, \mathbb{F}_p )$.
\enlargethispage{12pt}
Figure~\ref{torusfigure} gives a basis $\{ \sigma , \rho \}$ for the
two-dimensional space $H_1 ( \Lambda , \mathbb{F}_p )$, and explains the
effect of various automorphisms on $H_1 ( \Lambda , \mathbb{F}_p )$ and $H_2 ( \Lambda ,
\mathbb{F}_p )$.
\section{Some Homological Algebra}
We resume developing concepts from an algebraic standpoint. It is helpful now to take
time to study homology groups in a more abstract setting, without reference to simplicial
complexes. For any complex of abelian groups
\begin{equation}
\xymatrix{\ldots
\ar[r] & K_{n+1} \ar[r]^{d_{n+1}}
& K_n \ar[r]^{d_n} & K_{n-1} \ar[r]^{d_{n-1}} & \ldots},
\end{equation}
the $n$th homology group of $K_\bullet$ is defined by
\begin{equation}
H_n ( K_\bullet , \mathbb{F}_p ) =
( \ker d_n ) / ( \im d_{n+1} ).
\end{equation}
In this part of the text we will state a result (Proposition~\ref{snakelemmaprop}) which
allows us to relate the homology groups\vadjust{\pagebreak} of $K_\bullet$ to the
homology groups of smaller complexes. This will be an essential building block in later
proofs.
Let us say that a sequence of maps of abelian groups
\begin{eqnarray}
\xymatrix{ \ldots \ar[r] & A_{n+1} \ar[r]^{f_{n+1}} &
A_n \ar[r]^{f_n} &
A_{n-1} \ar[r]^{f_{n-1}} & \ldots }
\end{eqnarray}
is \textbf{exact} if it satisfies the condition $\ker f_n = \im f_{n+1}$ for every $n$.
Thus, a sequence of the form
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & P \ar[r]^f & Q \ar[r]^g & R \ar[r] & 0}
\end{eqnarray}
is\enlargethispage{12pt} exact if and only if $f$ is injective, $g$ is surjective, and
$\im f = \ker g$. (Note that this makes $R$ isomorphic to the quotient $Q / f ( P )$.)
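For concreteness, here is a minimal example of a short exact sequence (a standard one,
included purely as an illustration): the sequence
\begin{eqnarray*}
\xymatrix{ 0 \ar[r] & \mathbb{F}_p \ar[r]^{f} & \mathbb{F}_p^2 \ar[r]^{g} & \mathbb{F}_p \ar[r] & 0 },
\end{eqnarray*}
with $f ( x ) = (x, 0)$ and $g ( x, y ) = y$, is exact, since $f$ is injective, $g$ is
surjective, and $\im f = \ker g = \left\{ (x, 0) \mid x \in \mathbb{F}_p \right\}$.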
If a sequence of maps of complexes
\begin{eqnarray}\label{seqofcomplexes}
\xymatrix{ 0 \ar[r] & X_\bullet \ar[r]^F & Y_\bullet \ar[r]^G & Z_\bullet \ar[r] & 0}
\end{eqnarray}
is such that
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & X_n \ar[r]^{F_n} &
Y_n \ar[r]^{G_n} & Z_n \ar[r] & 0 }
\end{eqnarray}
is an exact sequence for every $n$, then we say that~(\ref{seqofcomplexes}) is an
exact sequence of complexes.
We claim that if
\begin{eqnarray}
\xymatrix{
& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\
0 \ar[r] & X_{n+1} \ar[r]^F \ar[d]^d & Y_{n+1} \ar[r]^G \ar[d]^d & Z_{n+1} \ar[r] \ar[d]^d & 0 \\
0 \ar[r] & X_n \ar[r]^F \ar[d] & Y_n \ar[r]^G \ar[d] & Z_n \ar[r] \ar[d] & 0 \\
& \vdots & \vdots & \vdots & \\
}
\end{eqnarray}
\noindent is an exact sequence of complexes, then
\begin{eqnarray}
H_n ( X_\bullet ) \to H_n ( Y_\bullet ) \to H_n ( Z_\bullet )
\end{eqnarray}
is an exact sequence. This can be seen through a ``diagram-chasing'' argument. Since
$G_n \circ F_n = 0$ for every $n$, it is clear that
\begin{eqnarray}
\im \left[ H_n ( X_\bullet ) \to H_n ( Y_\bullet ) \right] \subseteq
\ker \left[ H_n ( Y_\bullet ) \to H_n ( Z_\bullet ) \right],
\end{eqnarray}
and so we only need to prove the reverse inclusion. Suppose that $(y + \im d_n^Y)$ is a
coset in $H_n ( Y_\bullet )$ that is killed by the map to $H_n ( Z_\bullet )$. Then $G (
y ) \in \im d_{n+1}^Z$, so we can find $z' \in Z_{n+1}$ such that $dz' = G( y )$.
Choosing an arbitrary element $y' \in G^{-1} \{ z' \}$, we have $y - dy' \in \ker G$, and
therefore by exactness, $F (x) = y - dy'$ for some $x$. Since $dy = 0$ and $d(dy') = 0$,
we have $F ( dx ) = d F (x ) = 0$ and therefore $dx = 0$. Thus $(x + \im d_n^X)$ is a
coset in $H_n ( X_\bullet )$ which maps to $y + \im d_n^Y$, and the claim is proved.
While it might be tempting to assume that the maps $H_n ( X_\bullet ) \to H_n ( Y_\bullet
)$ are injective and the maps $H_n ( Y_\bullet ) \to H_n ( Z_\bullet )$ are surjective,
this is not generally true. The homology groups of $X_\bullet$, $Y_\bullet$, and
$Z_\bullet$ have a more complex relationship which is expressed by the following
proposition.
\begin{proposition}\label{snakelemmaprop}
Let $X_\bullet$, $Y_\bullet$, and $Z_\bullet$ be complexes of abelian groups,
and let $F \colon X_\bullet \to Y_\bullet$ and $G \colon Y_\bullet \to Z_\bullet$ be
maps of complexes such that for any $n$, the sequence
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & X_n \ar[r]^{F_n} &
Y_n \ar[r]^{G_n} & Z_n \ar[r] & 0 }
\end{eqnarray}
is an exact sequence. Then, there exist homomorphisms
\begin{eqnarray}
\gamma_n \colon H_n \left( Z_\bullet \right) \to
H_{n-1} \left( X_\bullet \right)
\end{eqnarray}
\noindent for every $n$ which are such that the sequence
\begin{eqnarray}\label{snakesequence}
\xymatrix{ \ldots \ar[r] & H_2 ( Y_\bullet ) \ar[r] &
H_2 ( Z_\bullet ) \ar[lldd]^{\gamma_2} \\
\\
H_1 ( X_\bullet ) \ar[r] & H_1 ( Y_\bullet )
\ar[r] & H_1 ( Z_\bullet) \ar[lldd]^{\gamma_1} \\
\\
H_{0} ( X_\bullet ) \ar[r] & H_{0} ( Y_\bullet)
\ar[r] & H_0 ( Z_\bullet ) \ar[r] & 0 }
\end{eqnarray}
is exact.
\end{proposition}
\removelastskip\pagebreak
Since the proof of this proposition is fairly technical, we have placed it in
the Appendix. (See Proposition~\ref{realsnakelemma}.) The maps $\gamma_n$ can be
briefly described like so: let $\overline{G}_n \colon Z_n \to Y_n$ be a function (not
necessarily a homomorphism) which is such that $G_n \circ \overline{G}_n$ is the identity
map, and let $\overline{F}_n \colon F ( X_n ) \to X_n$ be the inverse of $F$. Then, for
any coset
\begin{eqnarray}
z + \im d_{n+1}^Z \in H_n ( Z_\bullet),
\end{eqnarray}
the image under $\gamma_n \colon H_n ( Z_\bullet ) \to H_{n-1} ( X_\bullet )$ is given by
\begin{eqnarray}
\overline{F}_{n-1} ( d ( \overline{G}_n ( z ))) + \im d_n^X \in H_{n-1} ( X_\bullet ).
\end{eqnarray}
As we will see, the above proposition is very useful because it allows us to draw
conclusions about the homology groups of a complex $Y_\bullet$ based on the homology
groups of its subcomplexes and quotient complexes.
We close with a few additional constructions. Note that for any map of complexes $F
\colon I_\bullet \to J_\bullet$, there exist the complexes
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & \im F_{n+1} \ar[r]^{d_{n+1}}
& \im F_n \ar[r]^{d_n} & \im F_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
and
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & \ker F_{n+1} \ar[r]^{d_{n+1}}
& \ker F_n \ar[r]^{d_n} & \ker F_{n-1} \ar[r]^{d_{n-1}} & \ldots} .
\end{eqnarray}
We write these complexes as $(\im F)$ and $(\ker F)$, respectively.
Note that these complexes fit into an exact sequence
\begin{eqnarray}
0 \to \ker F \to I_\bullet \to \im F \to 0.
\end{eqnarray}
The \textbf{direct sum} of $I_\bullet$ and $J_\bullet$, written
$I_\bullet \oplus J_\bullet$, is the complex
\begin{equation}
\xymatrix{ \ldots \ar[r] & I_{n+1} \oplus J_{n+1} \ar[r] & I_n \oplus J_n \ar[r] &
I_{n-1} \oplus J_{n-1} \ar[r] & \ldots },
\end{equation}
where the maps in this complex are simply the maps induced by $d_k \colon I_k \to
I_{k-1}$ and $d_k \colon J_k \to J_{k-1}$. Note that the homology groups of this complex
are simply $H_n ( I_\bullet ) \oplus H_n ( J_\bullet )$.
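This last claim can be verified directly (a one-line computation): since the boundary
maps act coordinate-wise, kernels and images split, and
\begin{eqnarray*}
H_n ( I_\bullet \oplus J_\bullet ) =
\frac{\ker d_n^I \oplus \ker d_n^J}{\im d_{n+1}^I \oplus \im d_{n+1}^J}
\cong H_n ( I_\bullet ) \oplus H_n ( J_\bullet ).
\end{eqnarray*}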
\section{Collapsibility Implies Acyclicity}\label{acyclicitysection}
Now we will offer our first application of Proposition~\ref{snakelemmaprop}. In
\textit{\nameref{graphpropsection}}, we defined the notion of {\it collapsibility} for
simplicial \hbox{complexes}. In this part of the text we will see how the condition of
collapsibility for a simplicial complex $\Delta$ implies that the homology groups of
$\Delta$ are trivial.
We begin with a useful definition.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex whose vertex-set is totally ordered. Let
$p$ be a prime, and let $n$ be a nonnegative integer. Define the map
\begin{eqnarray}
s \colon K_0 \left( \Delta , \mathbb{F}_p \right) \to \mathbb{F}_p
\end{eqnarray}
by asserting that $s ( \gamma )$ is the sum of the coefficients of $\gamma$. That is, if
\begin{eqnarray}
\gamma & = & c_1 Q_1 + c_2 Q_2 + \ldots + c_r Q_r,
\end{eqnarray}
with $c_i \in \mathbb{F}_p$ and $Q_i \in \Delta$, then
\begin{eqnarray}
s( \gamma ) = c_1 + c_2 + \ldots + c_r \in \mathbb{F}_p.
\end{eqnarray}
The \textbf{reduced $n$th homology group of $\Delta$} over $\mathbb{F}_p$, denoted
$\widetilde{H}_n \left( \Delta , \mathbb{F}_p \right)$, is the $n$th homology group of
the complex
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1}& K_0 \left( \Delta , \mathbb{F}_p
\right) \ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
\end{definition}
The reduced homology groups $\left\{ \widetilde{H}_n \left( \Delta , \mathbb{F}_p \right)
\right\}$ of an abstract simplicial complex $\Delta$ are the same as the ordinary
homology groups $\left\{ H_n \left( \Delta , \mathbb{F}_p \right) \right\}$, except that
the dimension of $\widetilde{H}_0 \left( \Delta , \mathbb{F}_p \right)$ is one less than
the dimension of $H_0 \left( \Delta , \mathbb{F}_p \right)$. Note that the reduced
homology groups of the trivial complex $\{ \{ 0 \} \}$ are all zero.
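To verify this last statement (a short check of our own), note that for the trivial
complex $\Delta = \{ \{ 0 \} \}$ the augmented complex is
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & 0 \ar[r] & K_0 \left( \Delta , \mathbb{F}_p \right)
\ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0 },
\end{eqnarray*}
where $K_0 \left( \Delta , \mathbb{F}_p \right) \cong \mathbb{F}_p$ and $s$ is an
isomorphism. Hence $\ker s = \im d_1 = \{ 0 \}$, so $\widetilde{H}_0$ vanishes, and all
higher groups vanish because $K_n \left( \Delta , \mathbb{F}_p \right) = \{ 0 \}$ for
$n \geq 1$.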
\begin{definition}\label{acyclicitydefinition}
Let $\Delta$ be an abstract simplicial complex whose vertex-set is totally ordered.
Then, $\Delta$ is \textbf{$\mathbb{F}_p$-acyclic} if
\begin{eqnarray}
\dim_{\mathbb{F}_p } \widetilde{H}_n \left( \Delta , \mathbb{F}_p \right) = 0
\end{eqnarray}
for all nonnegative integers $n$.
\end{definition}
\removelastskip\pagebreak
Stated differently, a complex is $\mathbb{F}_p$-acyclic if its $\mathbb{F}_p$-homology is
the same as that of a single point. An example of an $\mathbb{F}_p$-acyclic simplicial
complex is this one, from Example~\ref{solidtriangleexample}.
\begin{eqnarray*}
\Sigma' = \left\{ \{ 0 \} , \{ 1 \}, \{ 2 \} , \{ 0, 1 \} , \{ 1, 2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray*}
One can check by direct calculation that all of the reduced homology groups of this
simplicial complex are trivial. On the other hand, the simplicial complex $\Sigma$ of
Example~\ref{triangleexample} is \textit{not} $\mathbb{F}_p$-acyclic, since
$\widetilde{H}_1 \left( \Sigma , \mathbb{F}_p \right) \cong \mathbb{F}_p$.
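The direct calculation for $\Sigma'$ can be organized by a dimension count (a sketch):
$\dim K_0 = 3$, $\dim K_1 = 3$, and $\dim K_2 = 1$, while the maps $s$, $d_1$, and $d_2$
have ranks $1$, $2$, and $1$, respectively. Consequently,
\begin{eqnarray*}
\dim \ker s = 2 = \dim \im d_1, \qquad
\dim \ker d_1 = 1 = \dim \im d_2, \qquad
\dim \ker d_2 = 0,
\end{eqnarray*}
and the augmented complex for $\Sigma'$ is exact.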
Another way of expressing Definition~\ref{acyclicitydefinition} is this: an abstract
simplicial complex $\Delta$ is $\mathbb{F}_p$-acyclic if
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1} & K_0 \left( \Delta , \mathbb{F}_p
\right) \ar[r]^{\hskip0.2in s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
is an exact sequence.
When a complex forms an exact sequence, let us refer to it as an \textbf{exact complex}.
The following algebraic lemma is useful for proving exactness of complexes.
\begin{lemma}\label{exactnesslemma}
Let
\begin{eqnarray}
0 \to X_\bullet \to Y_\bullet \to Z_\bullet \to 0
\end{eqnarray}
be an exact sequence of complexes of abelian groups. Then,
\begin{enumerate}
\item \label{alglemmapart1}
If $X_\bullet$ and $Y_\bullet$ are exact complexes, then $Z_\bullet$ is an exact
complex.
\item \label{alglemmapart2}
If $Y_\bullet$ and $Z_\bullet$ are exact complexes, then $X_\bullet$ is an exact
complex.
\item \label{alglemmapart3}
If $X_\bullet$ and $Z_\bullet$ are exact complexes, then $Y_\bullet$ is an exact
complex.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (\ref{alglemmapart1}). Suppose that $X_\bullet$ and $Y_\bullet$ are
exact complexes. By Proposition~\ref{snakelemmaprop}, there is an exact sequence
\begin{eqnarray*}
&&\ldots \to H_{n+1} ( Z_\bullet) \to H_n ( X_\bullet) \to H_n ( Y_\bullet) \to H_n(
Z_\bullet)\\
&& \qquad \to H_{n-1} ( X_\bullet) \to H_{n-1} ( Y_\bullet ) \to \ldots
\end{eqnarray*}
The reader will observe that since the groups $\left\{ H_n ( X_\bullet ) \right\}$ and
$\left\{ H_n ( Y_\bullet ) \right\}$ are all zero, the groups $\left\{ H_n ( Z_\bullet )
\right\}$ must all be zero as well. Therefore~$Z_\bullet$ is an exact complex.
Assertions (\ref{alglemmapart2}) and (\ref{alglemmapart3}) follow similarly.
\end{proof}
Now we are ready to prove our main theorem.
\begin{theorem}
\label{acyclicitytheorem} Let $p$ be a prime. Let $\Delta$ be an abstract simplicial
complex which has a total ordering on its vertex set. If $\Delta$ is collapsible, then
$\Delta$ is $\mathbb{F}_p$-acyclic.
\end{theorem}
\begin{proof}
Recall (from \textit{\nameref{graphpropertysimplicial}}) the definition of
\textbf{primitive elementary collapse}. For any elementary collapse $( \Sigma, \Sigma'
)$, there is a sequence of primitive elementary collapses which reduces \hbox{$\Sigma$ to
$\Sigma'$}:
\begin{eqnarray}
\Sigma, \Sigma_1 , \Sigma_2 , \ldots , \Sigma_t , \Sigma'.
\end{eqnarray}
(This is an elementary fact which the reader is invited to prove as an exercise.)
Suppose that the complex $\Delta$ is collapsible. There exists a sequence of elementary
collapses which collapse $\Delta$ to a single $0$-simplex. Therefore, there exists a
sequence of \textit{primitive} elementary collapses which collapse $\Delta$ to a single
$0$-simplex. Let
\begin{eqnarray}
\Delta, \Delta_1 , \Delta_2, \ldots , \Delta_r
\end{eqnarray}
be such a sequence, with $\left| \Delta_r
\right| = 1$.
Let $Z_\bullet$ be the complex formed by the quotient groups
\begin{eqnarray}
Z_n = K_n (\Delta , \mathbb{F}_p)/ K_n \left( \Delta_1 , \mathbb{F}_p \right).
\end{eqnarray}
The structure of the complex $Z_\bullet$ is quite simple. The primitive elementary
collapse from $\Delta$ to $\Delta_1$ removes exactly two simplices $Q \subset Q'$ with
$\dim Q' = \dim Q + 1$, and $d$ carries the class of $Q'$ to the class of $Q$ with
coefficient $\pm 1$ (all other faces of $Q'$ lie in $\Delta_1$). Hence $Z_\bullet$ is
isomorphic to the following complex:
\begin{eqnarray}
\hspace*{-6pt}\xymatrix{\ldots \ar[r] & 0 \ar[r] & 0 \ar[r] & \mathbb{F}_p \ar[r]^{Id} &
\mathbb{F}_p \ar[r] & 0 \ar[r] & 0 \ar[r] & \ldots}\qquad
\end{eqnarray}
\vfill\eject
\noindent There is an exact sequence of complexes
\begin{eqnarray}
\xymatrix{
& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\
0 \ar[r] & K_1 \left( \Delta_1 , \mathbb{F}_p \right)
\ar[r] \ar[d]
& K_1 \left( \Delta , \mathbb{F}_p \right) \ar[r] \ar[d]
& Z_1 \ar[r] \ar[d] & 0 \\
0 \ar[r] & K_0 \left( \Delta_1 , \mathbb{F}_p \right)
\ar[r] \ar[d]
& K_0 \left( \Delta , \mathbb{F}_p \right) \ar[r] \ar[d]
& Z_0 \ar[r] \ar[d] & 0 \\
0 \ar[r] & \mathbb{F}_p \ar[r] \ar[d] &
\mathbb{F}_p \ar[r] \ar[d] & 0 \ar[r] \ar[d] & 0 \\
0 \ar[r] & 0 \ar[r] & 0 \ar[r] & 0 \ar[r] & 0 }
\end{eqnarray}
The complex $Z_\bullet$ is clearly exact. So by Lemma~\ref{exactnesslemma},
the complex
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1} & K_0 \left( \Delta , \mathbb{F}_p
\right)\ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
is exact iff
\begin{eqnarray*}
\hspace*{-3pt}\xymatrix{\ldots \ar[r] & K_2 (\Delta_1, \mathbb{F}_p) \ar[r]^{d_2} & K_1
(\Delta_1 , \mathbb{F}_p) \ar[r]^{d_1} & K_0 (\Delta_1 , \mathbb{F}_p)
\ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0}
\end{eqnarray*}
is exact. Therefore $\Delta$ is $\mathbb{F}_p$-acyclic iff $\Delta_1$ is $\mathbb{F}_p$-acyclic.
Similar reasoning shows that for any $i$, $\Delta_i$ is $\mathbb{F}_p$-acyclic iff
$\Delta_{i+1}$ is $\mathbb{F}_p$-acyclic. The theorem follows by induction.
\end{proof}
\chapter{Fixed-Point Theorems}\label{fptchapter}
We are now ready to put the theory from \textit{\nameref{chaincomplexchapter}} to use to
study group actions $G \circlearrowleft \Delta$ on simplicial complexes.
\section{The Lefschetz Fixed-Point Theorem}\label{lftsection}
\begin{theorem}\label{acyclicimplieslft}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices. Suppose that
$\Delta$ is $\mathbb{F}_p$-acyclic for some prime number $p$. Let $f \colon \Delta \to
\Delta$ be a simplicial automorphism. Then, there exists a simplex $Q \in \Delta$ such
that $f ( Q ) = Q$.
\end{theorem}
\begin{proof}
Let us introduce some notation: if $Y$ is a finite-dimensional vector space over
$\mathbb{F}_p$, and $h \colon Y \to Y$ is a linear endomorphism, then let $\Tr_h \left( Y
\right)$ denote the trace of $h$ on $Y$. Note that the trace function is additive over
exact sequences. That is, if
\begin{eqnarray}
0 \to X \to Y \to Z \to 0
\end{eqnarray}
is an exact sequence, and $h$ acts on $X$, $Y$, and $Z$ in a compatible manner, then
\begin{eqnarray}
\Tr_h (Y) = \Tr_h (X) + \Tr_h (Z).
\end{eqnarray}
Let $F$ denote the chain map associated with $f$. Consider the
\hbox{values~of}
\begin{eqnarray}
\Tr_F \left( H_n \left( \Delta , \mathbb{F}_p \right) \right)
\end{eqnarray}
for $n = 0 , 1, 2, {\ldots}\,$. Since $\Delta$ is $\mathbb{F}_p$-acyclic, these are easy
to compute. If $n > 0$, then $H_n \left( \Delta , \mathbb{F}_p \right)$ is a zero vector
space. The vector space $H_0 \left( \Delta , \mathbb{F}_p \right)$ is a one-dimensional
$\mathbb{F}_p$-vector space on which $F$ acts trivially. Therefore,
\begin{eqnarray}
\Tr_F \left( H_0 \left( \Delta , \mathbb{F}_p \right) \right) & = & 1, \\
\Tr_F \left( H_n \left( \Delta, \mathbb{F}_p \right) \right) & = & 0 \hskip0.2in \textnormal{ for } n > 0.
\end{eqnarray}
Now we can carry out the proof using the additivity of the trace function. Suppose, for
the sake of contradiction, that there is \textit{no} simplex in $\Delta$ which is
stabilized by $F$. Then, for any $n$, the chain map $F$ acts on $K_n \left( \Delta ,
\mathbb{F}_p \right)$ by permuting the basis elements in a fixed-point free manner,
possibly changing signs. A matrix representation of this action would be a matrix with
entries from the set $\{ -1, 0, 1 \}$, having only zeroes on the main diagonal. Thus we
see that
\begin{eqnarray}
\Tr_F \left( K_n \left( \Delta , \mathbb{F}_p \right) \right) = 0.
\end{eqnarray}
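In the chain of equalities below, trace additivity is used twice in each degree (a
bookkeeping remark): for $n \geq 1$ it is applied to the exact sequences
\begin{eqnarray*}
0 \to \ker d_n \to K_n \left( \Delta , \mathbb{F}_p \right) \to \im d_n \to 0, \\
0 \to \im d_{n+1} \to \ker d_n \to H_n \left( \Delta , \mathbb{F}_p \right) \to 0,
\end{eqnarray*}
and in degree zero to $0 \to \im d_1 \to K_0 \left( \Delta , \mathbb{F}_p \right) \to
H_0 \left( \Delta , \mathbb{F}_p \right) \to 0$.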
Observe the following chain of equalities.
\begin{eqnarray*}
0 & = & \sum_{n \geq 0} (-1)^n \Tr_F \left( K_n \left( \Delta , \mathbb{F}_p \right) \right) \\
& = & \Tr_F \left( K_0 \left( \Delta , \mathbb{F}_p \right) \right) +
\sum_{n \geq 1} (-1)^n \left[
\Tr_F \left( \im d_n \right) +
\Tr_F \left( \ker d_n \right)
\right] \\
& = & \Tr_F \left( K_0 \left( \Delta , \mathbb{F}_p \right) \right) - \Tr_F \left( \im
d_1 \right)\\
&& +\, \sum_{n \geq 1} (-1)^n \left[ \Tr_F \left( \ker d_n \right) -
\Tr_F \left( \im d_{n+1} \right) \right] \\
& = & \Tr_F \left( H_0 \left( \Delta , \mathbb{F}_p \right) \right)
+ \sum_{n \geq 1} (-1)^n \Tr_F \left( H_n \left( \Delta ,
\mathbb{F}_p \right) \right) \\
& = & 1.
\end{eqnarray*}
We obtain a contradiction. Therefore, there must exist a simplex $Q$ in $\Delta$ such
that $f ( Q ) = Q$.
\end{proof}
\pagebreak
\vspace*{-20pt}
\begin{corollary}\label{lefschetzcorollary}
Let $\Sigma$ be a finite abstract simplicial complex which is collapsible. Let $g \colon
\Sigma \to \Sigma$ be a simplicial automorphism. Then there must exist a simplex $T \in
\Sigma$ such that $g ( T ) = T$.
\end{corollary}
\begin{proof}
This follows immediately from the above theorem and Theorem~\ref{acyclicitytheorem}.
\end{proof}
Let us consider what Theorem~\ref{acyclicimplieslft} means geometrically. Suppose that
$\Theta$ is an ordinary simplicial complex in $\mathbb{R}^N$ (see
\textit{\nameref{simplicialcomplexsection}}). Then a simplicial automorphism of $\Theta$
is simply a continuous permutation of the points of $\Theta$ which maps every $n$-simplex
of $\Theta$ to another $n$-simplex of $\Theta$ in an affine-linear manner.
Suppose that $V \subset \mathbb{R}^N$ is a single $n$-simplex spanned by $\mathbf{v}_0 ,
\mathbf{v}_1 , \ldots , \mathbf{v}_n \in \mathbb{R}^N$. Note that any affine-linear map
of $V$ onto itself must fix the point
\begin{eqnarray}
\sum_{i = 0}^n \left( \frac{1}{n+1} \right) \mathbf{v}_i \in V.
\end{eqnarray}
Thus, any simplicial map which stabilizes $V$ must have a fixed point in~$V$: such a map
permutes the vertices $\mathbf{v}_i$ of $V$ and therefore fixes their barycenter, the
point displayed above. Therefore, when we establish that a simplicial automorphism maps
a particular simplex to itself, we have in fact proved that it has a fixed point. This
justifies our calling Theorem~\ref{acyclicimplieslft} a ``fixed-point theorem.''
\enlargethispage{10pt}
Let $f \colon \Delta \to \Delta$ be a simplicial automorphism which satisfies the
assumptions of Theorem~\ref{acyclicimplieslft}. We can use the reasoning from the proof
of Theorem~\ref{acyclicimplieslft} to draw further conclusions about the set $\Delta^f$.
Note that the quantity
\begin{eqnarray}
\Tr_F ( K_n \left( \Delta , \mathbb{F}_p \right) )
\end{eqnarray}
is equal to the number of $n$-simplices $Q \in \Delta$ that satisfy $f ( Q ) = Q$. By
the reasoning from the proof of Theorem~\ref{acyclicimplieslft}, we have
\begin{eqnarray}
\sum_{n \geq 0} (-1)^n \Tr_F \left( K_n ( \Delta , \mathbb{F}_p ) \right) & = & 1.
\end{eqnarray}
This implies a different version of Theorem~\ref{acyclicimplieslft}. For any subset $S$
of a simplicial complex $\Delta$, let
\begin{eqnarray}
\chi \left( S \right) & = & \sum_{n \geq 0} (-1)^n \left| \left\{ Q \in S \mid \dim ( Q )
= n \right\} \right|\!.
\end{eqnarray}
The quantity $\chi ( S )$ is called the \textbf{Euler characteristic} of $S$.
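For example, for the complexes $\Sigma$ and $\Sigma'$ of Examples~\ref{triangleexample}
and~\ref{solidtriangleexample} (a direct count of simplices in each dimension),
\begin{eqnarray*}
\chi ( \Sigma ) = 3 - 3 = 0, \qquad \chi ( \Sigma' ) = 3 - 3 + 1 = 1,
\end{eqnarray*}
which is consistent with the fact that $\Sigma'$ is $\mathbb{F}_p$-acyclic while
$\Sigma$ is not.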
\removelastskip\pagebreak
\begin{theorem}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices, and suppose
that $\Delta$ is $\mathbb{F}_p$-acyclic for some prime number $p$. Let $f \colon \Delta
\to \Delta$ be a simplicial automorphism. Then,
\begin{eqnarray}
\chi ( \Delta^f ) & = & 1.
\end{eqnarray}
\end{theorem}
\section{A Nonabelian Fixed-Point Theorem}\label{nonabeliansection}
In this part of the text we will prove a nonabelian fixed-point theorem which is
attributed to R.~Oliver \cite{oliver1975}.
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G$ be a finite group
which acts on $\Delta$ via simplicial automorphisms. By
Corollary~\ref{lefschetzcorollary}, we know that for any element $g \in G$, there must be
a simplex $Q \in \Delta$ such that $g \left ( Q \right) = Q$. We will prove that, under
certain conditions, a stronger statement can be made: there must exist a single
simplex~$Q$ which is stabilized by all the elements of $G$.
Our method of proof for this result is essentially an inductive one. We require that the
automorphism group $G$ has a certain filtration by subgroups,
\begin{eqnarray}
\{ 0 \} = G_0 \subset G_1 \subset G_2 \subset \ldots \subset G_r = G,
\end{eqnarray}
and we inductively deduce conditions on the $G_i$-fixed subsets of $\Delta$, for $i = 0,
1, 2, \ldots, r$. The key to this argument is the first result that we will prove,
Proposition~\ref{acyclicitypreservation}, which tells us that the property of
``$\mathbb{F}_p$-acyclicity'' can be carried forward along this filtration. The proof of
Proposition~\ref{acyclicitypreservation} is the most difficult part of the argument; once
that proposition is proved, the other elements of the argument fall into place easily.
\enlargethispage{6pt}
For now, we will be focusing our attention on simplicial automorphisms $f \colon \Delta
\to \Delta$ for which $\Delta^f$ \textit{is} a subcomplex of $\Delta$. That is, we will
be focusing on those maps $f$ satisfying the condition
\begin{eqnarray}
Q \in \Delta^f \textnormal{ and } Q' \subseteq Q \Longrightarrow
Q' \in \Delta^f,
\end{eqnarray}
for any $Q, Q' \in \Delta$. Geometrically, what this condition implies is that if $f$
stabilizes a simplex $Q$, then it also fixes all of the vertices of $Q$.
The \textbf{order} of a simplicial automorphism $f \colon \Delta \to \Delta$ is the least
$n \geq 1$ such that $f^n$ is the identity. (If no such $n$ exists, then the order of
$f$ is $\infty$.)
\begin{proposition}\label{acyclicitypreservation}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices. Let $p$ be
a prime, and suppose that $\Delta$ is $\mathbb{F}_p$-acyclic. Suppose that $f \colon
\Delta \to \Delta$ is an order-$p$ automorphism of $\Delta$ such that $\Delta^f$ is a
subcomplex of $\Delta$. Then, the complex $\Delta^f$ must be $\mathbb{F}_p$-acyclic.
\end{proposition}
\begin{proof}
Suppose that $\Delta$ is $\mathbb{F}_p$-acyclic. We know, by
Theorem~\ref{acyclicimplieslft}, that the subcomplex $\Delta^f$ must be nonempty. To
prove the proposition, we must show that the homology groups $H_n \left( \Delta^f ,
\mathbb{F}_p \right)$ are trivial for $n > 0$, and that $H_0 \left( \Delta^f ,
\mathbb{F}_p \right)$ is one-dimensional.
The proof that follows is based on the paper ``Fixed-point theorems for periodic
transformations'' by Smith~\cite{smith1941}. The approach of the proof is to
define some special subcomplexes of $K_\bullet \left( \Delta , \mathbb{F}_p \right)$ and
then exploit relationships between these subcomplexes.
Let
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to
K_\bullet \left( \Delta , \mathbb{F}_p \right)
\end{eqnarray}
denote the chain map associated with $f$. Note that since $F$ is a map of
complexes, any linear combination of the maps $F , F^2, F^3, \ldots$ is also a map of
complexes. Define
\begin{eqnarray}
\delta \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta
, \mathbb{F}_p \right)
\end{eqnarray}
by
\begin{eqnarray}
\delta = \mathbb{I} - F.
\end{eqnarray}
(Here $\mathbb{I}$ denotes the identity map.) Define
\begin{eqnarray}
\sigma \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta
, \mathbb{F}_p \right)
\end{eqnarray}
by
\begin{eqnarray}
\sigma = \mathbb{I} + F + F^2 + \ldots + F^{p-1}.
\end{eqnarray}
The maps $\delta$ and $\sigma$ determine four subcomplexes of $K_\bullet \left( \Delta ,
\mathbb{F}_p \right)$:
\begin{eqnarray}
( \im \delta), (\ker\delta), (\im \sigma),\quad \textnormal{and}\quad (\ker \sigma).
\end{eqnarray}
We can describe these four complexes explicitly. Let $\Delta' = \Delta \smallsetminus
\Delta^f$. Let $S \subseteq \Delta'$ be a set which contains exactly one element from
every $f$-orbit in $\Delta'$. Then the following assertions hold (as the reader may
verify):
\begin{itemize}
\item The set
\begin{eqnarray*}
\left\{ \sum_{i=0}^{p-1} F^i \left( Q \right) \mid Q \in S \right\}
\end{eqnarray*}
is a basis\footnote{When we say that a set $T$ is a basis for a complex $X_\bullet$, we
mean that $T$ is a union of bases for the vector spaces $\{ X_i \}$.} for $\left( \im
\sigma \right)$.
\item The set
\begin{eqnarray*}
\left\{ F^i ( Q) - F^{i+1} ( Q ) \mid
Q \in S , 0 \leq i \leq p-2 \right\}
\end{eqnarray*}
is a basis for $\left( \im \delta \right)$.
\item The set
\begin{eqnarray*}
\left\{ F^i ( Q) - F^{i+1} ( Q ) \mid
Q \in S , 0 \leq i \leq p-2 \right\} \cup
\left\{ Q \mid Q \in \Delta^f \right\}
\end{eqnarray*}
is a basis for $\left( \ker \sigma \right)$.
\item The set
\begin{eqnarray*}
\left\{ \sum_{i=0}^{p-1} F^i \left( Q \right) \mid
Q \in S \right\}
\cup
\left\{ Q \mid Q \in \Delta^f \right\}
\end{eqnarray*}
is a basis for $\left( \ker \delta \right)$.
\end{itemize}
From these bases, we can see that there are the following isomorphisms of complexes:
\begin{eqnarray}\label{keyisomorphism1}
\left( \ker \sigma \right) \cong \left( \im \delta \right)
\oplus K_\bullet \left( \Delta^f , \mathbb{F}_p \right)
\end{eqnarray}
and
\begin{eqnarray}\label{keyisomorphism2}
\left( \ker \delta \right) \cong \left( \im \sigma \right)
\oplus K_\bullet \left( \Delta^f , \mathbb{F}_p \right).
\end{eqnarray}
These imply isomorphisms of homology groups:
\begin{eqnarray}\label{homiso1}
H_n ( \ker \sigma ) \cong H_n ( \im \delta ) \oplus H_n ( \Delta^f , \mathbb{F}_p ), \\
\label{homiso2} H_n ( \ker \delta ) \cong H_n ( \im \sigma)
\oplus H_n ( \Delta^f , \mathbb{F}_p ).
\end{eqnarray}
Now, consider the exact sequences
\begin{eqnarray}
0 \to \left( \ker \sigma \right) \to K_\bullet \left( \Delta ,
\mathbb{F}_p \right) \to \left( \im \sigma \right) \to 0, \\
0 \to \left( \ker \delta \right) \to K_\bullet \left( \Delta ,
\mathbb{F}_p \right) \to \left( \im \delta \right) \to 0.
\end{eqnarray}
By Proposition~\ref{snakelemmaprop}, these imply the existence of two long exact
sequences:
\begin{eqnarray*}
&& \ldots \to H_{n+1} ( \im \sigma ) \to H_n ( \ker \sigma ) \to H_n ( \Delta ,
\mathbb{F}_p ) \to H_n ( \im \sigma )\\
&&\qquad \to H_{n-1} ( \ker \sigma ) \to \ldots \\[6pt]
&& \ldots \to H_{n+1} ( \im \delta ) \to H_n ( \ker \delta ) \to H_n ( \Delta ,
\mathbb{F}_p ) \to H_n ( \im \delta )\\
&&\qquad \to H_{n-1} ( \ker \delta ) \to \ldots .
\end{eqnarray*}
Let us step through the terms in these sequences, starting from the left. Let $c$ be the
dimension of the complex $\Delta$ (that is, the dimension of the largest simplex in
$\Delta$). The exact sequences take the form
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow H_c ( \ker \sigma )
\to H_c ( \Delta , \mathbb{F}_p ) \to H_c ( \im \sigma )
\to H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow H_c ( \ker \delta ) \to H_c ( \Delta ,
\mathbb{F}_p ) \to H_c ( \im \delta ) \to H_{c-1} ( \ker \delta ) \to \ldots .
\end{eqnarray*}
Since $\Delta$ is $\mathbb{F}_p$-acyclic, we know that $H_c \left( \Delta ,\mathbb{F}_p \right) = \{ 0
\}$, which clearly implies that both $H_c \left( \ker \sigma \right)$ and $H_c \left(
\ker \delta \right)$ are zero. So the exact sequences take the form
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow H_c ( \im \sigma )
\to H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0 \longrightarrow H_c ( \im
\delta ) \to H_{c-1} ( \ker \delta ) \to \ldots
\end{eqnarray*}
But isomorphisms \eqref{homiso1} and \eqref{homiso2} imply that $H_c \left( \im \sigma
\right)$ and $H_c \left( \im \delta \right)$ are also zero. So the exact sequences are
like so:
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow 0
\longrightarrow H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow H_{c-1} ( \ker \delta ) \to \ldots
\end{eqnarray*}
We can apply the same reasoning to show that all terms in the sequences with index
$(c-1)$ are likewise zero. Continuing in this manner, we eventually find that
\textit{all} the homology groups in the sequences that have a positive index are zero.
We are left with the exact sequences in the following form:
\begin{eqnarray}\label{finallongexactsequence}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow
H_0 \left( \ker \sigma \right) \to H_0 \left( \Delta , \mathbb{F}_p
\right) \to H_0 \left( \im \sigma \right) \longrightarrow 0\qquad\quad \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow H_0 \left( \ker \delta \right)
\to H_0 \left( \Delta , \mathbb{F}_p \right) \to H_0 \left( \im \delta \right)
\longrightarrow 0\qquad\quad
\end{eqnarray}
We have shown that all of the homology groups $H_n \left( \ker \sigma \right)$, $n > 0$
are trivial. This implies by isomorphism~(\ref{homiso1}) that $H_n \left( \Delta^f ,
\mathbb{F}_p \right)$ is trivial for all $n > 0$. Also, we know from
isomorphism~(\ref{homiso1}) and sequence~(\ref{finallongexactsequence}) that
\begin{eqnarray}
\dim H_0 \left( \Delta^f , \mathbb{F}_p \right) \leq \dim H_0 \left( \ker \sigma \right)
\leq \dim H_0 \left( \Delta , \mathbb{F}_p \right) = 1.
\end{eqnarray}
The dimension of $H_0 \left( \Delta^f , \mathbb{F}_p \right)$ cannot be zero (since
$\Delta^f$ is nonempty). So $H_0 \left( \Delta^f , \mathbb{F}_p \right)$ must be
one-dimensional. Therefore, $\Delta^f$ is\break $\mathbb{F}_p$-acyclic.
\end{proof}
\begin{corollary}\label{pgroupactioncorollary}
Suppose that $H$ is a group of order $p^m$, with $m \geq 1$, which acts on $\Delta$ in
such a way that $\Delta^h$ is a subcomplex of $\Delta$ for any $h \in H$. Then,
$\Delta^H$ is $\mathbb{F}_p$-acyclic.
\end{corollary}
\begin{proof}
Since $\left| H \right| = p^m$, there exists a filtration of $H$ by normal subgroups,
\begin{eqnarray}
\{ 0 \} = H_0 \subset H_1 \subset \ldots \subset H_m = H
\end{eqnarray}
such that $H_i / H_{i-1} \cong \mathbb{Z} / p \mathbb{Z}$ for any $i \in \{ 1, 2, \ldots,
m \}$. (See Chapter~I, Corollary~6.6 in \cite{lang}.) For any $i \in \{ 1, 2, \ldots ,
m \}$, we can choose an element $a_i \in H_i$ which generates $H_i / H_{i-1}$. Then,
\begin{eqnarray}
\Delta^{H_i} = \left( \Delta^{H_{i-1}} \right)^{a_i}.
\end{eqnarray}
By Proposition~\ref{acyclicitypreservation}, if $\Delta^{H_{i-1}}$ is
$\mathbb{F}_p$-acyclic, so is $\Delta^{H_i}$. The corollary follows by induction.
\end{proof}
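For instance (a standard example of such a filtration), if $H = \mathbb{Z} / p^2
\mathbb{Z}$, then
\begin{eqnarray*}
\{ 0 \} \subset p \mathbb{Z} / p^2 \mathbb{Z} \subset \mathbb{Z} / p^2 \mathbb{Z}
\end{eqnarray*}
is a filtration of the required kind, with both successive quotients isomorphic to
$\mathbb{Z} / p \mathbb{Z}$.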
Now we are ready to prove the main theorem.
\begin{theorem}\label{nonabelianfixedpointtheorem}
Let $G$ be a finite group satisfying the following \hbox{condition}:
\begin{itemize}
\item There is a normal subgroup $G' \subseteq G$ such that
$\left| G' \right|$ is a prime power and $G / G'$ is cyclic.
\end{itemize}
Let $\Delta$ be a collapsible abstract simplicial complex on which $G$ acts, satisfying
the condition that $\Delta^g$ is a simplicial complex for any $g \in G$. Then, $\chi (
\Delta^G ) = 1$.
\end{theorem}
\begin{proof}
We are given that $\left| G' \right| = p^m$ for some prime $p$ and $m \geq 0$. Choose a
total ordering on the vertices of $\Delta$. By Theorem~\ref{acyclicitytheorem}, $\Delta$
is $\mathbb{F}_p$-acyclic. By Corollary~\ref{pgroupactioncorollary}, $\Delta^{G'}$ is
$\mathbb{F}_p$-acyclic.
Choose an element $b \in G$ which generates $G/G'$. By Theorem~\ref{acyclicimplieslft},
the complex
\begin{eqnarray}
(\Delta^{G'})^b = \Delta^G
\end{eqnarray}
has Euler characteristic equal to $1$.
\end{proof}
Note that Theorem~\ref{nonabelianfixedpointtheorem} implies in particular that the
invariant subcomplex $\Delta^G$ is nonempty.
\section{Barycentric Subdivision}\label{barycentricsubdivisionsection}
In \textit{\nameref{nonabeliansection}} we proved
Theorem~\ref{nonabelianfixedpointtheorem}, which asserts that if a group action $G
\circlearrowleft \Delta$ satisfies certain requirements, then $\Delta^G$ must be
nonempty. The theorem as stated is unfortunately not general enough for our purposes.
Indeed the condition that all of the subsets $\{ \Delta^g \mid g \in G \}$ are
subcomplexes will not be satisfied by the simplicial complexes arising from graph
properties, except in trivial cases. Therefore we need a theorem which can be applied to
group actions that do not satisfy this condition.
\begin{figure}[!b]
\centerline{\includegraphics{f4-1}}
\fcaption{The complexes $\Sigma$ and $\bar ( \Sigma )$.\label{barycentricfig}}
\end{figure}
Barycentric subdivision is a process of dividing up the simplices in a simplicial
complex into smaller simplices. Barycentric subdivision replaces an abstract simplicial
complex $\Delta$ with a larger complex $\Delta'$ that has similar properties. The
advantage of this construction is that for any simplicial automorphism $g \colon \Delta
\to \Delta$, there is an induced automorphism $g \colon \Delta' \to \Delta'$ which
satisfies the condition that $(\Delta' )^g$ is an abstract simplicial complex. Working
within this larger complex will allow us to prove a generalization of
Theorem~\ref{nonabelianfixedpointtheorem}.
\begin{definition}\label{barycentricdef}
Let $\Delta$ be an abstract simplicial complex. Then the \textbf{barycentric subdivision
of $\Delta$}, denoted $\bar ( \Delta )$, is the simplicial \hbox{complex}
\begin{eqnarray*}
\bar(\Delta ) = \big\{ \{ Q_1, Q_2, \ldots , Q_r \} \mid r \geq 1, Q_i \in \Delta , Q_1
\subset Q_2 \subset \ldots \subset Q_r \big\}.
\end{eqnarray*}
\end{definition}
Here is another way to phrase the above definition. Let $\Delta$ be an abstract
simplicial complex. Then the subset relation $\subset$ gives a partial ordering on the
elements of $\Delta$. The complex $\bar ( \Delta )$ is the set of all $\subset$-chains
in $\Delta$.
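For a minimal illustration (a small example of our own), consider the complex of a
single edge, $\Delta = \left\{ \{ 0 \} , \{ 1 \} , \{ 0, 1 \} \right\}$. Its
$\subset$-chains give
\begin{eqnarray*}
\bar ( \Delta ) = \big\{ \{ \{ 0 \} \} , \{ \{ 1 \} \} , \{ \{ 0, 1 \} \} ,
\{ \{ 0 \} , \{ 0, 1 \} \} , \{ \{ 1 \} , \{ 0, 1 \} \} \big\},
\end{eqnarray*}
an edge split into two edges at its midpoint. Note that $\chi ( \bar ( \Delta ) ) =
3 - 2 = 1 = 2 - 1 = \chi ( \Delta )$, in accordance with
Proposition~\ref{baryeulerprop} below.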
As an example, let
\begin{eqnarray}\label{baryexample}
\Sigma = \left\{ \{ 0 \} , \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 1 ,2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray}
The complex $\Sigma$ and its barycentric subdivision $\bar ( \Sigma )$ are shown in
Figure~\ref{barycentricfig}.
Geometrically, the operation $[ \Delta \mapsto \bar ( \Delta ) ]$ has the effect of
splitting every simplex of dimension $n$ in $\Delta$ into $(n+1)!$ simplices of
\hbox{dimension~$n$}. Note that vertices in $\bar ( \Delta )$ are in one-to-one
correspondence with the simplices of $\Delta$. Figure~\ref{barycentric2fig} shows
another example of barycentric subdivision.
\begin{figure}
\centerline{\includegraphics{f4-2}}
\fcaption{An example of barycentric subdivision.\label{barycentric2fig}}
\end{figure}
As the reader can observe, the simplicial complex $\bar ( \Delta )$ has some similarities
with the original simplicial complex $\Delta$. It can be shown that the
homology groups of $\bar ( \Delta )$ are isomorphic to those of $\Delta$, although we
will not need to prove that here. The following propositions are proved in the Appendix
(as Proposition~\ref{baryeulerappendixprop} and Proposition~\ref{baryappendixprop}).
\begin{proposition}\label{barycollapseprop}
Let $\Delta$ be an abstract simplicial complex. If $\Delta$ is collapsible, then $\bar (
\Delta )$ is also collapsible.
\end{proposition}
\begin{proposition}\label{baryeulerprop}
Let $\Delta$ be a finite abstract simplicial complex. Then, $\chi ( \bar ( \Delta ) ) =
\chi ( \Delta )$.
\end{proposition}
Now, let us consider how this construction behaves under group actions. Let $f \colon
\Delta \to \Delta$ be a simplicial automorphism of $\Delta$. Then there is an induced
simplicial automorphism,
\begin{eqnarray}
f \colon \bar ( \Delta ) \to \bar ( \Delta ).
\end{eqnarray}
The invariant subset $\bar ( \Delta )^f$ can be expressed like so:
\begin{eqnarray*}
\bar ( \Delta )^f = \big\{\{ Q_1, Q_2, \ldots , Q_r \} \mid r \geq 1, Q_i \in \Delta^f ,
Q_1 \subset Q_2 \subset \ldots \subset Q_r \big\}.
\end{eqnarray*}
\pagebreak
\noindent It is easy to see that this set is always a simplicial complex. Thus the
following lemma holds true:
\begin{lemma}\label{barylemma}
Let $\Delta$ be an abstract simplicial complex, and let
$G \circlearrowleft \Delta$ be a group action on $\Delta$. Then,
for any $g \in G$, the set
\begin{eqnarray}
\left( \bar ( \Delta ) \right)^g
\end{eqnarray}
is a subcomplex of $\bar ( \Delta )$.
\end{lemma}
Lemma~\ref{barylemma} can be observed in the example complex $\Sigma$ which we discussed
above~\eqref{baryexample}. As we can see in Figure~\ref{barycentricfig}, the fixed set
of any permutation of the set $\{ 0, 1, 2 \}$ is a subcomplex of the complex $\bar ( \Sigma)$.
With the aid of barycentric subdivision, we can now prove the following fixed-point
theorem.
\begin{theorem}\label{fptinit}
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G \circlearrowleft
\Delta$ be a group action on $\Delta$. Suppose that $G$ has a normal subgroup $G'$ which
is such that $\left| G' \right|$ is a prime power and $G / G'$ is cyclic. Then, the set
$\Delta^G$ is nonempty.
\end{theorem}
\begin{proof}
By Proposition~\ref{barycollapseprop}, $\bar ( \Delta )$ is collapsible. By
Theorem~\ref{nonabelianfixedpointtheorem}, $\chi ( \bar ( \Delta )^G ) = 1$. Therefore
$\bar ( \Delta )^G$ is nonempty, and thus $\Delta^G$ is likewise nonempty.
\end{proof}
Now let $\Delta^{[G]}$ denote the complex constructed in
\textit{\nameref{groupactionsection}}. The set $\Delta^{[G]}$ is very similar to
$\Delta^G$; indeed, there is a one-to-one inclusion-preserving map
\begin{eqnarray}\label{naturaliso}
i \colon \Delta^{[G]} \to \Delta^G
\end{eqnarray}
which is given simply by mapping any $S \in \Delta^{[G]}$ to the union of the elements of
$S$. (The main difference between $\Delta^G$ and $\Delta^{[G]}$ is that $\Delta^{[G]}$ is
a simplicial complex, whereas $\Delta^G$ generally is not.)
The map~(\ref{naturaliso}) induces a simplicial isomorphism
\begin{eqnarray}\label{barygroupiso}
\bar \left( \Delta^{[G]} \right) \to
\bar ( \Delta )^G.
\end{eqnarray}
Figure~\ref{barygroupfig} illustrates the relationship between $\Delta^{[G]}$ and $\bar (
\Delta )^G$. \hbox{Isomorphism}~(\ref{barygroupiso}) enables our final generalization of
\hbox{Theorem}~\ref{nonabelianfixedpointtheorem}.
\begin{figure}[!t]
\centerline{\includegraphics{f4-3}}
\fcaption{A continuation of the example from Figures~\ref{groupactionfig}
and \ref{groupaction2fig}. The barycentric subdivision of $\Sigma^{[H]}$ is
isomorphic to $\bar ( \Sigma )^H$.\label{barygroupfig}}
\end{figure}
\begin{theorem}\label{fpt}
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G \circlearrowleft
\Delta$ be a group action on $\Delta$. Suppose that $G$ has a normal subgroup $G'$ which
is such that $\left| G' \right|$ is a prime power and $G / G'$ is cyclic. Then,
\begin{eqnarray}
\chi ( \Delta^{[G]} ) = 1.
\end{eqnarray}
\end{theorem}
\begin{proof}
By Proposition~\ref{barycollapseprop}, $\bar ( \Delta )$ is collapsible, so by
Theorem~\ref{nonabelianfixedpointtheorem} the Euler characteristic of $\bar ( \Delta )^G
\cong \bar ( \Delta^{[G]} )$ is $1$. By
Proposition~\ref{baryeulerprop}, the Euler characteristic of $\Delta^{[G]}$ is likewise
equal to $1$.
\end{proof}
\chapter[Results on Decision-Tree Complexity]{Results on Decision-Tree Complexity}\label{resultschapter}
In this part of the text, we will give the proofs of three lower bounds on the
decision-tree complexity of graph properties, due to Kahn, Saks, Sturtevant, and Yao.
Then we will sketch (without proof) some more recent results.
Let
\begin{eqnarray}
h \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. The function $h$ satisfies two
conditions: it is increasing (meaning that if $Z$ is a subgraph of $Z'$ then $h ( Z )
\leq h ( Z' )$) and it is also isomorphism-invariant ($Y \cong Y' \Longrightarrow h ( Y )
= h ( Y' )$). Proofs of evasiveness exploit the interaction between these two
conditions.
As we saw in \textit{\nameref{basicconceptschapter}}, the monotone-increasing condition
implies that $h$ determines a simplicial complex, $\Delta_h$, whose \hbox{simplices}
correspond to graphs $Z$ that satisfy $h ( Z ) = 0$. The isomorphism-invariant property
implies that this complex $\Delta_h$ is highly symmetric. If $\sigma$ is any permutation
of $V$, and
\begin{eqnarray}
E \subseteq \left\{ \{ v, w \} \mid v, w \in V \right\}
\end{eqnarray}
is an edge set such that $h ( ( V, E ) ) = 0$, then the edge set
\begin{equation}
\sigma ( E ) = \left\{ \{ \sigma ( v ) , \sigma ( w ) \} \mid \{ v , w \} \in E \right\}
\end{equation}
also satisfies $h ( (V , \sigma ( E )) ) = 0$. Thus there is an induced automorphism
$\sigma \colon \Delta_h \to \Delta_h$.
If $h$ were nonevasive, then $\Delta_h$ would be collapsible, and we could apply
fixed-point theorems to $\Delta_h$. Corollary~\ref{lefschetzcorollary} would imply that
$\Delta_h$ must have a simplex which is stabilized by $\sigma$. Therefore, we have the
following interesting result: if $h$ is a nonevasive graph property, then for any
permutation $\sigma \colon V \to V$ there must be a nontrivial $\sigma$-invariant graph
which does not satisfy $h$. Figure~\ref{invariantgraphsfig} shows what we can deduce
when $\left| V \right| = 9$ and $\sigma$ is chosen to be a cyclic permutation.
\begin{figure}[!t]
\centerline{\includegraphics{f5-1}}
\fcaption{Let $V$ be a set of size $9$, and let $h$ be a nontrivial increasing graph
property. If $h$ is not evasive, then at least one of the graphs above must fail to
satisfy $h$.\label{invariantgraphsfig}}
\vspace*{-3pt}
\end{figure}
\enlargethispage{6pt}
When we go further and consider the actions of finite groups on $\Delta_h$, we get
stronger results. Note that the entire symmetric group $\Sym ( V )$ acts on $\Delta_h$.
Unfortunately this group is too big for the application of any fixed-point theorems that
we have proved, and so we must restrict the action to some appropriate subgroup of $\Sym
( V )$. Making this choice of subgroup is a key step for many of the results that we will
discuss.
\section{Graphs of Order $p^k$}\label{mainresultsection}
\begin{theorem}[Kahn et~al. \cite{kss1984}]
Let $V$ be a finite set of order $p^k$, where $p$ is prime and $k \geq 1$. Let
\begin{eqnarray}
h \colon \mathbf{G} (V) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. Then, $h$ must be evasive.
\end{theorem}
\removelastskip\pagebreak
\begin{proof}
Without loss of generality, we may assume that $V$ is the set of elements of the finite
field $\mathbb{F}_{p^k}$. For any $a, b \in \mathbb{F}_{p^k}$ with $a \neq 0$, there is
a permutation of $V$ given by
\begin{eqnarray}
x \mapsto ax + b.
\end{eqnarray}
Let $G \subseteq \Sym ( V )$ be the group of all such permutations. Let $G'
\subseteq G$ be the subgroup consisting of permutations of the form $x \mapsto x + b$.
We make the following observations:
\begin{enumerate}
\item \textbf{The subgroup $G'$ is an abelian group of
order $p^k$.} It is isomorphic to the additive group of $\mathbb{F}_{p^k}$.
\item \textbf{The subgroup $G'$ is normal.}
This is apparent from the fact that for any $x, a, b \in \mathbb{F}_{p^k}$, with $a \neq
0$,
\begin{eqnarray}
a^{-1} ( a x + b ) = x + a^{-1} b.
\end{eqnarray}
\item \textbf{The quotient group $G / G'$ is cyclic.}
The quotient group $G / G'$ is isomorphic to the multiplicative group of
$\mathbb{F}_{p^k}$, which is known to be cyclic (see Theorem IV.1.9 from \cite{lang}).
\item \label{transitivepairs} \textbf{The action of $G$ is transitive on
pairs of distinct elements $( x, x' ) \in V \times V$.} This is a consequence of the fact
that for any pairs $(x, x')$ and $(y, y')$ with $x \neq x'$ and $y \neq y'$, the system
of equations
\begin{eqnarray}
ax + b & = & y \\
a x' + b & = & y'
\end{eqnarray}
has a solution, namely $a = (y - y')(x - x')^{-1}$ and $b = y - ax$; here $a \neq 0$
because $y \neq y'$.
\end{enumerate}
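To make this setup concrete (an illustration, taking $p^k = 5$): here
\begin{eqnarray*}
G = \left\{ x \mapsto ax + b \mid a \in \mathbb{F}_5^{*} ,\ b \in \mathbb{F}_5 \right\}\!,
\qquad \left| G \right| = 20,
\end{eqnarray*}
with $G' \cong \mathbb{Z} / 5 \mathbb{Z}$ (the translations) and $G / G' \cong
\mathbb{F}_5^{*} \cong \mathbb{Z} / 4 \mathbb{Z}$, which is cyclic.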
Consider the group action
\begin{eqnarray}
G \circlearrowleft \Delta_h.
\end{eqnarray}
Suppose that the graph property $h$ is nonevasive. By
Theorem~\ref{collapsibilitytheorem}, the simplicial complex $\Delta_h$ is
collapsible.\footnote{Technically, this is not true if $\Delta_h$ is empty, and so we
need to address that case separately. If $\Delta_h$ is empty, then $h$ must be the
function that maps the empty graph to zero and all other graphs to $1$. This graph
property is easily seen to be evasive.} By Theorem~\ref{fptinit}, the set~$\left(
\Delta_h \right)^G$ is nonempty. Therefore there is a nonempty $G$-invariant graph which
does not satisfy $h$. But by property (\ref{transitivepairs}) above, the only nonempty
\hbox{$G$-invariant} graph is the complete graph. This makes $h$ a trivial graph
property, and thus we obtain a contradiction.
We conclude that $h$ must be an evasive graph property.
\end{proof}
\section{Bipartite Graphs}
Let $V$ be a finite set which is the disjoint union of two subsets, $Y$ and~$Z$. Then a
bipartite graph on $(Y, Z)$ is a graph whose edges are all elements of the set
\begin{eqnarray}\label{bipartiteedgepairs}
\left\{ \{ y, z \} \mid y \in Y, z \in Z \right\}\!.
\end{eqnarray}
A bipartite isomorphism between such graphs is a graph isomorphism which respects the
partition $(Y, Z)$.
Let $\mathbf{B} ( Y, Z)$ denote the set of all bipartite graphs on $(Y, Z)$. A
\textbf{bipartite graph property} is a function
\begin{eqnarray}\label{examplebipartiteprop}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
which respects bipartite isomorphisms. If this function is monotone increasing, it
determines a simplicial complex $\Delta_f$ whose vertices are elements of the
set~(\ref{bipartiteedgepairs}).
Naturally, we say that the bipartite graph property~(\ref{examplebipartiteprop}) is
evasive if its decision-tree complexity $D(f)$ is equal to $\left| Y \right| \cdot \left|
Z \right|$. The following proposition can be proved by the same method that we used to
prove Theorem~\ref{collapsibilitytheorem}.
\begin{proposition}
Let $Y$ and $Z$ be disjoint finite sets, and let
\begin{eqnarray}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
be a monotone-increasing bipartite graph property which is not evasive. If the complex
$\Delta_f$ is not empty, then it is collapsible.
\end{proposition}
Note that the complex $\Delta_f$ always has a group action,
\begin{eqnarray}
\left( \Sym ( Y ) \times \Sym (Z ) \right) \circlearrowleft \Delta_f.
\end{eqnarray}
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Yao \cite{yao1988}]
Let $Y$ and $Z$ be disjoint finite sets, and let
\begin{eqnarray}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial bipartite graph property which is monotone increasing. Then, $f$ is
evasive.
\end{theorem}}
\begin{proof}
Let $\sigma \colon Y \to Y$ be a cyclic permutation of the elements of $Y$, and let $G
\subseteq \Sym ( Y )$ be the subgroup generated by $\sigma$. The edge set of any
$G$-invariant bipartite graph has the form
\begin{eqnarray}
H_S := \left\{ \{ y, z \} \mid y \in Y, z \in S \right\}
\end{eqnarray}
where $S$ is a subset of $Z$ (see Figure~\ref{invariantgraphfig}). Since $f$ is
isomorphism-invariant and monotone-increasing, the behavior of $f$ on such graphs can be
easily described: there is some integer $k \in \{ 1, 2, \ldots, \left| Z \right| \}$ such
that
\begin{eqnarray}
( V , H_S ) \textnormal{ has property $f$} \Longleftrightarrow \left| S \right| > k.
\end{eqnarray}
\begin{figure}[!b]
\centerline{\includegraphics{f5-2}}
\fcaption{An example of a set $H_S$.\label{invariantgraphfig}}
\end{figure}
Let $\Delta = \Delta_f$. The vertices of $\Delta^{[G]}$ are the sets of the form
\begin{eqnarray}
H_z := \left\{ \{ y, z \} \mid y \in Y \right\}\!,
\end{eqnarray}
with $z \in Z$, and the simplices are precisely the subsets of $\{ H_z \mid z \in Z \}$
whose union forms a graph that does not have property $f$. Thus we can calculate the
Euler characteristic directly:
\begin{eqnarray}
\chi ( \Delta^{[G]} )
& = & \sum_{j = 0}^{k-1} (-1)^j \binom{\left| Z \right|}{ j + 1} \\
& = & 1 + (-1)^{k-1} \binom{ \left| Z \right| - 1}{k}.
\end{eqnarray}
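The second equality uses the standard partial alternating-sum identity (stated here
without proof):
\begin{eqnarray*}
\sum_{i=0}^{k} (-1)^i \binom{n}{i} = (-1)^k \binom{n-1}{k},
\end{eqnarray*}
applied with $n = \left| Z \right|$, after rewriting $\sum_{j=0}^{k-1} (-1)^j
\binom{n}{j+1} = 1 - \sum_{i=0}^{k} (-1)^i \binom{n}{i}$.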
Suppose that $f$ is nonevasive. Then $\Delta$ is collapsible and by Theorem~\ref{fpt},
\begin{eqnarray}
\chi ( \Delta^{[G]} ) & = & 1.
\end{eqnarray}
But this is possible only if $k = \left| Z \right|$ and $f$ is trivial.
\end{proof}
\section{A General Lower Bound}
Now we prove a lower bound on decision-tree complexity which applies to graphs of
arbitrary size. Our method of proof is based on \cite{kss1984}.
\begin{proposition}\label{generallowerbound}
Let $V$ be a finite set and let
\begin{eqnarray}
h \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. Let $p$ be the largest prime that is
less than or equal to $\left| V \right|$. Then,
\begin{eqnarray}
D ( h ) \geq \frac{p^2}{4}.
\end{eqnarray}
\end{proposition}
\begin{proof}
Assume that $\left| V \right| = n$. For any $r, s \geq 0$, let us write $K_r$ for the
complete graph on $\{ 1, 2, \ldots, r \}$, and let us write $K_{r, s}$ for the complete
bipartite graph on the sets $\{ 1, 2, \ldots, r \}$ and $\{ r+1 , \dots , r + s \}$. For
any two graphs $H = (V, E)$ and $H' = (V' , E' )$, let us abuse notation slightly and
write $H \cup H'$ for the graph $(V \cup V', E \cup E')$.
For any $k \geq 1$, let $C_k$ denote the least decision-tree complexity that occurs for
nontrivial monotone-increasing graph properties on graphs of size $k$. We prove a lower
bound for $D ( h )$ in three cases.
\medskip\textbf{Case 1:} $\mathbf{h ( K_{1,n-1} ) = 0}.$ In this case, the function $h$
induces a nontrivial graph property $h'$ on the vertex set $\{ 2, 3, \ldots, n \}$, given
by
\begin{eqnarray}
h' ( P ) & = & h ( P \cup K_{1, n-1} ).
\end{eqnarray}
This function has decision-tree complexity at
least $C_{n-1}$, and therefore $D ( h ) \geq C_{n-1}$.
\medskip
\textbf{Case 2:} $\mathbf{h ( K_{n-1} ) = 1.}$ In this case the function $h$ induces a
graph property $h'$ on the vertex set $\{ 2, 3, \ldots, n \}$ given by
\begin{eqnarray}
h' ( P ) & = & h ( P \cup K_1 ),
\end{eqnarray}
which is likewise nontrivial. This function has decision-tree complexity at least
$C_{n-1}$, and so $D ( h ) \geq C_{n-1}$.
\medskip
\textbf{Case 3:} $\mathbf{h ( K_{1, n-1} ) = 1}$ \textbf{and} $\mathbf{h ( K_{n-1} ) =
0}$. Let $m = \lfloor n/2 \rfloor$. The property $h$ induces a bipartite graph property
on the sets $\{1, 2, \ldots, m \}$ and $\{ m+1, m+2, \ldots, n \}$ defined by
\begin{eqnarray}
h' ( P ) = h ( P \cup K_m ).
\end{eqnarray}
Since $h ( K_m ) \leq h ( K_{n-1} ) = 0$ and $h ( K_m \cup K_{m,n-m} ) \geq h ( K_{1,
n-1} ) = 1$, the property $h'$ is nontrivial. Therefore it has decision-tree complexity
at least $m (n-m)$. The decision-tree complexity of $h$ is likewise bounded below by
$m ( n-m ) \geq (n-1)^2/4$.
\medskip
In all cases, we have
\begin{eqnarray}
D ( h ) \geq \min \left\{ C_{n-1}, \frac{(n-1)^2}{4} \right\}\!.
\end{eqnarray}
The same reasoning shows that
\begin{eqnarray}
C_k \geq \min \left\{ C_{k-1} , \frac{(k-1)^2}{4} \right\}
\end{eqnarray}
for every $k \in \{ p+1, p+2 , \ldots, n-1 \}$. Therefore
by induction,
\begin{eqnarray}
D ( h ) \geq \min \left\{ C_p , \frac{p^2}{4} \right\}\!.
\end{eqnarray}
By the theorem of \textit{\nameref{mainresultsection}}, the quantity $C_p$ is
$\binom{p}{2} \geq p^2 / 4$, and the desired result follows.
\end{proof}
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Kahn et~al. \cite{kss1984}]
\label{lowerboundtheorem} Let $C_n$ denote the least decision-tree complexity that occurs
among all nontrivial monotone-increasing graph properties of order $n$. Then,
\begin{eqnarray}
C_n \geq \frac{n^2}{4} - o ( n^2 ).
\end{eqnarray}
\end{theorem}}
\begin{proof}
By the prime number theorem, there is a function $z ( n ) = o ( n )$ such that for any
$n$, the interval $[n - z(n), n]$ contains a prime.\footnote{The prime number theorem
\cite{zagier1997} asserts that if $\pi( n )$ denotes the number of primes less than or
equal to $n$, then $\lim_{n \to \infty} \pi ( n ) \left( n / \ln n \right)^{-1} = 1$. If
there were an infinite number of linearly sized gaps between the primes, this limit could
not exist.} By Proposition~\ref{generallowerbound},
\begin{eqnarray}
C_n & \geq & \frac{(n - z(n))^2}{4} \\
& \geq & \frac{n^2}{4} - o ( n^2 ),
\end{eqnarray}
as desired.
\end{proof}
\section{A Survey of Related Results}
Much work on the decision-tree complexity of graph properties has followed the papers of
Kahn, Saks, Sturtevant, and Yao. We briefly sketch some of the newer results in this
area.
V. King proved a lower bound for properties of \textbf{directed} graphs.
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[King \cite{king1990}]
Let $C'_n$ denote the least decision-tree complexity
that occurs among all nontrivial monotone
\textbf{directed} graph properties of order $n$. Then,
\begin{eqnarray}
C'_n \geq \frac{n^2}{2} - o ( n^2 ).
\end{eqnarray}
\end{theorem}
\noindent Triesch \cite{triesch1994, triesch1996} proved multiple results about the
evasiveness of particular subclasses of monotone graph properties.
Korneffel and Triesch improved on the asymptotic bound of
\hbox{Theorem}~\ref{lowerboundtheorem} by using a different group action on the set of
vertices. Let $V$ be a set of size $n$, and let $p$ be a prime that is close to $\left(
\frac{2}{5} \right) n$. Break the set $V$ up into disjoint subsets $V_1$, $V_2$, and
$V_3$, with $\left| V_1 \right| = \left| V_2 \right| = p$ and $\left| V_3 \right| = n -
2p$. Let $\mathbf{P}$ be the class of tripartite graphs on $(V_1, V_2, V_3)$ which,
when taken together with the complete graphs on the sets $V_i$, do not satisfy property
$h$. The abelian group
\begin{eqnarray}
G = \mathbb{Z} / p \mathbb{Z} \times \mathbb{Z} / p \mathbb{Z}
\times \mathbb{Z} / (n - 2p ) \mathbb{Z}
\end{eqnarray}
acts on the class $\mathbf{P}$ by cyclically permuting the elements of $V_1$, $V_2$,
and~$V_3$. From this action and some other arguments, the authors are able to prove the
following.
\begin{theorem}[Korneffel and Triesch \cite{kt2010}]
Let $C_n$ denote the least decision-tree complexity that occurs among all nontrivial
monotone-increasing graph properties of order~$n$. Then,
\begin{eqnarray}
C_n \geq \frac{8 n^2}{25} - o ( n^2 ).
\end{eqnarray}
\end{theorem}
The work of Chakrabarti et~al. \cite{cks2002} considers the
\textbf{subgraph containment property}. For any finite graph $X$, let $h_{X,n}$ denote
the graph property for graphs of size $n$ which assigns a value of $1$ to a graph if and
only if it contains a subgraph isomorphic to $X$. This property is studied using another
group action. For appropriate values of $n$, the vertex set $V$ can be partitioned into
sets $V_1, \ldots , V_m$, where $\left| V_i \right| = q^{\alpha_i}$ for some prime power
$q$ which is greater than or equal to the number of vertices in $X$. Choose isomorphisms
$V_i \cong \mathbb{F}_{q^{\alpha_i}}$. Let $G$ be the group of permutations of $V$ that
is generated by the group $\mathbb{F}_{q^{\alpha_1}}^+ \times \ldots \times
\mathbb{F}_{q^{\alpha_m}}^+$ (acting on the sets $V_1 , \ldots , V_m$ in a component-wise
manner) and the group $\mathbb{F}_q^*$ (acting simultaneously on all the sets
$V_i$). If $h_X$ were nonevasive, then there would exist nontrivial $G$-invariant graphs
which do not satisfy $h_X$. Such graphs would have a uniform structure and would
correspond simply to graphs on the set $\{ 1, 2, \ldots, m \}$.
With this reduction the authors are able to prove that $h_{X,n}$ is evasive for all $n$
within a set of positive density. In general, the following asymptotic bound holds:
\begin{eqnarray}
D ( h_{X,n} ) \geq \frac{n^2}{2} - O ( n ).
\end{eqnarray}
This approach was further developed by Babai et al. \cite{bbkk2010}, who proved that
$h_{X,n}$ is evasive for almost all $n$, and that
\begin{eqnarray}
D ( h_{X, n } ) \geq \binom{n}{2} - O ( 1 ).
\end{eqnarray}
As one can observe from recent papers on evasiveness, advances in the strength of results
are paralleled by substantial increases in the difficulty of the proofs! The
increase in difficulty has become fairly steep at this point. Perhaps a new basic
insight, like the one in \cite{kss1984}, will be necessary to proceed further toward the
Karp conjecture.
\input xy
\xyoption{all}
\begin{document}
\pagenumbering{roman}
\isbn{978-1-60198-664-1}
\DOI{10.1561/0400000055}
\abstract{Many graph properties (e.g., connectedness, containing a complete
\hbox{subgraph}) are known to be difficult to check. In a decision-tree model, the cost
of an algorithm is measured by the number of edges in the graph that it queries. R. Karp
conjectured in the early 1970s that all monotone graph properties are evasive---that is,
any algorithm which computes a monotone graph property must check all edges in the worst
case. This conjecture is unproven, but a lot of progress has been made. Starting with the
work of Kahn, Saks, and Sturtevant in 1984, topological methods have been applied to
prove partial results on the Karp conjecture. This text is a tutorial on these
topological \hbox{methods}. I give a fully self-contained account of the central proofs
from the paper of Kahn, Saks, and Sturtevant, with no prior knowledge of topology
assumed. I also briefly survey some of the more recent results on \hbox{evasiveness}.}
\articletitle{Evasiveness of Graph Properties and Topological Fixed-Point Theorems}
\authorname1{Carl A. Miller}
\affiliation1{University of Michigan}
\author1address2ndline{Department of Electrical Engineering and Computer Science, 2260 Hayward St.}
\author1city{Ann Arbor}
\author1zip{MI 48109-2121}
\author1country{USA}
\author1email{[email protected]}
\journal{tcs}
\volume{7}
\issue{4}
\copyrightowner{C. A. Miller}
\pubyear{2011}
\maketitle
\setcounter{page}{1}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{conj}[theorem]{Conjecture}
\def\mb{\mathbb}
\def\mf{\mathbf}
\def\im{\textnormal{im }}
\def\ker{\textnormal{ker }}
\def\sign{\textnormal{sign }}
\def\Tr{\textnormal{Tr}}
\def\mod{\textnormal{mod }}
\def\bar{\textnormal{bar}}
\def\weight{\textnormal{weight}}
\def\Perm{\textnormal{Perm}}
\def\Pow{\textnormal{Pow}}
\def\Sym{\textnormal{Sym}}
\def\coker{\textnormal{coker }}
\chapter{Introduction}\label{chap1}
Let $V$ be a finite set of size $n$, and let $\mathbf{G} ( V )$ denote the set of
undirected graphs on $V$. For our purposes, a \textbf{graph property} is simply a
function
\begin{eqnarray}
f \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
which is such that whenever two graphs $Z$ and $Z'$ are isomorphic, $f ( Z ) = f ( Z' )$.
A graph $Z$ ``has property $f$'' if $f ( Z ) = 1$.
We can measure the cost of an algorithm for computing $f$ by counting the number of
edge-queries that it makes. We assume that these edge-queries are adaptive (i.e., the
choice of query may depend on the outcomes of previous queries). An algorithm for $f$ can
thus be represented by a binary decision-tree (see Figure~\ref{dectreefigure}). The
\textbf{decision-tree complexity of $f$}, which we denote by $D ( f )$, is the least
possible depth for a decision-tree that computes $f$. In other words, $D ( f )$ is the
number of edge-queries that an optimal algorithm for $f$ has to make in the worst case.
\begin{figure}[!t]
\centerline{\includegraphics{f1-1}}
\fcaption{A binary decision tree.\label{dectreefigure}}
\end{figure}
Some graph properties are difficult to compute. For example, let $h( Z ) = 1$ if and
only if $Z$ contains a cycle. Suppose that an algorithm for $h$ makes queries to an
adversary whose goal is to maximize cost. The adversary can adaptively construct a graph
$Y$ to foil the \hbox{algorithm}:~each time a pair $( i, j ) \in V \times V$ is queried,
the adversary answers ``yes,'' unless the inclusion of that edge would necessarily make
the graph $Y$ have a cycle, in which case he answers ``no.'' After $\binom{n}{2} - 1$
edge-queries by the algorithm have been made, the known edges will form a tree on the
elements of $V$. The algorithm at this point will have no choice but to query the last
unknown edge to determine whether or not a cycle exists. We conclude from this argument
that $h$ is a graph property that has the maximal decision-tree complexity
$\binom{n}{2}$. Such properties are called \textbf{evasive}.
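The adversary strategy is easy to mechanize. The following Python sketch (my own
illustration; it is not part of the original argument) plays the adversary with a
union-find structure, answering ``yes'' to a queried pair unless admitting that edge
would close a cycle among the edges already admitted:
\begin{verbatim}
class CycleAdversary:
    """Answers edge queries so as to postpone the existence of a cycle."""

    def __init__(self, n):
        self.parent = list(range(n))    # union-find over the n vertices

    def _find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def query(self, i, j):
        """True ("yes, edge present") unless {i, j} would close a cycle."""
        ri, rj = self._find(i), self._find(j)
        if ri == rj:            # i and j already connected: the edge would
            return False        # complete a cycle, so the adversary says "no"
        self.parent[ri] = rj    # otherwise admit the edge (merge components)
        return True

adv = CycleAdversary(5)
answers = {(i, j): adv.query(i, j)
           for i in range(5) for j in range(i + 1, 5)}
assert sum(answers.values()) == 4   # exactly n - 1 = 4 "yes" answers: a tree
\end{verbatim}
Since each ``yes'' merges two components, the adversary answers ``yes'' exactly $n - 1$
times, and once every pair has been probed the admitted edges form a spanning tree.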
A graph property is \textbf{monotone} if it is either always preserved by the addition of
edges (monotone-increasing) or always preserved by the deletion of edges
(monotone-decreasing). In 1973 the following conjecture was made \cite{rosenberg1973}.
\begin{conj}[The Karp Conjecture]
All nontrivial monotone graph properties are evasive.
\end{conj}
\noindent To date, this conjecture is unproven and no counterexamples are known. However
in 1984, a seminal paper was published by Kahn et~al.~\cite{kss1984} which proved the conjecture in some cases. This
paper showed that evasiveness can be established through the use of topological
fixed-point theorems. It has been followed by many more papers which exploited its
method to prove better results.
This text is a tutorial on the topological method of~\cite{kss1984}. My goal is to
provide background on the problem and to take the reader through all of the necessary
proofs. Let us begin with some history.
\section{Background}
Research on the decision-tree complexity of graph properties---including properties for
both directed and undirected graphs---dates back at least to the early 1970s
\cite{bbl1974,bollobas1976,hr1972,ht1974,kirkpatrick1974,mw1975,rosenberg1973}. Proofs
were given in early papers that certain specific graph properties are evasive (e.g.,
connectedness, containment of a complete subgraph of fixed size), and that other
properties at least have decision-tree complexity $\Omega (n^2)$. Although it was known
that there are graph properties whose decision-tree complexity is not $\Omega (n^2)$ (see
Example~18 in \cite{bbl1974}), Aanderaa and Rosenberg conjectured that all
\textbf{monotone} graph properties have decision-tree complexity $\Omega (n^2)$
\cite{rosenberg1973}. This conjecture was proved by Rivest and Vuillemin \cite{rv1976}
who showed that all monotone graph properties satisfy $D (f) \geq n^2/16$. Kleitman and
Kwiatkowski \cite{kk1980} improved this bound to $D (f) \geq n^2/9$.
Underlying some of these proofs is the insight that if a graph property $f$ has
nonmaximal decision-tree complexity, then the collection of graphs that satisfy $f$ have
some special structure. For example, if $f$ is not evasive, then in the set of graphs
satisfying $f$ there must be an equal number of graphs having an odd number of edges and
an even number of edges. Rivest and Vuillemin \cite{rv1976} used the fact that if $f$ has
decision-tree complexity $\binom{n}{2} - k$, then the weight enumerator of $f$ (i.e., the
polynomial $\sum_j c_j t^j$, where $c_j$ is the number of $f$-graphs containing~$j$
edges) must be divisible by $(1 + t)^k$.
A topological method for the evasiveness problem was introduced in~\cite{kss1984}.
Suppose that $h$ is a monotone-increasing graph property on a vertex set $\{ 0, 1,
\ldots, n-1 \}$. Let $T$ be the collection of all graphs that do \textit{not} satisfy
$h$. The set $T$ has the property that if $G$ is in $T$, then all of its subgraphs are
in $T$. This is a close analogy to the property which defines simplicial complexes in
topology. Let $\{ x_{ab} \mid 0 \leq a < b < n\}$ be a labeled collection of linearly
independent vectors in some vector space $\mathbb{R}^N$. Each graph in $T$ determines a
simplex in $\mathbb{R}^N$: one takes the convex hull of the vectors $x_{ab}$
corresponding to the edges $\{ a, b \}$ that are in the graph. The union of these hulls
forms a simplicial complex, $\Gamma_h$. The complex for ``connectedness'' on
four vertices (represented in three dimensions) is shown in
Figure~\ref{connectednessfigure}.
\begin{figure}[!t]
\centerline{\includegraphics{f1-2}}
\fcaption{The simplicial complex for ``connectedness'' on four
vertices.\label{connectednessfigure}}
\end{figure}
A fundamental insight of \cite{kss1984} is that nonevasiveness can be translated to a
topological condition. If $h$ is not evasive, then $\Gamma_h$ has a certain topological
property called \textbf{collapsibility}. This property, which we will define formally
later in this text, essentially means that $\Gamma_h$ can be folded into itself and
contracted to a single point. This property implies the even--odd weight-balance
condition mentioned above, but it is stronger. In particular, it allows for the
application of topological fixed-point theorems.
The following theorem is attributed to R.~Oliver.
\begin{theorem}[Oliver \cite{oliver1975}]\label{fptquote}
Let $\Gamma$ be a collapsible simplicial complex. Let $G$ be a finite group which
satisfies the following condition:
\begin{itemize}
\item[(*)] There is a normal subgroup $G' \subseteq G$, whose
size is a power of a prime, such that $G / G'$ is cyclic.
\end{itemize}
Then, any action of $G$ on $\Gamma$ has a fixed point.
\end{theorem}
When $\Gamma = \Gamma_h$, the fixed points of $G$ correspond to graphs, and this theorem
essentially forces the existence of certain graphs that do not satisfy $h$. This theorem
is the basis for the following result of \cite{kss1984}:
\begin{theorem}[Kahn et~al.~\cite{kss1984}]\label{kss1quote}
Let $f$ be a monotone graph property on graphs of size $p^k$, where $p$ is prime. If $f$
is not evasive, then it must be trivial.
\end{theorem}
\noindent The proof of this theorem essentially proceeds by exhibiting an appropriate
action of a group $G$ on the set of graphs of order $p^k$ such that the only $G$-invariant
graphs are the empty graph and the complete graph.
Thus evasiveness is known for all values of $n$ that are prime powers. What about other
values of $n$? One could hope that if the decision-tree complexity is always
$\binom{p}{2}$ when the vertex set is size $p$, then the quantity $\binom{p}{2}$ is a
lower bound for the cases $p+1$, $p+2$, and so forth. Unfortunately there is no known
way to show this. However, all is not lost. The following general theorem is also proved
in \cite{kss1984}.
\begin{theorem}[Kahn et~al. \cite{kss1984}]\label{kss2quote}
Let $f$ be a nontrivial monotone graph property of order $n$. Then,
\begin{eqnarray}
D ( f ) \geq \frac{n^2}{4} - o ( n^2 ).
\end{eqnarray}
\end{theorem}
\noindent The paper \cite{kss1984} was then followed by several other papers on
evasiveness by other authors who used the topological approach to prove new results on
evasiveness \cite{bbkk2010,cks2002,king1990,kt2010,triesch1994,triesch1996,yao1988}. Some
of these papers found new group actions $G \circlearrowleft \Delta_h$ to exploit in the
nonprime cases.
The target results of this exposition are Theorems~\ref{kss1quote} and \ref{kss2quote}, and a theorem by Yao on evasiveness of bipartite graphs
\cite{yao1988}. Now let us summarize what we need to do in order to get there.
\section{Outline of Text}
My goal in this exposition is to give a reader who does not know \hbox{algebraic}
topology a complete tutorial on topological proofs of evasiveness. Therefore, a fair
amount of space will be devoted to building up concepts from algebraic topology. I have
tended to be economical in my \hbox{discussions} and to develop concepts only on an
as-needed basis. Readers who wish to learn more algebraic topology after this exposition
may want to consult good references such as \cite{hatcher2002,munkres}.
We begin, in \textit{\nameref{basicconceptschapter}}, by formalizing the class of
simplicial complexes and its relation to the class of graph properties. While
we have presented a simplicial complex in this introduction as a subset of
$\mathbb{R}^n$, it can also be defined simply as a collection of finite sets. (This is
the notion of an \textbf{abstract simplicial complex}.) Although the definition in terms
of subsets of $\mathbb{R}^n$ is helpful for intuition, the definition in terms of finite
sets is the one we will use in all proofs.
A critical construction in this monograph is the set of \textbf{homology
groups} of a simplicial complex. These groups are algebraic objects which measure the
shape of the complex, and also~--- crucially for our purposes~--- help us understand the
behavior of the complex under automorphisms. \textit{\nameref{chaincomplexchapter}}
defines homology groups and provides some of the standard theory for them.
In \textit{\nameref{fptchapter}} we prove some topological results. The first is the
Lefschetz fixed-point theorem. One way to state this theorem is to say that any
automorphism of a collapsible simplicial complex has a fixed point. However we instead
prove a theorem which applies to the more general class of
\textbf{$\mathbb{F}_p$-acyclic} complexes. A simplicial complex is
$\mathbb{F}_p$-acyclic if its homology groups (over $\mathbb{F}_p$) are trivial. When a
simplicial complex is $\mathbb{F}_p$-acyclic it behaves much like a collapsible complex
(and in particular, any automorphism has a fixed point). Finally, we prove a version of
Theorem~\ref{fptquote}. The proof of the theorem depends on finding a tower of subgroups
\begin{eqnarray}
\{ 0 \} = G_0 \subset G_1 \subset G_2 \subset \cdots \subset G_n = G,
\end{eqnarray}
where each quotient $G_i / G_{i-1}$ is cyclic, and performing an inductive argument.
\textit{\nameref{resultschapter}} proves \hbox{Theorem}~\ref{kss1quote}, a
\hbox{bipartite} result of Yao \cite{yao1988}, and Theorem~\ref{kss2quote}. We
conclude with an informal discussion of a few of the more recent results
on decision-tree complexity of graph properties
\cite{bbkk2010,cks2002,king1990,kt2010,triesch1994,triesch1996}.
My primary sources for this exposition were
\cite{duandko,kss1984,munkres,smith1941,yao1988}. A particular debt is owed to Du
and Ko~\cite{duandko}, which was my first introduction to the subject.
\section{Related Topics}
I will briefly mention two alternative lines of research that are related to the one I
cover here. One can change the measure of complexity that one is using to measure graph
properties, and this leads to new problems requiring different methods. A natural
variant is the \textbf{randomized decision-tree complexity.} Suppose that in our
decision-tree model, our algorithm is permitted to make random choices at each step about
which edges to check. We define the cost of the algorithm on a particular input graph to
be the \textit{expected} number of edge queries, and the cost of the algorithm as a whole
to be the maximum of this quantity over all input graphs. The minimum of this quantity
over all algorithms is the randomized decision-tree complexity, $R ( f )$.
There is a line of research studying the randomized decision-tree complexity of monotone
graph properties
\cite{ck2007,fkw2002,groger1992,hajnal1991,king1991,odonnel2005,yao1991}. While it is
easy to see that $R ( f )$ can be less than $\binom{n}{2}$, there are graph properties
for which $R ( f )$ is provably $\Omega ( n^2 )$ (such as the ``emptiness
property''---the property that the graph contains no edges). It is conjectured that $R (
f )$ is always $\Omega ( n^2 )$ for monotone graph properties, just as in the
deterministic model. The best proved lower bound \cite{ck2007, hajnal1991} is $\Omega (
n^{4/3} \left( \log n \right)^{1/3})$.
Another variant of decision-tree complexity is \textbf{bounded-error quantum query
complexity}. A quantum query algorithm for a graph property uses a quantum ``oracle'' in
its computation. The oracle accepts a quantum state which is a superposition of
edge-queries to a graph, and it returns a quantum state which encodes the answers to
those queries. The algorithm is permitted to use this oracle along with arbitrary
quantum operations to determine its result. The algorithm is permitted to make errors,
but the likelihood of an error must be below a fixed bound on all inputs.
(See~\cite{bcwz1999}.)
In the quantum case it is clear that a lower bound of $\Omega ( n^2 )$ does not hold:
Grover's algorithm~\cite{ambainis2004} can search a space of size $N$ in time $\Theta (
\sqrt{N} )$ using an oracle model. With a modified version of Grover's algorithm, one
can compute the emptiness property in time $\Theta ( n )$. There are a number of other
monotone properties for which the quantum query complexity is known to be $o ( n^2 )$
(see \cite{ck2010} for a good summary on this topic). It is conjectured that all monotone
graph properties have quantum query complexity $\Omega ( n )$. The best proved lower
bound is $\Omega ( n^{2/3} )$, from an unpublished result attributed to Santha and Yao
\hbox{(see \cite{syz2004})}.
\section{Further Reading}
Other expositions about topological proofs of evasiveness can be found in \cite{duandko}
(in the context of computational complexity theory) and \cite{kozlov2008} (in the context
of algebraic topology), and also in Lov\'{a}sz's lecture notes \cite{ly2002}. A reader who
wishes to learn more about algebraic topology can consult \cite{munkres}, or, for a more
advanced treatment, \cite{hatcher2002}. For the particular subject of the topology of
complexes arising from graphs, there is an extensive treatment \cite{jonsson2008}, which
builds further on many of the concepts that I will discuss here. And finally, for
readers who generally enjoy reading about applications of topology to problems in
discrete mathematics, the excellent book \cite{matousek2008} contains more material of
the same flavor. It involves applications of a different topological result (the
Borsuk--Ulam theorem) to some problems in elementary mathematics.
\chapter{Basic Concepts}\label{basicconceptschapter}
\section{Graph Properties}\label{graphpropsection}
This part of the text covers some preliminary material. We begin by formalizing some
basic terminology for finite graphs.
For our purposes, a \textbf{finite graph} is an ordered pair of sets $(V, E)$, in
which $V$ (the \textbf{vertex set}) is a finite set, and $E$ (the \textbf{edge set}) is
a set of $2$-element subsets of $V$. For example, the pair
\begin{eqnarray}\label{graphoforder4}
\left( \left\{ 0, 1, 2, 3 \right\} , \left\{ \{ 0, 1 \} ,
\{ 0, 2 \} , \{ 1, 2 \}, \{ 2, 3 \} \right\} \right)
\end{eqnarray}
is a finite graph with four vertices, diagrammed in Figure~\ref{4graphfigure}.
\begin{figure}
\centerline{\includegraphics{f2-1}}
\fcaption{A graph on four vertices.\label{4graphfigure}}
\end{figure}
An \textbf{isomorphism} between two finite graphs is a one-to-one correspondence between
the vertices of the two graphs which matches up their edges. In precise terms, if $G =
(V, E)$ and $G' = ( V' , E' )$ are two graphs, then an isomorphism between $G$ and $G'$
is a bijective function $f : V \to V'$ which is such that the set
\begin{eqnarray}
\left\{ \{ f ( v ) , f ( w ) \} \mid
\{ v , w \} \in E \right\}
\end{eqnarray}
is equal to $E'$. For example, the graph in Figure~\ref{4graphfigure} is
isomorphic to the graph in Figure~\ref{4graphaltfigure} under the map $f \colon \{ 0, 1,
2, 3 \} \to \{ 0, 1, 2, 3 \}$ defined~by
\begin{eqnarray}
f( 0 ) = 1 &\quad & f ( 1 ) = 2 \\
f( 2 ) = 3 &\quad & f ( 3 ) = 0.
\end{eqnarray}
\begin{figure}[!b]
\centerline{\includegraphics{f2-2}}
\fcaption{A graph that is isomorphic to the graph in
Figure~\ref{4graphfigure}.\label{4graphaltfigure}}
\end{figure}
We can now formalize the notion of a graph property. Briefly
stated, a graph property is a function on graphs which is
compatible with graph isomorphisms.
Let
$V_0$ be a finite set, and let $\mathbf{G} \left( V_0 \right)$
denote the set of all graphs that have $V_0$ as their vertex set.
Then a function
\begin{eqnarray}
h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1 \}
\end{eqnarray}
is a \textbf{graph property} (over $V_0$) if all pairs $(G, G')$ of isomorphic graphs
in $\mathbf{G} \left( V_0 \right)$ satisfy $h ( G ) =
h ( G' )$.
\begin{figure}[!b]
\centerline{\includegraphics{f2-3}}
\fcaption{Two graphs of size $3$.\label{twographsfigure}}
\end{figure}
For example, consider the graphs in Figure~\ref{twographsfigure}, which are
members of $\mathbf{G} \left( \{ 0, 1, 2 \} \right)$. Then the function
\begin{eqnarray}
h_1 \colon \mathbf{G} \left( \{ 0, 1, 2 \} \right) \to \{ 0, 1 \}
\end{eqnarray}
defined by
\begin{eqnarray}
h_1 ( G ) = \left\{ \begin{array}{@{}l@{\quad}l@{}} 1 & \textnormal{if } G = G_1 \\
0 & \textnormal{if } G \neq G_1
\end{array} \right.
\end{eqnarray}
is a graph property. However, the function $h_2$ defined by
\begin{eqnarray}
h_2 ( G ) = \left\{ \begin{array}{@{}l@{\quad}l@{}} 1 & \textnormal{ if } G = G_2 \\
0 & \textnormal{ if } G \neq G_2
\end{array} \right.
\end{eqnarray}
is {\it not} a graph property, since there exist graphs in $\mathbf{G} \left( \{ 0, 1, 2
\} \right)$ which are isomorphic to $G_2$ but not equal to $G_2$.
If $G, G' \in \mathbf{G} \left( V_0 \right)$ are graphs such that the edge set of $G'$ is
a subset of the edge set of $G$, then we say that $G'$ is a \textbf{subgraph} of $G$.
Note that this relationship gives us a partial ordering on the set $\mathbf{G} \left( V_0
\right)$. Let us say that a function $h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1
\}$ is \textbf{monotone increasing} if it respects this ordering. In other words, $h$ is
monotone increasing if it satisfies $h ( G' ) \leq h ( G )$ for all pairs $(G', G)$ such
that $G'$ is a subgraph of $G$. Likewise, we say that the function $h$ is
\textbf{monotone decreasing} if it satisfies $h ( G' ) \geq h ( G)$ whenever $G'$ is a
subgraph of $G$.
If $h \colon \mathbf{G} \left( V_0 \right) \to \{ 0, 1 \}$ is a function, then a
\textbf{decision tree} for $h$ is a step-by-step procedure for computing the
value of $h$. An example is the decision tree in Figure~\ref{decisiontreeexample}, which
computes the value of the function $h_2$ defined above. The diagram in
Figure~\ref{decisiontreeexample} describes an algorithm for computing $h_2$. Each node
in the tree specifies an ``edge-query'', and each branch in the tree specifies how the
algorithm responds to the results of the edge query. For example, suppose that we wish to
apply the algorithm to compute the value of $h_2$ on the graph $G_1$ (from
Figure~\ref{twographsfigure}, above). The algorithm would first query the edge $\{ 0, 1 \}$,
and it would find that this edge \textit{is} contained in $G_1$. It would then follow
the ``Y'' branch from $\{ 0, 1 \}$, and query the edge $\{ 1, 2 \}$. It would then
follow the ``Y'' branch from $\{ 1, 2 \}$, and determine that the value of $h_2$ is zero.
\begin{figure}[!t]
\centerline{\includegraphics{f2-4}}
\fcaption{A decision tree for the graph property $h_2$.\label{decisiontreeexample}}
\end{figure}
The \textbf{decision-tree complexity} of a function $h \colon \mathbf{G} \left( V_0
\right) \to \{ 0, 1 \}$ is the smallest possible depth for a decision-tree which
correctly computes~$h$. We denote this quantity by $D(h)$. For example, the depth of the
decision-tree in Figure~\ref{decisiontreeexample} is $3$. It can be shown that any
decision-tree that computes $h_2$ must have depth at least $3$. Therefore, $D ( h_2 ) =
3$.
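For functions at this scale, $D ( h )$ can be computed directly from the minimax
recursion: $D ( h ) = 0$ when $h$ is constant, and otherwise $D ( h )$ equals the
minimum, over the unqueried edges $e$, of one plus the larger of the two subtree depths.
The Python sketch below is my own illustration; the particular shape of $G_2$ is an
assumption read off from the figure, although the answer is $3$ for the indicator of any
single graph:
\begin{verbatim}
from itertools import product

EDGES = [(0, 1), (0, 2), (1, 2)]      # the possible edges on {0, 1, 2}
G2 = {(0, 1), (1, 2)}                 # assumed shape of G_2 (a path)

def h2(present):
    """Indicator of the single graph G_2."""
    return 1 if set(present) == G2 else 0

def D(f, fixed):
    """Optimal decision-tree depth for f, given answered queries fixed."""
    free = [e for e in EDGES if e not in fixed]
    values = set()
    for bits in product([0, 1], repeat=len(free)):
        full = {**fixed, **dict(zip(free, bits))}
        values.add(f([e for e, b in full.items() if b]))
    if len(values) == 1:              # value already determined: done
        return 0
    return min(1 + max(D(f, {**fixed, e: 0}), D(f, {**fixed, e: 1}))
               for e in free)

assert D(h2, {}) == 3                 # h_2 requires all three queries
\end{verbatim}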
It is easy to prove that for any function $h \colon \mathbf{G} \left( V_0 \right) \to \{
0, 1 \}$, the inequality
\begin{eqnarray}
D ( h ) \leq \binom{ \left| V_0 \right|}{2}
\end{eqnarray}
is satisfied. If the function $h$ satisfies
\begin{eqnarray}
D( h ) = \binom{ \left| V_0 \right|}{2}\!,
\end{eqnarray}
then we will say that the function $h$ is {\bf evasive}. Evasive functions are the
functions that are the most difficult to compute via a decision-tree.\footnote{The
concepts of ``decision-tree complexity'' and ``evasiveness'' can be defined for any
Boolean function. See \cite{duandko} for a more detailed
treatment.}
\section{Simplicial \hbox{Complexes}}\label{simplicialcomplexsection}
Now we give a brief introduction to the notion of a simplicial complex. We draw
on~\cite{munkres} for definitions and terminology.
There are at least two natural ways of defining simplicial complexes---one is as a
collection of finite sets, and another is as a collection of subsets of $\mathbb{R}^n$.
The first definition is the easiest to work with (and it will be the one we use the most
in this monograph). But the second definition is also important because
it provides some indispensable geometric intuition. We will begin by building up the
second definition.
\begin{definition}
Let $N$ and $n$ be positive integers, with $n \leq N$. Let $\mf{v}_0, \mf{v}_1, \ldots,
\mf{v}_n \in \mathbb{R}^N$ be vectors satisfying the condition that
\begin{eqnarray}
\{ ( \mf{v}_1 - \mf{v}_0 ) , ( \mf{v}_2 - \mf{v}_0 ), ( \mf{v}_3 - \mf{v}_0 ),
\ldots , ( \mf{v}_n - \mf{v}_0 ) \}
\end{eqnarray}
is a linearly independent set. Then the \textbf{$n$-simplex spanned by $\{ \mf{v}_0,
\mf{v}_1, \ldots, \mf{v}_n \}$} is the set
\begin{eqnarray}
\left\{ \sum_{i=0}^n c_i \mf{v}_i \mid \textnormal{ $0 \leq c_i \leq 1$ for all $i$, and
$\sum_{i=0}^n c_i = 1$ } \right\}\!.
\end{eqnarray}
\end{definition}
When we refer to an ``$n$-simplex'', we simply mean a set which can be defined in the
above form. Note that a $1$-simplex is simply a line segment. A $2$-simplex is a solid
triangle, and a $3$-simplex is a solid tetrahedron.
\begin{definition}
Let $N$ and $n$ be positive integers. Let $\mf{v}_0, \ldots, \mf{v}_n \in \mb{R}^N$ be
vectors which span an $n$-simplex~$V$. Then the \textbf{faces} of~$V$ are the simplices
in $\mb{R}^N$ that are spanned by nonempty subsets of $\{ \mf{v}_0, \mf{v}_1, \ldots ,
\mf{v}_n \}$.
\end{definition}
So, for example, the $2$-simplex in $\mathbb{R}^3$ shown in Figure~\ref{2simplexfigure}
has seven faces (including itself): three of dimension zero,
three of dimension one, and one of dimension two. In
general, an $n$-simplex has $\binom{n+1}{k+1}$ $k$-dimensional faces.
\begin{figure}[!b]
\centerline{\includegraphics{f2-5}}
\fcaption{A $2$-simplex.\label{2simplexfigure}}
\end{figure}
\begin{definition}
Let $N$ be a positive integer. A \textbf{simplicial complex} in $\mathbb{R}^N$ is a set
$S$ of simplices in $\mathbb{R}^N$ which satisfies the following two conditions.
\begin{enumerate}
\item If $V$ is a simplex that is contained in $S$, then all faces of $V$ are also
contained in $S$.
\item If $V$ and $W$ are simplices in $S$ such that $V \cap W \neq \emptyset$,
then $V \cap W$ is a face of both $V$ and $W$.
\end{enumerate}
\end{definition}
An example of a simplicial complex in $\mathbb{R}^2$ is shown in
Figure~\ref{2dimcomplexfig}.
\begin{figure}[!t]
\centerline{\includegraphics{f2-6}}
\fcaption{A simplicial complex in $\mathbb{R}^2$.\label{2dimcomplexfig}}
\end{figure}
Now, as mentioned earlier, there is another definition of simplicial complexes which
simply describes them as collections of finite sets. Following \cite{munkres}, we will
use the term ``abstract simplicial complex'' to distinguish this definition from the
previous one.
\begin{definition}
An \textbf{abstract simplicial complex} is a set $\Delta$ of finite
nonempty sets which satisfies the following condition:
\begin{itemize}
\item If a set $Q$ is an element of $\Delta$, then all nonempty subsets
of $Q$ must also be elements of $\Delta$.
\end{itemize}
\end{definition}
Given a simplicial complex $S$ in $\mathbb{R}^N$, one can obtain an abstract simplicial
complex as follows. Let $T$ be the set of all points in $\mathbb{R}^N$ which occur as
$0$-simplices in $S$. Let $\Delta_S$ be the set of all subsets $T' \subseteq T$ which
span simplices that are in $S$. Then, $\Delta_S$ is an abstract simplicial complex. (In
a sense, $\Delta_S$ records the ``gluing information'' for the simplicial complex~$S$.)
It is also easy to perform a reverse construction. Suppose that $\Delta$ is an abstract
simplicial complex. Let
\begin{eqnarray}
U = \bigcup_{Q \in \Delta} Q
\end{eqnarray}
be the union of all of the sets that are contained in $\Delta$. Let $N = \left| U
\right|$. Simply choose a set $V \subseteq \mathbb{R}^N$ consisting of $N$ linearly
independent vectors, and choose a one-to-one map $r \colon U \to V$. Every set in
$\Delta$ determines a simplex in $\mathbb{R}^N$ (via $r$), and the collection of all of
these simplices is a simplicial complex.
We define some terminology for abstract simplicial complexes.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex. Then,
\begin{itemize}
\item A \textbf{simplex in $\Delta$} is simply an element
of $\Delta$. The \textbf{dimension} of a simplex $Q \in \Delta$, denoted $\dim ( Q )$,
is the \hbox{quantity} \hbox{$(\left| Q \right| - 1)$}. An \textbf{$n$-simplex} in
$\Delta$ is an element of $\Delta$ of \hbox{dimension}~$n$.
\item If $Q, Q' \in \Delta$
and $Q' \subseteq Q$, then we say that $Q'$ is a \textbf{face} of $Q$.
\item The \textbf{vertex set of $\Delta$} is the set
\begin{eqnarray}
\bigcup_{Q \in \Delta} Q.
\end{eqnarray}
Elements of this set are called \textbf{vertices of $\Delta$}.
\end{itemize}
\end{definition}
Here is an initial example of how abstract simplicial complexes arise. Let $F$ be a
finite set. Let $\mathcal{P} ( F )$ denote the power set of $F$. Let $t \colon
\mathcal{P} ( F ) \to \{ 0, 1 \}$ be a function which is ``monotone increasing,'' in the
sense that any pair of sets $(A, B)$ such that $A \subseteq B \subseteq F$ satisfies $t (
A ) \leq t ( B) $. Then, the set
\begin{eqnarray}
\left\{ C \mid \emptyset \subset C \subseteq F \textnormal{ and } t ( C ) = 0 \right\}
\end{eqnarray}
is an abstract simplicial complex.
Thus, a monotone increasing function on a power set determines an abstract simplicial
complex. This connection is the basis for what we will discuss next.
\section{Monotone Graph Properties}\label{graphpropertysimplicial}
Now we will establish a relationship between monotone graph properties and simplicial
complexes. We also introduce a topological concept (``collapsibility'') which has an
important role in this relationship.
Let $V_0$ be a finite set. Using notation from \textit{\nameref{graphpropsection}}, let
$\mf{G} ( V_0 )$ denote the set of all graphs that have vertex set $V_0$. The elements of
$\mf{G} ( V_0 )$ are thus pairs of the form $(V_0 , E)$, where $E$ can be any subset of
the set
\begin{eqnarray}\label{thesetofalledges}
\left\{ \{ v, w \} \mid v , w \in V_0 , \ v \neq w \right\}.
\end{eqnarray}
Let $h \colon \mf{G} ( V_0 ) \to \{ 0, 1 \}$ be a monotone increasing function. Then
the \textbf{abstract simplicial complex associated with $h$}, denoted
$\Delta_h$, is the set of all nonempty subsets $E$ of set (\ref{thesetofalledges}) such
that
\begin{eqnarray}
h \left( ( V_0, E ) \right) = 0.
\end{eqnarray}
\begin{example}
Consider the set $\mathbf{G} \left( \{ 0, 1, 2, 3 \} \right)$ of graphs on the vertex set
$\{0, 1, 2, 3 \}$. Define functions
\begin{eqnarray}
h_1 \colon \mathbf{G} \left( \{ 0, 1, 2, 3 \} \right) \to \{ 0, 1 \}, \\
h_2 \colon \mathbf{G} \left( \{ 0, 1, 2, 3 \} \right) \to \{ 0, 1 \}
\end{eqnarray}
by
\begin{eqnarray}
h_1 (G) & = & \left\{ \begin{array}{@{}l@{\quad}l@{}}
1 & \textnormal{if $G$ has at least three edges,} \\
0 & \textnormal{otherwise} \end{array}
\right.
\end{eqnarray}
and
\begin{eqnarray}
h_2 (G) & = & \left\{ \begin{array}{@{}l@{\quad}l@{}}
1 & \textnormal{if vertex ``$2$'' has at least one edge in $G$, } \\
0 & \textnormal{otherwise.} \end{array}
\right.
\end{eqnarray}
Then the simplicial complexes for $h_1$ and $h_2$ are shown in
Figures~\ref{graphprop1fig} and \ref{graphprop2fig}.\footnote{Note: Ignore the apparent
intersections in the interior of the diagram for $h_1$. Imagine that the lines in the
diagram only intersect at the labeled points $\{ 0, 1 \}, \{ 0, 2 \} , \{ 1, 2 \} , \{ 0,
3 \} , \{ 2 , 3\}$, and $\{ 1, 3\}$. (To really draw this diagram accurately, we would
need three dimensions.)}
\end{example}
\begin{figure}
\centerline{\includegraphics{f2-7}}
\fcaption{The simplicial complex of $h_1$.\label{graphprop1fig}}
\end{figure}
\begin{figure}
\centerline{\includegraphics{f2-8}}
\fcaption{The simplicial complex of $h_2$.\label{graphprop2fig}}
\end{figure}
Thus we have a way of associating with any monotone-increasing graph
function
\begin{eqnarray}
h \colon \mathbf{G} ( V_0 ) \to \{ 0, 1 \}
\end{eqnarray}
an abstract simplicial complex $\Delta_h$. The simplices of $\Delta_h$ correspond to
graphs on $V_0$. The vertices of $\Delta_h$ correspond to \textit{edges} (not vertices!)
of graphs on $V_0$.
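The construction is easy to carry out by machine. The small Python sketch below (my own
illustration) builds $\Delta_{h_1}$ for the example above, where $h_1 ( G ) = 1$ exactly
when $G$ has at least three edges, and confirms that it has $6 + 15 = 21$ simplices:
\begin{verbatim}
from itertools import combinations

ALL_EDGES = [frozenset(e) for e in combinations(range(4), 2)]  # 6 edges

def h1(edges):
    return 1 if len(edges) >= 3 else 0      # "at least three edges"

# Delta_{h_1}: the nonempty edge sets E with h_1((V_0, E)) = 0.
delta_h1 = [frozenset(E)
            for k in range(1, len(ALL_EDGES) + 1)
            for E in combinations(ALL_EDGES, k)
            if h1(E) == 0]

# Its vertices are the 6 possible edges; its simplices are the
# 6 single edges and the 15 pairs of edges.
assert len(delta_h1) == 6 + 15
\end{verbatim}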
The association $[ h \mapsto \Delta_h ]$ is useful because it allows us to reinterpret
statements about graph functions in terms of simplicial complexes. What we will do now
is to prove a theorem (for later use) which exploits this association. The theorem
relates a condition on graph functions (``evasiveness,'' from
\textit{\nameref{graphpropsection}}) to a condition on simplicial complexes
(``collapsibility'').
We begin with some definitions.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex, and let \hbox{$\alpha \in \Delta$} be a
simplex. Then $\alpha$ is a \textbf{maximal} simplex if it is not contained in any other
simplex in $\Delta$.
\end{definition}
\begin{definition}
Let $\Delta$ be an abstract simplicial complex, and let \hbox{$\beta \in \Delta$} be a
simplex. Then $\beta$ is called a \textbf{free face} of $\Delta$ if it is
\hbox{nonmaximal} and it is contained in only one maximal simplex in $\Delta$. If $\beta$
is a free face and $\alpha$ is the unique maximal simplex that contains it, then we will
say that \textbf{$\beta$ is a free face of $\alpha$}.
\end{definition}
\begin{definition}
An \textbf{elementary collapse} of an abstract simplicial complex is the operation of
choosing a single free face from the complex and deleting the face along with all the
faces that contain it.
\end{definition}
Here is an example of an elementary collapse: if
\begin{eqnarray}
\Sigma_1 = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 0, 2 \} , \{ 1, 2 \} , \{
0, 1, 2 \} \right\}\!,
\end{eqnarray}
then $\{0, 1\}$ is a free face of $\{0, 1, 2\}$ in $\Sigma_1$. By deleting the simplices
$\{0,1\}$ and $\{0, 1, 2\}$, we obtain the complex
\begin{eqnarray}
\Sigma_2 = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 2 \} , \{ 1, 2 \} \right\}\!.
\end{eqnarray}
The complex $\Sigma_2$ is an elementary collapse of the complex $\Sigma_1$. See
\hbox{Figure}~\ref{elementarycollapsefig}.
\begin{figure}[!b]
\centerline{\includegraphics{f2-9}}
\fcaption{An elementary collapse.\label{elementarycollapsefig}}
\end{figure}
The previous example is an instance of what we will call a \textbf{primitive} elementary
collapse. An elementary collapse is primitive if the free face that is deleted has
dimension one less than the maximal simplex in which it is contained. In such a case,
the maximal simplex and free face itself are the only two simplices that are deleted.
(Not all elementary collapses are primitive. An example of a nonprimitive elementary
collapse would be deleting all of the simplices $\{0\}$, $\{0,1\}$, $\{0,2\}$, and $\{0,
1, 2\}$ from $\Sigma_1$.)
\begin{definition}
Let $\Delta$ be an abstract simplicial complex. Then $\Delta$ is
\textbf{collapsible} if there exists a sequence of elementary collapses
\begin{eqnarray}
\Delta , \Delta_1, \Delta_2 , \Delta_3 , \ldots, \Delta_n
\end{eqnarray}
such that $\left| \Delta_n \right| = 1$.
\end{definition}
In other words, $\Delta$ is collapsible if there exists a sequence of elementary
collapses which reduce $\Delta$ to a single $0$-simplex.
The abstract simplicial complexes $\Sigma_1$ and $\Sigma_2$ defined above are both
collapsible. An example of an abstract simplicial complex that is not collapsible is the
following:
\begin{eqnarray}
\Sigma = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 0, 2 \} , \{ 1, 2 \}
\right\}\!.
\end{eqnarray}
(This simplicial complex has no free faces, and therefore cannot be collapsed.)
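The definitions above are directly executable. The following Python sketch (my own
illustration) computes free faces in the sense just defined and performs an elementary
collapse; it confirms that $\Sigma$ has no free faces, while $\{ 0, 1 \}$ is a free face
of $\Sigma_1$ whose deletion yields $\Sigma_2$:
\begin{verbatim}
def free_faces(complex_):
    """The free faces: nonmaximal simplices lying in exactly one
    maximal simplex of the complex (given as a set of frozensets)."""
    maximal = {q for q in complex_
               if not any(q < r for r in complex_)}
    return {q for q in complex_ - maximal
            if sum(1 for m in maximal if q <= m) == 1}

def elementary_collapse(complex_, beta):
    """Delete the free face beta and every simplex containing it."""
    assert beta in free_faces(complex_)
    return {q for q in complex_ if not beta <= q}

fs = lambda *xs: frozenset(xs)
sigma1 = {fs(0), fs(1), fs(2), fs(0, 1), fs(0, 2), fs(1, 2), fs(0, 1, 2)}
sigma  = {fs(0), fs(1), fs(2), fs(0, 1), fs(0, 2), fs(1, 2)}

assert free_faces(sigma) == set()              # hollow triangle: none
assert fs(0, 1) in free_faces(sigma1)
sigma2 = elementary_collapse(sigma1, fs(0, 1)) # removes {0,1}, {0,1,2}
assert sigma2 == {fs(0), fs(1), fs(2), fs(0, 2), fs(1, 2)}
\end{verbatim}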
The following theorem asserts that the simplicial complexes associated with
certain monotone-increasing graph functions are collapsible. The theorem uses the
concept of ``evasiveness'' from \textit{\nameref{graphpropsection}}.
\begin{theorem}\label{collapsibilitytheorem}
Let $V_0$ be a finite set. Let
\begin{eqnarray}
h \colon \mathbf{G} \left( V_0 \right) \to \{ 0 , 1 \}
\end{eqnarray}
be a monotone-increasing function which is not evasive. If the complex $\Delta_h$ is not
empty, then it is collapsible.
\end{theorem}
\begin{proof}
The theorem has an elegant visual proof. Essentially, what we do is to construct a
decision-tree for $h$ and then read off a collapsing-procedure for $\Delta_h$ from the
decision-tree.\footnote{Thanks to Yaoyun Shi, who suggested the nice visualization that
appears in this proof.}
\begin{figure}
\centerline{\includegraphics{f2-10}}
\fcaption{A decision tree.\label{adecisiontreefig}}
\end{figure}
Let $n = |V_0|$. Since we have assumed that the function $h$ is not evasive, there must
exist a decision tree of depth smaller than \hbox{$n(n-1)/2$} which decides $h$. Let $T$
be such a tree. (See Figure~\ref{adecisiontreefig}.) By modifying $T$ if necessary, we
can produce another decision-tree $T'$ which decides $h$ and which satisfies the
following conditions. (See Figure~\ref{decisiontreeTprimefig}.)
\begin{itemize}
\item The paths in $T'$ do not have repeated edges. (That is,
no edge $\{ i , j \}$ appears more than once on any path in $T'$.)
\item Every path in $T'$ has length exactly $[n(n-1)/2 - 1]$.
\end{itemize}
\begin{figure}
\centerline{\includegraphics{f2-11}}
\fcaption{A decision tree of uniform height.\label{decisiontreeTprimefig}}
\end{figure}
We can define a natural total ordering on the leaves of tree $T'$. The ordering is
defined by asserting that for any parent-node in the tree, all leaves that can be reached
through the ``Y'' branch of the node are smaller than all the leaves that can be reached
through the ``N'' branch of the node. Since any two leaves share a common ancestor, this
rule gives a total ordering.
For any leaf of tree $T'$, there are exactly two graphs which would cause
the leaf to be reached during computation. Thus there is a
one-to-two correspondence between leaves of $T'$ and graphs
on $V_0$. An example is shown in Figure~\ref{decisiontreedepth2fig}. Note that each leaf
is labeled either with a ``$1$'' or a ``$0$'', depending on the value taken by the
function $h$ at the corresponding graphs. The simplicial complex $\Delta_h$ is composed
of the graphs that appear at the ``$0$''-leaves of the tree.
\begin{figure}[!t]
\centerline{\includegraphics{f2-12}}
\fcaption{A decision tree of height $2$ for graphs of size
$3$.\label{decisiontreedepth2fig}}
\end{figure}
The ordering of the leaves of $T'$ provides a recipe for collapsing $\Delta_h$. Simply
find the smallest (i.e., leftmost) ``$0$''-leaf that appears in tree $T'$. This leaf
corresponds to a pair of simplices $\gamma_1, \gamma_2 \in \Delta_h$ with $\gamma_1
\subseteq \gamma_2$. From the ordering of the leaves, we can deduce that $\gamma_1$ and
$\gamma_2$ are not contained in any simplices in $\Delta_h$ other than themselves. Thus
$\gamma_1$ is a free face of $\Delta_h$. We can therefore perform an elementary
collapse: let
\begin{eqnarray}
\Delta_1 = \Delta_h \smallsetminus \{ \gamma_1 , \gamma_2 \}.
\end{eqnarray}
Now find the second smallest $0$-leaf that appears in $T'$. This leaf corresponds to
another pair of simplices $\gamma'_1, \gamma'_2 \in \Delta_h$ which are not contained in
any other simplices in $\Delta_h$, except possibly $\gamma_1$ or $\gamma_2$. Perform
another elementary collapse:
\begin{eqnarray}
\Delta_2 = \Delta_1 \smallsetminus \{ \gamma'_1 , \gamma'_2 \}.
\end{eqnarray}
Continuing in this manner, we can obtain a sequence of elementary collapses
\begin{eqnarray}
\Delta_h, \Delta_1 , \Delta_2 , \Delta_3 , \ldots , \Delta_n
\end{eqnarray}
such that $\left| \Delta_n \right| = 1$. Therefore, $\Delta_h$ is collapsible.
\end{proof}
\section{Group Actions on Simplicial Complexes}\label{groupactionsection}
Now we define the notion of a \textbf{simplicial isomorphism} between abstract simplicial
complexes. This is a case of the more general notion of a simplicial map (see
\cite{munkres}).
\begin{definition}
Let $\Delta$ and $\Delta'$ be abstract simplicial complexes. A simplicial isomorphism
from $\Delta$ to $\Delta'$ is a bijective map
\begin{eqnarray}
f \colon \Delta \to \Delta'
\end{eqnarray}
which is such that for any $Q_1, Q_2 \in \Delta$,
\begin{eqnarray}
Q_1 \subseteq Q_2 \hskip0.2in \Longleftrightarrow \hskip0.2in
f ( Q_1 ) \subseteq f (Q_2 ).
\end{eqnarray}
\end{definition}
In other words, a simplicial isomorphism between two abstract complexes $\Delta$,
$\Delta'$ is a one-to-one matching $f$ between the simplices of~$\Delta$ and~$\Delta'$
which respects inclusion. We note the following assertions, which can be proven easily
from this definition:
\begin{itemize}
\item If $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then
$f$ respects dimension (i.e., if $Q \in \Delta$ is an $n$-simplex, then
$f(Q)$ must be an $n$-simplex).
\item If $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then there is an associated map of
vertex sets
\begin{eqnarray}
\hat{f} \colon \bigcup_{Q \in \Delta} Q \to
\bigcup_{Q' \in \Delta'} Q'
\end{eqnarray}
defined by $f ( \{ v \} ) = \{ \hat{f} ( v ) \}$. (Let us call this the \textbf{vertex
map} of $f$.) The map $\hat{f}$ uniquely determines $f$.
\end{itemize}
Let $\Delta$ be an abstract simplicial complex. A simplicial automorphism of $\Delta$ can
be specified either as an inclusion preserving permutation of the elements of $\Delta$,
or simply as a permutation
\begin{eqnarray}
b \colon \bigcup_{Q \in \Delta} Q \to \bigcup_{Q \in \Delta} Q
\end{eqnarray}
of the vertex set of $\Delta$ satisfying
\begin{eqnarray}
Q \in \Delta \Longrightarrow b ( Q ) \in \Delta.
\end{eqnarray}
When we speak of a \textbf{group action} $G \circlearrowleft \Delta$, we mean an action
of a group~$G$ on $\Delta$ by simplicial automorphisms.
In \textit{\nameref{fptchapter}} we will be concerned with determining the ``fixed
points'' of a group action on an abstract simplicial complex. As we will see, describing
this set requires some care. One could simply take the set $\Delta^G$ of $G$-invariant
simplices. But this set is not always a subcomplex of $\Delta$. Consider the
two-dimensional complex $\Sigma$ in Figure~\ref{groupactionfig}, which
consists of the sets $\{ 0, 1, 2 \}$ and $\{ 0, 2, 3 \}$ and all of their proper nonempty
subsets. If we let $f \colon \Sigma \to \Sigma$ be the simplicial automorphism which
transposes $\{ 1 \}$ and $\{3 \}$ and leaves $\{ 0 \}$ and $\{ 2 \}$ fixed, then
$\Sigma^f$ is a subcomplex of $\Sigma$. However, if we let $h \colon \Sigma \to \Sigma$
be the simplicial automorphism which transposes $\{ 0 \}$ and $\{ 2 \}$ and leaves $\{ 1
\}$ and $\{ 3 \}$ fixed, then $\Sigma^h$ is not a subcomplex of $\Sigma$, since it
contains the set $\{ 0, 2 \}$ but does not contain its subsets $\{ 0 \}$ and $\{ 2 \}$.
\begin{figure}[!b]
\centerline{\includegraphics[scale=1.03]{f2-13}}
\fcaption{The complex $\Sigma$.\label{groupactionfig}}
\end{figure}
It is helpful to look at group actions on abstract simplicial \hbox{complexes} in terms
of the geometric representation introduced in
\textit{\nameref{simplicialcomplexsection}}. Let $\mf{e}_0, \mf{e}_1, \ldots, \mf{e}_n$
be the standard basis vectors in $\mathbb{R}^{n+1}$. These vectors span an $n$-simplex
\begin{eqnarray}
\delta = \left\{ \sum_{i=0}^n c_i \mf{e}_i \mid 0 \leq c_i \leq 1 , \sum_{i=0}^n c_i = 1
\right\}\!.
\end{eqnarray}
If $f \colon \{ 0, 1, \ldots, n \} \to \{ 0, 1, \ldots, n \}$ is a permutation with
orbits $B_1, \ldots, B_m \subseteq \{ 0, 1, \ldots, n \}$, then $f$ induces a bijective
map on $\delta$. The invariant set $\delta^f$ consists of those linear combinations
$\sum c_i \mf{e}_i$ satisfying the condition that $c_i = c_j$ whenever $i$ and $j$ lie in
the same orbit. The set $\delta^f$ is an $(m-1)$-simplex which is spanned by the vectors
\begin{eqnarray}
\left\{ \frac{ \sum_{i \in B_k } \mf{e}_i }{\left| B_k \right|} \mid k = 1, 2, \ldots, m
\right\}\!.
\end{eqnarray}
This motivates the following definition.
\begin{definition}
Let $\Delta$ be a finite abstract simplicial complex with vertex set $V$, and let $G
\circlearrowleft \Delta$ be a group action. Let $A_1, \ldots, A_m \subseteq V$ denote
the orbits of the action of $G$ on $V$. Then, let $\Delta^{[G]}$ denote the set of all
subsets $T \subseteq \{ A_1, \ldots , A_m \}$ satisfying
\begin{eqnarray}
\bigcup_{S \in T} S \in \Delta.
\end{eqnarray}
\end{definition}
\noindent It is easy to see that the set $\Delta^{[G]}$ is always a simplicial complex.
In the case of the complex $\Sigma$ from Figure~\ref{groupactionfig}, if we let $H$ be
the group generated by the automorphism $h$ which transposes $\{ 0 \}$ and $\{ 2 \}$, the
complex $\Sigma^{[H]}$ is one-dimensional and consists of
three zero simplices and two one-simplices. (See
Figure~\ref{groupaction2fig}.) The vertices of $\Sigma^{[H]}$ are the orbits $\{ 1 \}$,
$\{ 3 \}$, and $\{ 0, 2 \}$.
\begin{figure}[!b
\centerline{\includegraphics{f2-14}}
\fcaption{The complex $\Sigma^{[H]}$.\label{groupaction2fig}}
\end{figure}
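Both phenomena can be checked mechanically. In the Python sketch below (my own
illustration), the invariant set $\Sigma^h$ contains $\{ 0, 2 \}$ but not $\{ 0 \}$,
while the orbit construction $\Sigma^{[H]}$ has three vertices and two one-simplices, as
in the figure:
\begin{verbatim}
from itertools import combinations

def closure(maximal):
    """The abstract simplicial complex generated by the given simplices."""
    return {frozenset(s) for q in maximal
            for k in range(1, len(q) + 1)
            for s in combinations(sorted(q), k)}

sigma = closure([(0, 1, 2), (0, 2, 3)])

h = {0: 2, 1: 1, 2: 0, 3: 3}                  # transposes 0 and 2

# Simplices fixed setwise by h: not closed under taking subsets.
sigma_h = {q for q in sigma if frozenset(h[v] for v in q) == q}
assert frozenset({0, 2}) in sigma_h
assert frozenset({0}) not in sigma_h          # so not a subcomplex

# Sigma^[H]: orbits of H on vertices are {0,2}, {1}, {3}; a set T of
# orbits is a simplex exactly when the union of T lies in sigma.
orbits = [frozenset({0, 2}), frozenset({1}), frozenset({3})]
sigma_H = [T for k in (1, 2, 3) for T in combinations(orbits, k)
           if frozenset().union(*T) in sigma]
assert len(sigma_H) == 5     # three 0-simplices and two 1-simplices
\end{verbatim}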
This complex $\Delta^{[G]}$ will be important in \textit{\nameref{fptchapter}}.
\chapter{Chain Complexes}\label{chaincomplexchapter}
In this part of the text we will introduce some algebraic objects which are crucial for
measuring the behavior of simplicial complexes. The \hbox{central} objects of concern
are \textbf{chain complexes} and \textbf{homology groups}. We will define these objects
and develop some important tools for dealing with them.
\section{Definition of Chain Complexes}\label{chaincomplexsection}
A \textbf{complex of abelian groups} is a sequence of abelian groups
\begin{eqnarray}
Z_0 , Z_1, Z_2, \ldots
\end{eqnarray}
together with group homomorphisms $d_i \colon Z_i \to Z_{i-1}$ for each $i > 0$,
\hbox{satisfying} the condition
\begin{eqnarray}
d_{i-1} \circ d_i = 0
\end{eqnarray}
(or equivalently, $\im d_i \subseteq \ker d_{i-1}$). The groups $Z_i$ and the maps $d_i$
are often expressed in a diagram like so:
\begin{eqnarray}
\xymatrix{\cdots \ar[r] & Z_3 \ar[r]^{d_3}
& Z_2 \ar[r]^{d_2}
& Z_1 \ar[r]^{d_1}
& Z_0}
\end{eqnarray}
We abbreviate the complex as $Z_\bullet$.
A chain complex is a particular complex of abelian groups that is obtained from a
simplicial complex. The definition of chain complex that we will use requires first
choosing a total ordering of the vertices of the abstract simplicial complex in question.
If the vertices of the abstract simplicial complex happen to be elements of a totally
ordered set (such as the set of integers), then our choice is already made for us.
Otherwise, it is necessary before applying our definition to specify what ordering of
vertices we are using. The particular choice of ordering is not terribly important, but
it must be made consistently.
We introduce some new notation which takes this ordering issue into account.
\begin{notation}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are all elements of $V$. For any sequence of distinct elements $v_0, v_1,
\ldots, v_n \in V$ such that
\begin{eqnarray}
\left\{ v_0, \ldots , v_n \right\} \in \Delta
\end{eqnarray}
and
\begin{eqnarray}
v_0 < v_1 < v_2 < \cdots < v_n,
\end{eqnarray}
let
\begin{eqnarray}
[v_0, v_1, \ldots , v_n]
\end{eqnarray}
denote the $n$-simplex $\left\{ v_0, \ldots , v_n \right\}$ in $\Delta$.
\end{notation}
\noindent This notation allows us to cleanly handle the ordering on the vertices of an
abstract simplicial complex. Note that if we say, ``$[ v_0, v_1, \ldots , v_n ]$ is a
simplex in $\Delta$'', we are implying both that $\{ v_0 , \ldots , v_n \}$ is an
element of $\Delta$ \textit{and} that the sequence $v_0, v_1, \ldots, v_n$ is in
ascending order.
Now we will define the sequence of groups which make up a chain complex.
\begin{definition}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Let $n$ be a nonnegative integer. Then, the
\textbf{$n$th chain group of $\Delta$ over $\mathbb{R}$}, denoted $K_n ( \Delta, \mathbb{R}
)$, is the set of all formal $\mathbb{R}$-linear combinations of $n$-simplices in
$\Delta$.
\end{definition}
\begin{example}\label{triangleexample}
Let $\Sigma$ be the simplicial complex
\begin{eqnarray}
\Sigma = \left\{ \{ 0 \}, \{ 1 \} , \{ 2 \} , \{ 0, 1 \},
\{ 1, 2 \}, \{ 0, 2 \} \right\}.
\end{eqnarray}
Then, $\Sigma$ has three zero-simplices ($[0]$, $[1]$, and $[2]$) and three one-simplices
($[0,1]$, $[1,2]$, and $[0,2]$). The chain group $K_0 \left( \Sigma, \mathbb{R} \right)$
is a three-dimensional real vector space, and its elements can be expressed
in the form
\begin{eqnarray}
r_1 [ 0 ] + r_2 [ 1 ] + r_3 [2],
\end{eqnarray}
where $r_1$, $r_2$, and $r_3$ denote real numbers. The chain group $K_1 \left( \Sigma,
\mathbb{R} \right)$ is a three-dimensional real vector space, and its
elements can be expressed in the form
\begin{eqnarray}
r_4 [0, 1] + r_5 [1, 2] + r_6 [0, 2 ],
\end{eqnarray}
where $r_4$, $r_5$, and $r_6$ denote real numbers.
\end{example}
In general, if $\Delta$ is an abstract simplicial complex, then $K_n \left( \Delta,
\mathbb{R} \right)$ is a real vector space whose dimension is equal to the number of
$n$-simplices in $\Delta$. (If $\Delta$ has no $n$-simplices, then $K_n \left( \Delta
, \mathbb{R} \right)$ is a zero vector space.)
\begin{definition}\label{boundarymapdef}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Let $n$ be a positive integer. Then the
\textbf{boundary map} on the $n$th chain group of $\Delta$ (over $\mathbb{R}$) is the
unique $\mathbb{R}$-linear homomorphism
\begin{eqnarray}
d_n \colon K_n \left( \Delta , \mathbb{R} \right)
\to K_{n-1} \left( \Delta , \mathbb{R} \right)
\end{eqnarray}
defined by the equations
\begin{eqnarray}
\label{boundarymapdefeqn}
d_n \left( [ v_0, v_1, \ldots, v_n ] \right) =
\sum_{i = 0}^n (-1)^i [ v_0, v_1, \ldots, v_{i-1} , v_{i+1} ,
\ldots, v_n ]
\end{eqnarray}
(where $[v_0, v_1, \ldots, v_n]$ can be taken to be any $n$-simplex in $\Delta$).
\end{definition}
\begin{example}\label{solidtriangleexample}
Let
\begin{eqnarray}
\Sigma' = \left\{ \{ 0 \} , \{ 1 \}, \{ 2 \} , \{ 0, 1 \} , \{ 1, 2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray}
Then the boundary map
\begin{eqnarray}
d_2 \colon K_2 \left( \Sigma' , \mathbb{R} \right)
\to K_1 \left( \Sigma' , \mathbb{R} \right)
\end{eqnarray}
is defined by the equation
\begin{eqnarray}
d_2 \left( [ 0, 1, 2 ] \right) & = & [1, 2] - [0, 2] + [0, 1].
\end{eqnarray}
The boundary map
\begin{eqnarray}
d_1 \colon K_1 \left( \Sigma' , \mathbb{R} \right)
\to K_0 \left( \Sigma' , \mathbb{R} \right)
\end{eqnarray}
is defined by the equations
\begin{eqnarray}
d_1 \left( [ 0, 1 ] \right) & = & [1] - [0] \\
d_1 \left( [ 0, 2 ] \right) & = & [2] - [0] \\
d_1 \left( [ 1, 2 ] \right) & = & [2] - [1].
\end{eqnarray}
\end{example}
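The boundary maps lend themselves to a direct implementation, with simplices represented
as ascending tuples and chains as dictionaries from simplices to coefficients. The
Python sketch below (my own illustration) reproduces the computation of
$d_2 ( [0,1,2] )$ and anticipates Proposition~\ref{doublezeroprop}:
\begin{verbatim}
from collections import defaultdict

def boundary(chain):
    """Apply d_n to a chain {simplex (ascending tuple): coefficient}."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # delete the ith vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

d2 = boundary({(0, 1, 2): 1})
assert d2 == {(1, 2): 1, (0, 2): -1, (0, 1): 1}    # matches the example

assert boundary(d2) == {}    # d_{n-1} composed with d_n gives zero
\end{verbatim}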
Note that in equation~(\ref{boundarymapdefeqn}), the simplices that appear on the right
side are precisely the $(n-1)$-simplex faces of the simplex $[v_0, v_1, \ldots, v_n]$.
Geometrically, if $U \subseteq \mathbb{R}^N$ is an $n$-simplex, then the codimension-$1$
faces of $U$ make up the boundary of the set $U$. This gives us an idea of
why $d_n$ is called a ``boundary'' map.
\begin{proposition}\label{doublezeroprop}
Let $\Delta$ be an abstract simplicial complex whose vertices are totally ordered. Let
$n$ be an integer such that $n \geq 2$. Then the map
\begin{eqnarray}
d_{n-1} \circ d_{n} \colon K_n \left( \Delta, \mathbb{R} \right)
\to K_{n-2} \left( \Delta, \mathbb{R} \right)
\end{eqnarray}
is the zero map.
\end{proposition}
\begin{proof}
Let $Q = [v_0, v_1, \ldots , v_n]$ be an $n$-simplex in $\Delta$. Then,
applying Definition~\ref{boundarymapdef} twice, we find
\begin{eqnarray*}
&&d_{n-1} \left( d_n \left( Q \right) \right)\\
& &\qquad = \sum_{i=0}^n d_{n-1} \left( (-1)^i [ v_0, \ldots, v_{i-1} , v_{i+1} ,
\ldots , v_n ] \right) \\
&&\qquad = \sum_{i=0}^n \left( \sum_{j=0}^{i-1} (-1)^{i+j} [v_0, \ldots , v_{j-1} ,
v_{j+1} , \ldots ,
v_{i-1}, v_{i+1}, \ldots , v_n ] \right. \\
&&\qquad \quad +\! \left. \sum_{j=i+1}^n (-1)^{i+j-1} [v_0, \ldots , v_{i-1}, v_{i+1} ,
\ldots , v_{j-1}, v_{j+1}, \ldots, v_n ]\!\!\right)\!.
\end{eqnarray*}
All terms in this double-summation cancel in pairs: for $j < i$, the term obtained by
deleting $v_i$ first and $v_j$ second carries the sign $(-1)^{i+j}$, while the matching
term obtained by deleting $v_j$ first and then $v_i$ (which by then occupies position
$i-1$) carries the sign $(-1)^{i+j-1}$. Thus we find that
\begin{eqnarray}
d_{n-1} ( d_n ( Q ) ) = 0.
\end{eqnarray}
Therefore by linearity, $d_{n-1} \circ d_n$ is the zero map.
\end{proof}
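As a concrete illustration of the proposition (the reader may check each step against
Example~\ref{solidtriangleexample}), we have
\begin{eqnarray*}
d_1 \left( d_2 \left( [ 0, 1, 2 ] \right) \right) & = & d_1 \left( [1, 2] \right) - d_1
\left( [0, 2] \right) + d_1 \left( [0, 1] \right) \\
& = & \left( [2] - [1] \right) - \left( [2] - [0] \right) + \left( [1] - [0] \right) \\
& = & 0.
\end{eqnarray*}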
\noindent If $\Delta$ is an abstract simplicial complex with ordered vertices, then the
\textbf{chain complex of $\Delta$ over $\mathbb{R}$} is the set of $\mathbb{R}$-chain
groups of $\Delta$ together with their boundary maps:
\begin{eqnarray}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{R} \right)
\ar[r]^{d_2} & K_1 \left( \Delta , \mathbb{R} \right) \ar[r]^{d_1} &
K_0 \left( \Delta , \mathbb{R} \right) \ar[r]^{d_0} & 0}
\end{eqnarray}
For any $n$, the \textbf{$n$th homology group} of $\Delta$ is defined by
\begin{eqnarray}
H_n \left( \Delta , \mathbb{R} \right) = (\ker d_n)/(\im d_{n+1}).
\end{eqnarray}
(This quotient makes sense because $\im d_{n+1} \subseteq \ker d_n$, by
Proposition~\ref{doublezeroprop}.)
Consider the complex $\Sigma$ from Example~\ref{triangleexample}. The kernel of $d_0$ is
the entire space $K_0 ( \Sigma , \mathbb{R} )$, while the image of $d_1$ is the set of
all linear combinations $r_1 [0 ] + r_2 [1 ] + r_3 [2]$ which are such that \hbox{$r_1 +
r_2 + r_3 = 0$}. The quotient $H_0 ( \Sigma , \mathbb{R} ) = \ker d_0 / \im d_1$ is a
one-dimensional real \hbox{vector} space. The homology group $H_1 ( \Sigma , \mathbb{R}
) = \ker d_1 / \{ 0 \}$ is also a one-dimensional real vector space, spanned by the
element $[0, 1] - [0, 2] + [1, 2]$. All other homology groups of $\Sigma$ are
zero-dimensional.
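To see where this generator comes from, one can compute the kernel of $d_1$ explicitly:
\begin{eqnarray*}
d_1 \left( r_4 [0, 1] + r_5 [1, 2] + r_6 [0, 2] \right) & = & ( - r_4 - r_6 ) [0] + (
r_4 - r_5 ) [1] + ( r_5 + r_6 ) [2],
\end{eqnarray*}
and this expression vanishes precisely when $r_5 = r_4$ and $r_6 = - r_4$; that is,
$\ker d_1$ is spanned by $[0,1] + [1,2] - [0,2]$.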
As we will see in \textit{\nameref{picturingsection}}, the homology groups are
interesting because they supply structural information about the complex $\Delta$. As an
initial example, the reader is invited to prove the \hbox{following} fact as an
exercise:\vadjust{\pagebreak} for any finite abstract simplicial \hbox{complex}~$\Delta$,
the dimension of $H_0 ( \Delta , \mathbb{R} )$ is equal to the number of connected
components of $\Delta$.
Although we defined chain groups using $\mathbb{R}$ (the set of real numbers), it is
possible to define them using other algebraic structures in place of $\mathbb{R}$. Here
is a definition for chain groups over $\mathbb{F}_p$. Proposition~\ref{doublezeroprop}
and the definition of homology groups carry over immediately to this case.
\begin{definition}
Let $V$ be a totally ordered set, and let $\Delta$ be an abstract simplicial complex
whose vertices are elements of $V$. Then $K_n \left( \Delta , \mathbb{F}_p \right)$
denotes the vector space of formal $\mathbb{F}_p$-linear combinations of $n$-simplices
in $\Delta$. For each $n \geq 1$, the map
\begin{eqnarray}
d_n \colon K_n \left( \Delta , \mathbb{F}_p \right) \to
K_{n-1} \left( \Delta , \mathbb{F}_p \right)
\end{eqnarray}
is the unique $\mathbb{F}_p$-linear map defined by
\begin{eqnarray}
d_n \left( [ v_0, v_1, \ldots, v_n ] \right) = \sum_{i = 0}^n (-1)^i [ v_0, v_1, \ldots,
v_{i-1} , v_{i+1} , \ldots , v_n].\\[-13pt]\nn
\end{eqnarray}
\end{definition}
For the rest of this exposition we will be focusing on homology groups with coefficients
in $\mathbb{F}_p$, since these will eventually be the basis for our proofs of fixed-point
theorems. Much of what we will do in this text with $\mathbb{F}_p$-homology could be done
just as well with $\mathbb{R}$-homology, but there will be a key result
(Proposition~\ref{acyclicitypreservation}) which depends critically on the fact that we
are using coefficients in $\mathbb{F}_p$.
\section{Chain Complexes and Simplicial Isomorphisms}\label{chainmapsection}
Suppose that
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & I_{n+1} \ar[r]^{d_{n+1}}
& I_n \ar[r]^{d_n} & I_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
and
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & J_{n+1} \ar[r]^{d_{n+1}}
& J_n \ar[r]^{d_n} & J_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
are two complexes of abelian groups. A \textbf{map of complexes} $F \colon I_\bullet \to
J_\bullet$ is a family of homomorphisms
\begin{eqnarray}
F_n \colon I_n \to J_n
\end{eqnarray}
such that
\begin{eqnarray}
d_n^J \circ F_n = F_{n-1} \circ d_n^I.
\end{eqnarray}
Note that, as a consequence of this rule, the map $F_n$ must send the kernel of $d_n^I$
to the kernel of $d_n^J$, and likewise the image of $d_{n+1}^I$ into the image of
$d_{n+1}^J$. Consequently, the family $F$ induces maps on homology groups
\begin{eqnarray}
H_n ( I_\bullet ) \to H_n ( J_\bullet )
\end{eqnarray}
for every $n$.
Let $p$ be a prime. We are going to define the maps of chain complexes that are
associated with simplicial isomorphisms. Some care must be taken in this
definition. Let $f \colon \Delta \to \Delta'$ be a simplicial isomorphism. An obvious way
to map $K_n \left( \Delta , \mathbb{F}_p \right)$ to $K_n \left( \Delta' , \mathbb{F}_p
\right)$ would be to naively apply $f$ like so: $\sum c_i Q_i \mapsto \sum c_i f ( Q_i
)$. However, this definition does not necessarily give a map of complexes, because
it is not necessarily compatible with the maps $d_i$. The reader will recall that the
definition of $d_i$ depends on the ordering of the vertices of the simplicial complex in
question. The map $f$ may not be compatible with the ordering of the vertices of $\Delta$
and $\Delta'$. In our definition of the maps $K_n \left( \Delta , \mathbb{F}_p \right)
\to K_n \left( \Delta' , \mathbb{F}_p \right)$, we need to take this ordering issue into
account.
Note that for any bijection $g \colon S_1 \to S_2$ between two totally ordered sets $S_1$
and $S_2$, there is a unique permutation $\alpha \colon S_2 \to S_2$ which makes the
composition $\alpha \circ g$ an order-preserving map. Let us say that the \textbf{sign}
of the map $g$ is the sign of its associated permutation $\alpha$.\footnote{See
\cite{lang}, pp.~30--31 for a definition of the sign of a permutation. Briefly: if
$\sigma : X \to X$ is a permutation of a finite set $X$, then we can write $\sigma =
\tau_1 \circ \tau_2 \circ \ldots \circ \tau_m$ for some $m$, where each of the maps
$\tau_i \colon X \to X$ is a permutation which transposes two elements. The sign of
$\sigma$ is $(-1)^m$.}
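For example, consider the bijection $g \colon \{ 0, 1 \} \to \{ 1, 2 \}$ given by $g ( 0
) = 2$ and $g ( 1 ) = 1$. The unique permutation $\alpha$ of $\{ 1, 2 \}$ which makes
$\alpha \circ g$ order-preserving is the transposition of $1$ and $2$, and so the sign
of $g$ is $-1$.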
\begin{definition}\label{chainmapdef}
Suppose that $\Delta$ and $\Delta'$ are abstract simplicial complexes whose vertex
sets are totally ordered. Suppose that $f \colon \Delta \to \Delta'$ is a simplicial
isomorphism and that $\hat{f}$ is its vertex map. Let $p$ be a prime, and let $n$ be a
nonnegative integer. The \textbf{$n$th chain map associated with $f$} (over
$\mathbb{F}_p$) is the unique $\mathbb{F}_p$-linear map
\begin{eqnarray}
F_n \colon K_n ( \Delta , \mathbb{F}_p ) \to K_n ( \Delta' , \mathbb{F}_p )
\end{eqnarray}
given by
\begin{eqnarray}
Q & \mapsto & \big( \sign (\hat{f}_{\mid Q}) \big) f ( Q )
\end{eqnarray}
for all $Q \in \Delta$. Here, $( \sign ( \hat{f}_{\mid Q} ) )$ denotes the sign of the
bijection $( \hat{f} )_{\mid Q} \colon Q \to f ( Q )$.
\end{definition}
Let $\Sigma'$ be the complex from Example~\ref{solidtriangleexample}, and let $g \colon
\Sigma' \to \Sigma'$ be the automorphism given by the permutation $[0 \mapsto 1, 1
\mapsto 2, 2 \mapsto 0]$. Then the chain maps $G_n$ associated with $g$ are
as shown below.
\begin{eqnarray*}
\begin{array}{ccccc}
G_0 ( [0] ) = [1] && G_1 ([0,1]) = [1,2] && \\
G_0 ( [1] ) = [2] && G_1 ([1,2]) = -[0,2] && G_2 ( [0, 1, 2] ) = [0, 1, 2] \\
G_0 ( [2] ) = [0] & & G_1 ( [0,2]) = -[0,1] &&
\end{array}
\end{eqnarray*}
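As a spot-check of the compatibility condition for a map of complexes, the reader may
verify on the simplex $[0,1]$ that
\begin{eqnarray*}
d_1 \left( G_1 \left( [0,1] \right) \right) = d_1 \left( [1,2] \right) = [2] - [1] = G_0
\left( [1] - [0] \right) = G_0 \left( d_1 \left( [0,1] \right) \right)\!.
\end{eqnarray*}
Without the sign factors of Definition~\ref{chainmapdef}, this compatibility would
already fail on the simplex $[1,2]$ (for $p > 2$): the naive assignment $Q \mapsto g ( Q
)$ sends $[1,2]$ to $[0,2]$, whose boundary is $[2] - [0]$, whereas applying the same
assignment to the boundary $[2] - [1]$ yields $[0] - [2]$.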
\begin{proposition}
The chain maps $F_n$ of Definition~\ref{chainmapdef} determine a map of complexes,
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left ( \Delta' ,
\mathbb{F}_p \right)\!.
\end{eqnarray}
\end{proposition}
\begin{proof}
It suffices to show that for any $n > 0$, and any $n$-simplex $Q \in \Delta$,
\begin{eqnarray}
d_n ( F_n ( Q ) ) = F_n ( d_n ( Q ) ).
\end{eqnarray}
Let $n$ be a positive integer, and let $Q \in \Delta$ be an $n$-simplex. Write the
simplices $Q$ and $f(Q)$ as
\begin{eqnarray}
Q = [v_0, v_1, \ldots , v_n ], \qquad f(Q) = [w_0, w_1, \ldots, w_n ].
\end{eqnarray}
(Here, as usual, we assume that the sequences $v_0, \ldots , v_n$ and $w_0, \ldots , w_n$
are in ascending order.) The elements $d_n ( F_n ( Q ) )$ and $F_n ( d_n ( Q ) )$ are
linear combinations of faces of the simplex $[w_0, \ldots , w_n]$. We need simply to
show that the coefficients in the expressions for $d_n ( F_n ( Q ) )$ and $F_n ( d_n ( Q
) )$ are the same.
Suppose that the face
\begin{eqnarray}
[v_0, v_1, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n]
\end{eqnarray}
of $Q$ maps to the face
\begin{eqnarray}
[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]
\end{eqnarray}
under $f$. Then, by applying the definitions of $d_n$ and $F_n$ we find that the
coefficient of $[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in $d_n ( F_n ( Q
) )$ is
\begin{eqnarray}\label{coeff1}
(-1)^j \big( \sign \hat{f}_{\mid Q} \big),
\end{eqnarray}
whereas the coefficient of $[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in
$F_n ( d_n ( Q ) )$~is
\begin{eqnarray}\label{coeff2}
\big(\sign \hat{f}_{\mid \{ v_0, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n \}} \big)
(-1)^i.
\end{eqnarray}
It is a fact (easily proven from the definition of sign) that
\begin{eqnarray}
\big(\sign \hat{f}_{\mid \{ v_0, \ldots , v_{i-1} , v_{i+1} , \ldots , v_n \}} \big) =
(-1)^{j-i} \big(\sign \hat{f}_{\mid Q} \big).
\end{eqnarray}
Therefore quantities~(\ref{coeff1}) and (\ref{coeff2}) are equal. So the coefficients of
$[w_0, w_1, \ldots , w_{j-1} , w_{j+1} , \ldots , w_n]$ in $d_n ( F_n ( Q ) )$ and $F_n (
d_n ( Q ) )$ are the same. This reasoning can be repeated to show that all of the
coefficients in $d_n ( F_n ( Q ) )$ and $F_n ( d_n ( Q ) )$ are the same.
\end{proof}
We have proven that if $f \colon \Delta \to \Delta'$ is a simplicial isomorphism, then
there is an induced chain map (in fact, an isomorphism),
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta' ,
\mathbb{F}_p \right)\!.
\end{eqnarray}
This chain map induces vector space isomorphisms
\begin{eqnarray}
H_n \left( \Delta , \mathbb{F}_p \right)
\to H_n \left( \Delta' , \mathbb{F}_p \right)
\end{eqnarray}
for every $n \geq 0$. (We may denote these maps using the same symbol,~$F$.)
\section{Picturing Homology Groups}\label{picturingsection}
Before continuing any further with our technical discussion of chain complexes, let us
take a moment to explore some geometric interpretations for the concepts introduced so
far. For convenience, we will assume in the following discussion that $p$ is a prime
greater than or equal to~$5$.
Consider the two-dimensional simplicial complex $\Gamma$ shown in
Figure~\ref{trianglefig1}. If $[v_0, v_1]$ is a $1$-simplex (where we assume the
existence of an ordering under which $v_0 < v_1$), then let us represent the chain
element $[v_0, v_1 ] \in K_1 ( \Gamma , \mathbb{F}_p )$ by drawing an arrow from $v_0$ to
$v_1$, and let us represent the negation $- [v_0, v_1 ] \in K_1 ( \Gamma , \mathbb{F}_p
)$ by drawing an arrow from $v_1$ to $v_0$. We can likewise use double-headed arrows to
represent\vadjust{\pagebreak} the elements $2 [v_0, v_1]$ and $-2 [v_0, v_1]$. Sums of
such elements can be represented as collections of arrows. In this way we can draw
some of the elements of $K_1 ( \Gamma , \mathbb{F}_p )$ as diagrams like the one in
Figure~\ref{trianglefig1}.
\begin{figure}[!b]
\centerline{\includegraphics{f3-1}}
\fcaption{A complex $\Gamma$ and a chain element $a \in K_1 ( \Gamma , \mathbb{F}_p
)$.\label{trianglefig1}}
\end{figure}
\begin{figure}[!b]
\centerline{\includegraphics{f3-2}}
\fcaption{Two elements $x,y \in K_1 ( \Gamma , \mathbb{F}_p )$ which are contained in the
same coset of $H_1 ( \Gamma , \mathbb{F}_p )$.\label{trianglefig3}}
\end{figure}
An element $c \in K_1 ( \Gamma , \mathbb{F}_p )$ that is represented in this way will
satisfy $dc = 0$ if and only if for every vertex $v$ of $\Gamma$, the total multiplicity
of incoming arrows at $v$ is the same, mod $p$, as the total multiplicity of the outgoing
arrows at $v$. The element $a$ represented in Figure~\ref{trianglefig1} is such a case.
Each element $c \in K_1 ( \Gamma , \mathbb{F}_p )$ satisfying $dc = 0$ represents an
element of the quotient $H_1 ( \Gamma , \mathbb{F}_p ) = \ker d_1 / \im d_2$, and thus
we can use this geometric interpretation to understand $H_1 ( \Gamma , \mathbb{F}_p )$.
Note that, although there are many diagrams that we could draw which satisfy the
balanced-multiplicity condition mentioned above, it will often occur that two diagrams
represent the same element of $H_1 ( \Gamma , \mathbb{F}_p )$. Figure~\ref{trianglefig3}
gives an example. In fact, any two elements $u, v \in \ker d_1$ will lie in the same
coset of $H_1 ( \Gamma , \mathbb{F}_p )$ if and only if the amount of flow around the
missing center triangle of $\Gamma$ is the same mod $p$ for both $u$ and $v$. This makes
it easy to express the structure of $H_1 ( \Gamma , \mathbb{F}_p )$: if we let $\alpha
\in H_1 ( \Gamma , \mathbb{F}_p )$ be the coset containing the element $y$ from
Figure~\ref{trianglefig3}, then $H_1 ( \Gamma , \mathbb{F}_p )$ is a one-dimensional
$\mathbb{F}_p$-vector space that is spanned by $\alpha$.
Meanwhile, it is easy to see that $\ker d_2 = \{ 0 \}$ and hence \hbox{$H_2 ( \Gamma ,
\mathbb{F}_p ) = \{ 0 \}$}. We thus have the following:
\begin{eqnarray}
H_0 ( \Gamma , \mathbb{F}_p ) & \cong & \mathbb{F}_p \\
H_1 ( \Gamma , \mathbb{F}_p ) & \cong & \mathbb{F}_p \\
H_i ( \Gamma , \mathbb{F}_p ) & \cong & \{ 0 \} \hskip0.2in
\textnormal{for all } i \geq 2.
\end{eqnarray}
This kind of reasoning can be used to describe the homology groups of any finite
simplicial complex $\Pi$ that is contained in $\mathbb{R}^2$. The dimension of $H_1 ( \Pi
, \mathbb{F}_p )$ for such a complex is always equal to the number of holes enclosed by
$\Pi$.
\begin{figure}[!b]
\centerline{\includegraphics[scale=0.99]{f3-3}}
\fcaption{The complex $\Gamma'$ and the effect of three different
automorphisms.\label{2trianglefig}}
\vspace*{-3pt}
\end{figure}
Such visualizations are also useful for understanding the behavior of homology groups
under automorphisms. Figure~\ref{2trianglefig} shows an example of a simplicial complex
$\Gamma'$ for which $H_1 ( \Gamma' , \mathbb{F}_p ) \cong \mathbb{F}_p^2$. Any
automorphism of $\Gamma'$ induces a linear automorphism of $H_1 ( \Gamma' , \mathbb{F}_p
)$. The figure describes a few such automorphisms in terms of two chosen basis elements
$\lambda , \beta \in H_1 ( \Gamma' , \mathbb{F}_p )$.
\begin{figure}[!t]
\centerline{\includegraphics[scale=0.95]{f3-4}}
\fcaption{The complex $\Lambda$ and the effect of three different
automorphisms.\label{torusfigure}}
\vspace*{-3pt}
\end{figure}
To observe nontrivial automorphisms of higher homology groups, we need to consider
simplicial complexes in three-dimensional space. Figure~\ref{torusfigure} shows a
simplicial complex $\Lambda$ in $\mathbb{R}^3$ which has the shape of a torus. Let $z
\in K_2 ( \Lambda , \mathbb{F}_p )$ be a linear combination of\vadjust{\pagebreak} all
the $2$-simplices in $\Lambda$ in which the coefficient of the simplex $[v_0, v_1, v_2]$
in $z$ is $(+1)$ if the vertices $v_0$, $v_1$, and $v_2$ appear in clockwise order on the
surface of the torus, and $(-1)$ if they appear in counterclockwise order. When
Definition~\ref{boundarymapdef} is applied to compute $d z$, all terms cancel and we find
that $d z = 0$. The element $z$ determines a coset $\delta \in H_2 ( \Lambda ,
\mathbb{F}_p)$, which spans the one-dimensional space $H_2 ( \Lambda, \mathbb{F}_p )$.
\enlargethispage{12pt}
Figure~\ref{torusfigure} gives a basis $\{ \sigma , \rho \}$ for the
two-dimensional space $H_1 ( \Lambda , \mathbb{F}_p )$, and explains the
effect of various automorphisms on $H_1 ( \Lambda , \mathbb{F}_p )$ and $H_2 ( \Lambda ,
\mathbb{F}_p )$.
\section{Some Homological Algebra}
We resume developing concepts from an algebraic standpoint. It is helpful now to take
time to study homology groups in a more abstract setting, without reference to simplicial
complexes. For any complex of abelian groups
\begin{equation}
\xymatrix{\ldots
\ar[r] & K_{n+1} \ar[r]^{d_{n+1}}
& K_n \ar[r]^{d_n} & K_{n-1} \ar[r]^{d_{n-1}} & \ldots},
\end{equation}
the $n$th homology group of $K_\bullet$ is defined by
\begin{equation}
H_n ( K_\bullet ) =
( \ker d_n ) / ( \im d_{n+1} ).
\end{equation}
In this part of the text we will state a result (Proposition~\ref{snakelemmaprop}) which
allows us to relate the homology groups\vadjust{\pagebreak} of $K_\bullet$ to the
homology groups of smaller complexes. This will be an essential building block in later
proofs.
Let us say that a sequence of maps of abelian groups
\begin{eqnarray}
\xymatrix{ \ldots \ar[r] & A_{n+1} \ar[r]^{f_{n+1}} &
A_n \ar[r]^{f_n} &
A_{n-1} \ar[r]^{f_{n-1}} & \ldots }
\end{eqnarray}
is \textbf{exact} if it satisfies the condition $\ker f_n = \im f_{n+1}$ for every $n$.
Thus, a sequence of the form
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & P \ar[r]^f & Q \ar[r]^g & R \ar[r] & 0}
\end{eqnarray}
is\enlargethispage{12pt} exact if and only if $f$ is injective, $g$ is surjective, and
$\im f = \ker g$. (Note that this makes $R$ isomorphic to the quotient $Q / f ( P )$.)
Suppose that a sequence of maps of complexes
\begin{eqnarray}\label{seqofcomplexes}
\xymatrix{ 0 \ar[r] & X_\bullet \ar[r]^F & Y_\bullet \ar[r]^G & Z_\bullet \ar[r] & 0}
\end{eqnarray}
is such that
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & X_n \ar[r]^{F_n} &
Y_n \ar[r]^{G_n} & Z_n \ar[r] & 0 }
\end{eqnarray}
is an exact sequence for every $n$. Then we will say that~(\ref{seqofcomplexes}) is an
exact sequence of complexes.
I claim that if
\begin{eqnarray}
\xymatrix{
& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\
0 \ar[r] & X_{n+1} \ar[r]^F \ar[d]^d & Y_{n+1} \ar[r]^G \ar[d]^d & Z_{n+1} \ar[r] \ar[d]^d & 0 \\
0 \ar[r] & X_n \ar[r]^F \ar[d] & Y_n \ar[r]^G \ar[d] & Z_n \ar[r] \ar[d] & 0 \\
& \vdots & \vdots & \vdots & \\
}
\end{eqnarray}
\noindent is an exact sequence of complexes, then
\begin{eqnarray}
H_n ( X_\bullet ) \to H_n ( Y_\bullet ) \to H_n ( Z_\bullet )
\end{eqnarray}
is an exact sequence. This can be seen through a ``diagram-chasing'' argument. It is
obvious that
\begin{eqnarray}
\im \left[ H_n ( X_\bullet ) \to H_n ( Y_\bullet ) \right] \subseteq
\ker \left[ H_n ( Y_\bullet ) \to H_n ( Z_\bullet ) \right],
\end{eqnarray}
and so we only need to prove the reverse inclusion. Suppose that $(y + \im d_{n+1}^Y)$ is
a coset in $H_n ( Y_\bullet )$ that is killed by the map to $H_n ( Z_\bullet )$. Then $G
( y ) \in \im d_{n+1}^Z$, so we can find $z' \in Z_{n+1}$ such that $dz' = G( y )$.
Choosing an arbitrary element $y' \in G^{-1} \{ z' \}$, we have $y - dy' \in \ker G$, and
therefore by exactness, $F (x) = y - dy'$ for some $x$. Since $dy = 0$ and $d(dy') = 0$,
we have $F ( dx ) = d F (x ) = 0$, and therefore $dx = 0$ because $F$ is injective. Thus
$(x + \im d_{n+1}^X)$ is a coset in $H_n ( X_\bullet )$ which maps to $y + \im
d_{n+1}^Y$, and the claim is proved.
While it might be tempting to assume that the maps $H_n ( X_\bullet ) \to H_n ( Y_\bullet
)$ are injective and the maps $H_n ( Y_\bullet ) \to H_n ( Z_\bullet )$ are surjective,
this is not generally true. The homology groups of $X_\bullet$, $Y_\bullet$, and
$Z_\bullet$ have a more complex relationship which is expressed by the following
proposition.
\begin{proposition}\label{snakelemmaprop}
Let $X_\bullet$, $Y_\bullet$, and $Z_\bullet$ be complexes of abelian groups,
and let $F \colon X_\bullet \to Y_\bullet$ and $G \colon Y_\bullet \to Z_\bullet$ be
maps of complexes such that for any $n$, the sequence
\begin{eqnarray}
\xymatrix{ 0 \ar[r] & X_n \ar[r]^{F_n} &
Y_n \ar[r]^{G_n} & Z_n \ar[r] & 0 }
\end{eqnarray}
is an exact sequence. Then, there exist homomorphisms
\begin{eqnarray}
\gamma_n \colon H_n \left( Z_\bullet \right) \to
H_{n-1} \left( X_\bullet \right)
\end{eqnarray}
\noindent for every $n$ which are such that the sequence
\begin{eqnarray}\label{snakesequence}
\xymatrix{ \ldots \ar[r] & H_2 ( Y_\bullet ) \ar[r] &
H_2 ( Z_\bullet ) \ar[lldd]^{\gamma_2} \\
\\
H_1 ( X_\bullet ) \ar[r] & H_1 ( Y_\bullet )
\ar[r] & H_1 ( Z_\bullet) \ar[lldd]^{\gamma_1} \\
\\
H_{0} ( X_\bullet ) \ar[r] & H_{0} ( Y_\bullet)
\ar[r] & H_0 ( Z_\bullet ) \ar[r] & 0 }
\end{eqnarray}
is exact.
\end{proposition}
\removelastskip\pagebreak
Since the proof of this proposition is fairly technical, we have placed it in
the Appendix. (See Proposition~\ref{realsnakelemma}.) The maps $\gamma_n$ can be
briefly described like so: let $\overline{G}_n \colon Z_n \to Y_n$ be a function (not
necessarily a homomorphism) which is such that $G_n \circ \overline{G}_n$ is the identity
map, and let $\overline{F}_n \colon F ( X_n ) \to X_n$ be the inverse of $F_n$. Then, for
any coset
\begin{eqnarray}
z + \im d_{n+1}^Z \in H_n ( Z_\bullet),
\end{eqnarray}
the image under $\gamma_n \colon H_n ( Z_\bullet ) \to H_{n-1} ( X_\bullet )$ is given by
\begin{eqnarray}
\overline{F}_{n-1} ( d ( \overline{G}_n ( z ))) + \im d_n^X \in H_{n-1} ( X_\bullet ).
\end{eqnarray}
(This recipe is well defined: since $z \in \ker d_n^Z$, we have $G ( d ( \overline{G}_n (
z ) ) ) = d ( G_n ( \overline{G}_n ( z ) ) ) = d z = 0$, and so $d ( \overline{G}_n ( z )
)$ does lie in $F ( X_{n-1} )$.)
As we will see, the above proposition is very useful because it allows us to draw
conclusions about the homology groups of a complex $Y_\bullet$ based on the homology
groups of its subcomplexes and quotient complexes.
We close with a few additional constructions. Note that for any map of complexes $F
\colon I_\bullet \to J_\bullet$, there exist the complexes
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & \im F_{n+1} \ar[r]^{d_{n+1}}
& \im F_n \ar[r]^{d_n} & \im F_{n-1} \ar[r]^{d_{n-1}} & \ldots}
\end{eqnarray}
and
\begin{eqnarray}
\xymatrix{\ldots
\ar[r] & \ker F_{n+1} \ar[r]^{d_{n+1}}
& \ker F_n \ar[r]^{d_n} & \ker F_{n-1} \ar[r]^{d_{n-1}} & \ldots} .
\end{eqnarray}
We write these complexes as $(\im F)$ and $(\ker F)$, respectively. (They are indeed
complexes: the chain-map identity $d_n^J \circ F_n = F_{n-1} \circ d_n^I$ shows that the
boundary maps carry $\im F_n$ into $\im F_{n-1}$ and $\ker F_n$ into $\ker F_{n-1}$.)
Note that these complexes fit into an exact sequence
\begin{eqnarray}
0 \to \ker F \to I_\bullet \to \im F \to 0.
\end{eqnarray}
The \textbf{direct sum} of $I_\bullet$ and $J_\bullet$, written
$I_\bullet \oplus J_\bullet$, is the complex
\begin{equation}
\xymatrix{ \ldots \ar[r] & I_{n+1} \oplus J_{n+1} \ar[r] & I_n \oplus J_n \ar[r] &
I_{n-1} \oplus J_{n-1} \ar[r] & \ldots },
\end{equation}
where the maps in this complex are simply the maps induced by $d_k \colon I_k \to
I_{k-1}$ and $d_k \colon J_k \to J_{k-1}$. Note that the homology groups of this complex
are simply $H_n ( I_\bullet ) \oplus H_n ( J_\bullet )$.
\section{Collapsibility Implies Acyclicity}\label{acyclicitysection}
Now we will offer our first application of Proposition~\ref{snakelemmaprop}. In
\textit{\nameref{graphpropsection}}, we defined the notion of {\it collapsibility} for
simplicial \hbox{complexes}. In this part of the text we will see how the condition of
collapsibility for a simplicial complex $\Delta$ implies that the homology groups of
$\Delta$ are trivial.
We begin with a useful definition.
\begin{definition}
Let $\Delta$ be an abstract simplicial complex whose vertex-set is totally ordered. Let
$p$ be a prime, and let $n$ be a nonnegative integer. Define the map
\begin{eqnarray}
s \colon K_0 \left( \Delta , \mathbb{F}_p \right) \to \mathbb{F}_p
\end{eqnarray}
by asserting that $s ( \gamma )$ is the sum of the coefficients of $\gamma$. That is, if
\begin{eqnarray}
\gamma & = & c_1 Q_1 + c_2 Q_2 + \ldots + c_r Q_r,
\end{eqnarray}
with $c_i \in \mathbb{F}_p$ and $Q_i \in \Delta$, then
\begin{eqnarray}
s( \gamma ) = c_1 + c_2 + \ldots + c_r \in \mathbb{F}_p.
\end{eqnarray}
The \textbf{reduced $n$th homology group of $\Delta$} over $\mathbb{F}_p$, denoted
$\widetilde{H}_n \left( \Delta , \mathbb{F}_p \right)$, is the $n$th homology group of
the complex
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1}& K_0 \left( \Delta , \mathbb{F}_p
\right) \ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
\end{definition}
The reduced homology groups $\left\{ \widetilde{H}_n \left( \Delta , \mathbb{F}_p \right)
\right\}$ of an abstract simplicial complex $\Delta$ are the same as the ordinary
homology groups $\left\{ H_n \left( \Delta , \mathbb{F}_p \right) \right\}$, except that
the dimension of $\widetilde{H}_0 \left( \Delta , \mathbb{F}_p \right)$ is one less than
the dimension of $H_0 \left( \Delta , \mathbb{F}_p \right)$. Note that the reduced
homology groups of the trivial complex $\{ \{ 0 \} \}$ are all zero.
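To verify this last claim, note that for the complex $\{ \{ 0 \} \}$ the augmented
complex takes the form
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & 0 \ar[r] & K_0 \left( \{ \{ 0 \} \} , \mathbb{F}_p \right)
\ar[r]^{\hspace*{0.3in}s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
in which $K_0 \left( \{ \{ 0 \} \} , \mathbb{F}_p \right)$ is one-dimensional and $s$ is
an isomorphism. This sequence is exact at every position, and so all of the reduced
homology groups vanish.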
\begin{definition}\label{acyclicitydefinition}
Let $\Delta$ be an abstract simplicial complex whose vertex-set is totally ordered.
Then, $\Delta$ is \textbf{$\mathbb{F}_p$-acyclic} if
\begin{eqnarray}
\dim_{\mathbb{F}_p } \widetilde{H}_n \left( \Delta , \mathbb{F}_p \right) = 0
\end{eqnarray}
for all nonnegative integers $n$.
\end{definition}
\removelastskip\pagebreak
Stated differently, a complex is $\mathbb{F}_p$-acyclic if its $\mathbb{F}_p$-homology is
the same as that of a single point. An example of an $\mathbb{F}_p$-acyclic simplicial
complex is this one, from Example~\ref{solidtriangleexample}.
\begin{eqnarray*}
\Sigma' = \left\{ \{ 0 \} , \{ 1 \}, \{ 2 \} , \{ 0, 1 \} , \{ 1, 2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray*}
One can check by direct calculation that all of the reduced homology groups of this
simplicial complex are trivial. On the other hand, the simplicial complex $\Sigma$ of
Example~\ref{triangleexample} is \textit{not} $\mathbb{F}_p$-acyclic, since
$\widetilde{H}_1 \left( \Sigma , \mathbb{F}_p \right) \cong \mathbb{F}_p$.
Another way of expressing Definition~\ref{acyclicitydefinition} is this: an abstract
simplicial complex $\Delta$ is $\mathbb{F}_p$-acyclic if
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1} & K_0 \left( \Delta , \mathbb{F}_p
\right) \ar[r]^{\hskip0.2in s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
is an exact sequence.
When a complex forms an exact sequence, let us refer to it as an \textbf{exact complex}.
The following algebraic lemma is useful for proving exactness of complexes.
\begin{lemma}\label{exactnesslemma}
Let
\begin{eqnarray}
0 \to X_\bullet \to Y_\bullet \to Z_\bullet \to 0
\end{eqnarray}
be an exact sequence of complexes of abelian groups. Then,
\begin{enumerate}
\item \label{alglemmapart1}
If $X_\bullet$ and $Y_\bullet$ are exact complexes, then $Z_\bullet$ is an exact\\
complex.
\item \label{alglemmapart2}
If $Y_\bullet$ and $Z_\bullet$ are exact complexes, then $X_\bullet$ is an exact\\
complex.
\item \label{alglemmapart3}
If $X_\bullet$ and $Z_\bullet$ are exact complexes, then $Y_\bullet$ is an exact\\
complex.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (\ref{alglemmapart1}). Suppose that $X_\bullet$ and $Y_\bullet$ are
exact complexes. By Proposition~\ref{snakelemmaprop}, there is an exact sequence
\begin{eqnarray*}
&&\ldots \to H_{n+1} ( Z_\bullet) \to H_n ( X_\bullet) \to H_n ( Y_\bullet) \to H_n(
Z_\bullet)\\
&& \qquad \to H_{n-1} ( X_\bullet) \to H_{n-1} ( Y_\bullet ) \to \ldots
\end{eqnarray*}
The reader will observe that since the groups $\left\{ H_n ( X_\bullet ) \right\}$ and
$\left\{ H_n ( Y_\bullet ) \right\}$ are all zero, the groups $\left\{ H_n ( Z_\bullet )
\right\}$ must all be zero as well. Therefore~$Z_\bullet$ is an exact complex.
Assertions (\ref{alglemmapart2}) and (\ref{alglemmapart3}) follow similarly.
\end{proof}
Now we are ready to prove our main theorem.
\begin{theorem}
\label{acyclicitytheorem} Let $p$ be a prime. Let $\Delta$ be an abstract simplicial
complex which has a total ordering on its vertex set. If $\Delta$ is collapsible, then
$\Delta$ is $\mathbb{F}_p$-acyclic.
\end{theorem}
\begin{proof}
Recall (from \textit{\nameref{graphpropertysimplicial}}) the definition of
\textbf{primitive elementary collapse}. For any elementary collapse $( \Sigma, \Sigma'
)$, there is a sequence of primitive elementary collapses which reduces \hbox{$\Sigma$ to
$\Sigma'$}:
\begin{eqnarray}
\Sigma, \Sigma_1 , \Sigma_2 , \ldots , \Sigma_t , \Sigma'
\end{eqnarray}
(This is an elementary fact which the reader is invited to prove as an exercise.)
Suppose that the complex $\Delta$ is collapsible. There exists a sequence of elementary
collapses which collapse $\Delta$ to a single $0$-simplex. Therefore, there exists a
sequence of \textit{primitive} elementary collapses which collapse $\Delta$ to a single
$0$-simplex. Let
\begin{eqnarray}
\Delta, \Delta_1 , \Delta_2, \ldots , \Delta_r
\end{eqnarray}
be such a sequence, with $\left| \Delta_r
\right| = 1$.
Let $Z_\bullet$ be the complex formed by the quotient groups
\begin{eqnarray}
Z_n = K_n (\Delta , \mathbb{F}_p)/ K_n \left( \Delta_1 , \mathbb{F}_p \right).
\end{eqnarray}
The structure of the complex $Z_\bullet$ is quite simple: it is isomorphic
to the following complex:
\begin{eqnarray}
\hspace*{-6pt}\xymatrix{\ldots \ar[r] & 0 \ar[r] & 0 \ar[r] & \mathbb{F}_p \ar[r]^{Id} &
\mathbb{F}_p \ar[r] & 0 \ar[r] & 0 \ar[r] & \ldots}\qquad
\end{eqnarray}
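This can be justified as follows. Recalling the definition from
\textit{\nameref{graphpropertysimplicial}}, a primitive elementary collapse removes
exactly two simplices from the complex: a free face $P$, together with the unique
simplex $P'$ that properly contains it (so that $\dim P' = \dim P + 1$). Hence $Z_n$ is
one-dimensional for $n = \dim P$ and $n = \dim P'$, and zero otherwise. Every face of
$P'$ other than $P$ survives in $\Delta_1$, so in the quotient the boundary map carries
the class of $P'$ to $\pm$ the class of $P$; after rescaling one basis vector, this is
the identity map shown above.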
\vfill\eject
\noindent There is an exact sequence of complexes
\begin{eqnarray}
\xymatrix{
& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\
0 \ar[r] & K_1 \left( \Delta_1 , \mathbb{F}_p \right)
\ar[r] \ar[d]
& K_1 \left( \Delta , \mathbb{F}_p \right) \ar[r] \ar[d]
& Z_1 \ar[r] \ar[d] & 0 \\
0 \ar[r] & K_0 \left( \Delta_1 , \mathbb{F}_p \right)
\ar[r] \ar[d]
& K_0 \left( \Delta , \mathbb{F}_p \right) \ar[r] \ar[d]
& Z_0 \ar[r] \ar[d] & 0 \\
0 \ar[r] & \mathbb{F}_p \ar[r] \ar[d] &
\mathbb{F}_p \ar[r] \ar[d] & 0 \ar[r] \ar[d] & 0 \\
0 \ar[r] & 0 \ar[r] & 0 \ar[r] & 0 \ar[r] & 0 }
\end{eqnarray}
The complex $Z_\bullet$ is clearly exact. So by Lemma~\ref{exactnesslemma},
the complex
\begin{eqnarray*}
\xymatrix{\ldots \ar[r] & K_2 \left( \Delta, \mathbb{F}_p \right) \ar[r]^{d_2} & K_1
\left( \Delta , \mathbb{F}_p \right) \ar[r]^{d_1} & K_0 \left( \Delta , \mathbb{F}_p
\right)\ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0 }
\end{eqnarray*}
is exact iff
\begin{eqnarray*}
\hspace*{-3pt}\xymatrix{\ldots \ar[r] & K_2 (\Delta_1, \mathbb{F}_p) \ar[r]^{d_2} & K_1
(\Delta_1 , \mathbb{F}_p) \ar[r]^{d_1} & K_0 (\Delta_1 , \mathbb{F}_p)
\ar[r]^{\hspace*{0.2in}s} & \mathbb{F}_p \ar[r] & 0}
\end{eqnarray*}
is exact. Therefore $\Delta$ is $\mathbb{F}_p$-acyclic iff $\Delta_1$ is $\mathbb{F}_p$-acyclic.
Similar reasoning shows that for any $i$, $\Delta_i$ is $\mathbb{F}_p$-acyclic iff
$\Delta_{i+1}$ is $\mathbb{F}_p$-acyclic. The theorem follows by induction.
\end{proof}
\chapter{Fixed-Point Theorems}\label{fptchapter}
We are now ready to put the theory from \textit{\nameref{chaincomplexchapter}} to use to
study group actions $G \circlearrowleft \Delta$ on simplicial complexes.
\section{The Lefschetz Fixed-Point Theorem}\label{lftsection}
\begin{theorem}\label{acyclicimplieslft}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices. Suppose that
$\Delta$ is $\mathbb{F}_p$-acyclic for some prime number $p$. Let $f \colon \Delta \to
\Delta$ be a simplicial automorphism. Then, there exists a simplex $Q \in \Delta$ such
that $f ( Q ) = Q$.
\end{theorem}
\begin{proof}
Let us introduce some notation: if $Y$ is a finite-dimensional vector space over
$\mathbb{F}_p$, and $h \colon Y \to Y$ is a linear endomorphism, then let $\Tr_h \left( Y
\right)$ denote the trace of $h$ on $Y$. Note that the trace function is additive over
exact sequences. That is, if
\begin{eqnarray}
0 \to X \to Y \to Z \to 0
\end{eqnarray}
is an exact sequence, and $h$ acts on $X$, $Y$, and $Z$ in a compatible manner, then
\begin{eqnarray}
\Tr_h (Y) = \Tr_h (X) + \Tr_h (Z).
\end{eqnarray}
Let $F$ denote the chain map associated with $f$. Consider the
\hbox{values~of}
\begin{eqnarray}
\Tr_F \left( H_n \left( \Delta , \mathbb{F}_p \right) \right)
\end{eqnarray}
for $n = 0 , 1, 2, {\ldots}\,$. Since $\Delta$ is $\mathbb{F}_p$-acyclic, these are easy
to compute. If $n > 0$, then $H_n \left( \Delta , \mathbb{F}_p \right)$ is a zero vector
space. The vector space $H_0 \left( \Delta , \mathbb{F}_p \right)$ is a one-dimensional
$\mathbb{F}_p$-vector space on which $F$ acts trivially, since $F$ carries the class of
any vertex to the class of another vertex and all vertex classes coincide in $H_0$.
Therefore,
\begin{eqnarray}
\Tr_F \left( H_0 \left( \Delta , \mathbb{F}_p \right) \right) & = & 1, \\
\Tr_F \left( H_n \left( \Delta, \mathbb{F}_p \right) \right) & = & 0 \hskip0.2in \textnormal{ for } n > 0.
\end{eqnarray}
Now we can carry out the proof using the additivity of the trace function. Suppose, for
the sake of contradiction, that there is \textit{no} simplex in $\Delta$ which is
stabilized by $f$. Then, for any $n$, the chain map $F$ acts on $K_n \left( \Delta ,
\mathbb{F}_p \right)$ by permuting the basis elements in a fixed-point free manner,
possibly changing signs. A matrix representation of this action would be a matrix with
entries from the set $\{ -1, 0, 1 \}$, having only zeroes on the main diagonal. Thus we
see that
\begin{eqnarray}
\Tr_F \left( K_n \left( \Delta , \mathbb{F}_p \right) \right) = 0.
\end{eqnarray}
Observe the following chain of equalities.
\begin{eqnarray*}
0 & = & \sum_{n \geq 0} (-1)^n \Tr_F \left( K_n \left( \Delta , \mathbb{F}_p \right) \right) \\
& = & \Tr_F \left( K_0 \left( \Delta , \mathbb{F}_p \right) \right) +
\sum_{n \geq 1} (-1)^n \left[
\Tr_F \left( \im d_n \right) +
\Tr_F \left( \ker d_n \right)
\right] \\
& = & \Tr_F \left( K_0 \left( \Delta , \mathbb{F}_p \right) \right) - \Tr_F \left( \im
d_1 \right)\\
&& +\, \sum_{n \geq 1} (-1)^n \left[ \Tr_F \left( \ker d_n \right) -
\Tr_F \left( \im d_{n+1} \right) \right] \\
& = & \Tr_F \left( H_0 \left( \Delta , \mathbb{F}_p \right) \right)
+ \sum_{n \geq 1} (-1)^n \Tr_F \left( H_n \left( \Delta ,
\mathbb{F}_p \right) \right) \\
& = & 1.
\end{eqnarray*}
We obtain a contradiction. Therefore, there must exist a simplex $Q$ in $\Delta$ such
that $f ( Q ) = Q$.
\end{proof}
\pagebreak
\vspace*{-20pt}
\begin{corollary}\label{lefschetzcorollary}
Let $\Sigma$ be a finite abstract simplicial complex which is collapsible. Let $g \colon
\Sigma \to \Sigma$ be a simplicial automorphism. Then there must exist a simplex $T \in
\Sigma$ such that $g ( T ) = T$.
\end{corollary}
\begin{proof}
This follows immediately from the above theorem and Theorem~\ref{acyclicitytheorem}.
\end{proof}
Let us consider what Theorem~\ref{acyclicimplieslft} means geometrically. Suppose that
$\Theta$ is an ordinary simplicial complex in $\mathbb{R}^N$ (see
\textit{\nameref{simplicialcomplexsection}}). Then a simplicial automorphism of $\Theta$
is simply a continuous permutation of the points of $\Theta$ which maps every $n$-simplex
of $\Theta$ to another $n$-simplex of $\Theta$ in an affine-linear manner.
Suppose that $V \subset \mathbb{R}^N$ is a single $n$-simplex spanned by $\mathbf{v}_0 ,
\mathbf{v}_1 , \ldots , \mathbf{v}_n \in \mathbb{R}^N$. Note that any affine-linear map
of $V$ onto itself must fix the point
\begin{eqnarray}
\sum_{i = 0}^n \left( \frac{1}{n+1} \right) \mathbf{v}_i \in V.
\end{eqnarray}
Thus, any simplicial map which stabilizes $V$ must have a fixed point in~$V$: an
affine-linear map of $V$ onto itself permutes the vertices $\mathbf{v}_0 , \ldots ,
\mathbf{v}_n$, and therefore fixes their average, the barycenter displayed above.
Therefore, when we establish that a simplicial automorphism maps a particular simplex to
itself, we have in fact proved that it has a fixed point. This justifies
our calling Theorem~\ref{acyclicimplieslft} a ``fixed-point theorem.''
\enlargethispage{10pt}
Let $f \colon \Delta \to \Delta$ be a simplicial automorphism which satisfies the
assumptions of Theorem~\ref{acyclicimplieslft}, and suppose in addition that $\Delta^f$
is a subcomplex of $\Delta$, so that $f$ fixes the vertices of every simplex that it
stabilizes. We can use the reasoning from the proof of Theorem~\ref{acyclicimplieslft}
to draw further conclusions about the set $\Delta^f$. If $f ( Q ) = Q$, then the
diagonal entry of $F$ at $Q$ is $\sign ( \hat{f}_{\mid Q} )$, which under our assumption
is always $+1$. Consequently the quantity
\begin{eqnarray}
\Tr_F ( K_n \left( \Delta , \mathbb{F}_p \right) )
\end{eqnarray}
is equal to the number of $n$-simplices $Q \in \Delta$ that satisfy $f ( Q ) = Q$. By
the reasoning from the proof of Theorem~\ref{acyclicimplieslft}, we have
\begin{eqnarray}
\sum_{n \geq 0} (-1)^n \Tr_F \left( K_n ( \Delta , \mathbb{F}_p ) \right) & = & 1.
\end{eqnarray}
This implies a different version of Theorem~\ref{acyclicimplieslft}. For any subset $S$
of a simplicial complex $\Delta$, let
\begin{eqnarray}
\chi \left( S \right) & = & \sum_{n \geq 0} (-1)^n \left| \left\{ Q \in S \mid \dim ( Q )
= n \right\} \right|\!.
\end{eqnarray}
The quantity $\chi ( S )$ is called the \textbf{Euler characteristic} of $S$.
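For instance, for the hollow and solid triangles $\Sigma$ and $\Sigma'$ of
Examples~\ref{triangleexample} and \ref{solidtriangleexample},
\begin{eqnarray*}
\chi ( \Sigma ) = 3 - 3 = 0, \qquad \chi ( \Sigma' ) = 3 - 3 + 1 = 1,
\end{eqnarray*}
consistent with the fact that $\Sigma'$ is collapsible while $\Sigma$ is not.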
\removelastskip\pagebreak
\begin{theorem}\label{eulercharacteristictheorem}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices, and suppose
that $\Delta$ is $\mathbb{F}_p$-acyclic for some prime number $p$. Let $f \colon \Delta
\to \Delta$ be a simplicial automorphism such that $\Delta^f$ is a subcomplex of
$\Delta$. Then,
\begin{eqnarray}
\chi ( \Delta^f ) & = & 1.
\end{eqnarray}
\end{theorem}
\section{A Nonabelian Fixed-Point Theorem}\label{nonabeliansection}
In this part of the text we will prove a nonabelian fixed-point theorem which is
attributed to R.~Oliver \cite{oliver1975}.
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G$ be a finite group
which acts on $\Delta$ via simplicial automorphisms. By
Corollary~\ref{lefschetzcorollary}, we know that for any element $g \in G$, there must be
a simplex $Q \in \Delta$ such that $g \left ( Q \right) = Q$. We will prove that, under
certain conditions, a stronger statement can be made: there must exist a single
simplex~$Q$ which is stabilized by all the elements of $G$.
Our method of proof for this result is essentially an inductive one. We require that the
automorphism group $G$ has a certain filtration by subgroups,
\begin{eqnarray}
\{ 0 \} = G_0 \subset G_1 \subset G_2 \subset \ldots \subset G_r = G,
\end{eqnarray}
and we inductively deduce conditions on the $G_i$-fixed subsets of $\Delta$, for $i = 0,
1, 2, \ldots, r$. The key to this argument is the first result that we will prove,
Proposition~\ref{acyclicitypreservation}, which tells us that the property of
``$\mathbb{F}_p$-acyclicity'' can be carried forward along this filtration. The proof of
Proposition~\ref{acyclicitypreservation} is the most difficult part of the argument; once
that proposition is proved, the other elements of the argument fall into place easily.
\enlargethispage{6pt}
For now, we will be focusing our attention on simplicial automorphisms $f \colon \Delta
\to \Delta$ for which $\Delta^f$ \textit{is} a subcomplex of $\Delta$. That is, we will
be focusing on those maps $f$ satisfying the condition
\begin{eqnarray}
Q \in \Delta^f \textnormal{ and } Q' \subseteq Q \Longrightarrow
Q' \in \Delta^f
\end{eqnarray}
for any $Q, Q' \in \Delta$. Geometrically, what this condition implies is that if $f$
stabilizes a simplex $Q$, then it also fixes all of the vertices of $Q$.
The \textbf{order} of a simplicial automorphism $f \colon \Delta \to \Delta$ is the least
$n \geq 1$ such that $f^n$ is the identity. (If no such $n$ exists, then the order of
$f$ is $\infty$.)
\begin{proposition}\label{acyclicitypreservation}
Let $\Delta$ be a finite abstract simplicial complex with ordered vertices. Let $p$ be
a prime, and suppose that $\Delta$ is $\mathbb{F}_p$-acyclic. Suppose that $f \colon
\Delta \to \Delta$ is an order-$p$ automorphism of $\Delta$ such that $\Delta^f$ is a
subcomplex of $\Delta$. Then, the complex $\Delta^f$ must be $\mathbb{F}_p$-acyclic.
\end{proposition}
\begin{proof}
Suppose that $\Delta$ is $\mathbb{F}_p$-acyclic. We know, by
Theorem~\ref{acyclicimplieslft}, that the subcomplex $\Delta^f$ must be nonempty. To
prove the proposition, we must show that the homology groups $H_n \left( \Delta^f ,
\mathbb{F}_p \right)$ are trivial for $n > 0$, and that $H_0 \left( \Delta^f ,
\mathbb{F}_p \right)$ is one-dimensional.
The proof that follows is based on the paper ``Fixed-point theorems for periodic
transformations'' by Smith~\cite{smith1941}. The approach of the proof is to
define some special subcomplexes of $K_\bullet \left( \Delta , \mathbb{F}_p \right)$ and
then exploit relationships between these subcomplexes.
Let
\begin{eqnarray}
F \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to
K_\bullet \left( \Delta , \mathbb{F}_p \right)
\end{eqnarray}
denote the chain map associated with $f$. Note that since $F$ is a map of
complexes, any linear combination of the maps $F , F^2, F^3, \ldots$ is also a map of
complexes. Define
\begin{eqnarray}
\delta \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta
, \mathbb{F}_p \right)
\end{eqnarray}
by
\begin{eqnarray}
\delta = \mathbb{I} - F.
\end{eqnarray}
(Here $\mathbb{I}$ denotes the identity map.) Define
\begin{eqnarray}
\sigma \colon K_\bullet \left( \Delta , \mathbb{F}_p \right) \to K_\bullet \left( \Delta
, \mathbb{F}_p \right)
\end{eqnarray}
by
\begin{eqnarray}
\sigma = \mathbb{I} + F + F^2 + \ldots + F^{p-1}.
\end{eqnarray}
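Since the chain map associated with a composition of simplicial isomorphisms is the
composition of the associated chain maps, and since $f^p$ is the identity, we have $F^p
= \mathbb{I}$. Consequently,
\begin{eqnarray*}
\delta \circ \sigma = \sigma \circ \delta = \mathbb{I} - F^p = 0,
\end{eqnarray*}
so that $\im \sigma \subseteq \ker \delta$ and $\im \delta \subseteq \ker \sigma$. This
will be visible in the explicit bases described below.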
The maps $\delta$ and $\sigma$ determine four subcomplexes of $K_\bullet \left( \Delta ,
\mathbb{F}_p \right)$:
\begin{eqnarray}
( \im \delta), (\ker\delta), (\im \sigma),\quad \textnormal{and}\quad (\ker \sigma).
\end{eqnarray}
We can describe these four complexes explicitly. Let $\Delta' = \Delta \smallsetminus
\Delta^f$. Let $S \subseteq \Delta'$ be a set which contains exactly one element from
every $f$-orbit in $\Delta'$. Then the following assertions hold (as the reader may
verify):
\begin{itemize}
\item The set
\begin{eqnarray*}
\left\{ \sum_{i=0}^{p-1} F^i \left( Q \right) \mid Q \in S \right\}
\end{eqnarray*}
is a basis\footnote{When we say that a set $T$ is a basis for a complex $X_\bullet$, we
mean that $T$ is a union of bases for the vector spaces $\{ X_i \}$.} for $\left( \im
\sigma \right)$.
\item The set
\begin{eqnarray*}
\left\{ F^i ( Q) - F^{i+1} ( Q ) \mid
Q \in S , 0 \leq i \leq p-2 \right\}
\end{eqnarray*}
is a basis for $\left( \im \delta \right)$.
\item The set
\begin{eqnarray*}
\left\{ F^i ( Q) - F^{i+1} ( Q ) \mid
Q \in S , 0 \leq i \leq p-2 \right\} \cup
\left\{ Q \mid Q \in \Delta^f \right\}
\end{eqnarray*}
is a basis for $\left( \ker \sigma \right)$.
\item The set
\begin{eqnarray*}
\left\{ \sum_{i=0}^{p-1} F^i \left( Q \right) \mid
Q \in S \right\}
\cup
\left\{ Q \mid Q \in \Delta^f \right\}
\end{eqnarray*}
is a basis for $\left( \ker \delta \right)$.
\end{itemize}
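As a quick dimension count supporting these claims: every $f$-orbit in $\Delta'$ has
exactly $p$ elements (its size divides $p$ and is not $1$), so if we write $S_n$ and
$\Delta^f_n$ for the sets of $n$-simplices in $S$ and $\Delta^f$ respectively, then in
each dimension $n$ the proposed bases for $( \ker \sigma )$ and $( \im \sigma )$
together contain
\begin{eqnarray*}
\big( (p-1) \left| S_n \right| + | \Delta^f_n | \big) + \left| S_n \right|
= p \left| S_n \right| + | \Delta^f_n |
\end{eqnarray*}
elements, which is exactly the number of $n$-simplices of $\Delta$, as rank--nullity
requires.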
From these bases, we can see that there are the following isomorphisms of complexes:
\begin{eqnarray}\label{keyisomorphism1}
\left( \ker \sigma \right) \cong \left( \im \delta \right)
\oplus K_\bullet \left( \Delta^f , \mathbb{F}_p \right)
\end{eqnarray}
and
\begin{eqnarray}\label{keyisomorphism2}
\left( \ker \delta \right) \cong \left( \im \sigma \right)
\oplus K_\bullet \left( \Delta^f , \mathbb{F}_p \right).
\end{eqnarray}
These imply isomorphisms of homology groups:
\begin{eqnarray}\label{homiso1}
H_n ( \ker \sigma ) \cong H_n ( \im \delta ) \oplus H_n ( \Delta^f , \mathbb{F}_p ), \\
\label{homiso2} H_n ( \ker \delta ) \cong H_n ( \im \sigma)
\oplus H_n ( \Delta^f , \mathbb{F}_p ).
\end{eqnarray}
Now, consider the exact sequences
\begin{eqnarray}
0 \to \left( \ker \sigma \right) \to K_\bullet \left( \Delta ,
\mathbb{F}_p \right) \to \left( \im \sigma \right) \to 0, \\
0 \to \left( \ker \delta \right) \to K_\bullet \left( \Delta ,
\mathbb{F}_p \right) \to \left( \im \delta \right) \to 0.
\end{eqnarray}
By Proposition~\ref{snakelemmaprop}, these imply the existence of two long exact
sequences:
\begin{eqnarray*}
&& \ldots \to H_{n+1} ( \im \sigma ) \to H_n ( \ker \sigma ) \to H_n ( \Delta ,
\mathbb{F}_p ) \to H_n ( \im \sigma )\\
&&\qquad \to H_{n-1} ( \ker \sigma ) \to \ldots \\[6pt]
&& \ldots \to H_{n+1} ( \im \delta ) \to H_n ( \ker \delta ) \to H_n ( \Delta ,
\mathbb{F}_p ) \to H_n ( \im \delta )\\
&&\qquad \to H_{n-1} ( \ker \delta ) \to \ldots .
\end{eqnarray*}
Let us step through the terms in these sequences, starting from the left. Let $c$ be the
dimension of the complex $\Delta$ (that is, the dimension of the largest simplex in
$\Delta$). The exact sequences take the form
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow H_c ( \ker \sigma )
\to H_c ( \Delta , \mathbb{F}_p ) \to H_c ( \im \sigma )
\to H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow H_c ( \ker \delta ) \to H_c ( \Delta ,
\mathbb{F}_p ) \to H_c ( \im \delta ) \to H_{c-1} ( \ker \delta ) \to \ldots .
\end{eqnarray*}
Since $\Delta$ is $\mathbb{F}_p$-acyclic, we know that $H_c \left( \Delta ,\mathbb{F}_p \right) = \{ 0
\}$, which clearly implies that both $H_c \left( \ker \sigma \right)$ and $H_c \left(
\ker \delta \right)$ are zero. So the exact sequences take the form
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow H_c ( \im \sigma )
\to H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0 \longrightarrow H_c ( \im
\delta ) \to H_{c-1} ( \ker \delta ) \to \ldots
\end{eqnarray*}
But isomorphisms \eqref{homiso1} and \eqref{homiso2} imply that $H_c \left( \im \sigma
\right)$ and $H_c \left( \im \delta \right)$ are also zero. So the exact sequences are
like so:
\begin{eqnarray*}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow 0
\longrightarrow H_{c-1} ( \ker \sigma ) \to \ldots \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow 0 \longrightarrow 0
\longrightarrow H_{c-1} ( \ker \delta ) \to \ldots
\end{eqnarray*}
We can apply the same reasoning to show that all terms in the sequences with index
$(c-1)$ are likewise zero. Continuing in this manner, we eventually find that
\textit{all} the homology groups in the sequences that have a positive index are zero.
We are left with the exact sequences in the following form:
\begin{eqnarray}\label{finallongexactsequence}
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow
H_0 \left( \ker \sigma \right) \to H_0 \left( \Delta , \mathbb{F}_p
\right) \to H_0 \left( \im \sigma \right) \longrightarrow 0\qquad\quad \\[6pt]
\ldots \longrightarrow 0 \longrightarrow 0 \longrightarrow H_0 \left( \ker \delta \right)
\to H_0 \left( \Delta , \mathbb{F}_p \right) \to H_0 \left( \im \delta \right)
\longrightarrow 0\qquad\quad
\end{eqnarray}
We have shown that all of the homology groups $H_n \left( \ker \sigma \right)$, $n > 0$
are trivial. This implies by isomorphism~(\ref{homiso1}) that $H_n \left( \Delta^f ,
\mathbb{F}_p \right)$ is trivial for all $n > 0$. Also, we know from
isomorphism~(\ref{homiso1}) and sequence~(\ref{finallongexactsequence}) that
\begin{eqnarray}
\dim H_0 \left( \Delta^f , \mathbb{F}_p \right) \leq \dim H_0 \left( \ker \sigma \right)
\leq \dim H_0 \left( \Delta , \mathbb{F}_p \right) = 1.
\end{eqnarray}
The dimension of $H_0 \left( \Delta^f , \mathbb{F}_p \right)$ cannot be zero (since
$\Delta^f$ is nonempty). So $H_0 \left( \Delta^f , \mathbb{F}_p \right)$ must be
one-dimensional. Therefore, $\Delta^f$ is\break $\mathbb{F}_p$-acyclic.
\end{proof}
\begin{corollary}\label{pgroupactioncorollary}
Let $\Delta$ be as in Proposition~\ref{acyclicitypreservation}: finite, with ordered
vertices, and $\mathbb{F}_p$-acyclic. Suppose that $H$ is a group of order $p^m$, with
$m \geq 1$, which acts on $\Delta$ in such a way that $\Delta^h$ is a subcomplex of
$\Delta$ for any $h \in H$. Then, $\Delta^H$ is $\mathbb{F}_p$-acyclic.
\end{corollary}
\begin{proof}
Since $\left| H \right| = p^m$, there exists a filtration of $H$ by normal subgroups,
\begin{eqnarray}
\{ 0 \} = H_0 \subset H_1 \subset \ldots \subset H_m = H
\end{eqnarray}
such that $H_i / H_{i-1} \cong \mathbb{Z} / p \mathbb{Z}$ for any $i \in \{ 1, 2, \ldots,
m \}$. (See Chapter~I, Corollary~6.6 in \cite{lang}.) For any $i \in \{ 1, 2, \ldots ,
m \}$, we can choose an element $a_i \in H_i$ which generates $H_i / H_{i-1}$. Then,
\begin{eqnarray}
\Delta^{H_i} = \left( \Delta^{H_{i-1}} \right)^{a_i}.
\end{eqnarray}
By Proposition~\ref{acyclicitypreservation}, if $\Delta^{H_{i-1}}$ is
$\mathbb{F}_p$-acyclic, so is $\Delta^{H_i}$. The corollary follows by induction.
\end{proof}
Now we are ready to prove the main theorem.
\begin{theorem}\label{nonabelianfixedpointtheorem}
Let $G$ be a finite group satisfying the following \hbox{condition}:
\begin{itemize}
\item There is a normal subgroup $G' \subseteq G$ such that
$\left| G' \right|$ is a prime power and $G / G'$ is cyclic.
\end{itemize}
Let $\Delta$ be a collapsible abstract simplicial complex on which $G$ acts, satisfying
the condition that $\Delta^g$ is a simplicial complex for any $g \in G$. Then, $\chi (
\Delta^G ) = 1$.
\end{theorem}
\begin{proof}
We are given that $\left| G' \right| = p^m$ for some prime $p$ and $m \geq 0$. Choose a
total ordering on the vertices of $\Delta$. By Theorem~\ref{acyclicitytheorem}, $\Delta$
is $\mathbb{F}_p$-acyclic. By Corollary~\ref{pgroupactioncorollary}, $\Delta^{G'}$ is
$\mathbb{F}_p$-acyclic.
Choose an element $b \in G$ whose image generates $G/G'$. The fixed set
$(\Delta^{G'})^b$ equals $\Delta^{G'} \cap \Delta^b$, so it is a subcomplex of
$\Delta^{G'}$; and since $G'$ and $b$ together generate $G$, it is equal to $\Delta^G$.
By Theorem~\ref{eulercharacteristictheorem}, the complex
\begin{eqnarray}
(\Delta^{G'})^b = \Delta^G
\end{eqnarray}
has Euler characteristic equal to $1$.
\end{proof}
Note that Theorem~\ref{nonabelianfixedpointtheorem} implies in particular that the
invariant subcomplex $\Delta^G$ is nonempty.
\section{Barycentric Subdivision}\label{barycentricsubdivisionsection}
In \textit{\nameref{nonabeliansection}} we proved
Theorem~\ref{nonabelianfixedpointtheorem}, which asserts that if a group action $G
\circlearrowleft \Delta$ satisfies certain requirements, then $\Delta^G$ must be
nonempty. The theorem as stated is unfortunately not general enough for our purposes.
Indeed the condition that all of the subsets $\{ \Delta^g \mid g \in G \}$ are
subcomplexes will not be satisfied by the simplicial complexes arising from graph
properties, except in trivial cases. Therefore we need a theorem which can be applied to
group actions that do not satisfy this condition.
\begin{figure}[!b]
\centerline{\includegraphics{f4-1}}
\fcaption{The complexes $\Sigma$ and $\bar ( \Sigma )$.\label{barycentricfig}}
\end{figure}
Barycentric subdivision is a process of dividing up the simplices in a simplicial
complex into smaller simplices. Barycentric subdivision replaces an abstract simplicial
complex $\Delta$ with a larger complex $\Delta'$ that has similar properties. The
advantage of this construction is that for any simplicial automorphism $g \colon \Delta
\to \Delta$, there is an induced automorphism $g \colon \Delta' \to \Delta'$ which
satisfies the condition that $(\Delta' )^g$ is an abstract simplicial complex. Working
within this larger complex will allow us to prove a generalization of
Theorem~\ref{nonabelianfixedpointtheorem}.
\begin{definition}\label{barycentricdef}
Let $\Delta$ be an abstract simplicial complex. Then the \textbf{barycentric subdivision
of $\Delta$}, denoted $\bar ( \Delta )$, is the simplicial \hbox{complex}
\begin{eqnarray*}
\bar(\Delta ) = \big\{ \{ Q_1, Q_2, \ldots , Q_r \} \mid r \geq 1, Q_i \in \Delta , Q_1
\subset Q_2 \subset \ldots \subset Q_r \big\}.
\end{eqnarray*}
\end{definition}
Here is another way to phrase the above definition. Let $\Delta$ be an abstract
simplicial complex. Then the subset relation $\subset$ gives a partial ordering on the
elements of $\Delta$. The complex $\bar ( \Delta )$ is the set of all $\subset$-chains
in $\Delta$.
As an example, let
\begin{eqnarray}\label{baryexample}
\Sigma = \left\{ \{ 0 \} , \{ 1 \} , \{ 2 \} , \{ 0, 1 \} , \{ 1 ,2 \} , \{ 0, 2 \} , \{
0, 1, 2 \} \right\}\!.
\end{eqnarray}
The complex $\Sigma$ and its barycentric subdivision $\bar ( \Sigma )$ are shown in
Figure~\ref{barycentricfig}.
Geometrically, the operation $[ \Delta \mapsto \bar ( \Delta ) ]$ has the effect of
splitting every simplex of dimension $n$ in $\Delta$ into $(n+1)!$ simplices of
\hbox{dimension~$n$}. Note that vertices in $\bar ( \Delta )$ are in one-to-one
correspondence with the simplices of $\Delta$. Figure~\ref{barycentric2fig} shows
another example of barycentric subdivision.
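For instance, the complex $\bar ( \Sigma )$ of Figure~\ref{barycentricfig} has seven
vertices, one for each simplex of $\Sigma$, and the single $2$-simplex $\{ 0, 1, 2 \}$
of $\Sigma$ is divided into $3! = 6$ smaller triangles; one of them is the
$\subset$-chain $\left\{ \{ 0 \} , \{ 0, 1 \} , \{ 0, 1, 2 \} \right\}$.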
\begin{figure}
\centerline{\includegraphics{f4-2}}
\fcaption{An example of barycentric subdivision.\label{barycentric2fig}}
\end{figure}
As the reader can observe, the simplicial complex $\bar ( \Delta )$ has some similarities
with the original simplicial complex $\Delta$. It can be shown that the
homology groups of $\bar ( \Delta )$ are isomorphic to those of $\Delta$, although we
will not need to prove that here. The following propositions are proved in the Appendix
(as Proposition~\ref{baryeulerappendixprop} and Proposition~\ref{baryappendixprop}).
\begin{proposition}\label{barycollapseprop}
Let $\Delta$ be an abstract simplicial complex. If $\Delta$ is collapsible, then $\bar (
\Delta )$ is also collapsible.
\end{proposition}
\begin{proposition}\label{baryeulerprop}
Let $\Delta$ be a finite abstract simplicial complex. Then, $\chi ( \bar ( \Delta ) ) =
\chi ( \Delta )$.
\end{proposition}
Now, let us consider how this construction behaves under group actions. Let $f \colon
\Delta \to \Delta$ be a simplicial automorphism of $\Delta$. Then there is an induced
simplicial automorphism,
\begin{eqnarray}
f \colon \bar ( \Delta ) \to \bar ( \Delta ).
\end{eqnarray}
The invariant subset $\bar ( \Delta )^f$ can be expressed like so:
\begin{eqnarray*}
\bar ( \Delta )^f = \big\{\{ Q_1, Q_2, \ldots , Q_r \} \mid r \geq 1, Q_i \in \Delta^f ,
Q_1 \subset Q_2 \subset \ldots \subset Q_r \big\}.
\end{eqnarray*}
\pagebreak
\noindent It is easy to see that this set is always a simplicial complex. Thus the
following lemma holds true:
\begin{lemma}\label{barylemma}
Let $\Delta$ be an abstract simplicial complex, and let
$G \circlearrowleft \Delta$ be a group action on $\Delta$. Then,
for any $g \in G$, the set
\begin{eqnarray}
\left( \bar ( \Delta ) \right)^g
\end{eqnarray}
is a subcomplex of $\bar ( \Delta )$.
\end{lemma}
Lemma~\ref{barylemma} can be observed in the example complex $\Sigma$ which we discussed
above~\eqref{baryexample}. As we can see in Figure~\ref{barycentricfig}, any permutation
of the set $\{ 0, 1, 2 \}$ fixes a subcomplex of the complex $\bar ( \Sigma)$.
With the aid of barycentric subdivision, we can now prove the following fixed-point
theorem.
\begin{theorem}\label{fptinit}
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G \circlearrowleft
\Delta$ be a group action on $\Delta$. Suppose that $G$ has a normal subgroup $G'$ which
is such that $\left| G' \right|$ is a prime power and $G / G'$ is cyclic. Then, the set
$\Delta^G$ is nonempty.
\end{theorem}
\begin{proof}
By Proposition~\ref{barycollapseprop}, $\bar ( \Delta )$ is collapsible. By
Theorem~\ref{nonabelianfixedpointtheorem}, $\chi ( \bar ( \Delta )^G ) = 1$. Therefore
$\bar ( \Delta )^G$ is nonempty, and thus $\Delta^G$ is likewise nonempty.
\end{proof}
Now let $\Delta^{[G]}$ denote the complex constructed in
\textit{\nameref{groupactionsection}}. The set $\Delta^{[G]}$ is very similar to
$\Delta^G$; indeed, there is a one-to-one, inclusion-preserving map
\begin{eqnarray}\label{naturaliso}
i \colon \Delta^{[G]} \to \Delta^G
\end{eqnarray}
which is given simply by mapping any $S \in \Delta^{[G]}$ to the union of the elements of
$S$. (The main difference between $\Delta^G$ and $\Delta^{[G]}$ is that $\Delta^{[G]}$ is
a simplicial complex, whereas $\Delta^G$ generally is not.)
The map~(\ref{naturaliso}) induces a simplicial isomorphism
\begin{eqnarray}\label{barygroupiso}
\bar \left( \Delta^{[G]} \right) \to
\bar ( \Delta )^G.
\end{eqnarray}
Figure~\ref{barygroupfig} illustrates the relationship between $\Delta^{[G]}$ and $\bar (
\Delta )^G$. \hbox{Isomorphism}~(\ref{barygroupiso}) enables our final generalization of
\hbox{Theorem}~\ref{nonabelianfixedpointtheorem}.
\begin{figure}[!t]
\centerline{\includegraphics{f4-3}}
\fcaption{A continuation of the example from Figures~\ref{groupactionfig}
and \ref{groupaction2fig}. The barycentric subdivision of $\Sigma^{[H]}$ is
isomorphic to $\bar ( \Sigma )^H$.\label{barygroupfig}}
\end{figure}
\begin{theorem}\label{fpt}
Let $\Delta$ be a collapsible abstract simplicial complex. Let $G \circlearrowleft
\Delta$ be a group action on $\Delta$. Suppose that $G$ has a normal subgroup $G'$ which
is such that $\left| G' \right|$ is a prime power and $G / G'$ is cyclic. Then,
\begin{eqnarray}
\chi ( \Delta^{[G]} ) = 1.
\end{eqnarray}
\end{theorem}
\begin{proof}
By Proposition~\ref{barycollapseprop}, $\bar ( \Delta )$ is collapsible. Therefore the
Euler characteristic of $\bar ( \Delta )^G \cong \bar ( \Delta^{[G]} )$ is $1$. By
Proposition~\ref{baryeulerprop}, the Euler characteristic of $\Delta^{[G]}$ is likewise
equal to $1$.
\end{proof}
\chapter[Results on Decision-Tree Complexity]{Results on Decision-Tree Complexity}\label{resultschapter}
In this part of the text, we will give the proofs of three lower bounds on the
decision-tree complexity of graph properties, due to Kahn, Saks, Sturtevant, and Yao.
Then we will sketch (without proof) some more recent results.
Let
\begin{eqnarray}
h \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. The function $h$ satisfies two
conditions: it is increasing (meaning that if $Z$ is a subgraph of $Z'$ then $h ( Z )
\leq h ( Z' )$) and it is also isomorphism-invariant ($Y \cong Y' \Longrightarrow h ( Y )
= h ( Y' )$). Proofs of evasiveness exploit the interaction between these two
conditions.
As we saw in \textit{\nameref{basicconceptschapter}}, the monotone-increasing condition
implies that $h$ determines a simplicial complex, $\Delta_h$, whose \hbox{simplices}
correspond to graphs $Z$ that satisfy $h ( Z ) = 0$. The isomorphism-invariant property
implies that this complex $\Delta_h$ is highly symmetric. If $\sigma$ is any permutation
of $V$, and
\begin{eqnarray}
E \subseteq \left\{ \{ v, w \} \mid v, w \in V \right\}
\end{eqnarray}
is an edge set such that $h ( ( V, E ) ) = 0$, then the edge set
\begin{equation}
\sigma ( E ) = \left\{ \{ \sigma ( v ) , \sigma ( w ) \} \mid \{ v , w \} \in E \right\}
\end{equation}
also satisfies $h ( (V , \sigma ( E )) ) = 0$. Thus there is an induced automorphism
$\sigma \colon \Delta_h \to \Delta_h$.
If $h$ were nonevasive, then $\Delta_h$ would be collapsible, and we could apply
fixed-point theorems to $\Delta_h$. Corollary~\ref{lefschetzcorollary} would imply that
$\Delta_h$ must have a simplex which is stabilized by $\sigma$. Therefore, we have the
following interesting result: if $h$ is a nonevasive graph property, then for any
permutation $\sigma \colon V \to V$ there must be a nontrivial $\sigma$-invariant graph
which does not satisfy $h$. Figure~\ref{invariantgraphsfig} shows what we can deduce
when $\left| V \right| = 9$ and $\sigma$ is chosen to be a cyclic permutation.
\begin{figure}[!t]
\centerline{\includegraphics{f5-1}}
\fcaption{Let $V$ be a set of size $9$, and let $h$ be a nontrivial increasing graph
property. If $h$ is not evasive, then at least one of the graphs above must fail to
satisfy $h$.\label{invariantgraphsfig}}
\vspace*{-3pt}
\end{figure}
\enlargethispage{6pt}
When we go further and consider the actions of finite groups on $\Delta_h$, we get
stronger results. Note that the entire symmetric group $\Sym ( V )$ acts on $\Delta_h$.
Unfortunately this group is too big for the application of any fixed-point theorems that
we have proved, and so we must restrict the action to some appropriate subgroup of $\Sym
( V )$. Making this choice of subgroup is a key step for many of the results that we will
discuss.
\section{Graphs of Order $p^k$}\label{mainresultsection}
\begin{theorem}[Kahn et~al. \cite{kss1984}]
Let $V$ be a finite set of order $p^k$, where $p$ is prime and $k \geq 1$. Let
\begin{eqnarray}
h \colon \mathbf{G} (V) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. Then, $h$ must be evasive.
\end{theorem}
\removelastskip\pagebreak
\begin{proof}
Without loss of generality, we may assume that $V$ is the set of elements of the finite
field $\mathbb{F}_{p^k}$. For any $a, b \in \mathbb{F}_{p^k}$ with $a \neq 0$, there is
a permutation of $V$ given by
\begin{eqnarray}
x \mapsto ax + b.
\end{eqnarray}
Let $G \subseteq \textnormal{Sym} ( V )$ be the group of all such permutations. Let $G'
\subseteq G$ be the subgroup consisting of permutations of the form $x \mapsto x + b$.
We make the following observations:
\begin{enumerate}
\item \textbf{The subgroup $G'$ is an abelian group of
order $p^k$.} It is isomorphic to the additive group of $\mathbb{F}_{p^k}$.
\item \textbf{The subgroup $G'$ is normal.}
This is apparent from the fact that for any $x, a, b \in \mathbb{F}_{p^k}$, with $a \neq
0$,
\begin{eqnarray}
a^{-1} ( a x + b ) = x + a^{-1} b.
\end{eqnarray}
\item \textbf{The quotient group $G / G'$ is cyclic.}
The quotient group $G / G'$ is isomorphic to the multiplicative group of
$\mathbb{F}_{p^k}$, which is known to be cyclic (see Theorem IV.1.9 from \cite{lang}).
\item \label{transitivepairs} \textbf{The action of $G$ is transitive on
pairs of distinct elements $( x, x' ) \in V \times V$.} This is a consequence of the fact
that for any pairs $(x, x')$ and $(y, y')$ with $x \neq x'$ and $y \neq y'$, the system
of equations
\begin{eqnarray}
ax + b & = & y \\
a x' + b & = & y'
\end{eqnarray}
has a solution, with $a \neq 0$.
\end{enumerate}
Consider the group action
\begin{eqnarray}
G \circlearrowleft \Delta_h.
\end{eqnarray}
Suppose that the graph property $h$ is nonevasive. By
Theorem~\ref{collapsibilitytheorem}, the simplicial complex $\Delta_h$ is
collapsible.\footnote{Technically, this is not true if $\Delta_h$ is empty, and so we
need to address that case separately. If $\Delta_h$ is empty, then $h$ must be the
function that maps the empty graph to zero and all other graphs to $1$. This graph
property is easily seen to be evasive.} By Theorem~\ref{fptinit}, the set~$\left(
\Delta_h \right)^G$ is nonempty. Therefore there is a nonempty $G$-invariant graph which
does not satisfy $h$. But by property (\ref{transitivepairs}) above, the only nonempty
\hbox{$G$-invariant} graph is the complete graph. This makes $h$ a trivial graph
property, and thus we obtain a contradiction.
We conclude that $h$ must be an evasive graph property.
\end{proof}
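The group-theoretic observations used in this proof are easy to check explicitly in the
prime-field case $k=1$. The following short Python script (an illustration only, not part
of the proof; the prime $p=7$ is an arbitrary choice) verifies
observation~(\ref{transitivepairs}) and the resulting rigidity of nonempty $G$-invariant
graphs:
\begin{verbatim}
from itertools import product

p = 7                                  # prime; V = F_p = {0, 1, ..., p-1}
G = [(a, b) for a in range(1, p) for b in range(p)]   # x -> a*x + b, a != 0

# Observation (4): G is transitive on ordered pairs of distinct elements;
# the orbit of the single pair (0, 1) already covers all of them.
orbit = {(b % p, (a + b) % p) for (a, b) in G}
assert orbit == {(x, y) for x, y in product(range(p), repeat=2) if x != y}

# Hence the only nonempty G-invariant graph is the complete graph:
edges = {frozenset((b % p, (a + b) % p)) for (a, b) in G}
assert edges == {frozenset((x, y))
                 for x, y in product(range(p), repeat=2) if x != y}
print("checked: |G| =", len(G), "= p(p-1) =", p * (p - 1))
\end{verbatim}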
\section{Bipartite Graphs}
Let $V$ be a finite set which is the disjoint union of two subsets, $Y$ and~$Z$. Then a
bipartite graph on $(Y, Z)$ is a graph whose edges are all elements of the set
\begin{eqnarray}\label{bipartiteedgepairs}
\left\{ \{ y, z \} \mid y \in Y, z \in Z \right\}\!.
\end{eqnarray}
A bipartite isomorphism between such graphs is a graph isomorphism which respects the
partition $(Y, Z)$.
Let $\mathbf{B} ( Y, Z)$ denote the set of all bipartite graphs on $(Y, Z)$. A
\textbf{bipartite graph property} is a function
\begin{eqnarray}\label{examplebipartiteprop}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
which respects bipartite isomorphisms. If this function is monotone increasing, it
determines a simplicial complex $\Delta_f$ whose vertices are elements of the
set~(\ref{bipartiteedgepairs}).
Naturally, we say that the bipartite graph property~(\ref{examplebipartiteprop}) is
evasive if its decision-tree complexity $D(f)$ is equal to $\left| Y \right| \cdot \left|
Z \right|$. The following proposition can be proved by the same method that we used to
prove Theorem~\ref{collapsibilitytheorem}.
\begin{proposition}
Let $Y$ and $Z$ be disjoint finite sets, and let
\begin{eqnarray}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
be a monotone-increasing bipartite graph property which is not evasive. If the complex
$\Delta_f$ is not empty, then it is collapsible.
\end{proposition}
Note that the complex $\Delta_f$ always has a group action,
\begin{eqnarray}
\left( \Sym ( Y ) \times \Sym (Z ) \right) \circlearrowleft \Delta_f.
\end{eqnarray}
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Yao \cite{yao1988}]
Let $Y$ and $Z$ be disjoint finite sets, and let
\begin{eqnarray}
f \colon \mathbf{B} ( Y, Z ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial bipartite graph property which is monotone increasing. Then, $f$ is
evasive.
\end{theorem}}
\begin{proof}
Let $\sigma \colon Y \to Y$ be a cyclic permutation of the elements of $Y$, and let $G
\subseteq \Sym ( Y )$ be the subgroup generated by $\sigma$. The edge set of any
$G$-invariant bipartite graph has the form
\begin{eqnarray}
H_S := \left\{ \{ y, z \} \mid y \in Y, z \in S \right\}
\end{eqnarray}
where $S$ is a subset of $Z$ (see Figure~\ref{invariantgraphfig}). Since $f$ is
isomorphism-invariant and monotone-increasing, the behavior of $f$ on such graphs can be
easily described: there is some integer $k \in \{ 1, 2, \ldots, \left| Z \right| \}$ such
that
\begin{eqnarray}
( V , H_S ) \textnormal{ has property $f$} \Longleftrightarrow \left| S \right| > k.
\end{eqnarray}
\begin{figure}[!b]
\centerline{\includegraphics{f5-2}}
\fcaption{An example of a set $H_S$.\label{invariantgraphfig}}
\end{figure}
Let $\Delta = \Delta^{[G]}$. The vertices of $\Delta^{[G]}$ are the sets of the form
\begin{eqnarray}
H_z := \left\{ \{ y, z \} \mid y \in Y \right\}\!,
\end{eqnarray}
with $z \in Z$, and the simplices are precisely the subsets of $\{ H_z \mid z \in Z \}$
whose union forms a graph that does not have property $f$. Thus we can calculate the
Euler characteristic directly:
\begin{eqnarray}
\chi ( \Delta^{[G]} )
& = & \sum_{j = 0}^{k-1} (-1)^j \binom{\left| Z \right|}{ j + 1} \\
& = & 1 + (-1)^{k-1} \binom{ \left| Z \right| - 1}{k}.
\end{eqnarray}
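The second equality follows from the standard alternating-sum identity
$\sum_{i=0}^{k} (-1)^i \binom{N}{i} = (-1)^k \binom{N-1}{k}$, which is obtained by
induction on $k$ from Pascal's rule; shifting the summation index to $i = j+1$,
\begin{eqnarray}
\sum_{j = 0}^{k-1} (-1)^j \binom{\left| Z \right|}{ j + 1}
& = & \binom{\left| Z \right|}{0} - \sum_{i = 0}^{k} (-1)^i \binom{\left| Z \right|}{i} \\
& = & 1 + (-1)^{k-1} \binom{ \left| Z \right| - 1}{k}.
\end{eqnarray}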
Suppose that $f$ is nonevasive. Then $\Delta$ is collapsible and by Theorem~\ref{fpt},
\begin{eqnarray}
\chi ( \Delta^{[G]} ) & = & 1.
\end{eqnarray}
But $\chi ( \Delta^{[G]} ) = 1$ requires $\binom{\left| Z \right| - 1}{k} = 0$, which
holds only if $k = \left| Z \right|$; in that case $f$ is trivial.
\end{proof}
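The numerical facts underlying this proof can also be confirmed directly by machine; the
following Python fragment (purely illustrative) checks the closed form for
$\chi ( \Delta^{[G]} )$, and that $\chi = 1$ forces $k = \left| Z \right|$, for all
$\left| Z \right| \leq 7$:
\begin{verbatim}
from math import comb

for nZ in range(2, 8):                    # nZ plays the role of |Z|
    for k in range(1, nZ + 1):
        chi = sum((-1) ** j * comb(nZ, j + 1) for j in range(k))
        assert chi == 1 + (-1) ** (k - 1) * comb(nZ - 1, k)
        if chi == 1:
            assert k == nZ                # chi = 1 only for trivial f
print("verified for all |Z| <= 7")
\end{verbatim}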
\section{A General Lower Bound}
Now we prove a lower bound on decision-tree complexity which applies to graphs of
arbitrary size. Our method of proof is based on \cite{kss1984}.
\begin{proposition}\label{generallowerbound}
Let $V$ be a finite set and let
\begin{eqnarray}
h \colon \mathbf{G} ( V ) \to \{ 0, 1 \}
\end{eqnarray}
be a nontrivial monotone-increasing graph property. Let $p$ be the largest prime that is
less than or equal to $\left| V \right|$. Then,
\begin{eqnarray}
D ( h ) \geq \frac{p^2}{4}.
\end{eqnarray}
\end{proposition}
\begin{proof}
Assume that $\left| V \right| = n$. For any $r, s \geq 0$, let us write $K_r$ for the
complete graph on $\{ 1, 2, \ldots, r \}$, and let us write $K_{r, s}$ for the complete
bipartite graph on the sets $\{ 1, 2, \ldots, r \}$ and $\{ r+1 , \dots , r + s \}$. For
any two graphs $H = (V, E)$ and $H' = (V' , E' )$, let us abuse notation slightly and
write $H \cup H'$ for the graph $(V \cup V', E \cup E')$.
For any $k \geq 1$, let $C_k$ denote the least decision-tree complexity that occurs for
nontrivial monotone-increasing graph properties on graphs of size $k$. We prove a lower
bound for $D ( h )$ in three cases.
\medskip\textbf{Case 1:} $\mathbf{h ( K_{1,n-1} ) = 0}.$ In this case, the function $h$
induces a nontrivial graph property $h'$ on the vertex set $\{ 2, 3, \ldots, n \}$, given
by
\begin{eqnarray}
h' ( P ) & = & h ( P \cup K_{1, n-1} ).
\end{eqnarray}
This function has decision-tree complexity at
least $C_{n-1}$, and therefore $D ( h ) \geq C_{n-1}$.
\medskip
\textbf{Case 2:} $\mathbf{h ( K_{n-1} ) = 1.}$ In this case the function $h$ induces a
graph property $h'$ on the vertex set $\{ 2, 3, \ldots, n \}$, given by
\begin{eqnarray}
h' ( P ) & = & h ( P \cup K_1 ),
\end{eqnarray}
which is likewise nontrivial. This function has decision-tree complexity at least
$C_{n-1}$, and so $D ( h ) \geq C_{n-1}$.
\medskip
\textbf{Case 3:} $\mathbf{h ( K_{1, n-1} ) = 1}$ \textbf{and} $\mathbf{h ( K_{n-1} ) =
0}$. Let $m = \lfloor n/2 \rfloor$. The property $h$ induces a bipartite graph property
on the sets $\{1, 2, \ldots, m \}$ and $\{ m+1, m+2, \ldots, n \}$ defined by
\begin{eqnarray}
h' ( P ) = h ( P \cup K_m ).
\end{eqnarray}
Since $h ( K_m ) \leq h ( K_{n-1} ) = 0$ and $h ( K_m \cup K_{m,n-m} ) \geq h ( K_{1,
n-1} ) = 1$, the property $h'$ is nontrivial. Therefore it has decision-tree complexity
at least $m (n-m)$. The decision-tree complexity of $h$ is likewise bounded by
$m ( n-m )$; since $m = \lfloor n/2 \rfloor$, we have $m(n-m) = n^2/4$ for even $n$ and
$m(n-m) = (n^2-1)/4$ for odd $n$, so in either case $m ( n-m ) \geq (n-1)^2/4$.
\medskip
In all cases, we have
\begin{eqnarray}
D ( h ) \geq \min \left\{ C_{n-1}, \frac{(n-1)^2}{4} \right\}\!.
\end{eqnarray}
The same reasoning shows that
\begin{eqnarray}
C_k \geq \min \left\{ C_{k-1} , \frac{(k-1)^2}{4} \right\}
\end{eqnarray}
for every $k \in \{ p+1, p+2 , \ldots, n-1 \}$. Therefore
by induction,
\begin{eqnarray}
D ( h ) \geq \min \left\{ C_p , \frac{p^2}{4} \right\}\!.
\end{eqnarray}
The quantity $C_p$ is $\binom{p}{2}$ by the theorem of
\textit{\nameref{mainresultsection}}, since every nontrivial monotone-increasing graph
property on a vertex set of prime order is evasive. As
$\binom{p}{2} = p(p-1)/2 \geq p^2/4$ for all $p \geq 2$, the desired result follows.
\end{proof}
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Kahn et~al. \cite{kss1984}]
\label{lowerboundtheorem} Let $C_n$ denote the least decision-tree complexity that occurs
among all nontrivial monotone-increasing graph properties of order $n$. Then,
\begin{eqnarray}
C_n \geq \frac{n^2}{4} - o ( n^2 ).
\end{eqnarray}
\end{theorem}}
\begin{proof}
By the prime number theorem, there is a function $z ( n ) = o ( n )$ such that for any
$n$, the interval $[n - z(n), n]$ contains a prime.\footnote{The prime number theorem
\cite{zagier1997} asserts that if $\pi( n )$ denotes the number of primes less than or
equal to $n$, then $\lim_{n \to \infty} \pi ( n ) \left( n / \ln n \right)^{-1} = 1$. If
there were an infinite number of linearly sized gaps between the primes, this limit could
not exist.} By Proposition~\ref{generallowerbound},
\begin{eqnarray}
C_n & \geq & \frac{(n - z(n))^2}{4} \\
& \geq & \frac{n^2}{4} - o ( n^2 ),
\end{eqnarray}
as desired.
\end{proof}
\section{A Survey of Related Results}
Much work on the decision-tree complexity of graph properties has followed the papers of
Kahn, Saks, Sturtevant, and Yao. We briefly sketch some of the newer results in this
area.
V. King proved a lower bound for properties of \textbf{directed} graphs.
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[King \cite{king1990}]
Let $C'_n$ denote the least decision-tree complexity
that occurs among all nontrivial monotone
\textbf{directed} graph properties of order $n$. Then,
\begin{eqnarray}
C'_n \geq \frac{n^2}{2} - o ( n^2 ).
\end{eqnarray}
\end{theorem}}
\noindent Triesch \cite{triesch1994, triesch1996} proved multiple results about the
evasiveness of particular subclasses of monotone graph properties.
Korneffel and Triesch improved on the asymptotic bound of
\hbox{Theorem}~\ref{lowerboundtheorem} by using a different group action on the set of
vertices. Let $V$ be a set of size $n$, and let $p$ be a prime that is close to $\left(
\frac{2}{5} \right) n$. Break the set $V$ up into disjoint subsets $V_1$, $V_2$, and
$V_3$, with $\left| V_1 \right| = \left| V_2 \right| = p$ and $\left| V_3 \right| = n -
2p$. Let $\mathbf{P}$ be the class of tripartite graphs on $(V_1, V_2, V_3)$ which,
when taken together with the complete graphs on the sets $V_i$, do not satisfy property
$h$. The abelian group
\begin{eqnarray}
G = \mathbb{Z} / p \mathbb{Z} \times \mathbb{Z} / p \mathbb{Z}
\times \mathbb{Z} / (n - 2p ) \mathbb{Z}
\end{eqnarray}
acts on the class $\mathbf{P}$ by cyclically permuting the elements of $V_1$, $V_2$,
and~$V_3$. From this action and some other arguments, the authors are able to prove the
following.
{\makeatletter
\newtheoremstyle{nowthm}{4pt plus6pt minus4pt}{0pt}{\upshape}{0pt}{\bfseries}{}{.6em}
{\rule{\textwidth}{.5pt}\par\vspace*{-1pt}\newline\thmname{#1}\thmnumber{\@ifnotempty{#1}{\hspace*{3.65pt}}{#2}$\!\!$}
\thmnote{{\the\thm@notefont\bf (#3).}}}
\def\@endtheorem{\par\vspace*{-7.8pt}\noindent\rule{\textwidth}{.5pt}\vskip8pt plus6pt minus4pt}
\ignorespaces \makeatother
\begin{theorem}[Korneffel and Triesch \cite{kt2010}]
Let $C_n$ denote the least decision-tree complexity that occurs among all nontrivial
monotone-increasing graph properties of order~$n$. Then,
\begin{eqnarray}
C_n \geq \frac{8 n^2}{25} - o ( n^2 ). \hskip0.2in
\end{eqnarray}
\end{theorem}}
The work of Chakrabarti et~al. \cite{cks2002} considers the
\textbf{subgraph containment property}. For any finite graph $X$, let $h_{X,n}$ denote
the graph property for graphs of size $n$ which assigns a value of $1$ to a graph if and
only if it contains a subgraph isomorphic to $X$. This property is studied using another
group action. For appropriate values of $n$, the vertex set $V$ can be partitioned into
sets $V_1, \ldots , V_m$, where $\left| V_i \right| = q^{\alpha_i}$ for some prime power
$q$ which is greater than or equal to the number of vertices in $X$. Choose isomorphisms
$V_i \cong \mathbb{F}_{q^{\alpha_i}}$. Let $G$ be the group of permutations of $V$ that
is generated by the group $\mathbb{F}_{q^{\alpha_1}}^+ \times \ldots \times
\mathbb{F}_{q^{\alpha_m}}^+$ (acting on the sets $V_1 , \ldots , V_m$ in a component-wise
manner) and the group $\mathbb{F}_q^*$ (acting simultaneously on all the sets
$V_i$). If $h_X$ were nonevasive, then there would exist nontrivial $G$-invariant graphs
which do not satisfy $h_X$. Such graphs would have a uniform structure and would
correspond simply to graphs on the set $\{ 1, 2, \ldots, m \}$.
With this reduction the authors are able to prove that $h_{X,n}$ is evasive for all $n$
within a set of positive density. In general, the following asymptotic bound holds:
\begin{eqnarray}
D ( h_{X,n} ) \geq \frac{n^2}{2} - O ( n ).
\end{eqnarray}
This approach was further developed by Babai et al. \cite{bbkk2010}, who proved that
$h_{X,n}$ is evasive for almost all $n$, and that
\begin{eqnarray}
D ( h_{X, n } ) \geq \binom{n}{2} - O ( 1 ).
\end{eqnarray}
As one can observe from recent papers on evasiveness, advances in the strength of results
are paralleled by substantial increases in the difficulty of the proofs! The
increase in difficulty has become fairly steep at this point. Perhaps a new basic
insight, like the one in \cite{kss1984}, will be necessary to proceed further toward the
Karp conjecture.
\section{Introduction: The Argument}
\label{section:introduction}
The recent paper by Badenes et al.~(2009) describes the discovery, as part
of the SWARMS survey, of
a probable white dwarf-neutron star/black hole (WD-NS/BH)\footnote{
We adopt this terminology since it is not known if the unseen companion is
a NS or BH. Badenes et al.~(2009) discuss a third option ---
that the companion is another unseen WD --- but they do not favor this
option. We return to this issue in \S\ref{section:implications}.}
binary (SDSS 1257+5428) at a distance of $D\approx48^{+10}_{-19}$\,pc,
with orbital period of $\approx4.6$\,hr, and radial velocity semi-amplitude
of $\approx320$\,km s$^{-1}$. The WD is of spectral
type DA, has a $g$-band magnitude of 16.8, and has a cooling age of
$t_{\rm cool}\approx2.0\pm1.0$\,Gyr.
The system masses are $M_{\rm WD}\simeq0.92^{+0.28}_{-0.32}$\,M$_\odot$ and
$M_{\rm NS/BH}\sin(i)\simeq1.62^{+0.20}_{-0.25}$\,M$_\odot$.
The transverse velocity of the system is $\approx11$\,km s$^{-1}$
and the total spatial velocity is plausibly $\sim120$\,km s$^{-1}$.
For $i=60^{\circ}$, the semi-major axis is $\approx0.01$\,AU.
The orbital period and masses imply a merger timescale for the system of
$t_{\rm merge} \leq 511^{+342}_{-141}$\,Myr; for an assumed inclination
angle of $i=60^{\circ}$, $t_{\rm merge} \sim 267^{+165}_{-70}$\,Myr.
Taken at face value, the detection of this remarkable binary
implies that the number of such systems in the Galaxy is very
large. The WD has an apparent $g$-band magnitude $\approx2$
magnitudes brighter than the limiting magnitude of the survey,
18.9 (Badenes et al.~2009; Mullally et al.~2009). Thus, the
volume probed by the survey for systems analogous to SDSS 1257+5428 is
$V_{\rm SWARMS}\sim f_{\rm SDSS}(4\pi/3)(125\,{\rm pc})^3\sim2\times10^6$\,pc$^3$,
where $f_{\rm SDSS}\sim1/4$ is the fraction of the sky probed by SDSS to $g\approx18.9$
in SWARMS, and where we have taken $f_{\rm SDSS}(4\pi/3)\sim1$.
The total volume of the Galaxy is $V_{\rm MW}\sim2\pi r^2 h$, where
$r\sim8.5\,r_{8.5\,\rm kpc}$\,kpc is the galacto-centric radius and
$h\sim 4\,h_{4\,\rm kpc}$\,kpc
is the scale height accessible to objects with total space motion of
$\sim100$\,km s$^{-1}$. Thus,
\begin{equation}
\frac{V_{\rm SWARMS}}{V_{\rm MW}}
\sim1\times10^{-6}\,r_{8.5\,\rm kpc}^{-2} h_{4\,\rm kpc}^{-1}.
\label{Vratio}
\end{equation}
The primary point of this {\it Letter} is that this ratio
is exceedingly small. Since equation (\ref{Vratio}) is equivalent to the
probability of detecting such a system if it were the only such
system in the Galaxy, we conclude that it is not the only such system.
More probable is that each of the
$V_{\rm SWARMS}$-sized volumes that
constitute the Galaxy is populated by $N\gtrsim1$ such
binaries. The total number of WD-NS/BH binaries in the
Galaxy is then\footnote{We note that
the density of stars in the Solar neighborhood can be estimated
similarly, and with at least order-of-magnitude fidelity, from the
first measured parallax of 61 Cygni by Bessel (1838).}
\beqa
N_{\rm MW}\sim N\left(\frac{V_{\rm MW}}{V_{\rm SWARMS}}\right)
\sim10^6 N\,\,r_{8.5\,\rm kpc}^{2} h_{4\,\rm kpc}
\label{number}
\eeqa
and the current merger rate is
\beqa
\Gamma_{\rm MW}
\sim\frac{N_{\rm MW}}{t_{\rm life}}
\sim 5\times10^{-4}N\,{\rm yr^{-1}}\,\,r_{8.5\,\rm kpc}^{2}
h_{4\,\rm kpc}\,t_{\rm 2\,Gyr}^{-1},
\label{rate}
\eeqa
where $t_{\rm 2\,Gyr}=t_{\rm life}/2$\,Gyr and $t_{\rm life}=t_{\rm cool}+t_{\rm merge}$
(e.g., Phinney 1991).
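These order-of-magnitude manipulations are trivially reproduced; for instance, the
schematic Python lines below (using only the fiducial inputs quoted above) return the
numerical estimates of equations (\ref{Vratio})--(\ref{rate}):
\begin{verbatim}
import math

V_swarms = 0.25 * (4 * math.pi / 3) * 125.0**3   # pc^3; f_SDSS ~ 1/4
V_mw = 2 * math.pi * (8.5e3)**2 * 4.0e3          # pc^3; r = 8.5, h = 4 kpc
ratio = V_swarms / V_mw                          # volume ratio, ~1e-6
N_mw = 1.0 / ratio                               # ~1e6 systems for N = 1
rate = N_mw / 2.0e9                              # ~5e-4 /yr for t_life = 2 Gyr
print(ratio, N_mw, rate)
\end{verbatim}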
We caution that the estimate of the number of such systems in the Galaxy in equation (\ref{number})
and the merger rate in equation (\ref{rate})
are highly uncertain. Ignoring the uncertainties in the ratio of volumes and
in $t_{\rm life}$, the Poisson error on $N$, given the single
detection, yields a lower limit of $N\gtrsim0.05$
($N_{\rm MW}\gtrsim5\times10^4$; $\Gamma_{\rm MW}\gtrsim2.5\times10^{-5}$\,yr$^{-1}$) and
$\gtrsim0.01$ ($N_{\rm MW}\gtrsim1\times10^4$; $\Gamma_{\rm MW}\gtrsim5\times10^{-6}$\,yr$^{-1}$)
at 95\% and 99\% confidence, respectively (e.g., Gehrels 1986).
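The quoted limits follow from elementary Poisson statistics; schematically:
\begin{verbatim}
import math

# Lower limit on the Poisson mean N given one detection: the smallest N
# with P(at least one event) = 1 - exp(-N) equal to 1 - CL, i.e. -ln(CL)
for CL in (0.95, 0.99):
    N_lo = -math.log(CL)
    print(CL, round(N_lo, 3), N_lo * 1e6)  # N ~ 0.05 (0.01); N_MW ~ 5e4 (1e4)
\end{verbatim}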
A single additional such binary detected in the local
$\sim10^6$\,pc$^3$ would provide strong evidence for $N\sim1$.
Unfortunately, the lack of such a future detection in SWARMS
does not strongly constrain $\Gamma_{\rm MW}$.\footnote{Based on
the constraints on the mass function alone, there is a
significant chance ($\sim50\%$) that one of the systems recently
reported by Kilic et al.~(2009) is also a WD-NS binary.
Since the survey volume is much larger for the Kilic et al.~survey
(they target bright Helium WDs), if the system is confirmed
to be a WD-NS binary, $\Gamma_{\rm MW}$ is still likely to
be dominated by SDSS 1257+5428.}
In addition to the
uncertainty in $N$, there is uncertainty at the factor of $\sim2-4$
level in $V_{\rm MW}$; however, our fiducial values for $r$ and $h$
are reasonable, given the age and space velocity of the binary
considered. Note also that SWARMS selects for edge-on
binaries due to the low resolution of the SDSS spectra
used to target interesting WDs via radial velocity variations
(Badenes et al.~2009; Mullally et al.~2009). This effect
may increase the inferred rate by a factor of order $\sim2-4$.
The ambiguity in the cooling age of SDSS 1257+5428 adds
an additional factor of $\sim2-4$ uncertainty. Finally,
our estimate tacitly assumes that the binary is equally
detectable by SWARMS over its few Gyr lifetime from
birth to death; the fact that the WD was brighter in the
past implies a lower overall rate than given in equation (\ref{rate}),
but the magnitude of this correction depends on the luminosity function of WDs
in such binaries.
\section{Discussion: The Rate}
\label{section:discussion}
The nominal estimate of the merger rate for WD-NS/BH binaries
in equation (\ref{rate}) is very high. Taking the
core-collapse supernova (SN) rate to be $\sim1-2\times10^{-2}$\,yr$^{-1}$
in the Galaxy, we see that the WD-NS/BH merger rate is $\sim20-40$ times
smaller. Comparison with the observed rate of core-collapse SNe in the local
100\,Mpc volume (see, e.g., the compilation of data by Horiuchi et al.~2009),
$\sim1-2\times10^5$\,Gpc$^{-3}$ yr$^{-1}$, implies that
the average rate of WD-NS/BH mergers in the local universe is
$\sim0.5-1\times10^4$\,Gpc$^{-3}$ yr$^{-1}$. This is a
factor of just $\sim3-6$ lower than the local Type Ia rate,
$\sim3\times10^4$\,Gpc$^{-3}$ yr$^{-1}$ (Dilday et al.~2008).
See Table \ref{table}.
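The same volumetric figure follows from scaling the Galactic rate by the effective
density of Milky~Way-like galaxies, $\sim10^{-2}$\,Mpc$^{-3}$ (cf.~note (a) of
Table~\ref{table}); schematically:
\begin{verbatim}
rate_mw = 5.0e-4              # Galactic WD-NS/BH merger rate, /yr
n_gal = 1.0e-2 * 1.0e9        # MW-like galaxies per Gpc^3
print(rate_mw * n_gal)        # ~5e3 /Gpc^3/yr, within the quoted 0.5-1e4
\end{verbatim}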
Although the rates of short- and long-duration
gamma-ray bursts (GRBs) are uncertain because of the overall beaming correction
for the two populations, $\Gamma_{\rm MW}$ in equation (\ref{rate})
is at least $\sim10^2$ times larger than the rates inferred for either population;
the long-duration GRB rate is observed to be $\sim1$\,Gpc$^{-3}$ yr$^{-1}$,
and the beaming-corrected rate is $\sim30$\,Gpc$^{-3}$ yr$^{-1}$ (e.g., Guetta et al.~2005).
The rate of short-duration GRBs is also small ($\sim10$\,Gpc$^{-3}$ yr$^{-1}$; Nakar 2007).
This disparity between $\Gamma_{\rm MW}$ and the GRB rate is particularly
interesting because these mergers have been suggested as a mechanism
for some long- and short-duration GRBs (\S\ref{section:implications}).
Interestingly, our rate for $\Gamma_{\rm MW}$ is only $\sim5$ times
higher than the current estimate for the rate of NS-NS mergers in the
Galaxy (Kalogera et al.~2004ab), often thought to be associated
with short-duration GRBs (see Table \ref{table}).
Our estimate of $\Gamma_{\rm MW}$ is also significantly larger than
the rate derived from statistical methods based on the
3 previously known WD-NS(pulsar) binaries that will merge
within a Hubble time,
$\sim0.2-10\times10^{-6}$\,yr$^{-1}$, obtained without making
an overall correction for pulsar beaming (Kim et al.~2004).
This result is statistically dominated by the single WD-NS
binary system PSR J1141-6545, whose merger timescale is similar
to the system described here. The rate derived in
Kim et al.~(2004) is about an order of magnitude smaller
than that obtained from population synthesis studies (e.g.,
Kalogera et al.~2005), and it is about $\sim10^2$ times
smaller than $\Gamma_{\rm MW}$ in equation (\ref{rate}).
The estimate by Fryer et al.~(1999a) is similarly small,
but with large uncertainties.
Our estimate of $\Gamma_{\rm MW}$ is $\sim1-10$ times higher
than the rate advocated by Davies et al.~(2002) in their
binary evolution scenario developed to explain the WD-pulsar
systems J1141-6545 and B2303+46.\footnote{Note that their
production rate of tight binaries is highest (closest to
eq.~\ref{rate}) for a flat (i.e., not Salpeter) distribution
of secondary masses and for large values of the common envelope
evolution parameter.} In their model, the initial primary
transfers enough of its mass to the secondary that the latter
becomes the more massive star. The core of the original primary
makes the WD, which enters a common envelope with the more
massive secondary when it evolves off the main sequence. In the
case of their PSR J1141-6545-like binaries,
the secondary helium star evolves to contact
with the primary and the majority of its envelope is ejected from
the system. The secondary then undergoes core collapse, producing
a NS. However, since SDSS 1257+5428 is not highly
eccentric it is unclear to what extent it may be viewed as
analogous to J1141-6545 (see [10], \S\ref{section:implications}).
Our 95\% confidence lower limit on the rate ---
$\Gamma_{\rm MW}\gtrsim2.5\times10^{-5}$\,yr$^{-1}$ --- is still
larger than either of the observed or beaming-corrected short- or
long-duration GRB rates, but overlaps with the probable distribution
of NS-NS merger rates for the Galaxy (Kalogera et al.~2004ab), the
upper end of the pulsar beaming-corrected rates derived from PSR
J1141-6545 (Kim et al.~2004), the rates inferred from population
synthesis studies (Davies et al.~2002; Kalogera et al.~2005),
the rate derived by Nelemans et al.~(2001), and the estimate
of Edwards \& Bailes (2001) of $\sim10^{-5}$\,yr$^{-1}$ based on
PSR J1141-6545.
\begin{table}[!t]
\begin{center}
\caption{Events \& Rates
\label{table}}
\begin{tabular}{lccccccccc}
\hline \hline
\\
\multicolumn{1}{c}{Event Type} &
\multicolumn{1}{c}{Rate} &
\multicolumn{1}{c}{Reference} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{ (10$^4$\,Gpc$^{-3}$ yr$^{-1}$)} &
\multicolumn{1}{c}{} \\
\\
\hline
\hline
\\
WD-NS/BH Merger & $0.5-1$ & this work \\
\\
\hline
\\
Type II SN & $10-20$ & Horiuchi et al.~(2009) \\
Type Ia SN & $3$ & Dilday et al.~(2008) \\
NS-NS Merger & $0.1$\tablenotemark{a} & Kalogera et al.~(2004ab) \\
Short GRB & \,\,$0.001$\tablenotemark{b,c} & Nakar (2007) \\
Long GRB & $0.0001$\tablenotemark{b} & Guetta et al.~(2005) \\
\\
\hline
\hline
\end{tabular}
\tablenotetext{a}{Calculated using the rate $\approx10^{-4}$\,yr$^{-1}$ derived in
Kalogera et al.~(2004ab) and the conversion factor of $10^{-2}$\,Mpc$^{-3}$ based on the
$B$-band luminosity of the local universe from
Kalogera et al.~(2001), their Section 4.}
\tablenotetext{b}{Uncorrected for beaming.}
\tablenotetext{c}{Short-duration GRBs with extended emission (e.g., 060614)
amount to $\sim25$\% of the total short-duration GRB population (Norris et al.~2009).}
\end{center}
\end{table}
\section{Implications}
\label{section:implications}
The estimate of the merger rate in equation (\ref{rate})
has a number of implications:
\noindent {\it 1.~GRBs \& Transients:}
The merger of WD-NS/BH binaries has been discussed as a mechanism
for producing long-duration GRBs (e.g., Fryer et al.~1999b)
and the new class of short-duration GRBs with extended emission (e.g.,
GRB 060614; King et al.~2007; see Gehrels et al.~2006 \& Gal-Yam et al.~2006).
A recent study by Norris et al.~(2009) finds that only $\sim25$\% of short
duration GRBs have extended emission. Thus, if King et al.~(2007) are
correct and WD-NS mergers are the central engine for 060614-like bursts,
then either equation (\ref{rate}) overestimates the rate by a factor
of $\sim10^2-10^3$, or only a very small percentage of WD-NS/BH mergers produce
GRBs, or these events are very highly
beamed --- with jet opening angles many times smaller than most estimates
(e.g., Nakar 2007; Guetta et al.~2005).
Similar statements can be made for short-duration GRBs without extended emission,
or for long-duration GRBs unaccompanied by SNe (see Table \ref{table}). The natural
timescale for such transients is long ($\gtrsim100$\,s) since the WD will be tidally
disrupted at a radial distance from the NS comparable to its
own radius. Equation (\ref{rate}) implies that transients
generated by WD-NS/BH mergers should be relatively common in
Milky Way-like galaxies ($\sim10-30$\% of the Ia rate),
contrary to what is observed for long-duration GRBs
(Stanek et al.~2006).
Whatever transients these mergers produce (see [4] below),
they should be associated with relatively old ($\sim$\,Gyr)
stellar populations, may occur many kpc from their
host galaxies, and should not correlate with recent star formation.
\noindent {\it 2.~Potential Neutrino Signature:}
Whether the unseen companion in SDSS 1257+5428 is a NS
or BH, the merger may generate an interesting neutrino
signature, with a maximum possible total energy radiated of
$\sim3\times10^{53}$\,ergs (similar to
a NS-producing core-collapse SN). However,
because the accretion rate is likely to be modest,
$\lesssim10^{-2}$\,M$_\odot$ s$^{-1}$, the merger disk is
unlikely to be radiatively efficient in neutrinos
(e.g., Popham et al.~1998; Chen \& Beloborodov 2007).
Since BH formation is a likely outcome of the merger
(see [6] below), a distinct signature of this NS-to-BH
transformation may be seen in the neutrino (or potentially high-energy photon)
lightcurve.
\noindent {\it 3.~Nucleosynthesis:}
The outflow produced from the
NS/BH during the accretion phase of the WD --- either in a
jet or a wind --- may produce thermodynamic conditions favorable for
interesting nucleosynthesis. For very high accretion
rates ($\sim1$\,M$_\odot$ s$^{-1}$; Chen \& Beloborodov 2007; Metzger et al.~2009)
the outflow may become neutron rich, producing conditions
favorable for production of the $r$-process nuclei (e.g., Metzger et al.~2007).
Even in the absence of a neutron excess (as is more likely
for the low accretion rates one estimates for
such a merger/disk) if the dynamical timescale for ejection is
short enough, the $r$-process might still occur (Meyer 2002).
Since the rate estimated in equation (\ref{rate})
is relatively high with respect to NS-NS mergers
(a commonly discussed $r$-process production site; Freiburghaus et al.~1999),
WD-NS/BH mergers may make an interesting contribution
to the heavy-nucleus budget of the Galaxy.
\noindent {\it 4.~Optical Signature:}
Similar to the case of WD-WD mergers, or accretion-induced
collapse of a WD to a NS as a result of accretion from a companion,
as considered in Metzger et al.~(2009), there may be a low-luminosity
optical transient generated by $\lesssim10^{-2}$\,M$_\odot$ of $^{56}$Ni
associated with the merger of WD-NS binaries. This material may
be ejected as an initially proton-rich outflow from the disk around the
NS or BH. The total ejecta
mass would be small on the scale of normal SNe.
The rate in equation (\ref{rate}) is small enough that
such transients may be so far unknown, but we speculate that they
might be linked to classes of unusual SNe like SN 1991bg
(Filippenko et al.~1992),
2002bj (Poznanski et al.~2009), 2008ha (Foley et al.~2009; Valenti et al.~2009),
or 2005cz (Kawabata et al.~2009) and 2005E (Perets et al.~2009).
Such events should be relatively
common in the local universe ($\sim10-30$\% of the Ia rate),
trace relatively old stellar populations, have no optically-visible
progenitors, be seen many kpc from their host galaxies (e.g., SN 2005E), and
many should be seen by current transient survey efforts such as
LOSS (Li et al.~2000),
Palomar Transient Factory (PTF) and the Panoramic Survey Telescope and Rapid Response
System (PanSTARRS); the future Large Synoptic Survey Telescope
(LSST) should see many hundreds per year. The composition of the
ejecta may vary, depending on the type of WD disrupted (e.g., He, C/O, O/Ne/Mg).
\noindent {\it 5.~Gravitational Waves:}
Assuming $M_{\rm WD}=0.9$\,M$_\odot$, $M_{\rm NS/BH}=1.6$\, M$_\odot$, and $\cos i
= 0$ for SDSS 1257+5428 results in a gravitational wave (GW) strain
amplitude of $h \approx 2.3 \times 10^{-21}$, which exceeds the nominal
sensitivity of LISA of $\sim 10^{-21}$ at $f_{\rm GW} \sim 0.1$~mHz (Roelofs
et al.\ 2007). The estimate of equation (\ref{rate}) suggests that WD-NS/BH binaries
will also contribute significantly to the GW background in the LISA band
(Badenes et al.\ 2009; Kim et al.\ 2004), which may affect the detectability
of the SDSS 1257+5428 signal.
Importantly, individual mergers in the local universe may also be important
GW sources for LIGO if the merger results in BH formation or if
the massive NS merger remnant is initially rapidly rotating
(Garcia-Berro et al.~2007; Paschalidis et al.~2009).
\noindent {\it 6.~The Lowest Mass Black Holes:}
Note that the total mass of SDSS 1257+5428 probably exceeds
the maximum mass for a NS (e.g., Lattimer \& Prakash 2007). Thus,
if the WD was entirely accreted, this event would be accompanied
by BH formation with $\sim2.5$\,M$_\odot$ (see Brown et al.~2001).
The rate of WD-NS/BH mergers estimated in equation (\ref{rate})
is large enough that the Galaxy should be littered with $\sim2-3$\,M$_\odot$
BHs; there should be $\sim10^6-10^7$ low-mass free-floating BHs throughout
the Galaxy.
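Schematically, assuming for illustration that the present rate has been sustained over
$\sim10$\,Gyr of Galactic history, this census follows from a single multiplication:
\begin{verbatim}
rate_mw, t_gal = 5.0e-4, 1.0e10   # /yr; assumed effective age in yr
print(rate_mw * t_gal)            # ~5e6 remnants, i.e. the 1e6-1e7 range
\end{verbatim}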
\noindent {\it 7.~The Origin of Ultra-High Energy Cosmic Rays:}
If WD-NS/BH mergers produce a BH and accretion disk
(see Popham et al.~1998; Fryer et al.~1999b), the magnetic luminosity via the
Blandford-Znajek mechanism will
likely be large enough for production of UHECRs (e.g., Waxman \& Loeb 2009; see also
Waxman 1995).
Because the energy reservoir in WD-NS mergers
is comparable to the energy budget of GRBs,
but the overall rate of WD-NS/BH mergers within the local GZK
volume ($\sim100$\,Mpc) is $\gtrsim100$ times larger (eq.~\ref{rate};
\S\ref{section:discussion}), WD-NS/BH mergers may
dominate the UHECR budget. The merger may also produce a
rapidly rotating (ms spin period) NS with potentially short-lived magnetar-strength
($\sim10^{15}$\,G) magnetic fields. In this case, the mechanism
advocated by Arons (2003) for UHECR production may obtain.
The rate needed by Arons to account for the observed UHECR
budget is close to the rate derived for WD-NS/BH mergers
estimated here (Table \ref{table}).
Additionally, note that the formation of
ms magnetar-like conditions via WD-NS/BH mergers alleviates the
problem of getting UHECRs out of the overlying dense massive
star progenitor discussed by Arons.
\noindent {\it 8.~Binaries in Type Ib/c SNe:}
If the scenario outlined by Davies et al.~(2002), which predicts
a formation rate of SDSS 1257+5428-like binaries within a factor of
$\sim1-10$ of the estimate in equation (\ref{rate}), is correct,
then the star that produces the NS (originally the secondary)
explodes after becoming a He star and transferring a significant fraction of
its envelope to the WD, and potentially expelling it from the system. The
fact that the overall rate of Type-Ib/c SNe is $\sim10-20$\% of
core-collapse SNe (Prieto et al.~2008)
implies that if the Davies et al.~mechanism
is correct, and if $\Gamma_{\rm MW}$ is correct, then many
Type Ib/c SNe ($\sim20-50$\%) explode with a very
close WD companion.
\noindent {\it 9.~Tight Companion Interaction:}
Regardless of the binary formation channel, the
SN explosion that produces the NS may interact with
the close secondary, be it a WD, main sequence star, or otherwise.
In this case, there
may be a signal of interaction in the very early-time lightcurve of
many stripped envelope Type-Ib/c SNe as a result of the break-out
flash and shockwave interacting with the nearby companion
(see, e.g., Marietta \& Burrows 2000; Kasen 2009).
\noindent {\it 10.~Star \& Binary Formation:}
The Galactic birth rate of massive stars is approximately equal to the
core-collapse SN rate. Thus, equation (\ref{rate}) implies that $\sim2 - 4$\% of all
massive stars are born with a close binary companion capable of producing a
SDSS 1257+5428-like binary. If true, this has important implications for the
physics of massive star formation and the demographics of young massive star
binaries (e.g., Krumholz et al.\ 2009), as well as for pulsar binaries. The
low eccentricity of the orbit of SDSS 1257+5428 suggests interaction with the
companion subsequent to the NS birth. Additionally, if WD production precedes the
NS, equation (\ref{rate})
implies a significant number of young pulsars with very tight WD companions.
However, only one such system has been detected among the $> 1000$
non-millisecond pulsars --- the highly-eccentric PSR J1141-6545 (see Edwards \&
Bailes 2001; Davies et al.\ 2002; Kim et al.\ 2004) --- disfavoring this as an
analog. If the NS preceded the WD and was recycled into a millisecond pulsar (MSP),
then MSPs should be embedded in the centers of a fraction of young planetary
nebulae, potentially visible either in radio and/or X-rays and gamma-rays.
Indeed, the estimate of equation (\ref{rate}) is large enough that the simplest
explanation for the discovery of SDSS 1257+5428 is either that SWARMS was very
lucky, or that instead of a WD-NS/BH binary, this system is a tight WD-WD binary
(Badenes et al.\ 2009). Although the high nominal space velocity of the system
argues against this possibility, if SDSS 1257+5428 is in fact a WD-WD binary, then
the similarity between the derived rate and
the observed Type Ia SN rate is not a coincidence (see Table~\ref{table}),
particularly in light of the results of Mullally et al.\ (2009) (see also Kilic
et al.\ 2009). The relatively high value for the rate in equation (\ref{rate})
is yet more perplexing if the companion to the observed WD is in fact a BH, and
not a NS (see Brown et al.\ 2001). Nevertheless, the relative lack of
pulsars with tight binary companions may argue for a BH companion, and it
should be kept in mind that the SWARMS survey is perhaps the first where
such tight WD-BH binaries could have been detected.
Clearly, a more complete census and analysis --- as will be
provided by the SWARMS survey (Mullally et al.\ 2009) --- and follow-up
observations of SDSS 1257+5428, including a search for (potentially pulsed)
radio and X-ray emission as well as a parallax measurement, are needed. These
will be vital, since, if SDSS 1257+5428 is radio-quiet, this would imply a class of
binaries distinct from any presently known.
\vspace{-.6cm}
\acknowledgments
We thank C.~Badenes and B.~Metzger for a critical reading of the text and
for comments that improved the manuscript. We thank
J.~Beacom, C.~Kochanek, B.~Lacki, P.~Martini, O.~Pejcha, J.~Prieto, E.~Quataert, and
D.~Zhang for additional discussions.
T.A.T.~is supported in part by an Alfred P.~Sloan Foundation Fellowship.
M.D.K.~is supported in part by an OSU Presidential Fellowship and
by NSF CAREER grant PHY-0547102.
T.A.T.~and K.Z.S.~are supported in part by NSF grant AST-0908816.
\section{Introduction}
\label{sec:introduction}
Both in statistical mechanics and in quantum field theory, the numerical study of a large class of physical quantities by Monte~Carlo methods can be reduced to the evaluation of differences of free energies $F$. For lattice gauge theory, the most typical examples arise in the investigation of the phase diagram of QCD and QCD-like theories. For instance, in the study of the QCD equation of state at finite temperature $T$ (and zero baryon density), the difference between the pressure $p(T)$ and its value at $T=0$ can be computed using the fact that $p$ is opposite to the free energy density $f=F/V$, where $V$ denotes the system volume.\footnote{Strictly speaking, the $p=-f$ equality holds only for an infinite-volume system. In a periodic, cubic box of volume $V=L^3$, the relation is violated by corrections that depend on the aspect ratio $LT$ of the time-like cross-section of the hypertorus (for a gas of free, massless bosons)~\cite{Gliozzi:2007jh} or on the ratio of the linear size of the system $L$ over the inverse of the smallest screening mass (if screening effects are present)~\cite{DeTar:1985kx, Elze:1988zs, Meyer:2009kn}: see also ref.~\cite{Panero:2008mg} for a numerical study of these effects on lattices of typical sizes used in Monte~Carlo simulations.} In turn, $f$ can then be evaluated for example by ``integrating a derivative''~\cite{Engels:1990vr}: during the past few years, this method has led to high-precision determinations of the equation of state for QCD~\cite{Bazavov:2009zn, Borsanyi:2013bia} and for Yang--Mills theories based on different gauge groups~\cite{Umeda:2008bd, Panero:2009tv, Borsanyi:2012ve, Bruno:2014rxa} and/or in lower dimensions~\cite{Bialas:2008rk, Caselle:2011mn}. These results can be compared with those obtained in other recent works~\cite{Asakawa:2013laa, Giusti:2014ila}, in which novel techniques (respectively based on the Wilson flow~\cite{Luscher:2010iy, Suzuki:2013gza} and on shifted boundary conditions~\cite{Giusti:2010bb, Giusti:2012yj}) have been used.
Other objects having a natural interpretation in terms of free-energy differences in finite-temperature non-Abelian gauge theories are the interfaces separating different center domains and/or regions of space characterized by different realizations of center symmetry~\cite{Kajantie:1988hn, Kajantie:1989xk, Enqvist:1990ae, Huang:1990jf, Kajantie:1990bu, Bhattacharya:1990hk, Bhattacharya:1992qb, KorthalsAltes:1993ca, Iwasaki:1993qu, Monden:1997hb, Giovannangeli:2001bh, Pisarski:2002ji}: they could have phenomenological implications for heavy-ion collisions~\cite{Asakawa:2012yv} and for cosmology~\cite{Ignatius:1991nk} and have been studied quite extensively in lattice simulations~\cite{Lucini:2003zr, Lucini:2005vg, Bursa:2005yv}.
In the study of QCD at finite baryon chemical potential $\mu$, a possible computational strategy to cope with the notorious sign problem~\cite{deForcrand:2010ys, Philipsen:2012nu, Levkova:2012jd, Aarts:2013lcm, D'Elia:2015rwa, Gattringer:2016kco} is the one based on the method first introduced in ref.~\cite{Ferrenberg:1988yz} and later extended to applications in lattice QCD~\cite{Toussaint:1989fn, Fodor:2001au}, whereby importance sampling is carried out in an ensemble of configurations generated using the determinant of the Dirac operator $D$ at $\mu=0$, and the expectation value in the target ensemble at finite $\mu$ is obtained through reweighting by the expectation value of $\det D(\mu)/\det D(0)$, computed in the $\mu=0$ ensemble. The natural logarithm of the latter quantity can be interpreted as ($1/T$ times) the difference between the free energies associated with the partition functions of the $\mu=0$ and finite-$\mu$ ensembles. Note that the \emph{extensive} nature of these quantities implies that a severe overlap problem arises in a large volume: for a Markov chain generated using the determinant of the Dirac operator $D$ at $\mu=0$, the probability of probing those regions of phase space, where the measure of the finite-$\mu$ ensemble is largest, gets exponentially suppressed with the system hypervolume, resulting in extremely poor sampling.
Free-energy differences are also relevant for the study of operators in the ground state of gauge theories. For example, vacuum expectation values of extended operators like 't~Hooft loops ($\widetilde{\mathcal{W}}$)~\cite{'tHooft:1977hy}, which have been studied on the lattice in several works~\cite{Kovacs:2000sy, Hoelbling:2000su, DelDebbio:2000cx, deForcrand:2000fi, deForcrand:2001nd}, can be generically written in the form
\begin{equation}
\label{tHooft}
\langle \widetilde{\mathcal{W}} \rangle = \frac{\int \mathcal{D} \phi \widetilde{\mathcal{W}}[\phi] \exp \left(-S[\phi]\right)}{\int \mathcal{D} \phi \exp \left(-S[\phi]\right)} = \frac{Z_{\widetilde{\mathcal{W}}}}{Z} = \exp\left[ - \left( F_{\widetilde{\mathcal{W}}} - F \right) L \right],
\end{equation}
where $\mathcal{D} \phi$ denotes the measure for the (regularized) functional integration over the generic fields $\phi$, $S$ is the Euclidean action, $Z$ is the partition function, $F$ is the free energy, and $L$ is the system size in the Euclidean-time direction, while $Z_{\widetilde{\mathcal{W}}}$ denotes a \emph{modified} partition function, in which the observable has been included in the action (by twisting a set of plaquettes that tile the $\widetilde{\mathcal{W}}$ loop~\cite{Srednicki:1980gb}) and $F_{\widetilde{\mathcal{W}}}$ is the corresponding free energy. Note that, in the case of a ``maximal'' 't~Hooft loop, i.e. one extending through a whole cross-section of the system, this problem has a natural connection to the study of fluctuating interfaces in statistical mechanics. It is worth noting that there exist many experimental realizations of fluctuating interfaces, particularly in mesoscopic physics, in chemistry and in biophysics: some well-known examples include binary mixtures and amphiphilic membranes~\cite{Gelfand:1990fse, Privman:1992zv}.
Other extended operators, like Wilson loops or Polyakov-loop correlation functions, can be easily recast into simple expressions of the form of eq.~(\ref{tHooft}) in a dual formulation of the theory, at least for Abelian (or, more generally, solvable) gauge groups~\cite{Panero:2004zq, Panero:2005iu, Caselle:2014eka, Caselle:2016mqu}.
This list of examples is by no means exhaustive, as the class of physical observables whose expectation values can be written in a natural way in terms of a free-energy difference---i.e. as a ratio of partition functions---is much broader. Note that, while it is always possible to trivially \emph{define} the expectation value of any arbitrary operator $\mathcal{O}$ as a ratio of partition functions of the form $Z_{\mathcal{O}}/Z=\exp\left[-\left(F_{\mathcal{O}}-F\right)L\right]$, here we are interested in the cases in which the quantity $Z_{\mathcal{O}}$ can be written as an integral over positive weights, that can be sampled efficiently by Monte~Carlo methods.
The examples above (and the computational problems that they involve) show that, in general, the numerical evaluation of free-energy differences remains a non-trivial computational challenge---one that cannot be easily tackled by brute-force approaches---in particular for large systems.
In this work, we present an application of non-equilibrium methods from numerical statistical mechanics, in lattice gauge theory. More precisely, we show that the class of algorithms based on Jarzynski's relation (whose derivation is presented in section~\ref{sec:Jarzynski}, along with some comments relevant for practical implementations in Monte~Carlo simulations) can be applied to gauge theories formulated on a Euclidean lattice, in a straightforward way. In a nutshell, this is so, because the Euclidean lattice formulation of a gauge theory~\cite{Wilson:1974sk} can be interpreted as a statistical mechanics system of a countable (and, in actual Monte~Carlo simulations, finite) number of degrees of freedom~\cite{Kogut:1979wt}. The main difference of Euclidean lattice gauge theories with respect to statistical spin models, namely the existence of an invariance under \emph{local}, rather than \emph{global}, transformations of the internal degrees of freedom, does not play any r\^ole in Jarzynski's theorem, so that there is no conceptual obstruction to its application for lattice gauge theories. Nevertheless, this theorem has received surprisingly little attention in the lattice community. With the notable exception of some works carried out in the three-dimensional Ising model (see, e.g., ref.~\cite{Chatelain:2007ts} and additional references mentioned below), which is exactly equivalent to a three-dimensional $\mathbb{Z}_2$ lattice gauge theory, we are not aware of any large-scale numerical studies of lattice QCD or of other lattice gauge theories, using Jarzynski's theorem. A motivation of the present work is to partially fill this gap, by presenting examples of applications of Jarzynski's theorem in two computationally challenging problems, and, as will be discussed in more detail in the following, by initiating a study of the practical details of computationally efficient algorithmic implementations of Jarzynski's relation. We will discuss applications in two different problems, namely in a high-precision numerical study of the physics of fluctuating interfaces, and in the calculation of the equation of state in non-Abelian gauge theories. The body of literature about the dynamics of interfaces (in different statistical-mechanics models) is vast~\cite{Binder:1982mc, Burkner:1983mc, Berg:1991sn, Hasenbusch:1992zz, Potoff:2000st, Davidchack:2005cs, Caselle:1992ue, Caselle:1994df, Caselle:2006dv, Billo:2006zg, Caselle:2007yc, Billo:2007fm,
condmat0602580, Chatelain:2007ts, Hijar2007, 0905.4569, Limmer:2011tp, Binder:2011mc, 1401.7870, 1406.0616, 1411.5588}; for our present purposes, particularly relevant works include those that have been recently carried out by Binder and collaborators (see refs.~\cite{1401.7870, 1406.0616, 1411.5588} and references therein), as well as those reported in refs.~\cite{condmat0602580, Chatelain:2007ts, Hijar2007}. We will also compare our new results with those obtained in earlier works by the Turin group~\cite{Caselle:1992ue, Caselle:1994df, Caselle:2006dv, Billo:2006zg, Caselle:2007yc, Billo:2007fm}. The results obtained in this benchmark study are compared with state-of-the-art analytical predictions based on an effective-string model~\cite{Aharony:2009gg, Aharony:2010cx, Aharony:2010db, Kol:2010fq, Aharony:2011gb, Gliozzi:2012cx, Dubovsky:2012sh, Aharony:2013ipa, Caselle:2013dra, Ambjorn:2014rwa, Brandt:2016xsp}: the precision of the results that we obtain with this algorithm in $\mathbb{Z}_2$ gauge theory in three dimensions allows us to clearly resolve subleading corrections predicted by the effective theory, which scale like the \emph{seventh} and the \emph{ninth} inverse powers of the linear size of the interface. In section~\ref{sec:equation_of_state} we discuss an implementation of this type of algorithm in non-Abelian gauge theory with $\mathrm{SU}(2)$ gauge group, and present preliminary results for the computation of the equation of state in the confining phase of this theory. Finally, in section~\ref{sec:conclusions} we summarize the key features of non-equilibrium algorithms like the one discussed in this work, and discuss their potential for applications in computationally challenging problems, in particular those relevant for the calculation of free energies (or, more generally, effective actions) in QCD and in other strongly coupled field theories.
\section{Jarzynski's relation}
\label{sec:Jarzynski}
The class of algorithms that we are discussing in the present work are based on a theorem proven by Jarzynski in refs.~\cite{Jarzynski:1996ne, Jarzynski:1997ef} (for a discussion about the relation with earlier work by Bochkov and Kuzovlev~\cite{Bochkov:1977gt, Bochkov:1979fd, Bochkov:1981nf}, see refs.~\cite{condmat0612305, Kuzovlev:2011sr}; for the connection with entropy-production fluctuation theorems~\cite{Evans:1993po}, see ref.~\cite{Crooks:1999ep}). Remarkably, this relation has also been verified experimentally, as discussed, for instance, in ref.~\cite{Liphardt:2002ei}.
In a nutshell, Jarzynski's relation states the equality between the exponential average of the work done on a system in non-equilibrium processes and the exponential of the difference between the free energies of the initial ($F_{\mbox{\tiny{in}}}$) and the final ($F_{\mbox{\tiny{fin}}}$) ensembles, respectively associated with the system parameters realized at ``times'' $t_{\mbox{\tiny{in}}}$ and $t_{\mbox{\tiny{fin}}}$. Here, ``time'' can either refer to Monte~Carlo time (in a numerical simulation) or to real time (in an experiment), and the average is taken over a large number of realizations of such non-equilibrium evolutions between the initial and the final ensembles.
In the following, we summarize the original derivation presented in refs.~\cite{Jarzynski:1996ne, Jarzynski:1997ef}, using natural units ($\hbar=c=k_{\mbox{\tiny{B}}}=1$) and focusing, for definiteness, on a statistical-mechanics system---although, as we will show below, the generalization to lattice gauge theories is straightforward.
Consider a system, whose microscopic degrees of freedom are collectively denoted as $\phi$ (for instance, $\phi$ could represent the spins defined on the sites of a regular $D$-dimensional lattice: $\phi = \{ \phi_{(x_1,\dots,x_D)}\}$). Let the dynamics of the system be described by the Hamiltonian $H$, which is a function of the degrees of freedom $\phi$, and depends on a set of parameters (e.g. couplings). When the system is in thermal equilibrium with a large heat reservoir at temperature $T$, the partition function of the system is
\begin{equation}
\label{partition_function}
Z =\sum_{\phi} \exp \left( - \frac{H}{T} \right),
\end{equation}
where, as usual, $\sum_{\phi}$ denotes the multiple sum (or integral) over the values that each microscopic degree of freedom can take. The statistical distribution of $\phi$ configurations in thermodynamic equilibrium is given by the Boltzmann distribution:
\begin{equation}
\label{Boltzmann_distribution}
\pi[\phi] = \frac{1}{Z} \exp \left( - \frac{H}{T} \right),
\end{equation}
which, in view of eq.~(\ref{partition_function}), is normalized to $1$:
\begin{equation}
\label{Boltzmann_distribution_normalization}
\sum_\phi \pi[\phi] = 1.
\end{equation}
Let us denote the conditional probability (or the conditional probability density, if the degrees of freedom of the system can take values in a continuous domain) that the system undergoes a transition from a configuration $\phi$ to a configuration $\phi^\prime$ as $P[\phi\to\phi^\prime]$. The sum of such probabilities over all possible distinct final configurations is one,
\begin{equation}
\label{transition_probability}
\sum_{\phi^\prime} P[\phi\to\phi^\prime] = 1,
\end{equation}
because the system certainly must evolve to \emph{some} final configuration. Since the Boltzmann distribution is an equilibrium thermal distribution, it satisfies the property
\begin{equation}
\label{Boltzmann_distribution_stationarity}
\sum_{\phi} \pi[\phi] P[\phi\to\phi^\prime] = \pi[\phi^\prime].
\end{equation}
In the following, we will assume that the system satisfies the stronger, detailed-balance condition:
\begin{equation}
\label{detailed_balance}
\pi[\phi] P[\phi\to\phi^\prime] = \pi[\phi^\prime] P[\phi^\prime \to \phi];
\end{equation}
note that, if eq.~(\ref{transition_probability}) holds, then eq.~(\ref{detailed_balance}) implies eq.~(\ref{Boltzmann_distribution_stationarity}), but the converse is not true.
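To see this, it is sufficient to sum both sides of eq.~(\ref{detailed_balance}) over $\phi$ and to use eq.~(\ref{transition_probability}):
\begin{equation}
\label{db_implies_stationarity}
\sum_{\phi} \pi[\phi] P[\phi\to\phi^\prime] = \pi[\phi^\prime] \sum_{\phi} P[\phi^\prime \to \phi] = \pi[\phi^\prime].
\end{equation}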
In general, the Boltzmann distribution $\pi$ (as well as $Z$ and $P$) will depend on the couplings appearing on the Hamiltonian and on the temperature $T$; denoting them collectively as $\lambda$, one can then highlight such dependence by writing the configuration distribution as $\pi_\lambda$ (and the partition function and transition probabilities as $Z_\lambda$ and $P_\lambda$, respectively).
Let us introduce a time dependence for the $\lambda$ parameters---including the couplings of the Hamiltonian and, possibly, also the temperature $T$~\cite{Chatelain:2007ts}. Starting from a situation, at the initial time $t=t_{\mbox{\tiny{in}}}$, in which the couplings of the Hamiltonian take certain values, and the system is in thermal equilibrium at the temperature $T_{\mbox{\tiny{in}}}$, the parameters of the system are modified as functions of time, according to some specified procedure, $\lambda(t)$, and are driven to final values $\lambda(t_{\mbox{\tiny{fin}}})$ over an interval of time $\Delta t = t_{\mbox{\tiny{fin}}}-t_{\mbox{\tiny{in}}}$. $\lambda(t)$ is assumed to be a continuous function; for simplicity, we take it to interpolate linearly in $(t-t_{\mbox{\tiny{in}}})$ between the initial, $\lambda(t_{\mbox{\tiny{in}}})$, and final, $\lambda(t_{\mbox{\tiny{fin}}})$, values. During the time interval between $t_{\mbox{\tiny{in}}}$ and $t_{\mbox{\tiny{fin}}}$, the system is, in general, out of thermal equilibrium.\footnote{For example, the parameters of the system could be changed in a sufficiently short interval of real time in an actual experiment, or of Monte~Carlo time in a numerical simulation. Unless $\Delta t \to \infty$, the system ``does not have enough time'' to thermalize.}
Now, discretize the $\Delta t$ interval into $N$ sub-intervals of equal width $\tau=\Delta t /N$, and define $t_n=t_{\mbox{\tiny{in}}} + n \tau$ for integer values of $n$ ranging from $0$ to $N$ (so that $t_0=t_{\mbox{\tiny{in}}}$ and $t_N=t_{\mbox{\tiny{fin}}}$); correspondingly, the linear $\lambda(t)$ mentioned above can be discretized by a piecewise-constant function, taking the value $\lambda(t_n)$ for $t_n \le t < t_{n+1}$. Furthermore, let $\phi(t)$ denote one possible (arbitrary) ``trajectory'' in the space of field configurations, i.e. a mapping between the time interval $[t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}}]$ and the configuration space of the system; upon discretization of the $[t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}}]$ interval, such a trajectory can be associated with the $(N+1)$-dimensional array of field configurations defined as $\left\{ \phi(t_{\mbox{\tiny{in}}}), \phi(t_1), \phi(t_2), \dots , \phi(t_{N-1}), \phi(t_{\mbox{\tiny{fin}}}) \right\}$. Finally, let us introduce the quantity $\mathcal{R}_N[\phi]$ defined as
\begin{equation}
\label{discretized_exponential_work}
\mathcal{R}_N[\phi] = \exp \left( - \sum_{n=0}^{N-1} \left\{ \frac{H_{\lambda\left(t_{n+1}\right)}\left[\phi\left(t_n\right)\right]}{T\left(t_{n+1}\right)} - \frac{H_{\lambda\left(t_n \right)}\left[\phi\left(t_n \right)\right]}{T\left(t_n \right)} \right\}\right)
\end{equation}
(where the Hamiltonian $H_\lambda$ depends on its couplings, not on the temperature $T$): each summand appearing on the right-hand side of eq.~(\ref{discretized_exponential_work}) is the work (over $T$) done on the system during a time interval $\tau$, by switching the couplings from their values at $t=t_{\mbox{\tiny{in}}} + n \tau$ to those at $t=t_{\mbox{\tiny{in}}} + (n+1) \tau$. Thus, $\mathcal{R}_N[\phi]$ provides a discretization of the exponentiated work done on the system in the time interval from $t=t_{\mbox{\tiny{in}}}$ to $t=t_{\mbox{\tiny{fin}}}$, during which the parameters are switched as a function of time, $\lambda(t)$, and the fields trace out the trajectory $\phi(t)$ in configuration space. This discretization gets more and more accurate for larger and larger values of $N$, and becomes exact in the $N \to \infty$ limit, whereby the sum on the right-hand side of eq.~(\ref{discretized_exponential_work}) turns into a definite integral.
Recalling that the usual mapping between statistical mechanics and lattice gauge theory~\cite{Wilson:1974sk} associates $H/T$ with the Euclidean action of the lattice theory, one easily realizes that, from the point of view of the lattice theory, each term within the braces on the right-hand side of eq.~(\ref{discretized_exponential_work}) can be interpreted as the \emph{difference in Euclidean action} for the field configuration denoted as $\phi\left(t_n \right)$, which is induced when the parameters are changed from $\lambda\left(t_n \right)$ to $\lambda\left(t_{n+1}\right)$. Thus, evaluating the work (over $T$) during a Monte Carlo simulation of this statistical system corresponds to evaluating the variation in Euclidean action in the lattice gauge theory---and this is precisely the quantity that was evaluated in the simulations discussed in sections~\ref{sec:interface} and~\ref{sec:equation_of_state}.
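For illustration, the evaluation of the quantity in eq.~(\ref{discretized_exponential_work}) along a single non-equilibrium trajectory can be sketched in a few lines of Python. All names below are hypothetical placeholders, not taken from any specific code base: \texttt{action(phi, lam)} is assumed to return the Euclidean action $H_\lambda[\phi]/T(\lambda)$, and \texttt{markov\_update} to perform one detailed-balance-preserving update at fixed parameters.
\begin{verbatim}
def trajectory_work(phi0, lambdas, action, markov_update, rng):
    # Accumulates the sum appearing in the exponent of
    # eq. (discretized_exponential_work): at each step the parameters
    # are switched at *fixed* field configuration (the work term, i.e.
    # the variation of the Euclidean action), and the fields are then
    # evolved by a Markov update at the new parameter values.
    phi = phi0        # drawn from the initial equilibrium ensemble
    work = 0.0        # running sum of Delta(H/T)
    for n in range(len(lambdas) - 1):
        work += action(phi, lambdas[n + 1]) - action(phi, lambdas[n])
        phi = markov_update(phi, lambdas[n + 1], rng)
    return work
\end{verbatim}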
Using eq.~(\ref{Boltzmann_distribution}), eq.~(\ref{discretized_exponential_work}) can then be recast in the form
\begin{equation}
\label{discretized_exponential_work_Z_pi_ratios}
\mathcal{R}_N[\phi] = \prod_{n=0}^{N-1} \frac{Z_{\lambda(t_{n+1})} \cdot \pi_{\lambda(t_{n+1})}\left[\phi\left(t_n \right)\right]}{Z_{\lambda(t_n)} \cdot \pi_{\lambda(t_n)}\left[\phi\left(t_n \right)\right]} .
\end{equation}
Next, consider the average of eq.~(\ref{discretized_exponential_work_Z_pi_ratios}) over all possible field-configuration trajectories realizing an evolution of the system from one of the configurations of the initial ensemble (at $t=t_{\mbox{\tiny{in}}}$, when the parameters of the system take the values $\lambda(t_{\mbox{\tiny{in}}})$) to a configuration of the final ensemble (at $t=t_{\mbox{\tiny{fin}}}$, when the parameters of the system take the values $\lambda(t_{\mbox{\tiny{fin}}})$). In practice, in a Monte Carlo simulation, this is realized by averaging over a sufficiently large number of discretized trajectories starting from configurations of the initial, equilibrium ensemble (described by the partition function $Z_{\lambda(t_{\mbox{\tiny{in}}})}$ and by the canonical distribution $\pi_{\lambda(t_{\mbox{\tiny{in}}})}$), and assuming that, given a configuration of fields at a certain time $t=t_n$, a new field configuration at time $t=t_{n+1}$ is obtained by Markov evolution with transition probability $P_{\lambda(t_{n+1})}\left[ \phi(t_n) \to \phi(t_{n+1}) \right]$, which is assumed to satisfy the detailed balance condition eq.~(\ref{detailed_balance}). Note that $P$ is taken to depend on $\lambda(t_{n+1})$: for every finite value of $\tau$ (and for every Monte Carlo computation with finite statistics), this way of discretizing the non-equilibrium transformation introduces an ``asymmetry'' in the time evolution (one could alternatively carry out the two steps in the opposite order) and a related systematic uncertainty. As will be discussed below, this leads to a difference in the results obtained when the transformation of the parameters is carried out in one direction or in the opposite one, but this ``discretization effect'' is expected to vanish for $\tau \to 0$ (i.e. for $N \to \infty$), and our numerical results do confirm that. Another, more important reason why the evolution of the system is not ``symmetric'' under time reversal is that, while the initial ensemble is at equilibrium, this is not the case at later times: the system is progressively driven (more and more) out of equilibrium.
Then, the average of eq.~(\ref{discretized_exponential_work_Z_pi_ratios}) over all possible field-configuration trajectories realizing an evolution of the system from $t=t_{\mbox{\tiny{in}}}$ to $t=t_{\mbox{\tiny{fin}}}$ can be written as
\begin{equation}
\label{averaged_discretized_exponential_work_Z_pi_ratios}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \sum_{\left\{ \phi(t) \right\} } \pi_{\lambda(t_{\mbox{\tiny{in}}})}\left[ \phi(t_{\mbox{\tiny{in}}}) \right] \prod_{n=0}^{N-1} \left\{ \frac{Z_{\lambda(t_{n+1})}}{Z_{\lambda(t_n)}} \cdot \frac{\pi_{\lambda(t_{n+1})}\left[\phi\left(t_n \right)\right]}{\pi_{\lambda(t_n)}\left[\phi\left(t_n \right)\right]} \cdot P_{\lambda(t_{n+1})}\left[ \phi(t_n) \to \phi(t_{n+1}) \right] \right\},
\end{equation}
where we used the fact that the system is initially in thermal equilibrium, hence the probability distribution for the configurations at $t=t_{\mbox{\tiny{in}}}$ is given by eq.~(\ref{Boltzmann_distribution}), and where $\sum_{\left\{ \phi(t) \right\} }$ denotes the $N+1$ sums over field configurations at all discretized times from $t_{\mbox{\tiny{in}}}$ to $t_{\mbox{\tiny{fin}}}$:
\begin{equation}
\sum_{\left\{ \phi(t) \right\} } \dotsc = \sum_{\phi(t_{\mbox{\tiny{in}}})} \sum_{\phi(t_1)} \sum_{\phi(t_2)} \dots \sum_{\phi\left(t_{\mbox{\tiny{fin}}}-\tau\right)} \sum_{\phi(t_{\mbox{\tiny{fin}}})} \dotsc .
\end{equation}
The telescopic product of partition-function ratios in eq.~(\ref{averaged_discretized_exponential_work_Z_pi_ratios}) simplifies, and the equation can be rewritten as
\begin{equation}
\label{discretized_exponential_work_pi_ratios}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}} \sum_{\left\{ \phi(t) \right\} } \pi_{\lambda(t_{\mbox{\tiny{in}}})}\left[ \phi(t_{\mbox{\tiny{in}}}) \right] \prod_{n=0}^{N-1} \left\{ \frac{\pi_{\lambda(t_{n+1})}\left[\phi\left(t_n \right)\right] \cdot P_{\lambda(t_{n+1})}\left[ \phi(t_n) \to \phi(t_{n+1}) \right]}{\pi_{\lambda(t_n)}\left[\phi\left(t_n \right)\right]} \right\}.
\end{equation}
Using eq.~(\ref{detailed_balance}), this expression can be turned into
\begin{equation}
\label{simplified_discretized_exponential_work_pi_ratios}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}} \sum_{\left\{ \phi(t) \right\} } \pi_{\lambda(t_{\mbox{\tiny{in}}})}\left[ \phi(t_{\mbox{\tiny{in}}}) \right] \prod_{n=0}^{N-1} \left\{ \frac{\pi_{\lambda(t_{n+1})}\left[\phi\left(t_{n+1} \right)\right]}{\pi_{\lambda(t_n)}\left[\phi\left(t_n \right)\right]} \cdot P_{\lambda(t_{n+1})}\left[ \phi(t_{n+1}) \to \phi(t_n)\right] \right\}.
\end{equation}
At this point, also the telescopic product of ratios of Boltzmann distributions can be simplified, reducing the latter expression to
\begin{equation}
\label{discretized_exponential_P_product}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}} \sum_{\left\{ \phi(t) \right\} } \pi_{\lambda(t_{\mbox{\tiny{fin}}})}\left[ \phi(t_{\mbox{\tiny{fin}}}) \right] \prod_{n=0}^{N-1} P_{\lambda(t_{n+1})}\left[ \phi(t_{n+1}) \to \phi(t_n)\right].
\end{equation}
Note that, in eq.~(\ref{discretized_exponential_P_product}), $\phi(t_{\mbox{\tiny{in}}})$ appears only in the $P_{\lambda(t_1)}\left[ \phi(t_1) \to \phi(t_{\mbox{\tiny{in}}})\right]$ term: thus, one can use eq.~(\ref{transition_probability}) to carry out the sum over the $\phi(t_{\mbox{\tiny{in}}})$ configurations, and eq.~(\ref{discretized_exponential_P_product}) reduces to
\begin{equation}
\label{simplified_discretized_exponential_P_product}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}} \sum_{\phi(t_1)} \sum_{\phi(t_2)} \dots \sum_{\phi\left(t_{\mbox{\tiny{fin}}}-\tau\right)} \sum_{\phi(t_{\mbox{\tiny{fin}}})} \pi_{\lambda(t_{\mbox{\tiny{fin}}})}\left[ \phi(t_{\mbox{\tiny{fin}}}) \right] \prod_{n=1}^{N-1} P_{\lambda(t_{n+1})}\left[ \phi(t_{n+1}) \to \phi(t_n)\right].
\end{equation}
Repeating the same argument, eq.~(\ref{simplified_discretized_exponential_P_product}) can then be simplified using the fact that the only remaining dependence on $\phi(t_1)$ is in the $P_{\lambda(t_2)}\left[ \phi(t_2) \to \phi(t_1)\right]$ term, and so on. One arrives at
\begin{equation}
\label{almost_completely_simplified_discretized_exponential_P_product}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}} \sum_{\phi(t_{\mbox{\tiny{fin}}})} \pi_{\lambda(t_{\mbox{\tiny{fin}}})}\left[ \phi(t_{\mbox{\tiny{fin}}}) \right].
\end{equation}
Finally, eq.~(\ref{Boltzmann_distribution_normalization}) implies that the last sum also yields one, so one gets
\begin{equation}
\label{completely_simplified_discretized_exponential_P_product}
\sum_{\left\{ \phi(t) \right\} } \mathcal{R}_N[\phi] = \frac{Z_{\lambda(t_{\mbox{\tiny{fin}}})}}{Z_{\lambda(t_{\mbox{\tiny{in}}})}}.
\end{equation}
Recalling that, as we discussed above, in the large-$N$ limit $\mathcal{R}_N[\phi]$ equals the exponentiated work done on the system during the evolution from $t_{\mbox{\tiny{in}}}$ to $t_{\mbox{\tiny{fin}}}$, and writing $Z_{\lambda(t_{\mbox{\tiny{in}}})}$ and $Z_{\lambda(t_{\mbox{\tiny{fin}}})}$ in terms of the associated equilibrium free energies at the respective temperatures, eq.~(\ref{completely_simplified_discretized_exponential_P_product}) yields the (generalized) Jarzynski relation:
\begin{equation}
\label{generalized_Jarzynski}
\left\langle \exp \left[ - \int \frac{\delta W}{T} \right] \right\rangle = \exp \left[ - \left( \frac{F_{\mbox{\tiny{fin}}}}{T_{\mbox{\tiny{fin}}}} - \frac{F_{\mbox{\tiny{in}}}}{T_{\mbox{\tiny{in}}}}\right)\right],
\end{equation}
where $\delta W$ denotes the work done on the system during an infinitesimal interval in the transformation from $t_{\mbox{\tiny{in}}}$ to $t_{\mbox{\tiny{fin}}}$, the integral is taken over all such intervals, and the average is taken over all possible realizations of this transformation.
In the particular case of a non-equilibrium transformation in which the temperature $T$ of the system is not varied, the latter expression can be written as~\cite{Jarzynski:1996ne}
\begin{equation}
\label{Jarzynski}
\left\langle \exp \left[ - \frac{W(t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})}{T} \right] \right\rangle = \exp \left( - \frac{F_{\mbox{\tiny{fin}}}-F_{\mbox{\tiny{in}}}}{T} \right),
\end{equation}
where $W(t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})$ denotes the total work done on the system during the transformation from $t_{\mbox{\tiny{in}}}$ to $t_{\mbox{\tiny{fin}}}$.
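In a Monte~Carlo implementation, the left-hand side of eq.~(\ref{Jarzynski}) is estimated by averaging the exponentiated work over a sample of independent trajectories. A minimal sketch (reusing the hypothetical \texttt{trajectory\_work} function introduced above, together with an assumed sampler \texttt{sample\_initial} for the initial equilibrium ensemble) reads:
\begin{verbatim}
import numpy as np

def free_energy_difference(sample_initial, lambdas, action,
                           markov_update, n_r, rng):
    # Estimates (F_fin - F_in)/T from eq. (Jarzynski): average
    # exp(-W/T) over n_r non-equilibrium trajectories, each starting
    # from an equilibrium configuration of the initial ensemble,
    # then take minus the logarithm of the mean.
    works = np.array([trajectory_work(sample_initial(rng), lambdas,
                                      action, markov_update, rng)
                      for _ in range(n_r)])
    return -np.log(np.mean(np.exp(-works)))
\end{verbatim}
Note that the exponential average tends to be dominated by rare trajectories with small work, so the statistical uncertainty of this estimator has to be monitored carefully.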
Before closing this section, we point out some important remarks.
First of all, as we discussed above, the evaluation of free-energy differences using Jarzynski's relation assumes $N \to \infty$ (with $t_{\mbox{\tiny{in}}}$ and $t_{\mbox{\tiny{fin}}}$ fixed and finite). In this limit, the time-discretization step $\tau$ becomes infinitesimally small, and from the continuity of $\lambda$ it follows that the $\pi_{\lambda(t_n)}$ and $\pi_{\lambda(t_{n+1})}$ distributions at all pairs of subsequent times become more and more overlapping. Correspondingly, in a Monte~Carlo simulation the aforementioned potential systematic effects related to the asymmetric r\^oles of $t_n$ and $t_{n+1}$ in the Markov evolution of a field configuration with transition probability $P_{\lambda(t_{n+1})}\left[ \phi(t_n) \to \phi(t_{n+1}) \right]$ depending on the parameter values at time $t=t_{n+1}$ (or, conversely, in the summands on the right-hand side of eq.~(\ref{discretized_exponential_work}), where the difference is evaluated by keeping the field configuration fixed to its value at $t=t_n$) are expected to vanish---an expectation which is indeed confirmed by our numerical results.
It is also instructive to discuss what happens in the opposite limit, i.e. for $N=1$. In this case, the calculation reduces to evaluating the exponential average of the work (in units of $T$) that is done on the system when its parameters are switched from $\lambda_{\mbox{\tiny{in}}}$ directly to $\lambda_{\mbox{\tiny{fin}}}$. In particular, according to the derivation above (in which the work done on the system is evaluated by computing the variation in energy on one of the configurations of the initial ensemble), one can realize that for $N=1$ the field configurations from the initial, equilibrium ensemble with parameters $\lambda_{\mbox{\tiny{in}}}$ are not ``evolved'' at all, and that the parameters of the system are instantaneously switched to their final values $\lambda_{\mbox{\tiny{fin}}}$ at $t=t_1=t_{\mbox{\tiny{fin}}}$: at this time, the work done on the system is calculated on the initial configuration, but then the configuration itself is not subject to any evolution, and, in particular, is not driven out of equilibrium at all. Interestingly, the exponential average of the work done on the system is exactly equal to $Z_{\lambda(t_{\mbox{\tiny{fin}}})}/Z_{\lambda(t_{\mbox{\tiny{in}}})}$ also in the $N=1$ case, as was already pointed out in the first work in which Jarzynski's relation was derived~\cite{Jarzynski:1996ne}. In fact, the existence of a relation of this type has been known for a long time (see, e.g., ref.~\cite{Zwanzig:1954ht}), and does not involve any non-equilibrium evolution. From a lattice gauge theory point of view, in the $N=1$ case this calculation corresponds to computing the average value of the exponential of the difference in Euclidean action that is induced by a change in the parameters characterizing the system; this average is performed in the starting ensemble, with partition function $Z_{\lambda(t_{\mbox{\tiny{in}}})}$. Using a terminology that may be more familiar among lattice practitioners, this can be recognized as a reweighting technique~\cite{Ferrenberg:1988yz, Barbour:1997bh, Fodor:2001au}. Although this method to compute the free-energy difference of the initial and final ensembles is in principle exact, its practical application in Monte~Carlo simulations of lattice QCD (which necessarily involve finite configuration samples) is of very limited computational efficiency, being affected by dramatically large uncertainties when the configuration probability distributions of the simulated ($\pi_{\lambda(t_{\mbox{\tiny{in}}})}$) and target ($\pi_{\lambda(t_{\mbox{\tiny{fin}}})}$) ensembles are poorly overlapping. This \emph{overlap problem} becomes more severe when the probability distributions are more sharply peaked (which is the case for systems with a large number of degrees of freedom---including, in particular, lattice gauge theories defined on large and fine lattices) and/or more widely separated in configuration space, so that the simulation of the ensemble specified by the parameters $\lambda(t_{\mbox{\tiny{in}}})$ samples only a very limited subset of the most likely configurations of the target ensemble.
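A sketch of this $N=1$ (reweighting) estimate, again with hypothetical placeholder functions, could read:
\begin{verbatim}
import numpy as np

def reweighting_ratio(equilibrium_configs, action_in, action_fin):
    # N = 1 limit: estimate Z_fin / Z_in as the average, over
    # equilibrium configurations of the *initial* ensemble, of the
    # exponentiated (minus) difference in Euclidean action; no Markov
    # evolution of the configurations is involved.
    delta_S = np.array([action_fin(phi) - action_in(phi)
                        for phi in equilibrium_configs])
    return np.mean(np.exp(-delta_S))
\end{verbatim}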
What happens in the case when $N$ is finite and larger than one? In particular: in view of the previous observation, could one think that for finite $N>1$ the evaluation of the free-energy difference between the initial and the final ensemble by means of Jarzynski's relation is equivalent to a sequence of reweighting steps, at parameter values $\lambda(t_n)$, with $0 \le n < N$? The answer is no: a Monte~Carlo algorithm to compute the free-energy difference using Jarzynski's relation is crucially different from a combination of reweighting steps, because, in contrast to the former, the latter assumes that \emph{also} the field configurations at all later times $\phi(t_n)$, for $n > 0$, are drawn from equilibrium distributions. On the contrary, the sequence of field configurations produced during each trajectory in a numerical implementation of Jarzynski's relation is genuinely out of equilibrium: only the configurations at $t=t_{\mbox{\tiny{in}}}$ are drawn from an equilibrium distribution. As a consequence, there is no contradiction between the fact that the computation of the free-energy difference between two ensembles using Jarzynski's relation becomes exact only for \emph{infinite} $N$, and the fact that the same computation can also (at least in principle, i.e. neglecting the overlap problem mentioned above) be carried out exactly by reweighting the \emph{equilibrium} distributions defined at a \emph{finite} number of intermediate parameter values corresponding to $\lambda(t_n)$, with $0 \le n < N$. Similarly, there is no inconsistency in the fact that, using the algorithm based on Jarzynski's theorem for finite $N>1$, the results for $Z_{\lambda(t_{\mbox{\tiny{fin}}})}/Z_{\lambda(t_{\mbox{\tiny{in}}})}$ obtained from Monte~Carlo calculations in ``direct'' ($\lambda_{\mbox{\tiny{in}}} \to \lambda_{\mbox{\tiny{fin}}}$) and in ``reverse'' ($\lambda_{\mbox{\tiny{fin}}} \to \lambda_{\mbox{\tiny{in}}}$) evolutions of the system are not necessarily equal: they only have to agree in the large-$N$ limit---and, as our numerical results show, they do agree in that limit.
Note that these observations do not imply that a Monte~Carlo calculation of $Z_{\lambda(t_{\mbox{\tiny{fin}}})}/Z_{\lambda(t_{\mbox{\tiny{in}}})}$ using Jarzynski's relation, which requires $N$ to be large, is less efficient than one based on a combination of $N$ reweightings, which is exact for every value of $N$: on the contrary, the overhead of generating \emph{non-equilibrium} configurations at a larger number of intermediate values of the system parameters (whose computational cost grows like $O(N)$ and, for typical lattice gauge theory simulation algorithms, \emph{polynomially} in the number of degrees of freedom of the system) may be largely offset by the growth in statistics necessary for proper ensemble sampling in reweighting-based simulations, which is \emph{exponential} in the number of degrees of freedom of the system~\cite{Gattringer:2016kco}.
For a given physical system, in Monte~Carlo simulations based on Jarzynski's relation, the optimal choice of $N$ and of the number $n_{\mbox{\tiny{r}}}$ of ``trajectories'' in configuration space (or ``realizations'' of the non-equilibrium evolution of the system) over which the averages appearing on the left-hand-side of eqs.~(\ref{generalized_Jarzynski}) and (\ref{Jarzynski}) are evaluated, is the one minimizing the total computational cost, for a desired maximum level of uncertainty on the numerical results. In general, determining the optimal values of $N$ and $n_{\mbox{\tiny{r}}}$ is non-trivial, as they depend strongly on the system under consideration (and, often, on the details of the Monte~Carlo simulation). During the past few years, some aspects of this problem have been addressed in detail in various works: see refs.~\cite{Jarzynski:2006re, Pohorille:2010gp, Rohwer:2014co, YungerHalpern:2016hm} and references therein.
Finally, note that the derivation of Jarzynski's relation does not rely on any strong assumption about the nature of the system, and can be applied to every system with a Hamiltonian bounded from below. As such, it can be directly applied to statistical systems describing lattice gauge theories in Euclidean space. In the following, we present two applications of Jarzynski's relation in lattice gauge theory, first in the computation of the interface free energy in a gauge theory in three dimensions, and then in the calculation of the equation of state in $\mathrm{SU}(2)$ Yang--Mills theory in $3+1$ Euclidean dimensions.
\section{Benchmark study I: The interface free energy}
\label{sec:interface}
As a first benchmark study, we apply Jarzynski's relation eq.~(\ref{Jarzynski}) to the computation of the free energy associated with a fluctuating interface in a lattice gauge theory in three dimensions. As mentioned in section~\ref{sec:introduction}, interfaces have important experimental realizations in condensed-matter physics and in various other branches of science~\cite{Gelfand:1990fse, Privman:1992zv}. Moreover, they are also interesting for high-energy physics, as they can be related to the world-sheets spanned by flux tubes in confining gauge theories. Because of quantum fluctuations, the energy stored in a confining flux tube has a non-trivial dependence on its length~\cite{Luscher:1980fr, Luscher:1980ac}, which can be systematically studied in the framework of an effective theory~\cite{Aharony:2013ipa} and investigated numerically in lattice simulations (see refs.~\cite{Kuti:2005xg, Teper:2009uf, Panero:2012qx, Lucini:2012gg} for reviews). In particular, the effective action that describes the dynamics of flux tubes joining static color sources may include non-trivial terms associated with the boundaries of the string world-sheet~\cite{Aharony:2010cx, Aharony:2010db}. A possible way to disentangle the effect of these boundary contributions to the effective string action consists in studying closed string world-sheets, like those describing the evolution of a torelon (a flux loop winding around a spatial direction of a finite system) over compactified Euclidean time: in that case the string world-sheet has the topology of a torus, and can be interpreted as a fluctuating interface. A closely related setup is relevant for the study of maximal 't~Hooft loops~\cite{'tHooft:1977hy, Srednicki:1980gb}.
The simplest lattice gauge theory, in which one can carry out a high-precision numerical Monte~Carlo study of interfaces, is the $\mathbb{Z}_2$ gauge model in three Euclidean dimensions, whose degrees of freedom are $\sigma_\mu(x)$ variables (taking values $\pm 1$) defined on the bonds between nearest-neighbor sites of a cubic lattice $\Lambda$ of spacing $a$. Following ref.~\cite{Caselle:2005vq}, we take the Euclidean action of the model to be the Wilson action~\cite{Wilson:1974sk}
\begin{equation}
S_{\mathbb{Z}_2} = - \beta_{\mbox{\tiny{g}}} \sum_{x \in \Lambda} \sum_{0 \le \mu < \nu \le 2} \sigma_\mu(x) \sigma_\nu(x+a\hat{\mu}) \sigma_\mu(x+a\hat{\nu}) \sigma_\nu(x)
\end{equation}
(where $\beta_{\mbox{\tiny{g}}}$ denotes the Wilson parameter for the $\mathbb{Z}_2$ gauge theory); it is trivial to verify that the model enjoys invariance under local $\mathbb{Z}_2$ transformations, that flip the sign of the $\sigma_\mu(x)$ link variables touching a given site. The partition function of the model reads
\begin{equation}
Z_{\mathbb{Z}_2} = \sum_{ \left\{ \sigma_\mu(x) = \pm 1 \right\} } \exp\left( -S_{\mathbb{Z}_2}\right).
\end{equation}
For small values of $\beta_{\mbox{\tiny{g}}}$ this model has a confining phase, which terminates at a second-order phase transition at $\beta_{\mbox{\tiny{g}}} = 0.76141346(6)$~\cite{Deng:2003wv}.
$Z_{\mathbb{Z}_2}$ can be exactly rewritten as the partition function of the three-dimensional Ising model~\cite{Kramers:1941kn, Wegner:1984qt}, whose degrees of freedom are $\mathbb{Z}_2$ variables $s_x$ defined on the sites of a dual cubic lattice $\widetilde{\Lambda}$, and whose Hamiltonian reads
\begin{equation}
H = - \beta \sum_{x \in \widetilde{\Lambda}} \sum_{0 \le \mu \le 2} J_{x,\mu} s_x s_{x+a\hat{\mu}},
\end{equation}
where $J_{x,\mu}=1$ corresponds to ferromagnetic couplings, while $J_{x,\mu}=-1$ would yield antiferromagnetic couplings, and $\beta$ and $\beta_{\mbox{\tiny{g}}}$ are related to each other by
\begin{equation}
\label{symmetric_beta_betagauge_relation}
\sinh(2\beta)\sinh(2\beta_{\mbox{\tiny{g}}})=1.
\end{equation}
Note that, since $\sinh(2x)$ is a strictly increasing function, eq.~(\ref{symmetric_beta_betagauge_relation}) implies that the confining regime of the gauge theory (at small $\beta_{\mbox{\tiny{g}}}$) corresponds to the ordered phase of the Ising model (at large $\beta$). Eq.~(\ref{symmetric_beta_betagauge_relation}) can be rewritten as
\begin{equation}
\label{beta_betagauge_relation}
\beta=-\frac{1}{2} \ln \tanh \beta_{\mbox{\tiny{g}}}.
\end{equation}
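As a quick cross-check of eq.~(\ref{beta_betagauge_relation}), a one-line Python implementation of the duality map reads:
\begin{verbatim}
import numpy as np

def beta_from_beta_gauge(beta_g):
    # duality map of eq. (beta_betagauge_relation)
    return -0.5 * np.log(np.tanh(beta_g))

# e.g. beta_from_beta_gauge(0.758264) = 0.223102..., i.e. the
# (beta_g, beta) pair used in the simulations discussed below
\end{verbatim}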
Note that, on a finite lattice (denoting the number of sites along the direction $\mu$ as $N_\mu$, and defining the site coordinates, in units of the lattice spacing, modulo $N_\mu$), one can impose periodic boundary conditions by setting all $J_{x,\mu}=1$. Antiperiodic boundary conditions in the direction $\mu$, instead, can be imposed by setting $J_{x,\mu}=-1$ only for the couplings between a spin in the last and a spin in the first lattice slice in direction $\mu$, i.e. $J_{x,\mu}=-1$ when $x_\mu/a=N_\mu-1$: in that case, a frustration is induced in the system, and an interface separating domains of opposite magnetization is formed. Finally, the choice $J_{x,\mu}=0$ for those bonds corresponds to decoupling the spins in the last lattice slice in direction $\mu$ from those in the first.
Thus, the ratio of the partition function with antiperiodic boundary conditions in one direction ($Z_{\mbox{\tiny{a}}}$) over the one with periodic boundary conditions in all directions ($Z_{\mbox{\tiny{p}}}$) is directly related to the expectation value of an interface separating domains of different magnetizations. More precisely, if $N_0$ denotes the lattice size (in units of the lattice spacing $a$) in the direction in which antiperiodic boundary conditions are imposed, one can introduce a first definition of the interface free energy $F^{(1)}$ from
\begin{equation}
\label{f1_defining_relation}
\frac{Z_{\mbox{\tiny{a}}}}{Z_{\mbox{\tiny{p}}}} = N_0 \exp\left( - F^{(1)} \right)
\end{equation}
(where the $N_0$ factor on the right-hand side accounts for the fact that the interface can be located anywhere along the direction in which antiperiodic boundary conditions are imposed), namely
\begin{equation}
\label{f1}
F^{(1)} = -\ln\left(\frac{Z_{\mbox{\tiny{a}}}}{Z_{\mbox{\tiny{p}}}}\right) + \ln N_0.
\end{equation}
Note that here $F^{(1)}$ is defined as a dimensionless quantity. For a system of sufficiently large transverse cross-section (i.e. when the sizes $L_1$ and $L_2$ in the directions normal to the one in which antiperiodic boundary conditions are imposed are large), $F^{(1)}$ is expected to be proportional to $L_1 L_2$, with a positive proportionality coefficient. As a consequence, the expectation value of large interfaces is exponentially suppressed with their area, and one can assume that only one ``large'' interface (i.e. one extending through a whole cross-section of the system) is formed in the presence of antiperiodic boundary conditions---whereas no large interfaces are formed in the system with periodic boundary conditions. For a finite-size system, however, one can also consider the case of multiple large interfaces (in particular: an odd number of them for antiperiodic boundary conditions in one direction, and an even number of them for periodic boundary conditions). As discussed in ref.~\cite{Caselle:2007yc}, under the assumption that these interfaces are indistinguishable, dilute and non-interacting, one can derive an improved definition of the dimensionless interface free energy:
\begin{equation}
\label{f2}
F^{(2)} = -\ln \arctanh \left( \frac{Z_{\mbox{\tiny{a}}}}{Z_{\mbox{\tiny{p}}}} \right) + \ln N_0.
\end{equation}
Note that $F^{(2)}$ tends to $F^{(1)}$ when $Z_{\mbox{\tiny{a}}} \ll Z_{\mbox{\tiny{p}}}$.
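Both definitions are straightforward to evaluate numerically; a minimal Python sketch (with \texttt{ratio} denoting a measured value of $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$) reads:
\begin{verbatim}
import numpy as np

def F1(ratio, N0):
    # eq. (f1): dimensionless interface free energy
    return -np.log(ratio) + np.log(N0)

def F2(ratio, N0):
    # eq. (f2): improved definition accounting for multiple
    # interfaces; since arctanh(x) ~ x for x -> 0, F2 tends
    # to F1 when Z_a << Z_p
    return -np.log(np.arctanh(ratio)) + np.log(N0)
\end{verbatim}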
These definitions show that the dimensionless interface free energy can be evaluated in a numerical simulation, by computing the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio. As discussed above, $Z_{\mbox{\tiny{a}}}$ and $Z_{\mbox{\tiny{p}}}$ can be interpreted as the partition functions of two systems that differ by the value of the $J_{x,\mu}$ couplings in one direction, that we have assumed to be the one labelled by $0$, on one slice (say, the one corresponding to $x_0=N_0-1$): $Z_{\mbox{\tiny{a}}}$ is the partition function of the Ising spin system in which those couplings are set to $-1$ (while $J_{x,\mu}=1$ for $\mu \neq 0$ or for $x_0 \neq N_0-1$), whereas $Z_{\mbox{\tiny{p}}}$ is the partition function of the Ising spin system in which all couplings are ferromagnetic ($J_{x,\mu}=1$ for all $\mu$ and for all $x$). One can thus evaluate the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio by applying Jarzynski's relation eq.~(\ref{Jarzynski}), identifying the $J$ couplings on the $\mu=0$ bonds from the sites in the $x_0=N_0-1$ slice of the system as the $\lambda$ parameters to be varied as a function of Monte~Carlo time $t$. In particular, one can let those couplings vary linearly with time, interpolating from $J=1$ at $t=t_{\mbox{\tiny{in}}}$ to $J=-1$ at $t=t_{\mbox{\tiny{fin}}}$,
\begin{equation}
\label{J_evolution}
\lambda\left(t_{\mbox{\tiny{in}}} + n\tau \right) = J_{(N_0-1,x_1,x_2),0} \left( t_{\mbox{\tiny{in}}} + n\tau \right)= 1 - \frac{2n}{N}, \qquad \mbox{with}~\tau=\frac{t_{\mbox{\tiny{fin}}}-t_{\mbox{\tiny{in}}}}{N},
\end{equation}
for $n \in \left\{ 0, 1, \dots , N \right\}$, or vice~versa. A similar application of Jarzynski's relation was used in the study of the Ising model in two dimensions~\cite{condmat0602580, Chatelain:2007ts, Hijar2007}.
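For illustration, the non-equilibrium protocol of eq.~(\ref{J_evolution}) can be sketched as follows in Python; here \texttt{spins} is an $(N_0, N_1, N_2)$ array of $\pm 1$ values drawn from the periodic-boundary equilibrium ensemble, and \texttt{sweep} is an assumed (hypothetical) update routine satisfying detailed balance at fixed seam coupling.
\begin{verbatim}
import numpy as np

def interface_trajectory_work(spins, N, beta, sweep, rng):
    work = 0.0
    for n in range(N):
        J_old = 1.0 - 2.0 * n / N
        J_new = 1.0 - 2.0 * (n + 1) / N
        # Since H already includes the factor beta, the work in units
        # of T is the variation of H at fixed spins, which involves
        # only the bonds connecting the x_0 = N_0 - 1 and x_0 = 0
        # slices (the ones whose coupling is being switched):
        seam = np.sum(spins[-1, :, :] * spins[0, :, :])
        work += -beta * (J_new - J_old) * seam
        sweep(spins, J_new, beta, rng)   # Markov update at new J
    return work
\end{verbatim}
The estimate of $F^{(1)}$ then follows from $-\ln \langle e^{-W} \rangle + \ln N_0$, with the average taken over $n_{\mbox{\tiny{r}}}$ such trajectories.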
It is worth remarking that parallelization (as well as other standard algorithmic techniques for spin systems, like multi-spin coding) is straightforward to implement in a computation of the free energy based on Jarzynski's relation.
We carried out a set of Monte~Carlo calculations of the interface free energy using this method (with $N=10^6$ and averaging over $n_{\mbox{\tiny{r}}}=10^3$ realizations of the discretized non-equilibrium transformation), at the parameters used in the study reported in ref.~\cite{Caselle:2007yc}, finding perfect agreement with the results of that study. We also observed that the exponential work averages corresponding to a ``direct'' (from $Z_{\mbox{\tiny{p}}}$ to $Z_{\mbox{\tiny{a}}}$) or a ``reverse'' (from $Z_{\mbox{\tiny{a}}}$ to $Z_{\mbox{\tiny{p}}}$) parameter switch converge to the same results, and that the latter are independent of the $\lambda(t)$ parametrization at large $N$.
This can be clearly seen in tables~\ref{tab:96_48_64}, \ref{tab:96_24_64} and \ref{tab:96_32_32}, where we report results for the interface free energies in the three-dimensional $\mathbb{Z}_2$ gauge model at $\beta_{\mbox{\tiny{g}}}=0.758264$, obtained from Monte~Carlo simulations of the Ising model at $\beta=0.223102$. These tables show that the free-energy estimates obtained from a ``direct'' and a ``reverse'' realization of the non-equilibrium transformation from $Z_{\mbox{\tiny{p}}}$ to $Z_{\mbox{\tiny{a}}}$ converge to the same value (which is consistent with earlier calculations carried out by different methods~\cite{Caselle:2007yc}), when the discretization of the parameter evolution involved in the non-equilibrium transformation is carried out with a sufficient number of points. The results obtained from simulations on a lattice of sizes $L_0=96a$, $L_1=24a$ and $L_2=64a$ are also displayed in fig.~\ref{fig:96_24_64}.
\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c||c|c|}
\hline
$N$ & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, direct & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, reverse \\
\hline
$10^{3}$ & $64 \cdot 320$ & $11.25(13)$ & $ 64 \cdot 80 $ & $12.19(11) $ \\
$5 \cdot 10^{3}$ & $64 \cdot 320$ & $11.23(8) $ & $ 64 \cdot 80 $ & $11.52(4) $ \\
$10^{4}$ & $64 \cdot 320$ & $11.33(5) $ & $ 64 \cdot 80 $ & $11.41(3) $ \\
$5 \cdot 10^{4}$ & $64 \cdot 80 $ & $11.25(3) $ & $ 64 \cdot 80 $ & $11.33(2) $ \\
$10^{5}$ & $64 \cdot 80 $ & $11.29(2) $ & $ 64 \cdot 80 $ & $11.32(1) $ \\
\hline
\end{tabular}
\caption{Results for the interface free energy defined in eq.~(\ref{f1}) from ``direct'' and ``reverse'' realizations of the non-equilibrium parameter transformation from periodic to antiperiodic boundary conditions in the $\mu=0$ direction, on a lattice with $N_0=96$, $N_1=48$, $N_2=64$, at $\beta=0.223102$ (i.e. at $\beta_{\mbox{\tiny{g}}}=0.758264$), and for a different number $N$ of intervals used to discretize the temporal evolution of $\lambda$. $n_{\mbox{\tiny{r}}}$ is the statistics used in the average over non-equilibrium processes. The interface free energy evaluated in ref.~\cite{Caselle:2007yc} for these parameters is $F^{(1)} = 11.3138(25)$.\label{tab:96_48_64}}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c||c|c|}
\hline
$N$ & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, direct & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, reverse \\
\hline
$10^{3}$ & $ 64 \cdot 320 $ & $ 6.27(20) $ & $ 64 \cdot 80 $ & $ 7.241(67) $ \\
$5 \cdot 10^{3}$ & $ 64 \cdot 320 $ & $ 6.794(20) $ & $ 64 \cdot 80 $ & $ 6.996(24) $ \\
$10^{4}$ & $ 64 \cdot 320 $ & $ 6.845(12) $ & $ 64 \cdot 80 $ & $ 6.941(17) $ \\
$5 \cdot 10^{4}$ & $ 64 \cdot 80 $ & $ 6.888(8) $ & $ 64 \cdot 80 $ & $ 6.893(8) $ \\
$ 10^{5}$ & $ 64 \cdot 80 $ & $ 6.881(6) $ & $ 64 \cdot 80 $ & $ 6.892(5) $ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_48_64}, but for $N_0=96$, $N_1=24$, $N_2=64$. The reference value of the interface free energy at these parameters, taken from ref.~\cite{Caselle:2007yc}, is $F^{(1)}=6.8887(20)$. The results listed in this table are also plotted in fig.~\ref{fig:96_24_64}.\label{tab:96_24_64}}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c||c|c|}
\hline
$N$ & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, direct & $n_{\mbox{\tiny{r}}}$ & $F^{(1)}$, reverse \\
\hline
$10^{3}$ & $64 \cdot 80$ & $5.68(7) $ & $64 \cdot 80$ & $6.32(6) $ \\
$10^{4}$ & $64 \cdot 80$ & $5.943(14)$ & $64 \cdot 80$ & $6.018(13)$ \\
$10^{5}$ & $64 \cdot 80$ & $5.979(4) $ & $64 \cdot 80$ & $5.982(4) $ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_48_64}, but for square interfaces with $N_0=96$, $N_1=N_2=32$.\label{tab:96_32_32}}
\end{table}
\begin{figure}[!htpb]
\centerline{\includegraphics[width=0.9\textwidth]{96_24_64.pdf}}
\caption{(Color online) Convergence of our results for the interface free energy---defined according to eq.~(\ref{f1})---obtained in direct (blue bullets) and reverse (red triangles) transformations from $Z_{\mbox{\tiny{p}}}$ to $Z_{\mbox{\tiny{a}}}$ in Monte~Carlo simulations at $\beta=0.223102$ (corresponding to $\beta_{\mbox{\tiny{g}}}=0.758264$) on a lattice of sizes $L_0=96a$, $L_1=24a$, $L_2=64a$. The green band denotes the value of the interface free energy determined in ref.~\cite{Caselle:2007yc} for these values of the parameters, and with a different method. $N$ is the number of intervals used to discretize the temporal evolution of the parameter by which the boundary conditions of the system in direction $\mu=0$ are switched from periodic to antiperiodic, according to eq.~(\ref{J_evolution}).}
\label{fig:96_24_64}
\end{figure}
It is interesting to study how our determination of the interface free energy using Jarzynski's relation compares with those based on different techniques. A state-of-the-art example of the latter was reported in ref.~\cite{1401.7870}, using the so-called ``ensemble-switch'' method. Carrying out some numerical tests, we found that the computational efficiency of the two algorithms is similar. In general, the structure of the ensemble-switch algorithm makes it more demanding in terms of CPU time. On the other hand, we observed that the algorithm based on Jarzynski's relation typically leads to results affected by somewhat larger intrinsic fluctuations. An important difference between the ensemble-switch algorithm and ours is that, in contrast to the former, the latter can be parallelized in a more straightforward way. For large $N$, our algorithm has an efficiency similar to that of the ensemble-switch algorithm, and in some cases even outperforms it.
Having verified the convergence of the interface free energy estimates from our algorithm based on Jarzynski's relation (for non-equilibrium transformations from one ensemble to the other, in both directions), we report some results from simulations on lattices of different sizes in tables~\ref{tab:96_xx_xx_beta0223102}, \ref{tab:96_xx_32_beta0223102}, \ref{tab:96_xx_48_beta0223102}, \ref{tab:96_xx_64_beta0223102}, \ref{tab:96_xx_80_beta0223102} and \ref{tab:96_xx_96_beta0223102} (from simulations at $\beta=0.223102$) and in table~\ref{tab:96_xx_64_beta0226102} (from simulations at $\beta=0.226102$).
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1=N_2$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$18$ & $4.61969(21)$ & $3.9800(9)$ \\
$20$ & $4.68520(24)$ & $4.2252(6)$ \\
$22$ & $4.79156(32)$ & $4.4785(5)$ \\
$24$ & $4.94312(34)$ & $4.7412(5)$ \\
$28$ & $5.3850(5)$ & $5.3143(5)$ \\
$32$ & $5.9785(6)$ & $5.9583(6)$ \\
$36$ & $6.6849(7)$ & $6.6801(7)$ \\
$40$ & $7.4819(9)$ & $7.4809(9)$ \\
$44$ & $8.3653(12)$ & $8.3652(11)$ \\
$48$ & $9.3318(15)$ & $9.3318(13)$ \\
\hline
\end{tabular}
\caption{Interface free energies---evaluated according to eq.~(\ref{f1}) and to eq.~(\ref{f2}), and respectively reported in the second and in the third column---obtained from simulations at $\beta=0.223102$ (corresponding to $\beta_{\mbox{\tiny{g}}}=0.758264$) on lattices of square cross-section with $N_0=96$ and for different values of $N_1=N_2$ (first column). For a comparison, the corresponding values obtained in ref.~\cite{Caselle:2007yc} at the same $\beta$ and for $N_0=96$, $N_1=N_2=40$, are $F^{(1)}=F^{(2)}=7.481(1)$.}
\label{tab:96_xx_xx_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$22$ & $5.1677(4)$ & $5.0520(5)$ \\
$24$ & $5.3257(5)$ & $5.2450(5)$ \\
$26$ & $5.4868(5)$ & $5.4301(6)$ \\
$28$ & $5.6503(6)$ & $5.6103(7)$ \\
\hline
\end{tabular}
\caption{Interface free energies evaluated according to eq.~(\ref{f1}) and to eq.~(\ref{f2}) (second and third column) obtained from simulations at $\beta=0.223102$ (i.e. for $\beta_{\mbox{\tiny{g}}}=0.758264$) on lattices with $N_0=96$ and rectangular cross-section, for different values of $N_1$ (first column) at $N_2=32$.}
\label{tab:96_xx_32_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$22$ & $5.8304(5)$ & $5.8030(6)$ \\
$24$ & $6.1238(5)$ & $6.1088(6)$ \\
$28$ & $6.6984(8)$ & $6.6937(8)$ \\
$32$ & $7.2520(9)$ & $7.2504(9)$ \\
$36$ & $7.7876(11)$ & $7.7871(11)$ \\
$40$ & $8.3142(15)$ & $8.3140(15)$ \\
$44$ & $8.8255(16)$ & $8.8255(16)$ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_xx_32_beta0223102}, but from simulations on lattices with $N_0=96$ and $N_2=48$.}
\label{tab:96_xx_48_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$18$ & $5.6068(6)$ & $5.5629(5)$ \\
$20$ & $6.0369(6)$ & $6.0190(6)$ \\
$22$ & $6.4676(7)$ & $6.4601(7)$ \\
$24$ & $6.8868(8)$ & $6.8836(8)$ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_xx_32_beta0223102}, but from simulations on lattices with $N_0=96$ and $N_2=64$. The results obtained in ref.~\cite{Caselle:2007yc}
at this $\beta$ and for $N_0=96$, $N_1=24$, $N_2=64$, are $F^{(1)}=6.889(2)$ and $F^{(2)}=6.886(2)$.}
\label{tab:96_xx_64_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$18$ & $5.9318(6)$ & $5.9095(6) $ \\
$20$ & $6.5018(7)$ & $6.4948(7) $ \\
$22$ & $7.0654(8)$ & $7.0631(8) $ \\
$24$ & $7.6140(9)$ & $7.6132(9) $ \\
$26$ & $8.1412(11)$ & $8.1410(11)$ \\
$28$ & $8.6550(15)$ & $8.6549(13)$ \\
$32$ & $9.6341(17)$ & $9.6341(17)$ \\
$36$ & $10.5758(20)$ & $10.5758(20)$ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_xx_32_beta0223102}, but from simulations on lattices with $N_0=96$ and $N_2=80$.}
\label{tab:96_xx_80_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
\hline
$N_1$ & $F^{(1)} $ & $F^{(2)}$ \\
\hline
$18$ & $6.2314(7)$ & $6.2193(7)$ \\
$20$ & $6.9412(8)$ & $6.9383(8)$ \\
$22$ & $7.6392(9)$ & $7.6385(9)$ \\
$24$ & $8.3137(12)$ & $8.3135(10)$ \\
$26$ & $8.9583(12)$ & $8.9583(12)$ \\
$28$ & $9.5840(17)$ & $9.5840(17)$ \\
$32$ & $10.7834(20)$ & $10.7834(20)$ \\
\hline
\end{tabular}
\caption{Same as in table~\ref{tab:96_xx_32_beta0223102}, but from simulations on lattices with $N_0=N_2=96$.}
\label{tab:96_xx_96_beta0223102}
\end{table}
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|}
\hline
$N_1$ & $F^{(2)}$ \\
\hline
$18$ & $13.9858(24)$ \\
$20$ & $15.4881(29)$ \\
$22$ & $16.9667(31)$ \\
$24$ & $18.4127(34)$ \\
\hline
\end{tabular}
\caption{Interface free energy (second column), defined according to eq.~(\ref{f2}), from simulations at $\beta=0.226102$ (corresponding to $\beta_{\mbox{\tiny{g}}}=0.751805$) on lattices with $N_0=96$ and $N_2=64$, for various values of $N_1$ (first column). The result reported in ref.~\cite{Caselle:2007yc} at this $\beta$, for $N_0=96$, $N_1=24$, and $N_2=64$, is $F^{(2)}=18.4131(26)$.}
\label{tab:96_xx_64_beta0226102}
\end{table}
These high-precision results can be directly compared with an effective theory, describing the transverse fluctuations of the interface at low energies. In direct analogy with the effective description of the world-sheets associated with fluctuating, string-like flux tubes in confining gauge theories~\cite{Kuti:2005xg}, or with solitonic strings in Abelian Higgs models~\cite{Abrikosov:1956sx, Nielsen:1973cs}, this effective theory must be consistent with the Lorentz--Poincar\'e symmetries of the space in which the interface is defined~\cite{Aharony:2009gg} (see also ref.~\cite{Meyer:2006qx}). This condition puts strong constraints on the coefficients of the possible terms appearing in the effective action of the theory, making the latter very predictive: in particular, one finds that, on sufficiently long distances, the dynamics can be approximated very well by assuming that the possible ``configurations'' of the fluctuating interface occur with a Boltzmann weight $\exp(-S_{\mbox{\tiny{eff}}})$, in which $S_{\mbox{\tiny{eff}}}$ is proportional to the \emph{area} of the interface itself, i.e. the effective action tends to the Nambu--Got\={o} action~\cite{Goto:1971ce, Nambu:1974zg}
\begin{equation}
\label{Nambu-Goto_action}
S_{\mbox{\tiny{eff}}} \simeq \sigma \int {\rm{d}}^2 \xi \sqrt{\det g_{\alpha\beta}},
\end{equation}
where $\xi$ are coordinates parametrizing the interface surface, $g_{\alpha\beta}$ is the metric induced by the embedding of the interface in the target space, and $\sigma$ can be thought of as the tension associated with the interface, in the classical limit. As discussed in ref.~\cite{Aharony:2009gg}, the actual form of the effective action deviates from the expression on the right-hand side of eq.~(\ref{Nambu-Goto_action}) by terms which, for the problem of interest (a closed interface of linear size denoted as $L$, in a three-dimensional space), scale at least with the seventh inverse power of $L$.
The partition function associated with an interface described by the Nambu--Got\={o} effective action in eq.~(\ref{Nambu-Goto_action}) has been calculated analytically in ref.~\cite{Billo:2006zg}: for a system in $D$ spacetime dimensions, this computation predicts
\begin{equation}
\label{interface_partition_function}
\frac{Z_{\mbox{\tiny{a}}}}{Z_{\mbox{\tiny{p}}}} = 2 \mathcal{C} \left( \frac{\sigma}{2\pi} \right)^{\frac{D-2}{2}}\, V_{\mbox{\tiny{T}}} \, \sqrt{\sigma L_1 L_2 u}
\sum_{k=0}^\infty \sum_{k'=0}^\infty c_k c_{k'}
\left(\frac{\mathcal{E}_{k,k'}}{u}\right)^{\frac{D-1}{2}}\, K_{\frac{D-1}{2}} \left(\sigma L_1 L_2 \mathcal{E}_{k,k'}\right) = \mathcal{C} \mathcal{I},
\end{equation}
where $u=L_2/L_1$, $V_{\mbox{\tiny{T}}}$ denotes the ``volume'' of the system along the dimensions transverse to the interface (so $V_{\mbox{\tiny{T}}}=L_0$ in our case), $K_\nu(z)$ denotes the modified Bessel function of the second kind of order $\nu$ and argument $z$, while $c_k$ and $c_{k'}$ are coefficients appearing in the expansion of an inverse power of Dedekind's $\eta$ function:
\begin{equation}
\frac{1}{\eta\left( i u \right)^{D-2}} = \sum_{k=0}^{\infty}c_k q^{k-\frac{D-2}{24}},\qquad \mbox{with}\;\; q=\exp\left( -2\pi u \right)
\end{equation}
(so that, for the $D=3$ case, $c_k$ equals the number of partitions of $k$) and
\begin{equation}
\label{string_energies}
\mathcal{E}_{k,k'} = \sqrt{ 1 + \frac{4\pi\, u}{\sigma L_1 L_2 }\left(k+k'-\frac{D-2}{12}\right) + \left[\frac{2\pi u (k-k')}{\sigma L_1 L_2 }\right]^2}.
\end{equation}
Finally, $\mathcal{C}$ is an undetermined, non-universal multiplicative constant, which is not predicted by the effective bosonic-string model (and, following the notations of ref.~\cite{Billo:2006zg}, in the last term of eq.~(\ref{interface_partition_function}) we define the ratio of $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ over $\mathcal{C}$ as $\mathcal{I}$). Similarly, the model does not predict the value of the multiplicative constant involved in the partition function associated with one (or more) static color source(s): see refs.~\cite{Mykkanen:2012ri, Mykkanen:2012dv} for a discussion. These aspects are related to the fact that the bosonic-string model is a low-energy effective theory, which cannot capture non-universal terms whose origin involves ultraviolet dynamics.
The accuracy of this effective theory depends on the dimensionless parameter $1/(\sigma L_1 L_2)$: when this parameter is small, the bosonic-string model is expected to provide a good description of the interface free energy.
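Assuming the SciPy library for the modified Bessel functions, the quantity $\mathcal{I}$ appearing in eq.~(\ref{interface_partition_function}) can be evaluated for $D=3$ (where the $c_k$ coefficients are partition numbers) by a short script like the following sketch, in which all lengths are expressed in lattice units:
\begin{verbatim}
import numpy as np
from scipy.special import kv

def partition_numbers(kmax):
    # c_k = number of partitions of k (coefficients of the 1/eta
    # expansion for D = 3), via the standard counting recursion
    c = np.zeros(kmax + 1, dtype=np.int64)
    c[0] = 1
    for part in range(1, kmax + 1):
        for total in range(part, kmax + 1):
            c[total] += c[total - part]
    return c

def nambu_goto_I(sigma_a2, N0, N1, N2, kmax=30):
    # evaluates I in eq. (interface_partition_function) for D = 3,
    # with sigma_a2 = sigma * a^2 and V_T = N0 (lattice units)
    u = N2 / N1
    S = sigma_a2 * N1 * N2        # sigma * L_1 * L_2
    c = partition_numbers(kmax)
    total = 0.0
    for k in range(kmax + 1):
        for kp in range(kmax + 1):
            E = np.sqrt(1.0
                        + 4.0 * np.pi * u * (k + kp - 1.0 / 12.0) / S
                        + (2.0 * np.pi * u * (k - kp) / S) ** 2)
            total += c[k] * c[kp] * (E / u) * kv(1, S * E)
    return (2.0 * np.sqrt(sigma_a2 / (2.0 * np.pi))
            * N0 * np.sqrt(S * u) * total)
\end{verbatim}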
Given that the algorithm based on Jarzynski's relation allows one to reach high numerical precision, it is particularly interesting to compare our results for the interface free energy with the predictions from the effective string model, trying to identify deviations from the terms predicted by a Nambu--Got\={o} action. To this purpose, we analyzed the results obtained at $\beta=0.223102$ expressing all dimensionful quantities in units of the interface tension $\sigma$: as determined in ref.~\cite{Caselle:2007yc}, at this value of $\beta$, one has $\sigma a^2 = 0.0026083(6)(7)$. Note that this implies that the lattice spacing is quite small, so discretization effects should be under control.
The first step in this analysis consists in subtracting the Nambu--Got\={o} prediction for the free energy, obtained from the logarithm of the r.h.s. of the first equality in eq.~(\ref{interface_partition_function}), from our data. Since eq.~(\ref{interface_partition_function}) predicts the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio only up to the undetermined multiplicative constant $\mathcal{C}$, it predicts the interface free energy only up to an additive term $q=-\ln\mathcal{C}$. Like $\mathcal{C}$, $q$ depends only on the ultraviolet details of the theory, namely it can depend on the lattice spacing $a$ (or, equivalently, on $\beta$), but not on the lattice sizes. At each $\beta$, the value of $q$ can be fixed, by observing that the corrections to the Nambu--Got\={o} prediction are expected to become negligible for sufficiently large interfaces, i.e. for $\sigma L_1 L_2 \gg 1$. To this purpose, for each combination of values of $\sigma a^2$ and lattice sizes, we define our numerical estimate of the free energy ($F_{\mbox{\tiny{num}}}$) according to eq.~(\ref{f1}), using the results of our Monte~Carlo simulations for the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio. Then, for the same combination of $\sigma a^2$ and lattice sizes, we compute the quantity $\mathcal{I}$ appearing in eq.~(\ref{interface_partition_function}), and we define a quantity (denoted as $F_{\mathcal{I}}$) using $\mathcal{I}$ in place of the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio in eq.~(\ref{f2}). It is easy to see that $q$ can be obtained from
\begin{equation}
\label{q_definition}
q = \lim_{\sigma L_1 L_2 \to \infty} \left( F_{\mbox{\tiny{num}}} - F_{\mathcal{I}} \right)
\end{equation}
(note that, when $ \sigma L_1 L_2$ is large, the $Z_{\mbox{\tiny{a}}}/Z_{\mbox{\tiny{p}}}$ ratio tends to zero, and, as discussed above, the free-energy definitions given by eqs.~(\ref{f1}) and (\ref{f2}) become equivalent). For every value of $\sigma a^2$, when $\sigma L_1 L_2$ becomes large, our results for the $F_{\mbox{\tiny{num}}}-F_{\mathcal{I}}$ difference indeed tend to a constant, whose fitted value is $q=0.9168(5)$.
Then, we study the deviations of our Monte~Carlo results from the Nambu--Got\={o} predictions (for each value of $\beta$, and for each combination of lattice sizes), by defining the difference
\begin{equation}
\label{y_definition}
y = F_{\mbox{\tiny{num}}} - F_{\mathcal{I}} - q.
\end{equation}
This quantity depends on the interface sizes $L_1$ and $L_2$, and encodes the contributions to the free energy from terms appearing in the effective string action that do not arise in a low-energy expansion of the Nambu--Got\={o} action (and/or from possible systematic effects, related for example to the finiteness of the lattice spacing; however, previous studies indicate that the latter should be very modest for $\beta$ values in the range under consideration here---see, for example, ref.~\cite{Caselle:2005vq} and references therein). For each set at fixed $L_2 > 32a$, these data can be successfully fitted to the form expected for the leading and next-to-leading corrections to the Nambu--Got\={o} model, which, according to the discussion in ref.~\cite{Aharony:2013ipa}, scale with the seventh and with the ninth inverse power of the $L_1$ length scale:\footnote{The fact that, in three spacetime dimensions, the leading correction to the Nambu--Got\={o} model scales at least with the seventh inverse power of the length scale of the system has been recently observed also in lattice simulations of $\mathrm{SU}(N)$ gauge theories~\cite{Athenodorou:2016kpd}.}
\begin{equation}
\label{corrections_to_NG}
y = \frac{1}{\left( L_1 \sqrt{\sigma} \right)^{7}} \left[ k_{-7} + \frac{k_{-9}}{\left( L_1 \sqrt{\sigma} \right)^{2}} \right].
\end{equation}
Note that, for an interface in $D=3$ dimensions, the effective-string arguments indicate that additional subleading corrections, not included in the expression on the right-hand side of eq.~(\ref{corrections_to_NG}), are expected to be $O\left( (L_1 \sqrt{\sigma} )^{-11} \right)$, i.e. to be suppressed by at least one further factor of $1/(L_1^2 \sigma)$. The results of these fits are reported in table~\ref{tab:corrections_to_NG_fit}, where $\chi^2_{\tiny\mbox{red}}$ denotes the reduced $\chi^2$ obtained in the fit, i.e. the ratio of the $\chi^2$ over the number of degrees of freedom.
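A fit of this form can be carried out with standard tools; a minimal sketch using SciPy (with \texttt{x}, \texttt{y} and \texttt{y\_err} denoting hypothetical arrays of the $L_1\sqrt{\sigma}$ values, of the corresponding $y$ values, and of their uncertainties) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def correction(x, k7, k9):
    # eq. (corrections_to_NG), with x = L_1 * sqrt(sigma)
    return (k7 + k9 / x**2) / x**7

# (k7, k9), cov = curve_fit(correction, x, y,
#                           sigma=y_err, absolute_sigma=True)
\end{verbatim}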
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$N_2$ & $k_{-7}$ & $k_{-9}$ & $\chi^2_{\tiny\mbox{red}}$ \\
\hline
$48$ & $0.389(1)$ & $0.03(3)$ & $1.09$ \\
$64$ & $0.432(2)$ & $0.22(3)$ & $1.06$ \\
$80$ & $0.593(2)$ & $0.25(3)$ & $1.47$ \\
$96$ & $0.650(5)$ & $0.410(7)$ & $0.07$ \\
\hline
\end{tabular}
\caption{Results of our fits of the difference between our numerical results for the interface free energy and the corresponding Nambu--Got\={o} prediction, as defined in the text, to eq.~(\ref{corrections_to_NG}).}
\label{tab:corrections_to_NG_fit}
\end{table}
We also observed that a single inverse-power (of $L_1 \sqrt{\sigma}$) correction is not sufficient to describe our data. When we tried to set $k_{-9}$ to zero, leaving $k_{-7}$ as the only parameter to be fitted in eq.~(\ref{corrections_to_NG}), we always obtained values of the $\chi^2$ per degree of freedom much larger than $1$ (e.g. $\chi^2_{\tiny\mbox{red}} \simeq 24$ for $N_2=80$, and $\chi^2_{\tiny\mbox{red}} \simeq 61.5$ for the $N_2=96$ case), indicating that a term of order $\left( L_1 \sqrt{\sigma} \right)^{-7}$ alone does not fit our numerical results for $y$. In addition, we also observed that, if $k_{-9}$ is set to zero, but the exponent of $L_1 \sqrt{\sigma}$ for the other term (besides its coefficient) is treated as a fit parameter, i.e. if we make the \emph{Ansatz}
\begin{equation}
\label{single-power_correction}
y = \frac{k}{\left( L_1 \sqrt{\sigma} \right)^{\alpha}}
\end{equation}
with $k$ and $\alpha$ as fit parameters, the fits yield values of $\alpha$ that are incompatible across the data sets corresponding to different $N_2$, and that increase with $N_2$, ranging from $7.10(8)$ (for $N_2=48$), to $7.54(7)$ (for $N_2=64$), to $7.44(5)$ (for the data set at $N_2=80$), to $7.60(2)$ (for $N_2=96$). While the value of $\alpha$ obtained from the data set at $N_2=48$ may be compatible with $7$, the others, clearly, are not: the results at $N_2=64$ and $N_2=80$ may be compatible with a half-integer exponent $15/2$ (for which, however, there is no theoretical justification), but this is not the case for those at $N_2=96$. We also observe that the values of the reduced $\chi^2$ for some of these fits are significantly larger than $1$ (for example, $\chi^2_{\tiny\mbox{red}}$ is around $1.7$ for the data sets corresponding to $N_2=48$ and to $N_2=80$). This led us to conclude that our numerical results for the deviations from the Nambu--Got\={o} model cannot be fitted to a functional form including a correction given by a single inverse power of $L_1 \sqrt{\sigma}$ of the form given in eq.~(\ref{single-power_correction}).
These results support the expectations from the effective string model discussed in ref.~\cite{Aharony:2013ipa}; however, a puzzle remains: the values of $k_{-7}$ and $k_{-9}$ extracted from the fits have a residual dependence on $L_2$, whose origin is not clear. This could indicate that, as already pointed out in ref.~\cite{Caselle:2010pf}, the effective action describing the low-energy dynamics of this gauge theory includes additional terms. One possible such term could be the one describing the string ``stiffness''~\cite{Polyakov:1986cs, Kleinert:1986bk, Braaten:1986bz, German:1989vk, Klassen:1990dx}. We postpone a detailed analysis of this problem to a future, dedicated study.
We conclude this section with an important remark: even though we have calculated the interface free energy of the $\mathbb{Z}_2$ lattice gauge theory in three dimensions by mapping it to the Ising model, this was not a necessary condition for the application of Jarzynski's relation. An explicit example of the application of Jarzynski's relation directly in a lattice gauge theory is presented in the following section~\ref{sec:equation_of_state}.
\section{Benchmark study II: The equation of state}
\label{sec:equation_of_state}
As another example of application of Jarzynski's relation eq.~(\ref{Jarzynski}) in lattice gauge theory, we discuss the calculation of the pressure in $\mathrm{SU}(2)$ Yang--Mills theory in $D=4$ spacetime dimensions. As is well-known, this gauge theory has a second-order deconfinement phase transition at a finite critical temperature $T_c$~\cite{Fingberg:1992ju, Engels:1994xj, Lucini:2005vg}, which, when expressed in physical units, is approximately $300$~MeV~\cite{Teper:1998kw, Lucini:2002ku, Lucini:2003zr, Lucini:2005vg}. As usual, the main quantities describing the thermal equilibrium properties of this theory are the pressure ($p$), the energy density ($\epsilon=E/V$) and the entropy density ($s=S/V$); these observables are related to each other by standard thermodynamic identities:
\begin{equation}
\epsilon = (D-1)p + \Delta, \qquad s =\frac{Dp + \Delta}{T},
\end{equation}
where $\Delta$ is the trace of the energy-momentum tensor, which, in turn, satisfies the relation
\begin{equation}
\Delta = T^{D+1} \frac{\partial}{\partial T} \left( \frac{p}{T^D} \right).
\end{equation}
As we mentioned in section~\ref{sec:introduction}, in the thermodynamic limit $V\to \infty$, the pressure equals minus the free-energy density, $p=-f=-F/V$, and this opens up the possibility of evaluating it using Jarzynski's relation. More precisely, we focus our attention on determining how the pressure depends on the temperature in the confining phase, i.e. at temperatures $T< T_{\mbox{\tiny{c}}}$, assuming that the pressure vanishes for $T=0$. As was recently shown in ref.~\cite{Caselle:2015tza}, the equilibrium-thermodynamics properties in the confining phase of this theory can be modelled very well in terms of a gas of free glueballs, using the masses of the lightest states known from previous lattice studies~\cite{Teper:1998kw} and assuming that the spectral density of heavier states has an exponential form~\cite{Hagedorn:1965st} (see also refs.~\cite{Buisseret:2011fq, Arriola:2014bfa} for discussions on related topics, and ref.~\cite{Caselle:2011fy} for an analogous lattice study in $2+1$ dimensions). Similar results have also been obtained in lattice studies of $\mathrm{SU}(3)$ Yang--Mills theory~\cite{Meyer:2009tq, Borsanyi:2012ve} and may be of direct phenomenological relevance even for real-world QCD~\cite{Stoecker:2015zea, Stocker:2015nka}.
It is worth remarking that the lattice determination of the equation of state in the confining phase of $\mathrm{SU}(N)$ Yang--Mills theory is not a computationally trivial problem: at low temperatures, the thermodynamic quantities mentioned above take values that are significantly smaller than in the deconfined phase ($T > T_{\mbox{\tiny{c}}}$). In the hadron-gas picture, the exponential suppression of these thermodynamic quantities for $T \ll T_{\mbox{\tiny{c}}}$ is a direct consequence of confinement, i.e. of the existence of a finite mass gap---a relatively large one: when converted to physical units, the mass of the lightest glueball is around $1.6$~GeV for both $\mathrm{SU}(2)$~\cite{Teper:1998kw, Lucini:2001ej, Lucini:2004my} and $\mathrm{SU}(3)$~\cite{Morningstar:1999rf} Yang--Mills theories.
Here, we focus on $\mathrm{SU}(2)$ Yang--Mills theory in four spacetime dimensions, and, following the notations of ref.~\cite{Caselle:2015tza}, we discretize it on an isotropic hypercubic lattice of spacing $a$ by introducing Wilson's gauge action~\cite{Wilson:1974sk}:
\begin{equation}
\label{Wilson_action}
S_{\mathrm{SU}(2)} = -\frac{2}{g^2} \sum_{x \in \Lambda} \sum_{0 \le \mu < \nu \le 3} \Tr U_{\mu\nu} (x),
\end{equation}
where $g$ is the coupling, related to $\beta_{\mbox{\tiny{g}}}$ via $\beta_{\mbox{\tiny{g}}}=4/g^2$, and $U_{\mu\nu} (x)$ denotes the plaquette based at the site $x$ and lying in the oriented $(\mu,\nu)$ plane:
\begin{equation}
\label{plaquette}
U_{\mu\nu} (x) = U_\mu (x) U_\nu \left(x+a\hat{\mu}\right) U_{\mu}^\dagger \left(x+a\hat{\nu}\right) U_{\nu}^\dagger (x),
\end{equation}
where $\hat{\mu}$ and $\hat{\nu}$ denote unit vectors in the positive $\mu$ and $\nu$ directions, respectively. In the following, we assume that the compactified Euclidean-time direction is the $\mu=0$ direction, so that $T=1/(aN_0)$, while we take the lattice sizes in the three other directions to be equal ($N_1=N_2=N_3$, which we denote by $N_s$) and sufficiently large to avoid finite-volume effects. Note that, in order to control the temperature of the system, we used the relation between $a$ and the inverse coupling $\beta_{\mbox{\tiny{g}}}$ determined in ref.~\cite{Caselle:2015tza}, and discussed in the next paragraph, so that we were able to change the temperature $T$ simply by varying $\beta_{\mbox{\tiny{g}}}$ at fixed $N_0$. We denote the normalized expectation value of the average of the trace of the plaquette at a generic temperature $T$ as $\langle U_{\Box}\rangle_T$: this quantity is averaged over all sites of the lattice and over all of the distinct $(\mu,\nu)$ planes, and is normalized to $1$ by dividing the trace by the number of color charges, i.e. by $2$ for the $\mathrm{SU}(2)$ gauge theory.
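As a concrete illustration of eq.~(\ref{plaquette}), the following schematic Python sketch constructs a plaquette from random $\mathrm{SU}(2)$ link variables on a small periodic lattice and evaluates its normalized trace; this is an illustration only, not the production code used for the simulations discussed here.
\begin{verbatim}
# Schematic construction of the plaquette U_{mu nu}(x) of eq. (plaquette)
# and of its trace, normalized by the number of colors N = 2.
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    # U = a0*1 + i*(a1,a2,a3).sigma with a a unit 4-vector, so det U = 1.
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j*a[3],  a[2] + 1j*a[1]],
                     [-a[2] + 1j*a[1], a[0] - 1j*a[3]]])

L = 4   # sites per direction (illustrative)
U = np.empty((L, L, L, L, 4, 2, 2), dtype=complex)
for idx in np.ndindex(L, L, L, L, 4):
    U[idx] = random_su2()

def shift(x, mu):
    y = list(x)
    y[mu] = (y[mu] + 1) % L          # periodic boundary conditions
    return tuple(y)

def plaquette(x, mu, nu):
    # U_mu(x) U_nu(x+mu) U_mu(x+nu)^dagger U_nu(x)^dagger
    return (U[x][mu] @ U[shift(x, mu)][nu]
            @ U[shift(x, nu)][mu].conj().T @ U[x][nu].conj().T)

x = (0, 0, 0, 0)
print(np.trace(plaquette(x, 0, 1)).real / 2.0)
\end{verbatim}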
In order to ``set the scale'' of the lattice theory (i.e. to define a physical value for the lattice spacing $a$, as a function of $\beta_{\mbox{\tiny{g}}}$), we use the same non-perturbative procedure as in ref.~\cite{Caselle:2015tza}, based on the determination of the value of the force between static fundamental color sources at asymptotically large distances (i.e. the string tension of the theory) in lattice units, $\sigma a^2$: in the $2.25 \le \beta_{\mbox{\tiny{g}}} \le 2.6$ range, the relation between $a$ and $\beta_{\mbox{\tiny{g}}}$ is parametrized as
\begin{equation}
\label{betaform}
\ln \left( \sigma a^2 \right) = \sum_{j=0}^{3} h_j \left( \beta_{\mbox{\tiny{g}}} - \beta_{\mbox{\tiny{g}}}^{\mbox{\tiny{ref}}} \right)^j,
\end{equation}
where $\beta_{\mbox{\tiny{g}}}^{\mbox{\tiny{ref}}}=2.4$, while $h_0 = -2.68 $, $h_1 = -6.82 $, $h_2 = -1.90 $ and $h_3 = 9.96$. In addition, we mention that, for this gauge theory, the value of the ratio of the deconfinement critical temperature over the square root of the string tension is $T_c/\sqrt{\sigma} = 0.7091(36)$~\cite{Lucini:2003zr}.
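For convenience, the scale-setting relation of eq.~(\ref{betaform}) can be encoded in a few lines of Python; the sketch below (with function names of our own choosing) maps a value of $\beta_{\mbox{\tiny{g}}}$ to $a\sqrt{\sigma}$ and then to the corresponding $T/T_c$ at fixed $N_0$.
\begin{verbatim}
# Scale setting via eq. (betaform), valid for 2.25 <= beta_g <= 2.6.
import numpy as np

h = [-2.68, -6.82, -1.90, 9.96]
beta_ref = 2.4
Tc_over_sqrt_sigma = 0.7091          # ratio quoted in the text

def sqrt_sigma_a(beta_g):
    log_sigma_a2 = sum(h[j] * (beta_g - beta_ref)**j for j in range(4))
    return np.exp(0.5 * log_sigma_a2)

def T_over_Tc(beta_g, N0=6):
    # T/sqrt(sigma) = 1/(N0 * a*sqrt(sigma)), then divide by T_c/sqrt(sigma)
    return 1.0 / (N0 * sqrt_sigma_a(beta_g)) / Tc_over_sqrt_sigma

for beta_g in (2.4058, 2.4108, 2.431):
    print(f"beta_g = {beta_g}:  T/T_c = {T_over_Tc(beta_g):.3f}")
\end{verbatim}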
A popular technique to compute the pressure $p$ (as a function of the temperature $T$, and with respect to the pressure at a conventional reference temperature: usually one defines $p$ as the difference with respect to the value it takes at $T=0$, which can be assumed to vanish) in lattice gauge theory is the integral method introduced in ref.~\cite{Engels:1990vr}. Here we describe it for the pure Yang--Mills theory. The method is based on the fact that, as we mentioned above, in the thermodynamic limit the pressure equals minus the free energy density; in turn, this quantity is proportional to the logarithm of the partition function, which can be computed by integrating its derivative with respect to the Wilson parameter $\beta_{\mbox{\tiny{g}}}$. At $T=0$ the pressure vanishes; hence one could think of defining it as
\begin{equation}
\label{unphysical_pressure}
p^{\mbox{\tiny{unphys}}} = - f = \frac{T}{V} \ln Z = \frac{1}{a^4 N_0 N_s^3} \int_{\beta_{\mbox{\tiny{g}}}^{(0)}}^{\beta_{\mbox{\tiny{g}}}^{(T)}} {\rm{d}} \beta_{\mbox{\tiny{g}}} \frac{\partial \ln Z}{\partial \beta_{\mbox{\tiny{g}}}},
\end{equation}
where the upper integration extremum $\beta_{\mbox{\tiny{g}}}^{(T)}$ is the value of Wilson's parameter at which the lattice spacing $a$ equals $1 / \left( N_0 T \right)$, while the lower integration extremum $\beta_{\mbox{\tiny{g}}}^{(0)}$ is a value of Wilson's parameter corresponding to a lattice spacing $a^{(0)}$ large enough that the temperature $1/\left(a^{(0)}N_0\right)$ is close to zero. Using the fact that the logarithmic derivative of $Z$ with respect to $\beta_{\mbox{\tiny{g}}}$ equals the plaquette expectation value times the number of plaquettes (which is $6 N_0 N_s^3$), eq.~(\ref{unphysical_pressure}) reduces to
\begin{equation}
\label{unphysical_pressure_bis}
p^{\mbox{\tiny{unphys}}} = \frac{6}{a^4} \int_{\beta_{\mbox{\tiny{g}}}^{(0)}}^{\beta_{\mbox{\tiny{g}}}^{(T)}} {\rm{d}} \beta_{\mbox{\tiny{g}}} \langle U_{\Box} \rangle_{\mathcal{T}(\beta_{\mbox{\tiny{g}}})},
\end{equation}
where $\mathcal{T}(\beta_{\mbox{\tiny{g}}})=1/\left[N_0 a(\beta_{\mbox{\tiny{g}}})\right]$ is the temperature of the theory defined on a lattice with $N_0$ sites along the Euclidean-time direction and at Wilson parameter $\beta_{\mbox{\tiny{g}}}$, corresponding to a lattice spacing $a(\beta_{\mbox{\tiny{g}}})$.
However, a definition of the pressure according to eq.~(\ref{unphysical_pressure_bis}) is actually unphysical (whence the $^{\mbox{\tiny{unphys}}}$ superscript), because it diverges in the continuum limit. This is easy to see, by inspection of eq.~(\ref{unphysical_pressure_bis}): in the $a \to 0$ limit, the integrand appearing on the right-hand side is a quantity that remains $O(1)$ in the whole integration domain, and the integral is multiplied by the divergent factor $6/a^4$. This unphysical ultraviolet divergence can be removed by subtracting the plaquette expectation value at $T=0$ (and at the same $\beta_{\mbox{\tiny{g}}}$) from the integrand on the right-hand side of eq.~(\ref{unphysical_pressure_bis}). This leads to the correct physical definition of the pressure according to the integral method:
\begin{equation}
\label{integral_method}
p = \frac{6}{a^4} \int_{\beta_{\mbox{\tiny{g}}}^{(0)}}^{\beta_{\mbox{\tiny{g}}}^{(T)}} {\rm{d}} \beta_{\mbox{\tiny{g}}} \left[ \langle U_{\Box} \rangle_{\mathcal{T}(\beta_{\mbox{\tiny{g}}})} - \langle U_{\Box} \rangle_0 \right],
\end{equation}
where $\langle U_{\Box}\rangle_0$ is evaluated from simulations on a symmetric lattice of sizes $N_s^4$ at the same value of $\beta_{\mbox{\tiny{g}}}$ (i.e. at the same lattice spacing) as $\langle U_{\Box}\rangle_{\mathcal{T}(\beta_{\mbox{\tiny{g}}})}$.
Accordingly, the dimensionless $p(T)/T^4$ ratio can be evaluated as
\begin{equation}
\label{lattice_pressure}
\frac{p}{T^4} = 6 N_0^4 \int_{\beta_{\mbox{\tiny{g}}}^{(0)}}^{\beta_{\mbox{\tiny{g}}}^{(T)}} {\rm{d}} \beta_{\mbox{\tiny{g}}} \left[ \langle U_{\Box}\rangle_{\mathcal{T}(\beta_{\mbox{\tiny{g}}})} - \langle U_{\Box}\rangle_0 \right].
\end{equation}
Thus, the integral method reduces the computation of the pressure to an integration of differences between the plaquette expectation values at finite ($\mathcal{T}$) and at zero temperature. Such integration can be carried out numerically (e.g. using the trapezoid rule, or some of the methods listed in ref.~\cite[Appendix]{Caselle:2007yc}), once the $\langle U_{\Box}\rangle_{\mathcal{T}(\beta_{\mbox{\tiny{g}}})} - \langle U_{\Box}\rangle_0$ differences are known to sufficient precision, and at a large enough number of values of the Wilson parameter in the $\left[\beta_{\mbox{\tiny{g}}}^{(0)},\beta_{\mbox{\tiny{g}}}^{(T)}\right]$ interval.
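To make the procedure fully explicit, a schematic Python sketch of this numerical integration, based on the trapezoid rule, is shown below; the plaquette differences are hypothetical placeholder numbers of a realistic order of magnitude, not our measured values.
\begin{verbatim}
# Sketch of the integral method, eq. (lattice_pressure), with the
# trapezoid rule; delta_plaq holds hypothetical values of
# <U_box>_T - <U_box>_0 at the listed couplings.
import numpy as np

N0 = 6
beta = np.array([2.4058, 2.4108, 2.4158, 2.4208, 2.4258, 2.4308])
delta_plaq = np.array([4.8e-5, 5.6e-5, 6.5e-5, 7.6e-5, 8.8e-5, 1.0e-4])

p_over_T4 = 6 * N0**4 * np.array(
    [np.trapz(delta_plaq[:i + 1], beta[:i + 1]) for i in range(len(beta))])
for b, p in zip(beta, p_over_T4):
    print(f"beta_g = {b}:  p/T^4 = {p:.5f}")   # relative to beta[0]
\end{verbatim}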
Note that eq.~(\ref{lattice_pressure}) reveals a potentially challenging aspect of the lattice determination of the equation of state obtained with the integral method, in the extrapolation to the continuum limit. The pressure, in units of the fourth power of the temperature, is evaluated as the product of $6N_0^4$ times the integral of a difference in plaquette expectation values. For a fixed temperature $T$, the number of lattice sites in the Euclidean-time direction $N_0=1/(aT)$ becomes large in the continuum limit $a \to 0$, and, since the $p(T)/T^4$ ratio tends to a finite constant (its physical value) in that limit, while the integration range remains finite, this means that at the same time the $\langle U_{\Box}\rangle_{\mathcal{T}} - \langle U_{\Box}\rangle_0$ differences must necessarily become small, scaling like $a^4$. This implies that, in a numerical simulation, both $\langle U_{\Box}\rangle_{\mathcal{T}}$ and $\langle U_{\Box}\rangle_0$ have to be determined with relative statistical uncertainties $O(a^4)$, which requires a computational effort scaling (at least) like $O(N_0^8)$.
This significant computational cost provides a motivation to use Jarzynski's relation for the numerical computation of the pressure; in this case, $\lambda$ can be taken to be Wilson's parameter, which is varied from $\beta_{\mbox{\tiny{g}}}^{(0)}$ at $t=t_{\mbox{\tiny{in}}}$, to $\beta_{\mbox{\tiny{g}}}^{(T)}$ at $t=t_{\mbox{\tiny{fin}}}$. A potential advantage of determining the equation of state this way is that, in contrast to the standard implementation of the integral method described above, it would not require complete equilibration of the system at all intermediate values of $\beta_{\mbox{\tiny{g}}}$, and, hence, could reduce the computational cost of the calculation, at least by a constant factor. While there is no obvious reason to expect that the computational costs of an algorithm based on Jarzynski's relation could scale with a lower power of $N_0$ when the continuum limit is approached, its intrinsic non-equilibrium nature suggests that it could nevertheless be significantly cheaper than a standard algorithm to implement eq.~(\ref{lattice_pressure}), because it would dramatically reduce the costs associated with thermalization (only the configurations in the starting ensemble need to be equilibrated).
We computed the pressure of the theory at different temperatures $0 < T < T_{\mbox{\tiny{c}}}$, using the method based on Jarzynski's relation eq.~(\ref{Jarzynski}), assuming that the pressure equals minus the free-energy density, and using the ``physical'' definition of the pressure, consistent with eq.~(\ref{integral_method}), in which the unphysical ultraviolet divergences are subtracted. For later convenience, in order to allow a direct comparison with the results obtained in ref.~\cite{Caselle:2015tza}, in which this subtraction was carried out using lattices of sizes $\widetilde{N}^4$ (where $\widetilde{N}$ can be different from $N_s$, but it must be sufficiently large to enforce that the temperature is close to zero, and to avoid systematic uncertainties due to finite-volume effects) instead of $N_s^4$, we include this slight generalization of the divergence-subtraction procedure described above in the present discussion. Moreover, we also relax the assumption that the starting temperature $T_0=1/\left[ a\left(\beta_{\mbox{\tiny{g}}}^{(0)}\right) N_0\right]$ is close to zero, and that $p(T_0)$ vanishes.
As already mentioned in section~\ref{sec:Jarzynski}, we interpreted the differences appearing on the right-hand side of eq.~(\ref{discretized_exponential_work}) as differences in the Euclidean action of the lattice theory---i.e. as differences in the Wilson action defined in eq.~(\ref{Wilson_action})---when the $\beta_{\mbox{\tiny{g}}}$ parameter is varied. This leads to the following formula for the determination of $p/T^4$:
\begin{equation}
\label{lattice_pressure_Jarzynski}
\frac{p(T)}{T^4} = \frac{p(T_0)}{T_0^4} + \left( \frac{N_0}{N_s} \right)^3 \ln \frac{ \langle \exp \left[ - \Delta S_{\mathrm{SU}(2)} (t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})_{N_0 \times N_s^3} \right] \rangle }{ \langle \exp \left[ - \Delta S_{\mathrm{SU}(2)} (t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})_{\widetilde{N}^4} \right] \rangle^{\gamma} } ;
\end{equation}
on the right-hand side of this expression, $\Delta S_{\mathrm{SU}(2)} (t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})_{N_0 \times N_s^3}$ is the total variation in Wilson action calculated on a lattice of sizes $N_0 \times N_s^3$ during a non-equilibrium trajectory starting from a configuration of the initial, equilibrium ensemble with Wilson parameter $\beta_{\mbox{\tiny{g}}}^{(0)}$ realized at $t=t_{\mbox{\tiny{in}}}$, to a final configuration, obtained driving the system out of equilibrium until $\beta_{\mbox{\tiny{g}}}$ reaches its value $\beta_{\mbox{\tiny{g}}}^{(T)}$ at $t=t_{\mbox{\tiny{fin}}}$, and the $\langle \dots \rangle$ notation indicates averaging over $n_{\mbox{\tiny{r}}}$ such trajectories, as discussed in section~\ref{sec:Jarzynski}. Similarly, $\Delta S_{\mathrm{SU}(2)} (t_{\mbox{\tiny{in}}},t_{\mbox{\tiny{fin}}})_{\widetilde{N}^4}$ denotes an analogous total variation in Wilson action, but evaluated on a lattice of sizes $\widetilde{N}^4$, while the exponent $\gamma = \left( N_0 \times N_s^3 \right) / \widetilde{N}^4$ is the ratio of the lattice hypervolumes.
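In practice, the right-hand side of eq.~(\ref{lattice_pressure_Jarzynski}) is evaluated from the samples of the total action variation; the following Python sketch shows a numerically stable implementation of the estimator, applied to mock Gaussian work samples (all numbers are purely illustrative).
\begin{verbatim}
# Sketch of the estimator of eq. (lattice_pressure_Jarzynski), using a
# log-sum-exp trick; dS_T and dS_0 hold mock samples of Delta S on the
# N0 x Ns^3 and Ntilde^4 lattices, respectively.
import numpy as np

rng = np.random.default_rng(0)
N0, Ns, Ntilde = 6, 72, 40
gamma = (N0 * Ns**3) / Ntilde**4

dS_T = rng.normal(loc=-5.0, scale=1.0, size=30)          # n_r = 30
dS_0 = rng.normal(loc=-5.0 / gamma, scale=1.0, size=30)

def log_mean_exp(x):
    m = x.max()                      # subtract the maximum for stability
    return m + np.log(np.mean(np.exp(x - m)))

dp_over_T4 = (N0 / Ns)**3 * (log_mean_exp(-dS_T)
                             - gamma * log_mean_exp(-dS_0))
print(f"Delta(p/T^4) = {dp_over_T4:.6f}")
\end{verbatim}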
As in the determination of the interface free energy discussed in section~\ref{sec:interface}, we found that, when the transformation is discretized using a sufficiently large number of intervals $N$, the results obtained with a ``direct'' or a ``reverse'' non-equilibrium transformation converge to the same values, which are compatible with those obtained by the integral method used in ref.~\cite{Caselle:2015tza} at nearby temperatures. This is shown in table~\ref{tab:su2_pressure} and in fig.~\ref{fig:su2_pressure}, which report results for the pressure, in units of the fourth power of the temperature, from simulations on lattices at fixed $N_0=6$ (so that the temperature is varied by tuning $\beta_{\mbox{\tiny{g}}}$, and the results are shown as a function of it) and spatial sizes in units of the lattice spacing $N_s=72$, while the corresponding simulations at $T=0$ were run on lattices of sizes $\widetilde{N}=40$ in all four directions (so that $\gamma=(6 \times 72^3)/40^4=0.8748$), as in ref.~\cite{Caselle:2015tza}. Note that these results were obtained using independent non-equilibrium transformations from one value of $\beta_{\mbox{\tiny{g}}}$ to the next, i.e. applying eq.~(\ref{lattice_pressure_Jarzynski}) to compute only the \emph{difference} in pressure. Furthermore, we did not determine the pressure at very small temperatures; instead, we started the analysis at a finite temperature $T_0$, corresponding to $\beta_{\mbox{\tiny{g}}}=2.4058$, and used the value obtained from the integral method (which is reported in table~\ref{tab:su2_pressure}) for $p(T_0)/T_0^4$ in eq.~(\ref{lattice_pressure_Jarzynski}).
\begin{table}[!htb]
\centering
\begin{tabular}{|c||c|c|c|}
\hline
$\beta_{\mbox{\tiny{g}}}^{(T)}$ & $p/T^4$, direct & $p/T^4$, reverse & $p/T^4$, integral method \\
\hline
\hline
$2.4058$ & -- & -- & $0.00980(22)$ \\
$2.4108$ & $0.01122(9)$ & $0.01130(11)$ & $0.01114(22)$ \\
$2.4157$ & -- & -- & $0.01274(22)$ \\
$2.4158$ & $0.01276(15)$ & $0.01304(14)$ & -- \\
$2.4186$ & -- & -- & $0.01381(22)$ \\
$2.4208$ & $0.01492(20)$ & $0.01505(16)$ & -- \\
$2.4214$ & -- & -- & $0.01501(22)$ \\
$2.4228$ & -- & -- & $0.01569(22)$ \\
$2.4243$ & -- & -- & $0.01656(22)$ \\
$2.4257$ & -- & -- & $0.01751(22)$ \\
$2.4258$ & $0.01780(35)$ & $0.01774(24)$ & -- \\
$2.4271$ & -- & -- & $0.01867(22)$ \\
$2.428$ & -- & -- & $0.01956(22)$ \\
$2.429$ & -- & -- & $0.02068(22)$ \\
$2.43$ & -- & -- & $0.02198(22)$ \\
$2.4308$ & $0.02354(37)$ & $0.02402(27)$ & -- \\
$2.431$ & -- & -- & $0.02341(22)$ \\
\hline
$2.4108$ & $0.01122(9)$ & $0.01130(11)$ & $0.01116(51)$ \\
\hline
\end{tabular}
\caption{Results for $p/T^4$ at different values of $\beta_{\mbox{\tiny{g}}}^{(T)}$ (first column), from simulations on lattices with $N_0=6$ and spatial sizes $N_s^3=72^3$ (while the simulations at $T=0$ were run on lattices of sizes $\widetilde{N}^4=40^4$) using Jarzynski's relation eq.~(\ref{Jarzynski}) with a direct (second column) or a reverse implementation (third column) of the parameter switch, in comparison with those obtained with the integral method~\cite{Engels:1990vr} in ref.~\cite{Caselle:2015tza} (fourth column). The data in the last line provide a comparison of results from the method based on Jarzynski's relation and the integral method, for the same number ($3 \times 10^4$) of gauge configurations. \label{tab:su2_pressure}}
\end{table}
\begin{figure}[!htpb]
\centerline{\includegraphics[width=0.9\textwidth]{su2_pressure.pdf}}
\caption{(Color online) Results for the pressure $p$ (in units of the fourth power of the temperature) in the confining phase of $\mathrm{SU}(2)$ Yang--Mills theory, as a function of the Wilson parameter $\beta_{\mbox{\tiny{g}}}^{(T)}$ (which controls the lattice spacing $a$, and, thus, the temperature $T=1/(aN_0)$), from simulations on lattices with $N_0=6$ and spatial sizes $N_s^3=72^3$ (while the corresponding simulations at $T=0$ were performed on lattices of sizes $\widetilde{N}^4=40^4$). The results obtained using Jarzynski's relation eq.~(\ref{Jarzynski}) with a direct (red squares) and a reverse (blue circles) implementation of the parameter transformation converge to those obtained in ref.~\cite{Caselle:2015tza} using the integral method~\cite{Engels:1990vr} (green triangles).}
\label{fig:su2_pressure}
\end{figure}
The computational cost to get these results using Jarzynski's relation was rather modest: each of the values of $p/T^4$ reported in table~\ref{tab:su2_pressure} was obtained from simulations with $N=10^3$ (or $N=2 \times 10^3$, at the two largest $\beta_{\mbox{\tiny{g}}}$ values) and $n_{\mbox{\tiny{r}}}=30$. Thus, we are in a position to compare the efficiency of the method based on Jarzynski's relation to that of the integral method: for the latter, plaquette expectation values $\langle U_{\Box}\rangle_{\mathcal{T}} $ and $ \langle U_{\Box}\rangle_0$ were calculated using about $10^5$ configurations for each value of $\beta_{\mbox{\tiny{g}}}$ and then integrated numerically; conversely, using the method based on Jarzynski's relation, each point required either $3 \times 10^4$ or $6 \times 10^4$ configurations, with errors generally comparable to those obtained from the integral method.
We have to emphasize that a comprehensive comparison of the CPU cost of the two methods is not straightforward, since it depends on the number of values of $\beta_{\mbox{\tiny{g}}}$ at which the integrand of eq.~(\ref{lattice_pressure}) is computed, which must be chosen so as to obtain a reliable numerical integration. To address this issue we attempted a comparison at a fixed number of configurations ($3 \times 10^4$) for a single point at $\beta_{\mbox{\tiny{g}}}=2.4108$: as can be seen in the last line of table~\ref{tab:su2_pressure}, the statistical uncertainty of the result obtained with the integral method is larger, so that the method based on Jarzynski's relation proves to be computationally more efficient.
Moreover, we remark that, in principle, during a single trajectory $t_{\mbox{\tiny{in}}} \to t_{\mbox{\tiny{fin}}}$ it is possible to determine the work (and, hence, the pressure) at any intermediate step between the initial and final value of $\beta_{\mbox{\tiny{g}}}$, without having to thermalize the system. Even though in this analysis all the values of $p/T^4$ were computed in independent transformations, it is worth stressing that a rather detailed determination of the equation of state would be feasible in this way, provided the correlations among results obtained during a single out-of-equilibrium transformation are properly taken into account.
Finally, we can conclude that the method based upon Jarzynski's relation proved to be very efficient in the determination of the pressure in the temperature region of choice, making it a viable and CPU-cost-effective technique to determine the equation of state.
\section{Discussion and further applications}
\label{sec:conclusions}
In this article, we have shown that the non-equilibrium work relation derived by Jarzynski in statistical mechanics~\cite{Jarzynski:1996ne, Jarzynski:1997ef} can be successfully extended to study problems in lattice gauge theory. This relation links the ratio of the equilibrium partition functions describing a system at two different sets of physical parameters, to the exponential average of the work performed on the system during a non-equilibrium transformation, in which the system parameters and the fields are allowed to evolve.
The generalization of Jarzynski's relation to lattice gauge theory is simply an application of a statistical-mechanics technique to a field-theory context, and does not involve any \emph{ad~hoc} assumptions: this elementary but important point is made clear by the detailed derivation of eqs.~(\ref{generalized_Jarzynski}) and~(\ref{Jarzynski}) in section~\ref{sec:Jarzynski}.
As examples of application, we used Jarzynski's relation to study the interface free energy in the $\mathbb{Z}_2$ gauge model in three dimensions (section~\ref{sec:interface}) and the equation of state in the confining phase of $\mathrm{SU}(2)$ Yang--Mills theory (section~\ref{sec:equation_of_state}).
In the study of the interface free energy, we compared our results with the expectations from effective string theory, and we were able to identify the leading and next-to-leading deviations from the behavior predicted by the Nambu--Got\={o} string. The form of these corrections agrees with theoretical expectations~\cite{Aharony:2013ipa}, but a more detailed quantitative analysis will be carried out later, in a larger-scale study.
In the study of the equation of state, the algorithm successfully reproduced the results obtained with the integral method in ref.~\cite{Caselle:2015tza}, and proved very competitive in terms of computational cost.
In both cases, the calculation of free energies based on this method gave precise results, which converge rapidly to those obtained by different techniques, when the transformation of parameters relating the initial and final partition functions of the system is discretized in a sufficiently smooth way, i.e. when $N$ is large enough: under such conditions, the computational efficiency of the algorithm based on Jarzynski's relation proves to be comparable to, or in certain cases better than, that of other algorithms.
Numerical calculations involving Jarzynski's relation could also be carried out to study lattice gauge theories coupled to dynamical fermions, including QCD. Although in the present work we have not carried out any studies in this direction yet, there is no conceptual obstruction to generalizing the derivation presented in section~\ref{sec:Jarzynski} to Monte~Carlo calculations involving state-of-the-art fermionic algorithms~\cite{Hasenbusch:2001ne, Luscher:2005rx, Urbach:2005ji, Clark:2006fx, Luscher:2007se, Luscher:2007es}.
In view of the results obtained in the benchmark studies presented here, we envisage a number of further applications of Jarzynski's relation in lattice QCD.
A particularly interesting one could be in studies involving the Schr{\"o}dinger functional~\cite{Symanzik:1981wd, Luscher:1985iu}, which is a powerful method to evaluate running physical quantities in asymptotically free theories~\cite{Luscher:1991wu, Luscher:1992an}. The Schr{\"o}dinger functional provides an elegant, gauge-invariant, finite-volume scheme, which is free from many of the technical challenges related to the chiral limit or to the presence of bosonic-field zero-modes for theories defined on a torus, and which, in addition, is particularly suitable for perturbative computations.
In this approach, one considers the evolution of the system during a Euclidean-time interval $L$, from an initial state $\mathcal{I}$, to a final state $\mathcal{F}$. At the classical level, the action $S_{\mbox{\tiny{cl}}}$ of the field configuration induced by the presence of these boundary conditions is inversely proportional to the squared bare coupling of the theory $g$. At the quantum level, denoting the Hamiltonian of the system by $H$, one can compute the transition amplitude
\begin{equation}
Z_{\mathcal{I},\mathcal{F}; L} = \langle \mathcal{F} | \exp \left(- H L \right) | \mathcal{I} \rangle
\end{equation}
and define the effective action $\Gamma_{\mbox{\tiny{eff}}}$ as
\begin{equation}
\Gamma_{\mbox{\tiny{eff}}} = - \ln Z_{\mathcal{I},\mathcal{F}; L}.
\end{equation}
Then, one can define a renormalized coupling $\bar{g}$ at the momentum scale $L^{-1}$, by assuming that $\Gamma_{\mbox{\tiny{eff}}}$ is proportional to $1/\bar{g}^2$. In practical simulations, if the boundary states $\mathcal{I}$ and $\mathcal{F}$ depend on a set of parameters $\chi$, then $\bar{g}^2(L^{-1})$ can be computed from
\begin{equation}
\label{SF_coupling}
\bar{g}^2(L^{-1}) = g^2 \frac{S_{\mbox{\tiny{cl}}}^\prime}{\Gamma_{\mbox{\tiny{eff}}}^\prime},
\end{equation}
where the prime denotes differentiation with respect to $\chi$.
This approach has been used for studies of pure-glue non-Abelian gauge theories~\cite{Luscher:1992zx, Luscher:1993gh, Bode:1998hd, Lucini:2008vi} and can be extended to include dynamical fermions~\cite{Sint:1993un}: this has direct applications in QCD~\cite{Sint:1998iq, Bode:1999sm} and in strongly interacting theories~\cite{Appelquist:2007hu, Shamir:2008pb, Hietanen:2009az, Karavirta:2011zg, DeGrand:2011qd, DeGrand:2012qa} that might provide viable models for dynamical breaking of electro-weak symmetry at the TeV scale~\cite{Sannino:2009za, DelDebbio:2010zz} and/or for composite dark matter~\cite{Kribs:2016cew}.
Using Jarzynski's relation, in principle one could evaluate $\bar{g}^2(L^{-1})$ by computing the variation induced in $Z_{\mathcal{I},\mathcal{F}; L}$ by a change in the $\chi$ parameters that specify $\mathcal{I}$ and $\mathcal{F}$.
Finally, it is tempting to think that Jarzynski's relation could also find applications in lattice QCD at finite density, where, as we mentioned in section~\ref{sec:introduction}, the loss of $\gamma_5$-Hermiticity of the Dirac operator induces a severe sign problem~\cite{deForcrand:2010ys, Philipsen:2012nu, Levkova:2012jd, Aarts:2013lcm, D'Elia:2015rwa, Gattringer:2016kco}. In particular, the connections between Jarzynski's relation and the reweighting technique~\cite{Ferrenberg:1988yz, Barbour:1997bh, Fodor:2001au}, that we mentioned in section~\ref{sec:Jarzynski}, deserve further investigation. We leave these issues for future work.
\vskip1.0cm
\noindent{\bf Acknowledgements.}\\
The simulations were run on INFN Pisa GRID Data Center and on CINECA machines. The work of A.~T. is partially supported by the Danish National Research Foundation grant DNRF90. We thank C.~Bonati, R.~C.~Brower, M.~D'Elia, M.~Mesiti, M.~Pepe, A.~Ramos and E.~Vicari for helpful comments and discussions.
\section{Introduction}
Ion channels are water filled holes which facilitate exchange
of electrolyte between the exterior and interior of a cell.
Pores are formed by specific proteins embedded
into the phospholipid membrane~\cite{Hi01}.
Depending on the conformation of the
protein, the pore can be open or closed. When open,
the protein is
very selective about the kinds of ions that it allows to pass through
the channel~\cite{DoCaPf98,BeRo01}.
In order to function properly the channel has
to conduct thousands of ions in a period of a few milliseconds. Considering
that the channel passes through a phospholipid membrane which has
a very low dielectric constant and is very narrow, producing
high energetic penalties for ions entering the nanopores,
it is fascinating
to contemplate how Nature manages to perform this amazing task.
In fact, as long ago as 1969, Parsegian observed that for an
infinitely long cylindrical channel~\cite{Sm50} of radius $a=3$ \AA,
the electrostatic barrier is over $16k_B T$, which should completely
suppress any ionic flow~\cite{Pa69}.
Later numerical work by Levitt~\cite{Le78}, Jordan~\cite{Jo82}
and others demonstrated that for more realistic finite channels
the barrier
is dramatically reduced. For example, for a channel of length $L=25$ \AA
$\,$ and radius $a=3$ \AA, the barrier is about $6k_BT$, which although
still quite large, should allow ionic conductivity.
Recently the study of ion channels
has expanded to other parts of applied physics. Water filled
nanopores are introduced into silicon oxide films, polymer membranes,
etc~\cite{LiSt01,SiFu02}. In
all of these cases the dielectric constant of the interior of a nanopore
greatly exceeds that of the surrounding media.
To quantitatively
study the conductance of a nanopore one has three options: the all atom
molecular dynamics simulation (MD)~\cite{BeRo01};
the Brownian dynamics simulation (BD)~\cite{MoCo00,CoKu00} with implicit
water treated as a uniform dielectric continuum;
or the mean-field Poisson-Nernst-Planck theory (PNP)~\cite{Ei99}
which treats both water and ions implicitly.
While clearly the most accurate, MD simulations are computationally very
expensive~\cite{Lev99}.
Brownian dynamics is significantly
faster than MD, but because of the dielectric discontinuities across the
various interfaces a new solution of the Poisson equation
is required for each new configuration of ions inside the pore.
The simplest approach to study the ionic conduction
is based on the PNP theory~\cite{Ei99}. This combines the continuity
equation with the Poisson equation and Ohm's and Fick's laws.
PNP is intrinsically mean-field and is, therefore, bound to fail when
ionic correlations become important. This has been well studied for its
static version --- the Poisson-Boltzmann equation, which
is known to break down for aqueous electrolytes with multivalent
ions and also for monovalent electrolytes in
low dielectric solvents~\cite{Le02,NaJu05}.
For narrow channels, the cylindrical geometry, combined with the
field confinement, results in a pseudo one dimensional potential of
very long range~\cite{Te05,ZhKaSh05}.
Under these conditions the correlational effects dominate, and
the mean-field approximation fails~\cite{Le02}.
Indeed recent comparison between the
BD and the PNP showed that PNP breaks down when the pore radius is smaller
than about two Debye lengths~\cite{MoCo00,CoKu00}.
At the moment, therefore, it appears that
a semi-continuum (implicit solvent) Brownian dynamics simulation is
the best compromise between the cost and
accuracy~\cite{Lev99,KuAnCh01,TiBiSm01} for narrow pores.
Unfortunately,
even this simplified strategy demands a tremendous computational effort.
The difficulty is that BD requires a new solution of
the Poisson partial differential equation
at each time step. This can be partially overcome
by using lookup tables~\cite{CoKu00} and variational methods~\cite{AlMeHa02},
but still requires a supercomputer.
If the interaction potential
between the ions inside the channel were known, the simulation
could proceed orders of magnitude faster.
However, up to now the only exact
solution to the Poisson equation in
a cylindrical geometry was
for the case of an infinitely long pore~\cite{Sm50,Pa69,Te05}.
In this letter we shall
provide another exact solution, but now
for a finite trans-membrane channel.
We shall work in the context of a
primitive model of electrolyte and membrane. The membrane will be
modeled as a uniform dielectric slab of width $L$ located between $z=0$
and $z=L$. The dielectric constant of the membrane and the channel
forming protein is taken to be $\epsilon_p \approx 2$. On both sides of the
membrane there is an electrolyte solution composed of point-like ions and
characterized by the inverse Debye length $\kappa$.
A channel is a cylindrical hole of radius $a$ and length $L$
filled with water. As is usual for continuum electrostatic
models~\cite{Lev99},
we shall take the dielectric constant of water
inside and outside the channel
to be the same, $\epsilon_w \approx 80$.
It is convenient to set up a cylindrical
coordinate system $(z,\rho,\phi)$ with the origin located at the center
of the channel at $z=0$. Suppose that an ion is located at an
arbitrary position ${\bf x'}$ inside the channel. The electrostatic potential
$\varphi(z,\rho, \phi;{\bf x'})$ inside the channel and membrane
satisfies the Poisson equation
\begin{equation}
\label{1}
\nabla^2 \varphi=-\frac{4\pi q}{\epsilon_w} \delta({\bf x}-{\bf x'})\;.
\end{equation}
For $z>L$ and $z<0$, $\varphi({\bf x};{\bf x'})$ satisfies the linearized
Poisson-Boltzmann or the Debye-H\"uckel equation~\cite{Le02}
\begin{equation}
\label{2}
\nabla^2 \varphi=\kappa^2 \varphi\;.
\end{equation}
The inverse Debye length is related to the ionic strength $I$ of
electrolyte, $\xi_D^{-1}=\kappa= \sqrt{8 \pi \lambda_B I}$,
where $\lambda_B=q^2/\epsilon k_B T$ is the Bjerrum length and
$I=(\alpha^2 c_\alpha+\alpha c_\alpha +2 c)/2$. Here $c_\alpha$ is the
concentration of $\alpha:1$ valent electrolyte and $c$ is the concentration
of $1:1$ electrolyte. All the usual boundary conditions must be enforced:
the potential must vanish at infinity and be continuous
across all the interfaces; the tangential component of the electric field
and the normal component of the electric displacement must be
continuous across all
the interfaces. These boundary conditions guarantee the uniqueness of the
solution. Unfortunately, even this relatively simple geometry cannot,
in general, be solved exactly. We observe, however, that an {\it exact}
solution is possible in the limit that $\kappa \rightarrow \infty$.
In this special case the system of differential
equations becomes separable. Our strategy, then, will be to solve exactly
this asymptotic problem and then extend the solution to finite values of
the Debye length.
We start by making the following fundamental observation. The condition
$\kappa \rightarrow \infty$ signifies that electrolyte perfectly screens
any electric field --- the Debye length is zero. This, combined with the
boundary condition --- electrostatic potential
must vanishes at infinity --- implies that in this limit
$\varphi(0,\rho,\phi;{\bf x'})=\varphi(L,\rho,\phi;{\bf x'})=0$,
for {\it any} position ${\bf x'}$ of an ion inside the pore. This is
a dramatic simplification. Now it is no longer necessary to solve
the Debye-H\"uckel equation, but only the
Poisson equation with a perfect grounded conductor boundary
conditions at $z=0$ and
$z=L$. To proceed
we expand the $\delta(z-z')$ in eigenfunctions of the differential
operator
\begin{equation}
\label{3}
\frac{\rm d^2 \psi_n}{\rm d z^2}+k_n^2 \psi_n=0\;,
\end{equation}
satisfying the perfect conductor boundary condition.
The normalized eigenfunctions are
$\psi_n(z)=\sqrt{2/L}\sin(k_n z)$, with
$k_n=n \pi/L$.
The Sturm-Liouville nature of the
differential Eq. (\ref{3}) guarantees that
\begin{equation}
\label{4}
\delta(z-z')=\frac{2}{L}\sum_{n=1}^\infty \sin(k_n z)\sin(k_n z') \;.
\end{equation}
Similarly,
\begin{equation}
\label{5}
\delta(\phi-\phi')=
\frac{1}{2\pi}\sum_{m=-\infty}^\infty e^{i m(\phi-\phi')} \;.
\end{equation}
Next we write
\begin{equation}
\label{6}
\varphi({\bf x},{\bf {x'}})=\frac{q}{\pi\epsilon_w L}\sum_{n=1}^\infty\sum_{m=-\infty}^\infty e^{i m(\phi-\phi')} \sin(k_n z)\sin(k_n z') g_{nm}(\rho,\rho')\;.
\end{equation}
Substituting this into Eq.(\ref{1}) we find that the Green function
$ g_{nm}(\rho,\rho')$
satisfies the modified Bessel equation
\begin{equation}
\label{7}
\frac{1}{\rho}\frac{{\rm d}}{{\rm d} \rho}\rho \frac{{\rm d} g_{nm}}{{\rm d} \rho }- (k_n^2 +\frac{m^2}{\rho^2}) g_{nm}=
-\frac{4 \pi}{\rho}\delta(\rho-\rho') \,
\end{equation}
the solution of which can be found using standard techniques~\cite{Sm50,Ja99}.
We obtain,
\begin{equation}
\label{8}
g_{mn}(\rho,\rho')=4\pi I_m(k_n \rho_<)[K_m(k_n \rho_>)+\gamma_{mn}I_m(k_n \rho_>)]\,
\end{equation}
where $\rho_>$ and $\rho_<$ are the larger and the smaller of the
set $(\rho,\rho')$ and
\begin{equation}
\label{9}
\gamma_{mn}=\frac{K_m(k_n a)K_m'(k_n a)(\epsilon_p-\epsilon_w)}
{\epsilon_w I_m'(k_n a)K_m(k_n a)-\epsilon_p I_m(k_n a)K_m'(k_n a)}\;.
\end{equation}
Here $I_m,K_m,I_m',K_m'$ are the modified Bessel functions of the first and
second kind and their derivatives, respectively. Eqn.(\ref{8}) is valid
for $\rho_>\le a$. When $\rho> a$,
\begin{equation}
\label{10}
g_{mn}(\rho,\rho')=\frac{4 \pi \epsilon_w}{k_n a}\frac{ K_m(k_n \rho)I_m(k_n \rho')}{\epsilon_w I_m'(k_n a)K_m(k_n a)-\epsilon_p I_m(k_n a)K_m'(k_n a)} \;.
\end{equation}
Eqns.(\ref{6},\ref{8}), and (\ref{10})
are exact for an ion
inside a pore with perfect conductor boundary
conditions at $z=0$ and $z=L$. If the ion is located on the axis of
symmetry, $z'=z_0$, $\rho'=0$, only the $m=0$ term in Eqn.~(\ref{6})
survives, and the electrostatic potential {\it inside} the channel at position
$z,\rho$ takes a particularly simple form,
$\varphi_{in}(z,\rho;z_0)=\varphi_{1}(z,\rho;z_0)+\varphi_2(z,\rho;z_0)$, where
\begin{equation}
\label{11}
\varphi_1(z,\rho;z_0)=\frac{4 q}{\epsilon_w L}\sum_{n=1}^\infty
\sin(k_n z)\sin(k_n z_0)K_0(k_n \rho)\;,
\end{equation}
and
\begin{equation}
\label{12}
\varphi_2(z,\rho;z_0)=\frac{4 q(\epsilon_w-\epsilon_p)}{\epsilon_w L}\sum_{n=1}^\infty
\frac{K_0(k_n a)K_1(k_n a)I_0(k_n \rho)\sin(k_n z)\sin(k_n z_0)}
{\epsilon_w I_1(k_n a)K_0(k_n a)+\epsilon_p I_0(k_n a)K_1(k_n a)}\;.
\end{equation}
Eqs. (\ref{11},\ref{12})
are exact in the $\kappa \rightarrow \infty$ limit.
To see how
these equations can be
extended to finite values of $\kappa$, it is
important to first understand
their physical meaning.
Potential $\varphi_2$ is mostly the result of the charge induced on the
interface between the high dielectric aqueous interior of the pore
and the low dielectric membrane. We expect that this term will be affected
very little by the precise value of the Debye length of the
surrounding electrolyte solution.
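To make this statement more concrete, the series in Eq.~(\ref{12}) can be evaluated numerically with a few lines of Python; the sketch below computes $\varphi_2$ for an ion on the channel axis (where $I_0(k_n\rho)=1$), in units in which the factor $q/\epsilon_w$ is set to one, and with an illustrative truncation of the series.
\begin{verbatim}
# Evaluation of the induced-charge term of Eq. (12) on the channel axis;
# lengths in Angstrom, potential in units of q/eps_w (a schematic sketch).
import numpy as np
from scipy.special import i0, i1, k0, k1

eps_w, eps_p = 80.0, 2.0
L, a = 35.0, 3.0

def phi2_on_axis(z, z0, n_max=500):
    total = 0.0
    for n in range(1, n_max + 1):
        kn = n * np.pi / L
        num = k0(kn * a) * k1(kn * a)
        den = (eps_w * i1(kn * a) * k0(kn * a)
               + eps_p * i0(kn * a) * k1(kn * a))
        total += num / den * np.sin(kn * z) * np.sin(kn * z0)
    return 4.0 * (eps_w - eps_p) / L * total

print(phi2_on_axis(17.5, 17.5))   # ion at the channel center
\end{verbatim}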
The potential
$\varphi_1$ contains the contribution from the ion located at $z_0$ and from
the induced charge on the pore/electrolyte and the membrane/electrolyte
interfaces. It will, therefore, strongly depend on the
precise value of $\kappa$. Furthermore, we observe
that Eq.~(\ref{11}) is {\it exactly}
the potential produced by a charge $q$ located inside an {\it infinite}
slab of water of width $L$ bounded by two grounded
perfectly conducting planes~\cite{Ja99}.
This key observation allows us to explicitly
resum the series in Eq.~(\ref{11}). However, it is possible to
do even better, and now enforce the {\it exact} boundary condition,
namely that for $z<0$ and $z>L$ the
electrostatic potential must satisfy the
Debye-H\"uckel equation (\ref{2}). Using the Bessel $J$ representation
of the delta function one can construct the Green function~\cite{LeMe01}
which satisfies all the boundary conditions for the slab geometry
and has the required symmetry property~\cite{Ja99} between
the source and the observation points. We then find
\begin{equation}
\label{13}
\varphi_1(z,\rho;z_0)=\int_0^\infty {\rm d}k \frac{J_0(k \rho)\left\{\alpha^2(k) e^{k|z-z_0|-2 k L}+
\alpha(k)\beta(k)[ e^{-k(z+z_0)}+e^{k(z+z_0)-2 k L}]+ \beta^2(k) e^{-k|z-z_0|}\right\}}
{\beta^2(k)-\alpha^2(k)\exp(- 2 k L)} \;,
\end{equation}
where $\alpha(k)=[k-\sqrt{k^2+\kappa^2}]/2 k$,
$\beta(k)=[k+\sqrt{k^2+\kappa^2}]/2 k$, and $J_0(x)$ is the Bessel
function of first kind and order zero.
Eq.~(\ref{13}) provides an analytic continuation of Eq.~(\ref{11}) into
finite $\kappa$ parameter space. It can be checked explicitly that in
the limit $\kappa \rightarrow \infty$, Eq.~(\ref{13}) exactly sums
the series in Eq.~(\ref{11}).
Finally, for the region $\rho > a$ the electrostatic potential is
\begin{equation}
\label{14}
\varphi_{out}(z,\rho;z_0)=
\frac{4 q}{L}\sum_{n=1}^\infty \frac{1}{k_n a}
\frac{K_0(k_n \rho)\sin(k_n z)\sin(k_n z_0)}
{\epsilon_w I_1(k_n a)K_0(k_n a)+\epsilon_p I_0(k_n a)K_1(k_n a)}\;.
\end{equation}
Eq.~(\ref{14}) is
exact only for the perfect conductor boundary conditions; however,
the huge jump
in the dielectric constant going from the membrane's interior
to the aqueous
electrolyte will leave $\varphi_{out}$ mostly unaffected even for
finite values of $\kappa$.
If the channel contains $N$ ions and charged protein residues
their interaction energy
is given by
\begin{equation}
\label{15}
V=\frac{1}{2}\sum_{i,j}^N q_i \varphi^j\;,
\end{equation}
where $q_i$ is the charge of ion/residue $i$ and $\varphi^j$ is the
electrostatic
potential produced by the ion/residue $j$ at the position of ion/residue $i$.
Similarly, the electrostatic barrier that an ion feels as
it moves through a charge-free
channel is~\cite{Le02},
\begin{equation}
\label{16}
U(z)=\frac{q}{2}\lim_{\rho \rightarrow 0}\left[\varphi(z,\rho;z)-\frac{q}{\epsilon_w \rho}\right]+\frac{q \kappa}{2 \epsilon_w}\;.
\end{equation}
The last term in Eq.~(\ref{16}) is the electrostatic
``solvation'' energy that
a point-like ion loses as it moves from the bulk electrolyte
into the interior of a pore. This energy
can be calculated using the Debye-H\"uckel
theory and is equivalent to the
excess chemical potential resulting from the screening of ionic
electric field by the surrounding electrolyte~\cite{Le02}.
The limit in Eq.~(\ref{16}) is easily obtained by noting that
\begin{equation}
\label{17}
\frac{1}{\rho}=\int_0^\infty J_0(k \rho) {\rm d} k\;.
\end{equation}
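As an illustration of this procedure, the Python sketch below combines the regularized $\rho\rightarrow 0$ limit of Eq.~(\ref{13}), the series of Eq.~(\ref{12}) and the solvation term of Eq.~(\ref{16}) to estimate the barrier in units of $k_B T$; the value of the Bjerrum length, the quadrature settings, and our reading of the normalization of Eq.~(\ref{13}) (potentials in units of $q/\epsilon_w$) are illustrative assumptions.
\begin{verbatim}
# Sketch of the barrier of Eq. (16) on the channel axis; lengths in
# Angstrom, energies in k_B T via the Bjerrum length lam_B of water.
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i1, k0, k1

eps_w, eps_p = 80.0, 2.0
L, a = 35.0, 3.0
lam_B = 7.14                               # ~ room-temperature water
kappa = np.sqrt(8 * np.pi * lam_B * 0.15 * 6.022e-4)   # 150 mM, in 1/A

def phi1_reg(z):
    # lim_{rho -> 0} [phi_1(z, rho; z) - 1/rho], in units of q/eps_w,
    # from Eq. (13) with the representation of Eq. (17) subtracted.
    def f(k):
        s = np.sqrt(k**2 + kappa**2)
        al, be = (k - s) / (2 * k), (k + s) / (2 * k)
        num = (al**2 * np.exp(-2*k*L)
               + al * be * (np.exp(-2*k*z) + np.exp(2*k*(z - L)))
               + be**2)
        return num / (be**2 - al**2 * np.exp(-2*k*L)) - 1.0
    return quad(f, 0.0, 50.0, limit=200)[0]

def phi2(z, n_max=500):
    tot = 0.0
    for n in range(1, n_max + 1):
        kn = n * np.pi / L
        tot += (k0(kn*a) * k1(kn*a) * np.sin(kn*z)**2
                / (eps_w * i1(kn*a) * k0(kn*a)
                   + eps_p * i0(kn*a) * k1(kn*a)))
    return 4 * (eps_w - eps_p) / L * tot

def U_kBT(z):
    return 0.5 * lam_B * (phi1_reg(z) + phi2(z)) + 0.5 * lam_B * kappa

print(f"U(L/2) = {U_kBT(L / 2):.2f} k_B T")
\end{verbatim}
For the parameters above, this sketch yields a value of the same order as the barriers shown in Fig.~\ref{fig1}.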
We are now in a position to explore some of the quantitative
consequences of
the current theory.
In Fig.~(\ref{fig1})
we first plot the potential energy barrier for an ion of charge $q$ moving
through a channel
of $L=35$ \AA $\,$ and $a=3$ \AA $\,$ and various
external electrolyte concentrations.
\begin{figure}[th]
\begin{center}
\psfrag{U}{$U(z)\; (k_B T)$}
\psfrag{U1}{\hspace{-5mm}$U(z)\; (kJ/mol)$}
\psfrag{z}{$z$}
\twofigures[width=5cm,angle=270]{fig1.ps}{fig4.ps}
\caption{Electrostatic potential barriers for an ion of charge $q$ moving
along the axis of symmetry through a channel of $L=35$ \AA $\,$
and $a=3$ \AA $\,$. External electrolyte
concentration from the bottom up is $0.15$, $1$, $2$ and $3$M.\label{fig1}}
\caption{Electrostatic potential barrier for an ion of charge $q$ moving
through a pore of $L=25$ \AA $\,$
and $a=2.5$ \AA $\,$. External electrolyte
concentration is $2.5$M. Symbols are the result of a numerical integration
of the system of Poisson-Boltzmann (bulk electrolyte)
and Poisson equation (pore interior) from ref.~\cite{JoBa89}\label{fig1a}}
\end{center}
\end{figure}
For the physiological salt concentration ($150$mM) we find the barrier
height to be $8.13 k_BT$. Using
numerical solution of the Poisson equation, Levitt obtained a barrier
of $8.48 k_BT$. Some of the difference between the two values
can be attributed to the fact that
the presence of external electrolyte was not taken into account
in the numerical calculations. As the length of the channel increases,
the role of external electrolyte becomes
relatively less important. Indeed for
a channel of $L=50$ \AA $\,$
and $a=2$ \AA, we obtain a barrier of $18.65 k_B T$, while
the Levitt's numerical solution produced $17.2k_BT$~\cite{Le78} and
Jordan's $18.6 k_B T$~\cite{Jo82}.
The only numerical work known to us that
explicitly takes into account the presence of external
electrolyte is ref.~\cite{JoBa89}. The authors of that paper
numerically solved the non-linear Poisson-Boltzmann
equation for external electrolyte and the Poisson equation for the
interior of the channel. For a pore of $L=25$ and $a=2.5$ \AA,
and electrolyte concentration of $2.5$M, they find a
barrier of $9.5k_B T$, while we obtain $9.8 k_B T$.
In Fig.~(\ref{fig1a}) we compare the full electrostatic energy
barrier obtained from the numerical solution with our analytical
results. The agreement, once again, is quite good.
The availability of a semi-exact interaction potential allows us to
easily explore
the potential energy landscape $\Phi=V+U$ of an ion of charge $q$
moving
through a channel which also
contains some fixed charged protein residues.
For example, consider a channel
of $L=35$ \AA $\,$ and $a=3$ \AA $\,$ and suppose that
there is one protein residue of charge $-q$,
embedded into the surface of the channel at $(z=L/2,\rho=a, \phi)$.
In Fig.~\ref{fig2} we show the potential energy profile for an ion
moving along the axis of symmetry through such a channel.
\begin{figure}[th]
\begin{center}
\psfrag{U}{$\Phi(z)\; (k_B T)$}
\psfrag{z}{$z$}
\twofigures[width=5cm,angle=270]{fig2.ps}{fig3.ps}
\caption{Potential energy profile for an ion of charge $q$ moving
along the axis of symmetry through a channel of $L=35$ \AA $\,$
and $a=3$ \AA $\,$ containing a protein residue of charge $-q$ embedded
into its wall $(\rho=a)$ at $z=L/2$ (solid line);
containing two charged
residues at
$z=5$ and z=$30$ \AA $\,$ (dotted line); containing
three residues at $z=5,17.5$ and $z=30$ \AA $\,$ (dashed line).
External electrolyte
concentration is $150$mM.\label{fig2}}
\caption{Potential energy profile for an ion of charge $q$ moving along the
axis of symmetry through a pore
of $L=35$ \AA $\,$ and $a=3$ \AA,
containing two charged residues
hidden in the membrane's interior ($\rho=6$ \AA) located at
$z=5$ and $z=30$ \AA. External electrolyte
concentration $150$mM.\label{fig3}}
\end{center}
\end{figure}
Instead of
a potential barrier, this ion encounters a potential well
of depth more than $10 k_B T$! It will, therefore, find it extremely
difficult to pass through such a channel.
Now, suppose that two charged residues are
embedded into the channel wall,
one close to the entrance of the channel at
$z=5$ $\,$ \AA $\,$ and another close to its exit at $z=30$ \AA,
both at $\rho=3$ \AA.
The electrostatic potential, now develops a double-well structure,
Fig.~\ref{fig2}.
Each minimum is relatively less deep than in a channel with only one
central residue. One might then suppose that
adding more charged residues will diminish the depth of the wells even
further. This, however, is not the case. In Fig.~\ref{fig2} we also show
the potential energy landscape of a channel containing three
uniformly spaced residues.
Evidently, instead of decreasing, the depth of the potential well
has dramatically increased!
There is, however, a mechanism which Nature can use to
diminish the depth of potential wells --- hide the charged residues
in the membrane's (or protein's) hydrophobic interior~\cite{KuBa03}. In
Fig.~\ref{fig3} we plot the potential energy profile for the same channel
as in Fig.~\ref{fig2}, but with the two charged residues hidden
in the membrane's hydrophobic interior at $\rho=6$ \AA.
In this case the deep potential well is replaced by a shallow binding
site followed by an activation barrier of only $2.5 k_B T$. This
can be easily overcome by an external electric field or
a chemical potential gradient. We find that there is
an optimum location
for hiding charged residues in order to
produce the smallest barrier for channel penetration. The formalism
developed above allows us to easily explore all the parameter
space in order to find this optimum position.
To conclude, we have presented an analytically solvable model
of electrostatics inside an ion channel.
The solution found is exact in the limit of
large electrolyte concentrations. However, comparison
with the numerical
work shows that it remains valid
even at intermediate and low electrolyte concentrations.
The analytical solution can be used to dramatically speed up
the Brownian dynamics simulations of ionic transport through
cylindrical pores. The biological and structural
information can be partially taken into account
through a proper placement of charged protein residues.
Furthermore, even if a more detailed
atomistic molecular dynamics simulation is necessary, the availability
of a rapid Brownian dynamics model can serve for an initial exploration
of the parameter space.
In this work water inside the pore has been treated as a uniform dielectric
continuum identical to the bulk.
While this is acceptable for wider pores, the approximation
will certainly fail for very narrow pores such as gramicidin.
To properly account for the polarization of water in this geometry
one must go beyond the continuum dielectric approximation~\cite{PeHu05}.
Until now, the only option for these cases was to perform all atom
molecular dynamics simulations.
The current work suggests that another way might be possible.
The continuum description with $\epsilon_w=80$ and $\epsilon_p=2$
can be used for the bulk water outside the
channel, for the membrane, and for the trans-membrane protein,
while inside the channel (now with $\epsilon_{in}=1$)
one might try to obtain an accurate analytical electrostatic potential.
The Coulomb interactions between
the water molecules inside the channel could then be treated
explicitly, without any
need for continuum dielectric approximation. This would then allow
to perform very fast molecular dynamics simulations of ionic transport,
free of the drawbacks associated with the implicit solvent models.
The work in this direction is now in progress.
This work was supported in part by
Conselho Nacional de
Desenvolvimento Cient{\'\i}fico e Tecnol{\'o}gico (CNPq).
\bibliographystyle{prsty}
\section{Introduction}
\label{sec:intro}
The Covid-19 pandemic has turned everyone's lives upside down forcing national governments around the world to take measures to drastically reduce the rate of contagion. Localization information may support keeping \emph{social distancing} as well as \emph{contact tracing} of infected people thereby, playing a fundamental role in the fight against the virus spread.
The scientific community proposed \emph{social distancing} as the first effective measure against the uncontrolled virus spread, able to stop the transmission chains of the virus and prevent new ones from appearing.
This has resulted in strict rules to limit personal contacts and maintain interpersonal distance accordingly. However, in many situations (offices, grocery stores, shops, etc.) guaranteeing a social distance may be challenging, as people cannot continuously measure the interpersonal distance, especially if placed in small rooms or indoor environments, and some people might be less careful than others in complying with social distancing.
This has raised industrial interest in developing advanced solutions that issue alert messages upon minimum-distance violations. They are mainly designed as infrastructure-free solutions that rely on wearable sensors based on ultra wide-band (UWB) technology, or on smartphones provided with bluetooth (BLE) or near-field communication (NFC) means, which determine, by means of power measurements, whether the interpersonal distance is below a certain threshold, thereby notifying the neighbors about the potential threat.
In addition, recording identity and contact-duration information might result in violations of privacy regulations~\cite{chan2021privacy}.
The main drawback of such solutions is that the ability to detect interpersonal distance is limited to people carrying such specific devices (or running given applications). This assumption might not hold in many cases, as neither telco operators nor national governments can force people in general to wear such devices or install such applications~\cite{nature_covid19}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Figure1.jpg}
\caption{Passive detection of social distance in public scenarios using mmWave off-the-shelf devices provided with AI-based algorithms.}
\label{fig:social_distancing_issue}
\end{figure}
Therefore, \emph{pervasive wireless passive} solutions shall be preferred, which gather localization information to detect undesired contacts and trigger exposure alerts in a privacy-preserving manner. This information can be used for many other use-cases, including intrusion detection, privacy-preserving health applications, location-based services, etc. The development of
advanced wireless sensing techniques is thus taking a prominent role in this space, since variations in wireless propagation signals can be exploited for such novel services.
Nonetheless, such applications generally require detailed and instantaneous information on the channel status, thus calling for specific radio-frequency hardware that can directly expose it via programmable interfaces~\cite{mec_covid19}. When only incomplete or partially hidden data is available, artificial intelligence (AI) might come to help: designed neural networks can continuously study the obtained pairwise channel information, learn unexpected variations and proactively map unusual fluctuations onto deterministic system state changes, i.e., the boolean state of social distancing violations in open and closed spaces.
In this work, we leverage the narrow directivity of millimeter-wave (mmWave) wireless communications to pioneer a \emph{new passive AI-based sensing system} capable of instantaneously detecting human gatherings violating social distancing policies and sending corresponding alerts to warn people. The proposed system does not rely on dedicated wearable devices, applications or active connections, thus it overcomes the limitations of infrastructure-free solutions.
At the same time, the system leverages reinforcement learning to automatically learn the human gathering detection task based on the observation of the reactions of people upon receiving the alert.
Hence, our passive solution is privacy-guaranteed and does not require human intervention.
To validate our proposed framework, we carried out a synthetic-trace simulation campaign as well as realistic deployment-based experiments with commercial mmWave devices in selected environments.
The proposed system can be seamlessly installed on existing WiFi network deployments in public or hybrid environments, such as rail stations (as depicted in Fig.~\ref{fig:social_distancing_issue}), airport terminals, bus stops, etc. with limited installation and maintenance costs.
\section{The technology race to limiting human interaction}
\label{sec:methods}
Localization information-based applications, such as social contact tracing, have become a key objective to effectively keep unwanted human virus spreading under control. However, this has proven to be a daunting task due to the simultaneous lack of {\bf c1)} high reliability, {\bf c2)} extreme accuracy, {\bf c3)} agile deployment, and {\bf c4)} sustainable costs. In the following, we detail the main efforts in the literature towards these essential challenges, mostly addressed individually.
Computer vision and image processing from surveillance or dedicated cameras offer means to constantly monitor social distancing enforcement in public environments. On this line, \cite{AHMED2020102571} exploits deep learning techniques to efficiently carry out the object recognition process that can identify (and automatically locate) humans in video sequences and, in turn, estimate the Euclidean distance among them. This is approximately performed by counting the pixels within the snapshots where people are detected. Similarly, \cite{SHORFUZZAMAN2021102582} proposes a deep learning-based framework that analyzes the data from mass video surveillance to monitor social distancing and trigger, if needed, instantaneous alerts.
Such solutions have proven very effective in terms of high reliability and extreme accuracy ({\bf c1, c2}), but they require the installation of appropriate cameras, which might not be allowed in many everyday environments (e.g., offices, factories, hospitals, schools) or might even incur untenable costs.
In parallel, the very limited energy consumption of small (and sometimes wearable) devices enables a focus on infrastructure-free approaches capable of building low-cost and flexible ad-hoc networks ({\bf c3, c4}). Attaining such desirable objectives precludes relying on a supporting (and fixed) infrastructure while pushing for widespread technologies, such as bluetooth (BT) with low-energy capabilities (BLE), near-field communication (NFC) or ultra wide-band (UWB) sensors. In particular, \cite{nguyen2020comprehensive} provides specific means to combat the virus spread by minimizing the human approaching time, e.g., with ultrasound-based proximity detection systems or by means of wearable magnetic-field proximity sensors.
Finally, \cite{covid19_mobicom20} blends together the need for accurate group tracking models with the infrastructure-free requirement, which helps to detect contagion-related misbehavior within any environment. On top of dedicated hardware, tracing applications may leverage complex wearable devices, such as smartphones, tablets, or smartwatches, to query public repositories, thus notifying all direct human contacts about a potential (diffusion) threat~\cite{covid19_infocom21}.
Most of the data thus collected shall be carefully (and efficiently) analyzed to rapidly block uncontrolled diffusion ({\bf c1, c4}).
This would require proper and complex mathematical models~\cite{TNSE_covid19} ({\bf c2}) that present scalability issues for very-crowded scenarios.
To cope with the complexity of the scenario, machine learning can be exploited to facilitate good approximations or to reconstruct hidden or unavailable data~\cite{eboost_jiot20}.
All the above-mentioned techniques may help cover all the presented challenges, and thus prevent infections, only if combined together. Generally, this does not apply in realistic contexts, therefore pushing for a sustainable solution that can trade off all described features: our pioneering proposal paves the way towards an agile and flexible framework that can be readily installed on existing wireless infrastructures without requiring people to wear electronic devices while, at the same time, providing accuracy and reliability levels comparable to an infrastructure-free system.
\section{mmWave Directivity Gain to passively detect Environmental Change}
\label{sec:system_design}
While existing solutions might achieve reliability, fast actuation, or high responsiveness individually, they can be fatally impaired by human misbehavior. Therefore, there is a pressing need for an agile and flexible solution able to pursue high accuracy with affordable installation costs.
Hereafter, we detail our solution, which relies on conventional mmWave communication, shedding light on the mathematical models and implementation details.
\subsection{Continuous mmWave 802.11ay channel monitoring}
Social distancing breaches can be ideally spotted by continuously monitoring the surrounding propagation environment to promptly detect suspicious variations. This operation can be performed in a passive way, wherein a pair of devices interacts and keeps track of the channel response. Specifically, after establishing the mmWave link, power measurements can be regularly collected and analyzed to detect unexpected changes. To be compliant with the IEEE 802.11ay standard guidelines\footnote{Details on the IEEE 802.11ay standard can be found at https://standards.ieee.org/standard/802\_11ay-2021.html}, power measurements are regularly performed during the beam training phase, i.e., when two mmWave devices discover each other by selecting the transmitting beam (direction) with the best channel quality response~\cite{wigig_infocom21}.
IEEE 802.11ay (like its predecessor 802.11ad) covers many relevant aspects of establishing and sustaining a communication link between mmWave-enabled devices. To provide the required beamforming capabilities, such devices are equipped with electronically steerable antenna arrays controlled by predefined weight vectors that are included in the so-called \textit{codebook}. Each weight vector in the codebook corresponds to the activation of a specific transmitting/receiving beam pattern. Those beam patterns are designed to be directional, and their choice is subject to the instantaneous channel condition: a beam adaptation process is executed to avoid (nomadic) obstacles and efficiently follow the channel variations so that the communication is never disrupted.
The beam pattern selection is performed by means of a complex \emph{beamforming training} phase, wherein devices sequentially activate all available beam patterns---as per their codebook---and correspondingly collect power measurements that are used to select the best transmitting direction. It is started during the initial connection establishment (i.e., the device pairing phase) and periodically repeated to avoid connection drops~\cite{steinmetzer2017compressive}. In parallel, the wide spatial diversity provided by all available beam patterns allows obtaining a complete snapshot of the propagation environment, as shown in Section~\ref{sec:evaluation}, inspecting the surrounding area and keeping track of potential state changes.
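Conceptually, the outcome of one beam training phase can be summarized by the following Python sketch (our own abstraction, omitting all standard-specific framing); the same power matrix that determines the best beam pair also constitutes the sensing snapshot exploited in the remainder of the paper:
\begin{verbatim}
import numpy as np

def beam_training(power):
    """power[i, j]: measured power with transmit beam i and
    receive beam j active. Returns the index of the best beam
    pair together with the full measurement snapshot."""
    snapshot = np.asarray(power, dtype=float)
    best_pair = np.unravel_index(np.argmax(snapshot), snapshot.shape)
    return best_pair, snapshot
\end{verbatim}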
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Figure2.jpg}
\caption{Effect of the interpersonal distance on the beam training outcome.}
\label{fig:mm_w_benefits}
\end{figure}
\subsection{How to exploit the beam training phase to track interpersonal distance}
\label{sec:exploit_beam_training}
The high-frequency nature of mmWave communication hinders the overall signal propagation, which is strongly affected by the propagation environment itself, including human bodies, walls, and even glass objects~\cite{slezak2018empirical}. On the one hand, this aspect may require an additional effort to properly design a mmWave-based network that guarantees affordable communication quality levels. On the other hand, a passive environment monitoring system may capitalize on this issue to build a reliable and low-cost solution that uses the environmental effect on the short-wavelength signal to continuously monitor selected areas.
In particular, the different displacement of people in the environment can radically change the propagation conditions experienced by a mmWave device pair.
Such changes are reflected in the outcome of the beam training procedure: we exploit the power measurements performed during standard operations to devise a complete sensing map of the propagation environment, which can be smartly used to retrieve information on the environment itself, without requiring the extraction of advanced channel state information (CSI) from the devices or the deployment of additional dedicated hardware. Hence, implementation costs drop dramatically.
Fig.~\ref{fig:mm_w_benefits} provides an example of different beam training phase outcomes as a function of the displacement of the people in the monitored area.
The example shows how the position of potential signal blockers (i.e., people) has an impact on the power measurements related to the different beam activations. Indeed, beams pointing towards potential signal blockers experience larger attenuation than beams pointing towards non-blocked paths, which translates into large power variations in the beam training outcome. We leverage this feature to build a system capable of detecting and reporting safe distance violations in areas covered by mmWave transmission service.
The rationale behind this is related to the beam training phase periodically performed between deployed devices (i.e., without involving user equipment), which we use to understand where blockages occur---based on the selected transmitting/receiving beams and the consequent directions---so as to infer the mutual distance between people accordingly.
Naturally, as we show in Section~\ref{sec:evaluation}, the more directive the selected beam patterns, the higher the granularity of the environment sensing map; consequently, the higher the sensitivity of the system to the position of the blockers, and the higher the accuracy of the social distance detection model.
\section{AI-based mmWave channel monitoring}
\label{sec:ai_based_channel_monitorning}
Analytically modeling the effect of interpersonal distance on the power measurements appears very challenging and biased due to the strong dependence of the mmWave channel on the propagation environment. It is therefore necessary to rely on machine learning techniques capable of automatically approximating the link between measured power and distance violations. This outstanding dependence on the environment turns into huge differences even between areas that are relatively close to each other. Therefore, we analyze a bank of classifiers specifically trained for each alert area.
\subsection{Safe distance violation detection system}
\label{sec:detection_system}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Figure3.jpg}
\caption{Safe distance violation detection and alert system. The system collects power measurements from device beam training phase, maps the obtained measurements onto the device deployment to define alert areas, and performs AI-driven human gathering estimation. Training is performed via reinforcement-learning.}
\label{fig:wireless_sensing}
\end{figure*}
Our safe distance violation detection and alert system is depicted in~Fig.~\ref{fig:wireless_sensing}. Let us consider a public area (e.g., train station, airport, shopping mall, office, etc.) wherein people, in accordance with the virus spread prevention measures, must keep a minimum safe distance, and wherein high-speed connectivity service is provided through IEEE 802.11ay-enabled access points (APs) (see Fig.~\ref{fig:social_distancing_issue}).
APs can provide connectivity to mobile stations (STAs) (e.g., smartphones, laptops, etc.), as well as fixed STAs (e.g., wireless displays, computers, cameras, etc.). Our system exploits the channel variations detected by mmWave devices. Those variations can be caused both by different displacements of blockers and by movements of the involved devices. Therefore, to filter out the device mobility effect, only devices that are fixedly deployed within the monitored area are considered as an information source for the system. Moreover, we assume the system to be deployed in controlled environments such as rail stations, shopping malls, etc., wherein the movements of nomadic obstacles (e.g., trains, buses, etc.) are periodically repeated in a quasi-deterministic fashion. Accordingly, their effect on the power measurements affects many observations in the input of the classifier, which, given the high number of examples, automatically filters out the contribution of such objects in the classification process.
According to the standard, devices in the area periodically activate the beam training procedure and perform power measurements. Note that APs and STAs can detect and collect beam training frames transmitted by nearby devices, even while those devices are executing the connection handshake process.
The power measurements thus obtained are transmitted through a control plane to the safe distance violation system and collected to build a snapshot of the propagation environment state.
Given the limited coverage of mmWave access points, only a portion of the monitored area is considered relevant for the power measurement campaign related to a given pair of devices. Thus, we map all collected measurements onto the corresponding portion of the monitored area---based on the device deployment and device coverage, assumed to be known---namely the \textit{alert area}. This allows the system to send targeted alerts only to the specific areas wherein safe distance violations occur.
Note that we assume an optimal deployment phase being executed beforehand, ensuring good channel conditions among devices to enable passive sensing: this allows strong communication paths, i.e., line-of-sight and/or second-order reflections, among devices covering a given alert area. Moreover, the denser the device deployment, the higher the granularity of the sensing, resulting in smaller and more accurate alert areas.
Such power measurements become the input features of our detection system. For each alert area, the corresponding power measurements are collected and arranged into a feature vector whose elements contain the power measurements corresponding to each activated beam pair, and then normalized. Feature vectors are fed into a bank of feed-forward neural network (FFNN) classifiers, one per alert area, which is in charge of performing safe distance violation detection: given the observation of the power measurements, it provides a probability distribution over the set of classes \emph{safe distance violation} and \emph{safe distance observance}.
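In pseudo-code, the per-area detection pipeline reduces to the following Python sketch; names, shapes, the per-vector normalization, and the scikit-learn-style classifier interface are our own illustrative assumptions:
\begin{verbatim}
import numpy as np

def build_feature_vector(power_per_beam_pair):
    """Arrange the power measurements of one alert area into a
    fixed-order vector and normalize it (one possible choice)."""
    v = np.asarray(power_per_beam_pair, dtype=float).ravel()
    return (v - v.mean()) / (v.std() + 1e-9)

def detect(classifiers, area_id, power_per_beam_pair):
    # One classifier per alert area; returns the probability
    # distribution over {violation, observance}.
    x = build_feature_vector(power_per_beam_pair)
    return classifiers[area_id].predict_proba(x[None, :])[0]
\end{verbatim}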
We select FFNNs due to their relatively low complexity and their ability to generalize non-linear models. However, different types of classifiers can be easily plugged into the proposed framework depending on the desired system efficiency and complexity.
It is worth pointing out that the lack of an active connection with the people in the area prevents our system from sending targeted notifications. Instead, if a violation is detected, a notification is sent to the specific alert area through advising systems, e.g., voice warnings or display boards. However, in situations where a group of people is not required to spread apart, e.g., people belonging to the same household, an unwanted alert might be triggered. Nonetheless, the alert is eventually sent to security officers in charge of evaluating the situation and enforcing safety policies within the target area if needed.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/Figure4.jpg}
\caption{Accuracy of the distance violation detection process in four different simulated environments, considering different numbers of beams available at the devices $N$ and varying the beam directionality from a maximum of $100\%$ (half-power beamwidth equal to ${2\pi}/{N}$) to a minimum of $0\%$ (omnidirectional).}
\label{fig:numerical_results}
\end{figure*}
\subsection{Reinforcement learning-based approach}
\label{sec:reinforcement_learning}
The bank of FFNN classifiers needs to be properly trained to detect safe distance violations.
To this end, we rely on a reinforcement learning approach that exploits people's reaction to the alert to reveal the correctness of notified alert messages and, based on this, automatically learns how to truly detect safe distance violations.
The rationale behind this approach is the following. In the event of a safe distance violation detection, an alert is notified to the corresponding alert area. If the detection is correct, people in the area will react to the alert by rearranging themselves to return to a safe condition (Hawthorne effect). This reaction is reflected in a noticeable change of the propagation environment that can be clearly captured in the power measurement snapshots following the alert.
Conversely, if the safe distance violation is incorrectly detected, the sent alert will not push people to move far away. Consequently, an imperceptible change in the snapshots following the alert will occur.
In the reinforcement learning architecture we propose, the actor network is in charge of classifying safe distance violations, while the critic is in charge of monitoring the channel variations following an alert and providing a reward to the actor accordingly. For this reason, we design the critic as a deep CNN---fed with the series of feature vectors following the alert, concatenated over time---whose convolutional layers are particularly suitable for recognizing the space-time features characterizing the movement of people.
This mechanism allows the system to automatically learn how to correctly detect safe distance violations independently of the environment in which it is deployed.
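To fix ideas, the reward logic can be sketched as follows in Python; the fixed change threshold stands in for the deep CNN critic and is purely illustrative:
\begin{verbatim}
import numpy as np

def critic_reward(pre_alert, post_alert, threshold=0.5):
    """pre_alert, post_alert: arrays of feature vectors collected
    before and after an alert. The actual critic is a deep CNN;
    this fixed-threshold change score merely illustrates the idea."""
    change = np.linalg.norm(post_alert.mean(axis=0)
                            - pre_alert.mean(axis=0))
    return 1.0 if change > threshold else -1.0  # true vs. false alert
\end{verbatim}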
It is worth highlighting that the learning mechanism we propose keeps unaltered the level of privacy guaranteed by our passive solution. Moreover, although it relies on an instinctive reaction such as the Hawthorne effect, which in some cases could be ignored, alert messages surely reach the people in charge of guarding the monitored area, thus forcing lawbreakers to return to a safe situation.
\section{Experimental evaluation}
\label{sec:evaluation}
Hereafter, we provide the performance evaluation of our system, which we carry out through a simulation campaign, where we generate synthetic beam-training traces, as well as through a real implementation of our solution as a software module installed on four commercial IEEE 802.11ad-enabled devices deployed within a real office environment.
\subsection{Synthetic scenario}
\label{sec:synthetic_scenario}
For the beam training synthetic traces generation, we emulate the system by means of an ad-hoc MATLAB\textsuperscript{\textregistered} simulator.
We consider four different experimental environments as follows:
\begin{itemize}
\item \textit{office}, constituted by a $5m \times 5m$ square indoor environment with $4$ mmWave-enabled devices deployed in the area; people can freely move within the area;
\item \textit{hall}, constituted by a $10m \times 10m$ square indoor environment with $4$ mmWave-enabled devices deployed in the area; people can freely move within the area;
\item \textit{underground}, constituted by a $10m \times 20m$ environment, wherein we place a $5m \times 20m$ platform; $4$ mmWave-enabled devices are regularly deployed in the platform area, and people can freely move on the platform;
\item \textit{station}, constituted by a $20m \times 20m$ environment with two $5m \times 20m$ platforms separated from each other by a $10m$ space wherein train rails are located; two mmWave-enabled devices per platform are deployed, and people can freely move on the platforms.
\end{itemize}
Millimeter-wave propagation and beam patterns are modeled as per~\cite{devoti2018mm}. Moreover, to take into account the typical non-idealities of beam patterns generated by commercial devices~\cite{steinmetzer2017compressive}, we vary the half-power width of the beams $W$ according to the equation $W=2\pi\left[1-\left(1-\frac{1}{N}\right)\alpha\right]$, where $N$ is equal to the number of transmitting/receiving beam configurations in the codebook, and $\alpha$ is a scaling factor that allows us to modulate the directionality in our experiments, ranging from a maximum value of $1$---resulting in $\frac{2\pi}{N}$-wide beams, corresponding to $100\%$ directionality---to a minimum value of $0$---resulting in a beam width of $2\pi$, corresponding to $0\%$ directionality, i.e., omnidirectional beam patterns.
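For reference, the beamwidth model can be evaluated directly; the following Python snippet also checks the two endpoints quoted above:
\begin{verbatim}
import math

def half_power_beamwidth(n_beams, alpha):
    """W = 2*pi*(1 - (1 - 1/N)*alpha), in radians."""
    return 2 * math.pi * (1 - (1 - 1 / n_beams) * alpha)

# alpha = 1 gives 2*pi/N (100% directionality);
# alpha = 0 gives 2*pi (omnidirectional).
assert abs(half_power_beamwidth(32, 1.0) - 2 * math.pi / 32) < 1e-12
assert abs(half_power_beamwidth(32, 0.0) - 2 * math.pi) < 1e-12
\end{verbatim}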
We consider devices with $32$ transmitting beams and $1$, $3$, or $6$ receiving beams.
Following the IEEE 802.11ad standard, the beam pattern alignment procedure is performed every beacon interval (i.e., every $10ms$) for each device pair in both transmitting and receiving directions.
Human bodies are emulated as fully absorbing cylinders with a radius of $0.25m$.
We randomly drop up to $6$ people in the simulation playground.
The minimum safe distance to be kept is set to $1.5m$, as usually recommended by European health care institutions to reduce the spread of the virus. We consider a total of $200$ different people arrangements in which social distancing regulations are violated, and an additional $200$ arrangements where the minimum social distance is fulfilled.
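The labeling rule underlying these arrangements is a plain pairwise-distance check; the following Python sketch (our own illustration, not the MATLAB\textsuperscript{\textregistered} simulator itself) shows how one arrangement can be generated and labeled:
\begin{verbatim}
import numpy as np

def random_arrangement(n_people, width, height, rng):
    # Uniformly drop people in the playground (positions in meters).
    return rng.uniform(low=[0.0, 0.0], high=[width, height],
                       size=(n_people, 2))

def violates_distancing(positions, min_dist=1.5):
    # True if any pair of people stands closer than the safe distance.
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(positions), k=1)
    return bool((d[iu] < min_dist).any())
\end{verbatim}
For instance, with \texttt{rng = np.random.default\_rng(0)}, repeated draws can be filtered into the two classes until $200$ arrangements of each kind are collected.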
For each people displacement, we collect the power measurements of $200$ beam training procedures, for a total of $80000$ channel measurement snapshots, which are arranged to form input feature vectors and normalized via \textit{independent standardization}. The overall dataset of snapshots is split according to a $60/20/20$ ratio for the purposes of training, validation, and testing, respectively.
The classification process is performed by a fully connected feed-forward neural network (FFNN) with a single hidden layer of neurons with a \textit{ReLU} activation function.
We train our neural network with a batch size of $1000$, $30$ epochs, a learning rate of $0.001$, and the \textit{Adam} optimizer.
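Since no implementation framework is prescribed here, the following PyTorch sketch (an illustrative choice of ours; any framework supporting this configuration would do) reproduces the stated setup:
\begin{verbatim}
import torch
import torch.nn as nn

def make_classifier(n_features, n_hidden):
    # Single hidden ReLU layer, two output classes
    # (violation / observance).
    return nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU(),
                         nn.Linear(n_hidden, 2))

def train(model, loader, epochs=30, lr=1e-3):
    # loader is assumed to yield mini-batches of 1000 standardized
    # feature vectors with their binary labels.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
\end{verbatim}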
Fig.~\ref{fig:numerical_results} shows the performance of our system in terms of safe distance violation detection accuracy achieved in the different scenarios we consider, with different beam pattern configurations, both in terms of number of transmitting/receiving beam patterns available as well as beam directionality. We consider different neural network complexity by varying the number of neurons forming the hidden layer.
From the results, it can be seen how the directional capabilities of the devices have a direct impact on the achievable performance. Indeed, the beam directionality directly affects the directional sensing effectiveness of the system: the more directional the beams, the higher the granularity of the sensing map available to the system and, thus, the better the achievable performance (the higher the system accuracy).
Additionally, the number of available beam patterns also affects the overall system performance. Indeed, increasing the number of beams increases the number of points of view that the system can exploit to efficiently run the violation detection process, with a consequent performance increase.
Regarding the AI algorithm, results show that different neural network complexities (i.e., numbers of neurons in the hidden layer) only slightly change the system performance when a high number of highly directional beams is available, while they have a greater impact when the devices are equipped with fewer beam patterns.
This is a direct consequence of the quality of the input features in relation, as previously described, to the devices' directional capabilities: the better the quality of the input features, the easier the detection task, and vice versa. This is reflected in the neural network complexity required to achieve high detection performance.
Finally, results show how the different considered scenarios impact the system performance: the wider the monitored area, the lower the performance. Recalling that we keep the number of deployed devices in the simulated alert areas constant, this behavior is mainly due to the density of devices involved in the measurement process, which naturally affects the system performance. Nonetheless, our system shows outstanding performance, varying from ${\sim}75\%$ to ${\sim}99\%$ depending on the selected scenario.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/Figure5.jpg}
\caption{Experimental setup in a real office environment with four IEEE 802.11ad devices mounted on the ceiling and a human phantom. The proposed device deployment, shown in the top right-hand corner, can efficiently and accurately cover the overall office environment.}
\label{fig:testbed}
\end{figure}
\subsection{Real office environment}
\label{sec:real-scenario}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth ]{figures/Figure6.jpg}
\caption{Accuracy of the distance violation detection process, obtained in a real office environment with two different configurations of the device deployment (left-hand side), and different areas of the office with the devices mounted on the ceiling (right-hand side).}
\label{fig:real_scenario_results}
\end{figure}
To validate our system in a real environment, we implement our solution as a software module running on commercial off-the-shelf devices, thereby considering realistic beamforming patterns and propagation conditions.
In particular, we deploy $4$ Talon AD7200 devices in a real office environment. Devices are provided with $36$ transmitting beams with different pointing directions and one quasi-omnidirectional receiving beam. The default device firmware does not provide easy access to the beam training and received power values; therefore, we use the LEDE-ad7200 custom firmware~\cite{steinmetzer2017compressive} on such devices, which allows us to retrieve the power measurements performed by the devices during the beam training phase and use them to reveal infractions of the minimum safe distance. To comply with current social distancing regulations, we emulate the presence of two people in the office with a real person and a human phantom. Fig.~\ref{fig:testbed} shows our testbed implementation, wherein devices are mounted on the ceiling according to the office map. Additionally, we consider a second deployment setup where the two devices on the right-hand side of the office map are placed on top of the corresponding desks. We refer the reader to~\cite{pasid} for additional information on the testbed.
To validate our system, for each deployment setup of the devices we consider a total of $40$ different dispositions of the person and the human phantom in the office: $20$ with a safe distance violation and the remaining $20$ with sufficient interpersonal distance. For each disposition, we collect the beam training measurements performed by the devices for $3$ minutes, thus obtaining a total of $2$ hours of measurements, with about $80000$ beam training procedures.
The measurements thus collected are divided according to a $60/20/20$ ratio for the purposes of training, validation, and testing. We report in Fig.~\ref{fig:real_scenario_results} the performance obtained with an FFNN classifier with a single hidden layer of $32$ neurons. We follow the same training process as described in Section~\ref{sec:synthetic_scenario}.
The obtained results show that our passive detection system, while easily installable on an existing WiFi network with a cost-effective, reliable and agile deployment, is able to spot safe distance violations with an accuracy higher than $99\%$ when devices are mounted on the ceiling. Such performance is kept all over the office area, with a minimal variation depending on the relative proximity of devices and monitored persons. The overall accuracy is slightly lower when devices are placed according to the desk-top setup, wherein office furniture (e.g., screens) causes higher attenuation between deployed devices and makes the channel variations caused by the presence of people more difficult to sense. Nevertheless, the overall accuracy in our settings is higher than $98\%$. However, in line with the simulations, the accuracy might slightly decrease in other realistic deployments.
\section{Conclusion}
\label{sec:conclusion}
The Covid-19 virus spreading explosion turned everyone's lives upside down. \emph{Social distancing} proved to be an effective measure to control the virus spread, but compliance unfortunately proved difficult due to deeply-rooted social habits.
In this paper, we have proposed an AI-based mmWave sensing solution that, by passively monitoring changes in the wireless environment, can infer localization information, thereby detecting social distancing breaches and triggering correction actions in a privacy-preserving manner, without requiring an active connection with the user equipment.
Our proposal jointly addresses the cost-efficiency, agility, reliability, and accuracy challenges in a novel passive detection system that can support a variety of applications.
The solution has been evaluated through a simulation campaign and a real deployment with commercial mmWave devices. Proof-of-concept results show a promising social-distance detection accuracy above $99\%$. The system has been designed such that it can be seamlessly added as a software module to off-the-shelf commercial mmWave devices.
\bibliographystyle{IEEEtran}
Quantization of the Hall conductance of insulating two-dimensional materials at low temperatures is one of the most remarkable phenomena in condensed matter physics. Starting with the seminal work of Laughlin \cite{Laughlin}, many theoretical explanations of this phenomenon have been proposed which vary in their assumptions and degree of rigor. For systems of non-interacting fermions with either an energy gap or a mobility gap there are several proofs that zero-temperature Hall conductance $\sigma_{Hall}$ times $2\pi\hbar/e^2$ is an integer in the infinite-volume limit \cite{TKNN,bellissard1985,bellissard1986,bellissard1994,avron1994}. In particular, Laughlin's flux-insertion argument was made rigorous in this setting by Avron, Seiler and Simon \cite{avron1994}. The case of interacting systems is more involved, since quantization of Hall conductance generally holds only in the absence of topological order. Much progress can be made using the relation between $\sigma_{Hall}$ and the curvature of the Berry connection of the system compactified on a large torus. This line of work originated with Avron and Seiler \cite{avronseiler} and culminated in the proof by Hastings and Michalakis that for a gapped system on a large torus with a non-degenerate ground state the difference between $2\pi\hbar\sigma_{Hall}/e^2$ and the nearest integer is almost-exponentially small in the size of the system $L$, as defined in \cite{hastingsmichalakis} (see also \cite{bachmannetal}).
Despite all the progress in understanding quantization of the Hall conductance in the interacting case, there is still room for improvement. From a modern perspective, $\sigma_{Hall}$ is a topological invariant of a quantum phase of gapped 2d systems with a $U(1)$ symmetry. Since the distinction between phases becomes sharp only in infinite volume, it would be desirable to have a formalism which can deal with systems on a 2d Euclidean space rather than a torus. Within such a formalism it should be possible to see that in a gapped system $\sigma_{Hall}$ is locally computable, i.e. can be approximated well by an expectation value of a local observable. This would clarify the role of $\sigma_{Hall}$ as an obstruction to having a gapped edge. It should also be possible to prove that $\sigma_{Hall}$ is the same for all systems in the same gapped phase.
Another recent development inspired by quantum information theory is the viewpoint that different quantum phases of matter are distinguished by different patterns of entanglement in the ground state \cite{QImeets}. If this is the case, then it should be possible to extract the value of $\sigma_{Hall}$ from the infinite-volume ground state, without specifying a concrete Hamiltonian. Indeed, in the case of non-interacting fermions, it is well-known how to extract the zero-temperature Hall conductance using only the projector to the ground state, see e.g. \cite{avron1994,kitaev2006anyons}. From this viewpoint, a system without topological order is a system with only short-range entanglement. One convenient definition of this notion (recalled in Section \ref{sec:InvPhases}) was proposed by A. Kitaev \cite{kitaevInv} under the name {\it an invertible gapped phase}. Gapped systems of free fermions are invertible \cite{hastings:free}, but there are many interacting systems of this kind as well. It should be possible to prove quantization of $2\pi\hbar\sigma_{Hall}/e^2$ for an arbitrary system in an invertible gapped phase.
Finally, it has been argued using the statistics of flux insertions that for bosonic short-range entangled gapped 2d systems with a $U(1)$ symmetry $2\pi \hbar\sigma_{Hall}/e^2$ is always an {\it even} integer \cite{LevinSenthil}. It would be desirable to prove this, and studying flux-insertion for systems on a 2d Euclidean space is a natural approach.
Another phenomenon closely related to the quantization of the Hall conductance is quantized charge transport in gapped non-degenerate 1d systems with a $U(1)$ symmetry. It was argued by Thouless \cite{Thouless} that one can attach a numerical invariant to a loop in the space of such systems. This invariant is integral in the infinite-volume limit and is equal to the net charge pumped through a section of the system as it adiabatically cycles through the loop. Some of the questions mentioned above (such as local computability and independence of the particular Hamiltonian) can also be asked about the Thouless pump invariant. Recently Bachmann et al. \cite{bachmann2019many} proved approximate quantization of the Thouless pump invariant for gapped 1d systems on a large circle using quasi-adiabatic evolution and sub-exponential filter functions \cite{hastings2010quasi}. This proof makes explicit that the Thouless pump invariant is locally computable. One can hope to use similar methods to achieve a better understanding of the Hall conductance of gapped 2d systems.
In this paper we use the methods of \cite{bachmann2019many,hastings2010quasi,bachmann2020many,bachmann2020rational} to study both the Thouless pump invariant and the Hall conductance for gapped lattice systems in infinite volume. In the case of 1d systems, we merely adapt the approach of \cite{bachmann2019many} to the infinite-volume setting. In the case of 2d systems our main results can be summarized as follows (using the units where $\hbar=e^2=1$):
\begin{theorem}
Zero-temperature Hall conductance is locally computable for any 2d lattice system (either bosonic or fermionic) with an on-site $U(1)$ symmetry, exponentially decaying interactions, and a unique gapped ground state on ${\mathbb R}^2$. It is the same for all systems in the same gapped phase. If the system is in an invertible gapped phase, $2\pi \sigma_{Hall}\in{\mathbb Z}$. If the system is also bosonic, then $2\pi \sigma_{Hall}\in 2{\mathbb Z}$.
\end{theorem}
In fact, we give two proofs of the quantization of Hall conductance. One of them is a version of Laughlin's flux-insertion argument adapted to the case of systems in an invertible phase. The other one uses the relation between Hall conductance and the statistics of flux insertions \cite{LevinSenthil}. We make this relation precise by defining local operators which transport flux and showing that their large-scale properties are controlled by $\sigma_{Hall}$. We also show that Hall conductance can be determined from the ground-state of a gapped Hamiltonian without specifying the Hamiltonian and that it is invariant under a certain equivalence relation on states. This equivalence relation is induced by automorphisms of the algebra of observables which are "fuzzy" analogues of finite-depth local unitary quantum circuits.
The paper is organized as follows. In Section 2 we give definitions and describe some constructions pertaining to gapped lattice systems with $U(1)$ symmetry in any dimension. These definitions and constructions are used throughout the remainder of the paper. In particular, we define the notions of a gapped phase and an invertible gapped phase. In Section 3 we adapt the results of \cite{bachmann2019many,bachmann2020rational} on charge pumping to infinite-volume systems. In Section 4 we study Hall conductance. We show that zero-temperature Hall conductance is locally computable, define vortex states and transport of vortices, show that the Hall conductance controls the statistics of vortices, and use this to argue that for systems in an invertible phase $2\pi\sigma_{Hall}$ is an integer, while for bosonic systems in an invertible phase $2\pi\sigma_{Hall}$ is an even integer. We also show that $\sigma_{Hall}$ is determined by the state and is invariant under a certain class of automorphisms with good locality properties (locally generated automorphisms). In Appendix A we collect some technical results used in the paper. In Appendix B we present a version of Lauglin's flux-insertion argument which shows that $2\pi\sigma_{Hall}$ is quantized for interacting systems in an invertible phase. It uses some of the results of Sections 3 and 4. In Appendix C we prove triviality of superselection sectors of certain states produced from a factorized state by locally generated automorphisms.\\
\section{Preliminaries}
\subsection{Basic definitions}
In this paper we study lattice many-body systems (bosonic or fermionic) in $d$ dimensions. We follow the operator-algebraic framework described in the monograph \cite{bratteli2012operator}. A lattice in $d$ dimensions is an infinite subset $\Lambda$ of the Euclidean space ${\mathbb R}^d$ which is uniformly discrete (that is, there is an $r>0$ such that for any two distinct $j,k\in\Lambda$ $\dist(j,k)\geq r$) and uniformly filling (that is, there is an $r'>0$ such that for any $x\in{\mathbb R}^d$ $\dist(x,\Lambda)\leq r'$). The algebra of observables on a site $j\in\Lambda$ is a matrix algebra ${\mathscr A}_j=\End({\mathcal H}_j)$, where ${\mathcal H}_j$ is a finite-dimensional complex vector space whose dimension $\dim {\mathcal H}_j = d_j^2$ grows at most polynomially with the distance from the origin. In the fermionic case the vector spaces ${\mathcal H}_j$ are ${\mathbb Z}_2$-graded by fermion parity, so the algebras ${\mathscr A}_j$ are also ${\mathbb Z}_2$-graded. In the bosonic case, for any finite subset $\Gamma\subset\Lambda$ we let ${\mathscr A}_\Gamma=\otimes_{j\in\Gamma} {\mathscr A}_j$. Then for any inclusion of finite subsets $\Gamma\subset \Gamma'$ we have an obvious injective homomorphism ${\mathscr A}_\Gamma\rightarrow{\mathscr A}_{\Gamma'}$, and the algebras ${\mathscr A}_\Gamma$ form a directed system over the directed set of finite subsets of $\Lambda$. A normed $*$-algebra ${{\mathscr A}_\ell}$ of local observables is defined as the direct limit of this directed system:
\begin{equation}\label{dirlim}
{{\mathscr A}_\ell}= \cup_{\Gamma} {\mathscr A}_\Gamma .
\end{equation}
Then the $C^*$-algebra ${\mathscr A}$ is defined as the norm completion of ${{\mathscr A}_\ell}$. Elements of ${\mathscr A}$ will be referred to as quasi-local observables, or simply observables. In the fermionic case, we do the same, except we define ${\mathscr A}_\Gamma$ using the graded tensor product. Also, since all observables appearing in this paper will be bosonic, in the fermionic case by a (quasi-local) observable we will always mean an even element of ${\mathscr A}$.
A quasi-local observable ${\mathcal A}$ is called local with a compact localization set $\Gamma\subset\Lambda$ if ${\mathcal A}\in{\mathscr A}_\Gamma.$ We may also say that such an ${\mathcal A}$ is localized on $\Gamma$. Equivalently, ${\mathcal A}$ commutes with any observable ${\mathcal B}\in {\mathscr A}_j$ for any site $j \in \bar{\Gamma} = \Lambda \backslash \Gamma$. We call an observable ${\mathcal A}\in{\mathscr A}$ almost local if it can be approximated well by a local observable. To be more precise, let us denote by $B_r(j)$ a ball of radius $r>0$ with the center at $j\in\Lambda$. That is, $B_r(j)=\{k\in\Lambda, \dist(k,j)<r\}.$ Also, let us pick a monotonically decreasing positive (MDP) function $a(r)$ on ${\mathbb R}_+=[0,+\infty)$ which has superpolynomial decay, i.e. it is of order ${O(r^{-\infty})}$. An observable ${\mathcal A}$ will be called $a$-localized on a site $j$ if for any $r>0$ there is a local observable ${\mathcal A}^{(r)}\in{\mathscr A}_{B_r(j)}$ such that $||{\mathcal A}-{\mathcal A}^{(r)}||\leq||{\mathcal A}|| a(r)$. An observable will be called almost local if it is $a$-localized on $j$ for some MDP function $a(r)={O(r^{-\infty})}$ and some $j\in\Lambda$. Almost local observables approximately commute when they are localized far from each other. More precisely, if ${\mathcal A}$ is $a$-localized on $j$ and ${\mathcal B}$ is $b$-localized on $k$, then $||[{\mathcal A},{\mathcal B}]||\leq ||{\mathcal A}||\cdot ||{\mathcal B}|| c(\dist(j,k)),$ where $c(r)=2(a(r/2)+b(r/2)+3 a(r/2) b(r/2))={O(r^{-\infty})}.$ Almost local observables can also be characterized in terms of their commutators with local observables, see Lemma \ref{lma:approximation}.
We will denote the $*$-algebra of almost local observables ${{\mathscr A}_{a\ell}}$. By definition, ${{\mathscr A}_\ell}$ is a dense sub-algebra of ${\mathscr A}$. Since ${{\mathscr A}_{a\ell}}\supset {{\mathscr A}_\ell}$, ${{\mathscr A}_{a\ell}}$ is also dense in ${\mathscr A}$.
A Hamiltonian $H$ for ${\mathscr A}$ is a formal sum
\begin{equation}
H = \sum_{j \in \Lambda} H_{j},
\end{equation}
where the interactions $H_j\in{\mathscr A}$ are assumed to be self-adjoint, uniformly bounded and exponentially decaying, i.e. there are $J>0$ and $R>0$ such that $\|H_{j}\| \leq J$ and for any local ${\mathcal A}$ localized on any site $k$ we have $\|[H_{j},{\mathcal A}]\|\leq J \|{\mathcal A}\| e^{-\text{dist}(j,k)/R}.$ It is well-known \cite{bratteli2012operator2} that such an $H$ gives rise to a strongly-continuous one-parameter family of automorphisms of ${\mathscr A}$ (the time evolution). We denote this family $\tau_t$, $t\in{\mathbb R}$.
More generally, we will consider formal linear combination of the form $F=\sum_{j\in\Lambda} F_{j}$, where $F_j\in{\mathscr A}$ satisfy the following two conditions. First, the observables $F_j$ are uniformly bounded, i.e. there exists $C>0$ such that $\|F_j\|\leq C$ for all $j$. Second, there is an MDP function $f(r)$ on ${\mathbb R}_+$ such that $f(r)={O(r^{-\infty})}$ and for all $j\in\Lambda$ the observable $F_j$ is $f$-localized on $j$. In particular, $F_j$ is an almost local observable. Such a formal linear combination $F$ will be called a 0-chain. If we want to specify the function $f$, we will say that $F$ is an $f$-local 0-chain. The Hamiltonian $H$ is an example of a 0-chain.
We will be also using 1-chains (currents) $J_{jk}$ and 2-chains (2-currents) $M_{jkl}$. A current is a skew-symmetric function $J:\Lambda\times\Lambda\rightarrow {\mathscr A}$ satisfying three conditions. First, $\|J_{jk}\|$ are uniformly bounded: $\|J_{jk}\| \leq C$ for some $C>0$. Second, there exists an MDP function $f(r)={O(r^{-\infty})}$ such that for any $j,k\in\Lambda$ $\|J_{jk}\| \leq C \, f(\dist(j,k)).$ Third, for any $j,k\in\Lambda$ and any $r>0$ there is a local $J^{(r)}_{jk}\in{\mathscr A}_{B_r(j)\cup B_r(k)}$
such that $||J_{jk}-J^{(r)}_{jk}||\leq C f(r).$ If we want to specify the function $f$, we will say that $J$ is an $f$-local current. An $n$-chain (or $n$-current) for general $n$ is defined similarly: it is a skew-symmetric function $M_{j_0\ldots j_n}$ on the Cartesian product of $n+1$ copies of $\Lambda$ which is valued in ${\mathscr A}$, is uniformly bounded, is rapidly decaying when the arguments are far apart, and such that for any $r>0$ $M_{j_0\ldots j_n}$ can be approximated well by a local observable on $B_r(j_0)\cup \ldots \cup B_r(j_n)$. Such objects were introduced by A. Kitaev \cite{kitaev2006anyons} and have found several interesting applications \cite{kapustin2019thermal,kapustin2020higherA,kapustin2020higherB}. In this paper we only use $n$-chains with $n\leq 2$.
We define a linear map $\partial$ from the space of currents to the space of 0-chains:
\begin{equation}
(\partial J)_j = \sum_{k \in \Lambda} J_{jk},
\end{equation}
and more generally from the space of $n$-chains to the space of $(n-1)$-chains:
\begin{equation}
(\partial M)_{j_0\ldots j_{n-1}} = \sum_{j_n \in \Lambda} M_{j_0\ldots j_n}.
\end{equation}
The definition of $n$-chains was chosen so that these maps are well-defined. Also, we have $\partial^2=0$.
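Indeed, for any $n$-chain $M$ with $n\geq 2$ the double sum
\begin{equation}
(\partial^2 M)_{j_0\ldots j_{n-2}} = \sum_{j_{n-1} \in \Lambda}\sum_{j_n \in \Lambda} M_{j_0\ldots j_{n-1} j_n}
\end{equation}
is absolutely convergent thanks to the decay properties of chains, and it vanishes because the summand is skew-symmetric under the exchange of $j_{n-1}$ and $j_n$.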
To any 0-chain $F$ and any compact set $\Gamma$ we attach an almost local observable $F_\Gamma=\sum_{j\in\Gamma} F_j$. Similarly, we denote
\begin{equation}\label{defJAB}
J_{AB} = \sum_{j \in A} \sum_{k \in B} J_{jk},
\end{equation}
\begin{equation}
M_{ABC} = \sum_{j \in A} \sum_{k \in B} \sum_{l \in C} M_{jkl} .
\end{equation}
Here $A,B,C$ are some subsets of $\Lambda$ for which the above sums are convergent. It is important that these expressions can be well-defined observables even if the subsets $A,B,C$ are not compact. For example, if $d=1$ and $A$ and $B$ are complementary half-lines in ${\mathbb R}$, the sum defining $J_{AB}$ is convergent and defines an almost local observable which is localized near the boundary of $A$ and $B$.
For a general 0-chain $F$, the formal expression $F=\sum_{j\in\Lambda} F_j$ is not a well-defined element of ${\mathscr A}$. But it gives rise to a well-defined derivation $\ad_F=[F,\cdot]$ of the algebra ${{\mathscr A}_{a\ell}}$ defined by
\begin{equation}
\ad_F({\mathcal A})=\sum_{j\in\Lambda}[F_j,{\mathcal A}].
\end{equation}
This derivation is unbounded, therefore cannot be extended to the whole ${\mathscr A}$. Its main use is to define a class of automorphisms of the algebra ${\mathscr A}$ with good locality properties. Let $F(s)=\sum_{j\in\Lambda} F_j(s)$ be a self-adjoint 0-chain which is a differentiable function of a parameter $s\in [0,1]$.
Then one can define a family of automorphisms $\alpha_F(s)$ of ${\mathscr A}$ by solving the equations
\begin{equation}
-i \frac{d}{ds} \alpha_{F}(s)({\mathcal A}) = \alpha_F(s)\left([F(s),{\mathcal A}]\right), \,\,\,\, \alpha_{F}(0)=1,
\end{equation}
for all ${\mathcal A}\in{{\mathscr A}_{a\ell}}$ and then extending to the whole ${\mathscr A}$ by continuity.
These equations can be shown to have a unique solution \cite{bratteli2012operator2}. Using the Lieb-Robinson bound, one can show that for any self-adjoint 0-chain $F$ and any $s$ the automorphism $\alpha_F(s)$ maps ${{\mathscr A}_{a\ell}}$ to ${{\mathscr A}_{a\ell}}$. More precisely, if $F$ is a self-adjoint $f$-local 0-chain, and ${\mathcal A}\in{\mathscr A}$ is $a$-localized on a site $j$ for some MDP function $a(r)={O(r^{-\infty})}$, then $\alpha_F(s)({\mathcal A})$ is $h$-localized on $j$. Here $h(r)={O(r^{-\infty})}$ is an MDP function which depends on ${\mathcal A}$ only through $a$. For a proof, see Appendix \ref{app:lemmas}. Note also that $\alpha_{F}(ts)=\alpha_{t F}(s)$ and $\alpha_{-F}(s)=\alpha_F(s)^{-1}$. If $F$ is independent of $s$, then the automorphisms $\alpha_F(s)$ satisfy $\alpha_F(t)\alpha_F(s)=\alpha_F(t+s)$.
We will call an automorphism $\alpha$ of ${\mathscr A}$ locally generated if there exists an $s$-dependent self-adjoint 0-chain $F$ such that $\alpha=\alpha_F(1).$ In what follows, by a 0-chain we will always mean a self-adjoint 0-chain.
Above we have defined localization sets for local
observables. One can also define approximate localization sets for 0-chains.
A 0-chain $F$ is said to be approximately localized on $\Gamma\subset\Lambda$ if there is a $C>0$ such that for any $j\in\Lambda$, any MDP function $a$ such that $a(r)={O(r^{-\infty})}$, and any ${\mathcal A}$ which is $a$-localized on $j$, one has $\|[F,{\mathcal A}]\|\leq C \|{\mathcal A}\| h({\rm dist}(j,\Gamma))$, where the MDP function $h(r)={O(r^{-\infty})}$ depends on ${\mathcal A}$ only through $a$. Given an arbitrary 0-chain $F=\sum_j F_j$, one can construct a truncated 0-chain $F_\Gamma$ approximately localized on any $\Gamma$ by letting $F_\Gamma=\sum_{j\in\Gamma} F_j$. If $\Gamma$ is finite, $F_\Gamma$ is an almost local observable, otherwise it is a 0-chain approximately localized on $\Gamma$. Similarly, given a current $J_{jk}$ and any two subsets $A,B,$ we can interpret $J_{AB}$ as a 0-chain with components $J_{AB,j}=\sum_{k\in B} J_{jk}$ for $j\in A$ and $J_{AB,j}=0$ for $j\in\bar A$ even if the sum (\ref{defJAB}) is not convergent. We will also say that an automorphism $\alpha$ is approximately localized on $\Gamma\subset\Lambda$ if for any $j\in\Lambda$ and any ${\mathcal A}$ which is $a$-localized on $j$ one has $\|\alpha({\mathcal A})-{\mathcal A}\|\leq \|{\mathcal A}\| h({\rm dist}(j,\Gamma))$ for some MDP function $h(r)={O(r^{-\infty})}$ which depends on ${\mathcal A}$ only through $a$. Lemma \ref{lma:FGA} shows that the action of the automorphism $\alpha_F(s)$ on an observable localized near $j\in\Lambda$ depends only on the behavior of $F$ in the neighborhood of $j$.
We will say that a state $\psi$ on ${\mathscr A}$ is superpolynomially clustering if there are a constant $C>0$ and an MDP function $h(r)={O(r^{-\infty})}$ such that for any two finite subsets $\Gamma$ and $\Gamma'$ and any two observables ${\mathcal A}\in{\mathscr A}_{\Gamma}$ and ${\mathcal A}'\in{\mathscr A}_{\Gamma'}$ one has
\begin{equation}
\left| \langle {\mathcal A}{\mathcal A}'\rangle_\psi-\langle {\mathcal A}\rangle_\psi \langle {\mathcal A}'\rangle_\psi \right| \leq C |\Gamma|\cdot |\Gamma'|\cdot ||{\mathcal A}||\cdot ||{\mathcal A}'||\, h(\dist(\Gamma,\Gamma')).
\end{equation}
Here $|\Gamma|$ is the number of sites in $\Gamma$.
If $h(r)$ can be chosen to have the form $h(r)=C e^{-r/\xi}$ for some $C>0$ and $\xi>0$, we will say that the state $\psi$ is exponentially clustering.
If $\psi$ is superpolynomially clustering, ${\mathcal A}$ is $a$-localized on $j\in\Lambda$, and ${\mathcal B}$ is $b$-localized on $k\in\Lambda$, then $\left| \langle {\mathcal A}{\mathcal B}\rangle_\psi-\langle {\mathcal A}\rangle_\psi \langle {\mathcal B}\rangle_\psi \right| \leq \|{\mathcal A}\|\cdot \|{\mathcal B}\| f(\dist(j,k))$, where the MDP function $f(r)={O(r^{-\infty})}$ depends on ${\mathcal A},{\mathcal B}$ only through $a$ and $b$.
For a state $\psi$ and an automorphism $\alpha$ one can define a new state $\alpha(\psi)$ by $\langle{\mathcal A}\rangle_{\alpha(\psi)}=\langle \alpha({\mathcal A}) \rangle_{\psi}.$
It is easy to see that if the state $\psi$ is superpolynomially clustering and the automorphism $\alpha$ is locally-generated, then $\alpha(\psi)$ is also superpolynomially clustering.
\subsection{Quasi-adiabatic evolution}\label{QAevolution}
In this paper we will be studying lattice systems with an energy gap. That is, we assume that we are given a pure state $\psi$ on ${\mathscr A}$ which is a ground-state of $\tau_t$, i.e. for any ${\mathcal A} \in {{\mathscr A}_\ell}$ we have $\langle {\mathcal A}^* \ad_H({\mathcal A}) \rangle_{\psi} \geq 0$. In the fermionic case, we also assume that $\langle {\mathcal A}\rangle_\psi=0$ for any odd ${\mathcal A}$. We further assume that it is a unique gapped ground-state, in the sense that in the GNS Hilbert space corresponding to $\psi$ the GNS vacuum vector is a unique vector invariant under the unitary evolution corresponding to $\tau_t$, and that $0$ is an isolated eigenvalue of the Hamiltonian in the GNS representation (the generator of the unitary evolution). A gapped lattice system is a triple $({\mathscr A},H,\psi)$, where ${\mathscr A}$ is the algebra of observables, $H$ is a Hamiltonian, and $\psi$ is a state, with the properties described above. These properties imply that the state $\psi$ has the exponential clustering property with some characteristic length scale $\xi$ \cite{HastingsKoma,NachtergaeleSims}.
In the presence of the energy gap one can define certain useful linear maps from ${\mathscr A}$ to ${\mathscr A}$.
For any $\Delta>0$ we choose a continuous function $W_\Delta(t)$ which is real, odd, bounded, superpolynomially decaying for large $|t|$, and such that $\hat{W}_\Delta(\omega) = - \frac{i}{\omega}$ for $|\omega| > \Delta$. Here $\hat W_\Delta$ is the Fourier transform of $W_\Delta$, $\hat W_\Delta(\omega)=\int e^{i\omega t} W_\Delta(t) dt$. It was shown in \cite{hastings2010quasi} that such a function exists. If $H=\sum_j H_j$ is a gapped Hamiltonian with respect to a ground state $\psi$, we pick a $\Delta$ smaller than the gap and for any observable ${\mathcal A}$ define
\begin{equation}
{\mathscr I}_\Delta({\mathcal A}):=\int^\infty_{-\infty} W_\Delta(t) \tau_t({\mathcal A}) dt.
\end{equation}
The map ${\mathscr I}_\Delta:{\mathscr A}\rightarrow{\mathscr A}$ is a bounded linear map which commutes with conjugation. It also maps ${{\mathscr A}_{a\ell}}$ to ${{\mathscr A}_{a\ell}}$. Indeed, suppose $a(r)={O(r^{-\infty})}$ and ${\mathcal A}$ is $a$-localized on a site $j$. By Lemma \ref{lma:approximation}, it is sufficient to prove that for any $k\in\Lambda$ and any ${\mathcal B}\in{\mathscr A}_k$ the norm of the commutator $[{\mathscr I}_\Delta({\mathcal A}),{\mathcal B}]$ is bounded from above by a quantity of order ${O(r^{-\infty})}$, where $r=\dist(k,j).$ Since one can approximate ${\mathcal A}$ with an $a(r/3)$ accuracy by a local observable ${\mathcal A}^{(r/3)}$ localized on $B_{r/3}(j)$, it is sufficient to prove that $[{\mathscr I}_\Delta({\mathcal A}^{(r/3)}),{\mathcal B}]$ is of order ${O(r^{-\infty})}$. The Lieb-Robinson bound for exponentially decaying interactions \cite{HastingsKoma} implies that one has an estimate $\|[\tau_t({\mathcal A}^{(r/3)}),{\mathcal B}]\|<C r^d \|{\mathcal A}^{(r/3)}\|\cdot \|{\mathcal B}\| \exp((v_0 |t|-2r/3)/R_0)$, where $C, v_0,$ and $R_0$ are positive numbers. Since $W_\Delta(t)={O(|t|^{-\infty})}$, this implies the desired result. Also, since $W_\Delta$ is odd, $\langle {\mathscr I}_\Delta({\mathcal A})\rangle_\psi=0$ for any ${\mathcal A}\in{\mathscr A}$.
Using the functional calculus for unbounded operators one can easily see that for any ${\mathcal A},{\mathcal B}\in{\mathscr A}$ one has identities
\begin{equation}\label{Kubo_static}
\langle {\mathscr I}_\Delta({\mathcal A}) {\mathcal B}\rangle_\psi=i\left\langle 0| {\mathcal A} {G_0} {\mathcal B}\right|0\rangle,\quad \langle {\mathcal B} {\mathscr I}_\Delta({\mathcal A}) \rangle_\psi=-i\left\langle0| {\mathcal B} {G_0} {\mathcal A}\right|0\rangle,
\end{equation}
where $|0\rangle$ is a cyclic vector for the GNS representation, observables are identified with their images in this representation, and ${G_0}=(1-P)\frac{1}{H}(1-P)$ with $P=|0\rangle \langle 0|$. Therefore for any two observables ${\mathcal A}$, ${\mathcal B}$ one has
\bqa\label{Kubo_basic_property}
\langle {\mathscr I}_\Delta(i[H,{\mathcal A}]){\mathcal B}\rangle_\psi=\langle {\mathcal A} {\mathcal B}\rangle_\psi-\langle {\mathcal A}\rangle_\psi\langle {\mathcal B}\rangle_\psi,
\\
\quad \langle {\mathcal B} {\mathscr I}_\Delta(i[H,{\mathcal A}])\rangle_\psi =\langle {\mathcal B} {\mathcal A}\rangle_\psi -\langle {\mathcal B}\rangle_\psi\langle {\mathcal A}\rangle_\psi.
\eqa
\begin{remark}\label{Wdelta ambig}
While the l.h.s. of eqs. (\ref{Kubo_static}) involves a map ${\mathscr I}_\Delta$ which depends both on $\Delta$ and the choice of the function $W_\Delta$, the r.h.s. does not depend on either. This is consistent because for any self-adjoint ${\mathcal A}\in{\mathscr A}$ changing $W_\Delta$ changes ${\mathscr I}_\Delta({\mathcal A})$ only by an observable which annihilates the ground state. One can easily check this property using functional calculus.
\end{remark}
We say that a self-adjoint observable ${\mathcal A}$ does not excite the state $\psi$ if for any observable ${\mathcal B}$ such that $\langle {\mathcal B} \rangle_{\psi} = 0$ we have $\langle {\mathcal A} {\mathcal B} \rangle_{\psi} = 0$. For brevity we denote this condition by $\langle {\mathcal A} ... \rangle_{\psi} = 0$. Note that if $\langle {\mathcal A} ... \rangle_{\psi} = 0$, then $\langle e^{i {\mathcal A}} ... \rangle_{\psi}=0$ and $\langle e^{i{\mathcal A}}\rangle_\psi=1$.
If $\psi$ is the ground state of a gapped Hamiltonian, then for any self-adjoint almost local observable ${\mathcal A}$ we can define a self-adjoint almost local observable $\tilde {\mathcal A}$ which does not excite $\psi$ by letting
\begin{equation}
\tilde {\mathcal A}:={\mathcal A}-{\mathscr I}_\Delta(i[H,{\mathcal A}]).
\end{equation}
This follows easily from (\ref{Kubo_basic_property}).
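Explicitly, for any ${\mathcal B}$ with $\langle {\mathcal B}\rangle_\psi=0$ the first of eqs. (\ref{Kubo_basic_property}) gives
\begin{equation}
\langle \tilde{\mathcal A} {\mathcal B}\rangle_\psi=\langle {\mathcal A}{\mathcal B}\rangle_\psi-\left(\langle {\mathcal A}{\mathcal B}\rangle_\psi-\langle {\mathcal A}\rangle_\psi \langle {\mathcal B}\rangle_\psi\right)=\langle {\mathcal A}\rangle_\psi \langle {\mathcal B}\rangle_\psi=0.
\end{equation}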
Although $\tilde{\mathcal A}$ depends on the choice of the function $W_\Delta$, by Remark \ref{Wdelta ambig} varying $W_\Delta$ affects $\tilde{\mathcal A}$ only by a self-adjoint observable which annihilates the ground state.
Consider a differentiable family of Hamiltonians $H(s)=\sum_j H_j(s)$, $s\in [0,1],$ with a unique gapped ground state $\psi_s$ for each $s$. Let $\Delta$ be a positive number less than the lower bound of the gaps of the Hamiltonians $H(s)$. We define a one-form on $[0,1]$ with values in ${\mathscr A}$ by
\begin{equation}
G_j (s) ds := {\mathscr I}_\Delta(d H_j(s)) .
\end{equation}
The formal sum $G(s) = \sum_{j \in \Lambda} G_j(s)$ is a 0-chain.
The automorphism $\alpha_G(s)$ generated by $G(s) = \sum_j G_j(s)$ implements a quasi-adiabatic evolution introduced in \cite{hastings2004lieb,hastings2005quasiadiabatic}. It was shown in \cite{bachmann2012automorphic} that if $H(0)$ is a Hamiltonian with a gapped ground state which is a limit of finite-volume Hamiltonians with gapped ground states, this automorphism gives an automorphic equivalence of ground states of $H(s)$ for all $s$. The expectation value of an almost local observable ${\mathcal A}$ in the ground state of $H(s)$ therefore satisfies
\begin{equation}
\frac{d}{ds} \langle {\mathcal A} \rangle(s) = \langle i[G(s), {\mathcal A}] \rangle .
\end{equation}
Using eq. (\ref{Kubo_static}), one can easily see that this is equivalent to the Kubo formula for static linear response (at zero temperature).
We can also quasi-adiabatically evolve the systems only on some region $S$ by an automorphism $\alpha_{G_{S}}$ generated by $G_{S}(s) = \sum_{j \in S} G_{j}(s)$. Since the ground state $\psi$ is exponentially clustering, the state $\alpha_{G_S}(s)(\psi)$ is superpolynomially clustering.
\subsection{Gapped systems with a $U(1)$ symmetry}\label{Gapped system with U(1) symmetry}
We say that a pair $({\mathscr A},H)$ has an on-site $U(1)$ symmetry if we are given a self-adjoint $0$-chain $Q=\sum_j Q_j$ where $Q_{j}\in {\mathscr A}_j$ satisfies $\exp(2\pi i Q_j)=1$ and $[Q,H_{j}] =0$ for all $j\in\Lambda$. The 0-chain $Q=\sum_j Q_j$ will be called the charge. The corresponding family of automorphisms $\alpha_Q(s)$ satisfies $\alpha_Q(2\pi)=1$. A gapped lattice system with a $U(1)$ symmetry is a quadruple $({\mathscr A},H,\psi,Q)$, where $({\mathscr A},H,\psi)$ is a gapped lattice system and the pair $({\mathscr A},H)$ has an on-site $U(1)$ symmetry with charge $Q$. One does not need to require $\psi$ to be invariant with respect to the automorphisms $\alpha_Q(s)$: this follows from the Goldstone theorem (see below).
If $({\mathscr A},H)$ has an on-site $U(1)$ symmetry with charge $Q$, one can define a current
\begin{equation}
J_{jk} = i [H_k, Q_j] - i [H_{j},Q_k]
\end{equation}
which satisfies the conservation law $\left.\frac{d}{dt}\right|_0\tau_t\left(Q_j\right) = (\partial J)_{j} = \sum_{k} J_{jk}$.
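Indeed, since $\sum_k i[H_j,Q_k]=i[H_j,Q]=0$, the commutator with the Hamiltonian can be antisymmetrized:
\begin{equation}
\left.\frac{d}{dt}\right|_0\tau_t\left(Q_j\right) = i[H,Q_j]=\sum_k i[H_k,Q_j]=\sum_{k}\left( i[H_k,Q_j]-i[H_j,Q_k]\right)=\sum_k J_{jk}.
\end{equation}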
Suppose an on-site $U(1)$ symmetry preserves an $s$-dependent 0-chain $F(s)$, in the sense that $[Q,F_j(s)]=0$ for all $j$ and all $s$. Then we can associate to $F(s)$ a current
\begin{equation}
T^{F}_{jk} = \int_{0}^{1} \alpha_{F}(s) (i [F_k(s), Q_j] - i [F_j(s), Q_k]) ds.
\end{equation}
It satisfies
\begin{equation}\label{1st property of T}
\alpha_F(1)(Q_j)-Q_j=\left(\partial T^F\right)_j.
\end{equation}
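This identity can be checked by differentiation: with the convention $\frac{d}{ds}\alpha_F(s)({\mathcal O})=\alpha_F(s)\left(i[F(s),{\mathcal O}]\right)$ (the one consistent with the definition of $T^F$) and using $\sum_k i[F_j(s),Q_k]=i[F_j(s),Q]=0$, one finds
\begin{equation}
\alpha_F(1)(Q_j)-Q_j=\int_0^1 \alpha_F(s)\left(\sum_k \left( i[F_k(s),Q_j]-i[F_j(s),Q_k]\right)\right) ds=\left(\partial T^{F}\right)_j.
\end{equation}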
Therefore for any finite region $\Gamma$ we have
\begin{equation}\label{2nd property of T}
\alpha_F(1)(Q_{\Gamma}) - Q_{\Gamma} = T^{F}_{\Gamma \bar{\Gamma}}.
\end{equation}
Thus the current $T^{F}$ measures the charge transported by the automorphism $\alpha_F(1)$. Note that the r.h.s. of eq. (\ref{2nd property of T}) can be well-defined even if $\Gamma$ is infinite. Then it can serve as a definition of the charge transported from $\bar\Gamma$ to $\Gamma$.
We say that a state $\psi$ has no local spontaneous symmetry breaking if there is a $U(1)$-invariant self-adjoint current $K_{j k}$, such that
\begin{equation}
\langle (Q - (\partial K))_j ... \rangle_{\psi} = 0.
\end{equation}
This condition implies the absence of spontaneous symmetry breaking, i.e. $\langle [Q,{\mathcal A}]\rangle_\psi=0$ for any almost local observable ${\mathcal A}$. Equivalently, $\langle \alpha_Q(s)({\mathcal A})\rangle_\psi=\langle {\mathcal A}\rangle_\psi$ for any observable ${\mathcal A}$.
We use the notation
\begin{equation}
\tilde{Q}_j := Q_j - (\partial K)_j
\end{equation}
for a modified local charge that does not excite the ground state. For a finite region $\Gamma$ we introduce a unitary observable
\begin{equation}
V_{\Gamma}(\phi) := e^{i \phi \tilde{Q}_{\Gamma}} = e^{i \phi (Q_{\Gamma} - K_{\Gamma \bar{\Gamma}})}.
\end{equation}
Since $\tilde Q_\Gamma$ does not excite the ground state, $V_\Gamma(\phi)$ satisfies $\langle V_\Gamma(\phi) ... \rangle_\psi=0$.
For a ground state of a gapped Hamiltonian one can always find such a $K_{jk}$ by letting
\begin{equation}
K_{jk} = {\mathscr I}_\Delta(J_{j k}).
\end{equation}
Therefore such a state does not break $U(1)$ symmetry spontaneously. This is the usual Goldstone theorem.
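Indeed, the conservation law gives
\begin{equation}
(\partial K)_j=\sum_k {\mathscr I}_\Delta(J_{jk})={\mathscr I}_\Delta\left((\partial J)_j\right)={\mathscr I}_\Delta(i[H,Q_j]),
\end{equation}
so $Q_j-(\partial K)_j$ is the observable $\tilde{\mathcal A}$ with ${\mathcal A}=Q_j$ and therefore does not excite the ground state.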
Moreover, if we have a state obtained from the ground state by some locally generated $U(1)$-invariant automorphism $\alpha=\alpha_{F}(1)$, then a suitable $K_{jk}$ also exists:
\begin{equation}\label{eq:Knew}
K_{jk} = \alpha^{-1}_F({\mathscr I}_\Delta(J_{jk}) + T^{F}_{jk}).
\end{equation}
There are two kinds of ambiguities for $K_{jk}$. First, one can add to $K_{jk}$ any $(\partial N)_{jk}$ for any 2-current $N_{jkl}$. Second, one can add any 1-current $K'_{jk}$ that does not excite the ground state. Varying the function $W_{\Delta}(t)$ and $\Delta$ in ${\mathscr I}_{\Delta}(J_{jk})$ results in the second kind of ambiguity. All physical quantities defined below do not depend on these ambiguities.
\begin{remark}\label{rmk:Poincare}
If for a ground state of a gapped Hamiltonian it is true that $\langle (\partial M)_{j_1 j_2... j_{n-1}} \, ... \rangle = 0$ for some $(n-1)$-chain $M$ for $j_a \in S$ for some region $S$, then there exists an $n$-chain $N_{j_1 j_2 ... j_{n}}$ such that
\begin{equation}\label{KN}
\langle (M_{{j_1 j_2 ... j_{n}}} - (\partial N)_{j_1 j_2 ... j_{n}}) ... \rangle = 0,\quad \text{if all } j_a \in S.
\end{equation}
Indeed, one can take
\begin{equation}
N_{j_1 ... j_{n+1}} = {\mathscr I}_\Delta(i[\tilde{H}_{j_1},M_{j_2 ... j_{n+1}}]) + \text{cyclic permutations of }\{j_1,...,j_{n+1}\}
\end{equation}
where
\begin{equation}
\tilde{H}_j = H_j - {\mathscr I}_\Delta(i[H,H_j]).
\end{equation}
In particular if $\langle (\partial K)_{jk} ... \rangle = 0$, then there is a 2-current $N_{jkl}$, such that $\langle (K_{jk}-(\partial N)_{jk}) ... \rangle=0$. This implies that the only ambiguities in the current $K$ are those noted above.
\end{remark}
\subsection{Invertible phases}\label{sec:InvPhases}
A gapped lattice system $({\mathscr A},H,\psi)$ is said to be trivial if for all $j\in\Lambda$ the term $H_j$ is a local operator localized on $j$, and $\psi$ is factorized, i.e. $\langle {\mathcal A}{\mathcal B}\rangle_\psi=\langle{\mathcal A}\rangle_\psi\langle{\mathcal B}\rangle_\psi$ whenever ${\mathcal A}$ and ${\mathcal B}$ are local observables localized on two different sites $j,k\in\Lambda$.
Two gapped lattice systems with the same algebra of observables are said to be in the same phase if there is a differentiable path of gapped lattice systems connecting them. A bosonic gapped lattice system $({\mathscr A},H,\psi)$ is said to be in an invertible phase, if there is another bosonic gapped lattice system $({\mathscr A}',H',\psi')$ (``the inverse system'') such that the combined system is in the trivial phase. That is, there is a path of gapped lattice systems $({\mathscr A}\otimes{\mathscr A}', H(s),\Psi(s))$ such that $H(s)$ is differentiable, $H(0)=H\otimes 1+1\otimes H'$, $\Psi(0)=\psi\otimes\psi'$, and the system $({\mathscr A}\otimes{\mathscr A}',H(1),\Psi(1))$ is trivial. Note that by the results of \cite{bachmann2012automorphic} the state $\Psi(0)$ of the combined system and the factorized state $\Psi(1)$ are automorphically equivalent. In the fermionic case the definition is the same, except one uses the graded tensor product.
If the system $({\mathscr A},H,\psi)$ has a $U(1)$ symmetry, one may define a more restricted notion of an invertible phase by requiring the inverse system also to have a $U(1)$ symmetry and the path of systems interpolating between the composite system and the trivial system to preserve the diagonal $U(1)$ symmetry. We do not use this more restricted notion of an invertible phase in this paper.
\section{Charge pumping}\label{sec:chargepumping}
\subsection{General considerations} \label{ssec:chargepumping1}
Suppose we have an automorphism $\alpha=\alpha_{F}(1)$ locally generated by some $U(1)$-invariant self-adjoint 0-chain $F(s) = \sum_j F_j(s)$. There is a current $T^{F}_{jk}$ that measures the charge transported by this automorphism. In the following we omit the subscript $F$ and use the notation $T_{jk}$ for this current. Our goal in this section is to show that for certain subsets $A,B\subset{\mathbb R}^d$ the quantity $\langle T_{AB}\rangle$ is approximately quantized if the automorphism $\alpha$ preserves the ground state.
By a region we will mean an embedded $d$-dimensional submanifold of ${\mathbb R}^d$ whose boundary has a finite number of connected components and does not intersect $\Lambda$. By a slight abuse of notation, we will identify a region and its intersection with the lattice $\Lambda$. Further, we will consider sequences of regions and observables labeled by some parameter $L$ taking values in positive integers and study their behavior in the limit $L\rightarrow\infty$. A sequence of regions $\Gamma_L$ will be called large if the distance between any two connected components of $\partial\Gamma_L$ is $O(L)$. For simplicity, we will shorten ``a large sequence of regions'' to ``a large region'', keeping in mind that all regions depend on a parameter $L$. Similarly, a sequence of almost local observables labeled by $L$ will be identified with an $L$-dependent almost local observable.
All $L$-dependent almost local observables considered here will have the form ${\mathcal A}_L=F_{\Gamma_L}$ or ${\mathcal B}_L=J_{A_L B_L}$, or some function of these. Here $F$ is an $L$-independent 0-chain, $J$ is an $L$-independent current, $\Gamma_L$ is a large compact region, and $A_L, B_L$ are large compact regions all of whose boundary components are either disjoint or coinciding. The norm of such observables can be bounded from above by functions of $L$ which are $O(L^d)$ or $O(L^{d-1})$ for large $L$. Thus $\|[{\mathcal A}_L,{\mathcal C}]\|={O(L^{-\infty})}$ for any $L$-independent ${\mathcal C}\in{\mathscr A}_j$ with $\dist(j,\Gamma_L)=O(L)$ and
$\|[{\mathcal B}_L,{\mathcal C}]\|={O(L^{-\infty})}$ for any $L$-independent ${\mathcal C}\in{\mathscr A}_j$ with $\dist(j,A_L\cap B_L)=O(L)$. Also, for any two $L$-dependent regions $\Gamma_L$ and $\Gamma'_L$ such that the distance between them is $O(L)$ and any two $L$-independent 0-chains $F,F'$ the corresponding observables $F_{\Gamma_L}$ and $F'_{\Gamma'_L}$ commute with ${O(L^{-\infty})}$ accuracy. Similarly, if the distance between $\Gamma_L$ and $A_L\cap B_L$ is $O(L)$, the observables ${\mathcal A}_L$ and ${\mathcal B}_L$ commute with ${O(L^{-\infty})}$ accuracy. We will refer to these and similar properties of $L$-dependent observables ${\mathcal A}_L$ and ${\mathcal B}_L$ as ``asymptotic localization'', where the word ``asymptotic'' refers to the fact that we study the behavior of the commutators as $L\rightarrow\infty$.
For a boundary component ${\mathcal S}$ of a large region $\Gamma$ we define a thickening $\CT{\mathcal S}$ of ${\mathcal S}$ as a large region containing all points within a distance of order $L$ from ${\mathcal S}$ and such that all points of $\CT{\mathcal S}$ are at a distance of order $L$ from other boundary components.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.5]
\filldraw[color=blue!50, fill=blue!5, dashed, very thick](0,0) circle (4.3);
\filldraw[color=gray, fill=blue!5, very thick](0,0) circle (4);
\filldraw[color=blue!50, fill=white, dashed, very thick](0,0) circle (3.7);
\filldraw[color=blue!50, fill=blue!5, dashed, very thick](1.2,1.2) circle (1.3);
\filldraw[color=gray, fill=blue!5, very thick](1.2,1.2) circle (1);
\filldraw[color=blue!50, fill=white, dashed, very thick](1.2,1.2) circle (0.7);
\filldraw[color=blue!50, fill=blue!5, dashed, very thick](-1.2,-1.2) circle (1.3);
\filldraw[color=gray, fill=blue!5, very thick](-1.2,-1.2) circle (1);
\filldraw[color=blue!50, fill=white, dashed, very thick](-1.2,-1.2) circle (0.7);
\node at (-1.2,1.2) {$\Gamma$};
\end{tikzpicture}
\caption{A large compact region $\Gamma$ with the boundary components shown as solid gray lines, and a thickening $\CT \partial \Gamma$ with the boundaries shown as dashed blue lines.
}
\label{fig:thickening}
\end{figure}
Let ${\mathcal S}$ be an oriented codimension-one compact surface which is a connected component of the boundary of a large region $\Gamma$. We choose a thickening $\CT{\mathcal S}$ of ${\mathcal S}$ and denote $T_{\CT{\mathcal S}} = T_{(\Gamma\cap\CT{\mathcal S}) (\bar{\Gamma}\cap\CT{\mathcal S})}$. Note that $T_{\CT{\mathcal S}}=T_{\Gamma\bar\Gamma}+O(L^{-\infty})$.
We claim that the observable $(T_{\CT{\mathcal S}} + Q_{\Gamma \cap \mathcal{T} {\mathcal S}})$ has an integer spectrum, up to corrections of order $O(L^{-\infty})$. More precisely, $\exp\left(2\pi i (T_{\CT{\mathcal S}} + Q_{\Gamma \cap \mathcal{T} {\mathcal S}})\right)=1+O(L^{-\infty}).$ Indeed, using Lemma \ref{lma:FGA} we can write
\begin{multline}
T_{\CT {\mathcal S}} = (\alpha_{F}(Q_{\Gamma})-Q_{\Gamma}) + {O(L^{-\infty})} = (\alpha_{F_{\CT \partial \Gamma}}(Q_{\Gamma})-Q_{\Gamma}) + {O(L^{-\infty})} = \\ =
(\alpha_{F_{\CT \partial \Gamma}}(Q_{\tilde{\Gamma}})-Q_{\tilde{\Gamma}}) + {O(L^{-\infty})},
\end{multline}
where $\tilde{\Gamma}$ is a compact region such that $\partial \tilde{\Gamma}$ contains $\partial \Gamma$, and there is a thickening such that $\CT (\partial \tilde{\Gamma} \backslash \partial \Gamma)$ does not intersect $\CT \partial \Gamma$. Therefore
\begin{equation}
(T_{\CT{\mathcal S}} + Q_{\Gamma \cap \CT{\mathcal S}}) + Q_{\tilde{\Gamma} \cap \overline{\CT{\mathcal S}}} = \alpha_{F_{\CT \partial \Gamma}}(Q_{\tilde{\Gamma}}) + {O(L^{-\infty})}.
\end{equation}
$Q_{\tilde{\Gamma} \cap \overline{\mathcal{T} {\mathcal S}}}$ commutes with $(T_{\CT{\mathcal S}} + Q_{\Gamma \cap \mathcal{T} {\mathcal S}})$ up to terms of order $O(L^{-\infty})$, and both $\alpha(Q_{\tilde{\Gamma}})$ and $Q_{\tilde{\Gamma} \cap \overline{\CT{\mathcal S}}}$ have integer spectra. This implies the desired result.
Let $\Gamma$ be a large compact region whose boundary $\partial\Gamma$ has a decomposition
$\partial \Gamma = \bigcup_{a} {\mathcal S}_a$. We can choose thickenings $\CT{\mathcal S}_a$ of all ${\mathcal S}_a$ such that all $\CT{\mathcal S}_a$ are far from each other (separated by distances of order $L$). Then we have
\begin{equation}\label{QGammaidentity}
\alpha (Q_{\Gamma}) - Q_{\Gamma} = \sum_{a} T_{\CT{\mathcal S}_a} + {O(L^{-\infty})}.
\end{equation}
Let us show that
\begin{equation}\label{VUidentity}
\alpha( V_{\Gamma}(\phi)) V_{\Gamma}(-\phi) = \prod_{a} Z_{\mathcal{T} {\mathcal S}_a}(\phi) + {O(L^{-\infty})},
\end{equation}
where $Z_{\mathcal{T} {\mathcal S}_a}(\phi)$ is a unitary almost local observable asymptotically localized on $\CT {\mathcal S}_a$. First we define $K_{\CT{\mathcal S}_a}=K_{(\Gamma\cap\CT{\mathcal S}_a)(\overline{\Gamma}\cap\CT{\mathcal S}_a)}$. By our assumption on the thickenings, $K_{\Gamma\overline\Gamma}=\sum_a K_{\CT{\mathcal S}_a}+O(L^{-\infty})$. Then we get:
\begin{multline}\label{Zdef}
\alpha\left(V_{\Gamma}(\phi)\right) V_{\Gamma}(-\phi) = e^{i \phi (\alpha(Q_{\Gamma}) - \sum_{a}\alpha( K_{\CT{\mathcal S}_a}))} e^{-i \phi (Q_{\Gamma} - \sum_a K_{\CT{\mathcal S}_a})} +O(L^{-\infty}) =
\\ = e^{i \phi (Q_{\Gamma} + \sum_{a} T_{\CT{\mathcal S}_a} - \sum_{a}\alpha( K_{\CT{\mathcal S}_a}))} e^{-i \phi (Q_{\Gamma} - \sum_a K_{\CT{\mathcal S}_a})}+O(L^{-\infty}) = \\ = e^{i \phi (\sum_{a} Q_{\Gamma \cap \mathcal{T}{\mathcal S}_a} + \sum_{a} T_{\CT{\mathcal S}_a} - \sum_a \alpha( K_{\CT{\mathcal S}_a}))} e^{-i \phi (\sum_{a} Q_{\Gamma \cap \mathcal{T}{\mathcal S}_a} -\sum_a K_{\CT{\mathcal S}_a})}+O(L^{-\infty})=
\\ = \prod_{a} e^{i \phi (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} + T_{\CT{\mathcal S}_a} - \alpha( K_{\CT{\mathcal S}_a}))} e^{-i \phi (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})}+O(L^{-\infty}).
\end{multline}
Each factor in the above product is an almost local unitary observable asymptotically localized on some $\CT{\mathcal S}_a$.
Next we use the following lemma which is a minor variation of Lemma 4.2 from \cite{bachmann2019many}.
\begin{lemma}
\label{lma1v2}
Let ${\mathcal U}$ be a unitary observable that depends on a parameter $L$, and let $\psi$ be a pure state. Then $|\langle {\mathcal U} \rangle_{\psi}| = 1 - {O(L^{-\infty})}$ is equivalent to $\langle {\mathcal O} {\mathcal U} \rangle_{\psi} - \langle {\mathcal O} \rangle_{\psi} \langle {\mathcal U} \rangle_{\psi} = {O(L^{-\infty})}$ as well as to $\langle {\mathcal U} {\mathcal O} \rangle_{\psi} - \langle {\mathcal U} \rangle_{\psi} \langle {\mathcal O} \rangle_{\psi} = {O(L^{-\infty})}$ for all ${\mathcal O} \in {\mathscr A}$ with
$||{\mathcal O} ||=1$.
\end{lemma}
\begin{proof}
Let $U$ be an operator representing ${\mathcal U}$ in the GNS representation for the state $\psi$, and let $P$ be the corresponding vacuum vector projector $P=|0\rangle \langle 0 |$.
Then $|\langle {\mathcal U} \rangle_{\psi}| = 1 - {O(L^{-\infty})}$ is equivalent to $||(1-P)U|0\rangle|| = {O(L^{-\infty})}$.
The latter is true if and only if $\langle {\mathcal O} {\mathcal U} \rangle_{\psi} - \langle {\mathcal O} \rangle_{\psi} \langle {\mathcal U} \rangle_{\psi} = {O(L^{-\infty})}$ for all ${\mathcal O}\in{\mathscr A}$ with $||{\mathcal O}||=1$.
Taking complex conjugate, we also obtain equivalence with $\langle {\mathcal U} {\mathcal O} \rangle_{\psi} - \langle {\mathcal U} \rangle_{\psi} \langle {\mathcal O} \rangle_{\psi} = {O(L^{-\infty})}$ for all ${\mathcal O}\in{\mathscr A}$ with $||{\mathcal O}||=1$.
\end{proof}
Since by assumption $\alpha$ preserves the ground state and ${\tilde Q}_\Gamma$ does not excite it, we have
$\langle\alpha(V_{\Gamma}(\phi))V_\Gamma(-\phi) \rangle =\langle \alpha(V_{\Gamma}(\phi)) \rangle=1$. Then (\ref{Zdef}) and the exponential clustering property for the ground state $\psi$ imply $|\langle Z_{\CT {\mathcal S}_a} \rangle| = 1 - {O(L^{-\infty})}$. Therefore by the above lemma $\langle Z_{\CT {\mathcal S}_a} {\mathcal O}\rangle=\langle Z_{\CT {\mathcal S}_a}\rangle \langle {\mathcal O}\rangle+{O(L^{-\infty})}$ uniformly in ${\mathcal O}\in{\mathscr A}$.
Using this result we obtain a differential equation for $\langle Z_{\CT {\mathcal S}_a}\rangle$:
\begin{multline}
\left( -i \frac{d}{d \phi} \right) \langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) \rangle = \\ =
\langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) e^{i \phi (Q_{\Gamma \cap \mathcal{T} {\mathcal S}_a}-K_{\CT{\mathcal S}_a})} \left(T_{\CT{\mathcal S}_a} + K_{\CT{\mathcal S}_a} - \alpha(K_{\CT{\mathcal S}_a}) \right) e^{- i \phi (Q_{\Gamma \cap \mathcal{T} {\mathcal S}_a}-K_{\CT{\mathcal S}_a})} \rangle = \\ = \langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) \rangle \langle e^{i \phi (Q_{\Gamma \cap \mathcal{T} {\mathcal S}_a}-K_{\CT{\mathcal S}_a})} \left(T_{\CT{\mathcal S}_a} + K_{\CT{\mathcal S}_a} - \alpha(K_{\CT{\mathcal S}_a}) \right) e^{- i \phi (Q_{\Gamma \cap \mathcal{T} {\mathcal S}_a}-K_{\CT{\mathcal S}_a})} \rangle +O(L^{-\infty})= \\ = \langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) \rangle \langle e^{i \phi (Q_{\Gamma}-K_{\Gamma \bar{\Gamma}})} \left(T_{\CT{\mathcal S}_a} + K_{\CT{\mathcal S}_a} - \alpha(K_{\CT{\mathcal S}_a}) \right) e^{- i \phi (Q_{\Gamma}-K_{\Gamma \bar{\Gamma}})} \rangle +O(L^{-\infty}) = \\ = \langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) \rangle \langle T_{\CT{\mathcal S}_a} + K_{\CT{\mathcal S}_a} - \alpha(K_{\CT{\mathcal S}_a}) \rangle +O(L^{-\infty}) = \langle Z_{\mathcal{T} {\mathcal S}_a}(\phi) \rangle \langle T_{\CT{\mathcal S}_a} \rangle +O(L^{-\infty}).
\end{multline}
In the fourth line we have used the fact that $(Q_{\Gamma \cap \CT {\mathcal S}_a} - K_{\CT {\mathcal S}_a})$ for different $a$ and $Q_{\Gamma \cap \overline{\CT \partial \Gamma}}$ commute with each other up to ${O(L^{-\infty})}$ terms. Thus
\begin{equation}
\langle Z_{\mathcal{T} {\mathcal S}_a} (\phi) \rangle = e^{i \phi \langle T_{\CT{\mathcal S}_a} \rangle}+O(L^{-\infty}).
\end{equation}
Next, consider self-adjoint observables
${\mathcal O}_b=T_{\CT{\mathcal S}_b} + Q_{\Gamma \cap \mathcal{T} {\mathcal S}_b}.$ Each of them is asymptotically localized on $\CT{\mathcal S}_b$ and satisfies $\exp(2\pi i {\mathcal O}_b)=1+{O(L^{-\infty})}$. Therefore for any boundary component ${\mathcal S}_a$ one has
\begin{equation}\label{trivialid}
e^{2\pi i ({\mathcal O}_a-\alpha(K_{\CT{\mathcal S}_a}))}=e^{2\pi i \left( \sum_b{\mathcal O}_b-\alpha(K_{\CT{\mathcal S}_a})\right)}+{O(L^{-\infty})} .
\end{equation}
On the other hand, eq. (\ref{QGammaidentity}) can be written as
$\sum_b{\mathcal O}_b+Q_{\Gamma \cap \overline{\CT \partial \Gamma}}=\alpha(Q_\Gamma)+{O(L^{-\infty})}.$ Taking into account that $\exp\left( 2\pi i Q_{\Gamma \cap \overline{\CT \partial \Gamma}}\right)=1$ we get
\begin{equation}
e^{2\pi i ({\mathcal O}_a-\alpha(K_{\CT{\mathcal S}_a}))}=e^{2\pi i (\alpha(Q_\Gamma)-\alpha(K_{\CT{\mathcal S}_a}))}+{O(L^{-\infty})}.
\end{equation}
Therefore
\begin{multline}
\langle Z_{\mathcal{T} {\mathcal S}_a}(2 \pi) \rangle = \langle e^{2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} + T_{\CT{\mathcal S}_a} - \alpha(K_{\CT{\mathcal S}_a}))} e^{-2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle = \\ =
\langle e^{2 \pi i (\alpha(Q_{\Gamma}) - \alpha(K_{\CT{\mathcal S}_a}) )} e^{-2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle +O(L^{-\infty})= \\ =
\langle \alpha \left( e^{2 \pi i (Q_{\Gamma} - K_{\CT{\mathcal S}_a})} \right) e^{-2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle+O(L^{-\infty}) = \\ =
\langle \alpha \left( e^{2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \right) e^{-2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle+O(L^{-\infty}).
\end{multline}
Now we note that
\begin{equation}
V_{\Gamma}(-2 \pi) = \prod_{a} e^{-2 \pi i (Q_{\Gamma \cap\CT{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} + {O(L^{-\infty})} .
\end{equation}
Since $\langle V_\Gamma(-2\pi)\rangle=1$, the exponential clustering property implies
\begin{equation}
|\langle e^{2 \pi i (Q_{\Gamma \cap\CT{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle| = 1 - {O(L^{-\infty})}.
\end{equation}
The above lemma then implies that for any boundary component ${\mathcal S}_a$ we have $\langle Z_{\mathcal{T} {\mathcal S}_a}(2 \pi) \rangle = 1-O(L^{-\infty})$. Therefore $\langle T_{\CT{\mathcal S}_a} \rangle \in {\mathbb Z}$ up to corrections of order $O(L^{-\infty}).$
\begin{remark}\label{rmk:pumpadditive}
For $\alpha = \alpha_1 \circ \alpha_2$, with $\alpha_{1,2}$ generated by $F^{(1,2)}$ and satisfying the properties above, we have
\begin{multline}
\langle T^F_{\CT {\mathcal S}_a} \rangle = \langle T^{F^{(1)}}_{\CT {\mathcal S}_a} + \alpha_1(T^{F^{(2)}}_{\CT {\mathcal S}_a}) \rangle + {O(L^{-\infty})} = \\ =
\langle T^{F^{(1)}}_{\CT {\mathcal S}_a} \rangle + \langle T^{F^{(2)}}_{\CT {\mathcal S}_a} \rangle + {O(L^{-\infty})}.
\end{multline}
\end{remark}
\begin{remark}\label{rmk:alphapreserves}
The fact that $\alpha$ preserves the ground state was used only to show that $\langle \alpha(V_{\Gamma}(\phi)) \rangle = \langle V_{\Gamma}(\phi) \rangle$ and
\begin{equation}
\langle \alpha \left( e^{2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \right) \rangle = \langle e^{2 \pi i (Q_{\Gamma \cap\mathcal{T}{\mathcal S}_a} - K_{\CT{\mathcal S}_a})} \rangle.
\end{equation}
If these identities are true only up to ${O(L^{-\infty})}$ terms, the approximate integrality of $\langle T_{\CT {\mathcal S}_a} \rangle$ still holds. This fact will be useful in Appendix \ref{app:AvronSeilerSimon}.
\end{remark}
\begin{remark}
This result might seem very general and might be expected to supply many numerical invariants describing charge transport. In fact, for most choices of $\Gamma$ and ${\mathcal S}$ one finds that $\langle T_{\CT{\mathcal S}_a} \rangle ={O(L^{-\infty})}$, thanks to the identity (\ref{2nd property of T}) and the very simple topology of ${\mathbb R}^d$. One exception is the case of one-dimensional systems discussed in the next subsection. Another situation where $\langle T_{\CT{\mathcal S}_a} \rangle$ has a non-zero limit as $L\rightarrow\infty$ is described in Appendix \ref{app:AvronSeilerSimon}.
\end{remark}
\subsection{Charge pumping in one dimension}
Let $({\mathscr A}, H(s), \psi_s, Q)$ be a differentiable family of one-dimensional gapped lattice systems with a $U(1)$ symmetry for $s \in [0,1]$, such that $H(0)=H(1)$. If the system $({\mathscr A}, H(0), \psi_0, Q)$ satisfies the conditions of Ref. \cite{bachmann2012automorphic}, then there is an automorphic equivalence between the states $\psi_s$. In particular, this is the case for systems in the trivial phase. It is expected that all one-dimensional gapped lattice systems with a $U(1)$ symmetry are in the trivial phase \cite{chen2011classification}. Let $\alpha$ be the corresponding quasi-adiabatic automorphism generated by $G(s) = \sum_{j} G_{j}(s)$. By construction it preserves the ground state of $H(0)=H(1)$. Let $\Gamma$ be an interval of length $L$ with the boundary points $(\partial \Gamma)_-$ and $(\partial \Gamma)_+$. Since $\alpha$ preserves the ground state, we must have $\langle T_{\Gamma\overline{\Gamma}}\rangle=0.$ Choosing thickenings $\CT (\partial \Gamma)_-$ and $\CT (\partial \Gamma)_+$ which are separated by a distance of order $L$, we see that $\langle T_{\CT (\partial \Gamma)_-}\rangle=-\langle T_{\CT (\partial \Gamma)_+}\rangle +O(L^{-\infty})$. Taking the limit $L\rightarrow\infty$, we conclude that the quantity $\langle T_{A\overline{A}}\rangle$, where $A=[p,+\infty)$, is independent of the point $p$ and thus is canonically associated to the loop $H(s)$. Furthermore, as shown in the previous section, $\langle T_{A\overline{A}}\rangle$ is an integer. If we vary the loop $(H(s),\psi_s)$ continuously while preserving all the properties above, $\langle T_{A\overline{A}}\rangle$ changes continuously and thus remains constant. Therefore it is a homotopy invariant of the loop $(H(s),\psi_s)$. Its meaning is the charge transported across a point $p$ in the course of one period of quasi-adiabatic evolution.
\section{Quantization of Hall conductance}
\subsection{Hall conductance}
In the remainder of the paper we will study $U(1)$-invariant gapped lattice systems in two dimensions. Given such a system, one can form a current $2 \pi i [\tilde{Q}_j, \tilde{Q}_k]$ which is exact:
\begin{equation}\label{QtQt}
2 \pi i [\tilde{Q}_j, \tilde{Q}_k] = - (\partial M)_{jk}
\end{equation}
where
\begin{equation}\label{Mrep}
M_{jkl} := \pi i ( [Q_j + \tilde{Q}_j,K_{kl}] + [Q_k + \tilde{Q}_k,K_{lj}] + [Q_l + \tilde{Q}_l ,K_{jk}]).
\end{equation}
Importantly, one can define the 2-current $M$ for any $U(1)$-invariant state $\psi$ on ${\mathscr A}$ which has no local spontaneous symmetry breaking. No Hamiltonian needs to be specified.
Consider a point $p$ not in $\Lambda$ and three paths beginning at $p$ and going off to infinity while avoiding $\Lambda$. The paths are assumed to lie in non-overlapping cones with vertex at $p$. These paths divide $\Lambda$ into three noncompact regions which we denote $A,B,C$, see Fig. \ref{fig:magnetization}. They intersect only over the paths, which we denote $AB,BC,CA$. Fixing the orientation of ${\mathbb R}^2$ also fixes the cyclic order of $A,B,C$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.3]
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\node at (0.6,-0.6) {$p$};
\node at (0,3) {$C$};
\node at (-1.7321*3/2,-3/2) {$A$};
\node at (1.7321*3/2,-3/2) {$B$};
\end{tikzpicture}
\caption{Definition of $M_{ABC}$.
}
\label{fig:magnetization}
\end{figure}
Consider an observable
\begin{equation}
M_{ABC}=\sum_{i\in A,j\in B, k\in C} M_{ijk}.
\end{equation}
The infinite sum defining it is norm-convergent, so $M_{ABC}$ is well-defined. We claim that $M_{ABC}$ does not excite the ground state. It is sufficient to show that $\langle M_{ABC} {\mathcal O}\rangle=0$ for any local observable ${\mathcal O}$ with a compact localization set and $\langle {\mathcal O} \rangle=0$. Let $\Gamma_r$ be a disk $B_r(p)$, with the boundary deformed slightly to avoid $\Lambda$. We denote $A'=A\cap\Gamma_r,$ $B'=B\cap\Gamma_r,$ $C'=C\cap\Gamma_r$ and $D=\overline{A'+B'+C'}$, see Fig.~\ref{fig:Hall}. Since
\begin{equation}
2 \pi i [\tilde{Q}_{A'}, \tilde{Q}_{B'}] = - M_{A'B'C'} - M_{A'B'D},
\end{equation}
we get
\begin{equation}
\langle M_{A'B'C'} {\mathcal O} \rangle = - \langle M_{A'B'D} {\mathcal O} \rangle = - \langle M_{A'B'D} \rangle \langle {\mathcal O} \rangle + {O(r^{-\infty})} = {O(r^{-\infty})},
\end{equation}
where we used the exponential clustering property. Taking the limit $r\rightarrow\infty$ and noting that $\lim_{r\rightarrow\infty} M_{A'B'C'}=M_{ABC}$, we get that $\langle M_{ABC} {\mathcal O}\rangle=0$ for any local observable with a compact localization set and $\langle {\mathcal O}\rangle=0.$ This implies that the same is true for any quasi-local observable with a zero expectation value.
Let
\begin{equation}
h_{jkl} := 2 \langle M_{jkl} \rangle
\end{equation}
be a 2-current valued in ${\mathbb R}$. Since $\tilde{Q}_j$ does not excite the ground state, this 2-current is closed, $\partial h = 0$. For any three regions $A,B,C$ as above we define
\begin{equation}
\sigma := h_{ABC} .
\end{equation}
The cyclic order of $A,B,C$ is determined by the orientation of ${\mathbb R}^2$; changing it negates $\sigma$. Since $\partial h =0$, the quantity $\sigma$ does not actually depend on the choice of the regions $A,B,C$ or the point $p$. Indeed, if one deforms it by adding some region $D$ to $A$, such that $\partial D \cap \partial C$ is finite and $[\tilde{Q}_D,\tilde{Q}_C]$ is well-defined, and subtracting it from $B$ (see Fig. \ref{fig:sigma}), one gets
\begin{equation}
h_{(A+D)(B-D)C} = h_{ABC} + h_{DBC} - h_{ADC} = h_{ABC} + (\partial h)_{D C} = h_{ABC}.
\end{equation}
Note that the 2-current $M_{jkl}$ is not uniquely defined by eq. (\ref{QtQt}). One can add any exact 2-current to $M_{jkl}$ or modify $K_{jk}$ in eq. (\ref{Mrep}) to get a new 2-current that satisfies eq. (\ref{QtQt}). However, using Remark \ref{rmk:Poincare} it is easy to check that $h_{jkl}$ and $\sigma$ are unaffected by these ambiguities, provided the statement of the remark applies to $\psi$. In particular, $\sigma$ can be computed for the ground state of any $U(1)$-invariant gapped lattice 2d system even if the Hamiltonian is not known.
Let us show that $\sigma$ does not change if one applies to $\psi$ an automorphism $\alpha=\alpha_F(1)$ locally generated by a $U(1)$-invariant 0-chain $F(s)$, $s\in[0,1]$. As explained in Section 2, the states $\psi(s)=\alpha_F(s)(\psi)$ do not have local spontaneous symmetry breaking. Let $M(s)$ be the 2-chain $M$ computed for the state $\psi(s)$ and $h(s)=\langle M(s)\rangle_{\psi(s)}$. Using (\ref{1st property of T}) and (\ref{eq:Knew}) we get
\begin{multline}
h_{jkl}(1)= \pi i \langle [(Q+\partial T^F)_{j}, (K+T^{F})_{kl}] + \text{cyclic permutations of }\{j,k,l\} \rangle_\psi .
\end{multline}
This equation shows that $h_{ABC}(1)$ does not depend on the behavior of $F$ far from the point $ABC$. Thus if we replace $F$ with $F_{\Gamma_r}$, where $\Gamma_r$ is a disc of radius $r$ centered at $ABC$, $\sigma$ will only change by an amount of order ${O(r^{-\infty})}$. On the other hand, even after we replace $F$ with $F_{\Gamma_r}$, the new 2-chain $h^r(1)$ still satisfies $\partial h^r(1)=0$. Thus one can compute $\sigma$ using any other point of the plane and three regions meeting at this point. In particular, one can take the point to be far from the disk $\Gamma_r$, so that $ h^r_{ABC}(1)=h_{ABC}(0)+{O(L^{-\infty})}$, $L$ being the distance from the chosen point to $\Gamma_r$. Taking the limit $L\rightarrow\infty$ and $r \to \infty$ we conclude that $h_{ABC}(1)=h_{ABC}(0)$. Thus $\sigma$ is invariant under $U(1)$-invariant locally generated automorphisms.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.3]
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\foreach \x in {-2.25,-1.75,...,2.25}{
\foreach \y in {-2.25,-1.75,...,2.25}{
\node[draw,gray,circle,inner sep=.5pt,fill] at (2*\x,2*\y) {};
}
}
\draw [gray, very thick] (4,0) arc [radius=4, start angle=0, end angle= 360];
\node at (0,2) {$C'$};
\node at (-1.7321,-1) {$A'$};
\node at (1.7321,-1) {$B'$};
\node at (3.5,3.5) {$D$};
\end{tikzpicture}
\caption{Verifying that the magnetization operator does not excite the ground state.
}
\label{fig:Hall}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=.5]
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\draw [red, very thick, dashed] (0.5*3.4641, 0.5*2) -- (0.5*3.4641, -4);
\node at (0,3) {$C$};
\node at (0.86602,-1.5) {$D$};
\node at (-2.5981,-1.5) {$A$};
\node at (2.5981,-1.5) {$B$};
\end{tikzpicture}
\caption{Verifying that $\sigma$ does not depend on the choice of the regions $A,B,C$.}
\label{fig:sigma}
\end{figure}
Let us show that $\sigma/2\pi$ is nothing but the zero-temperature Hall conductance. Let $X$ and $Y$ be the right and the upper half-planes, respectively. The Hall conductance is given by the Kubo formula \cite{niu1984quantised}
\begin{equation}
\sigma_{Hall}=\sum_{j\in X} \sum_{k\in \bar{X}} \sum_{l \in Y} \sum_{m \in \bar{Y}} i\langle0| J_{jk} (1-P)\frac{1}{H^2}(1-P)J_{lm}|0\rangle -(X\leftrightarrow Y)
\end{equation}
where $|0\rangle$ is a cyclic vector for the GNS representation, observables are identified with their images in this representation, and $P=|0\rangle \langle 0|$. Although the regions $X,\bar X,Y,\bar Y$ are non-compact, the quadruple sum is absolutely convergent and thus well-defined. To see this, we note that
for any two observables ${\mathcal A}$ and ${\mathcal B}$ we have
\begin{equation}
\langle 0| {\mathcal A} (1-P)\frac{1}{H^2}(1-P){\mathcal B} |0\rangle = \langle {\mathscr I}_\Delta({\mathcal A}) {\mathscr I}_\Delta({\mathcal B}) \rangle - \langle {\mathscr I}_\Delta({\mathcal A}) \rangle \langle {\mathscr I}_\Delta({\mathcal B}) \rangle .
\end{equation}
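This identity follows from eqs. (\ref{Kubo_static}) together with $\langle {\mathscr I}_\Delta({\mathcal A})\rangle_\psi=0$: in the GNS representation they imply $(1-P){\mathscr I}_\Delta({\mathcal B})|0\rangle=-i{G_0}{\mathcal B}|0\rangle$ and $\langle 0|{\mathscr I}_\Delta({\mathcal A})(1-P)=i\langle 0|{\mathcal A}{G_0}$, so that
\begin{equation}
\langle {\mathscr I}_\Delta({\mathcal A}) {\mathscr I}_\Delta({\mathcal B}) \rangle - \langle {\mathscr I}_\Delta({\mathcal A}) \rangle \langle {\mathscr I}_\Delta({\mathcal B}) \rangle=\langle 0|{\mathscr I}_\Delta({\mathcal A})(1-P){\mathscr I}_\Delta({\mathcal B})|0\rangle=\langle 0| {\mathcal A} {G_0}^2 {\mathcal B}|0\rangle,
\end{equation}
and ${G_0}^2=(1-P)\frac{1}{H^2}(1-P)$.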
Therefore we can rewrite the formula for the Hall conductance in terms of correlators of almost local observables $K_{jk} = {\mathscr I}_\Delta(J_{jk})$:
\begin{equation}
\sigma_{Hall}=\sum_{j\in X} \sum_{k\in \bar{X}} \sum_{l \in Y} \sum_{m \in \bar{Y}} i\langle K_{jk} K_{lm}\rangle -(X\leftrightarrow Y).
\end{equation}
The sum is absolutely convergent thanks to the exponential decay of correlators in the ground state. In fact, one can re-write this expression as an expectation value of a single almost-local observable:
\begin{equation}\label{sigmaHallK}
\sigma_{Hall} = i \langle [K_{X\bar{X}},K_{Y \bar{Y}}] \rangle .
\end{equation}
While $K_{X\bar X}$ and $K_{Y\bar Y}$ are 0-chains localized on non-compact sets, their commutator is an almost local observable with a well-defined expectation value.
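To compare with the Kubo formula above, one expands the commutator using the absolute convergence of the quadruple sum:
\begin{equation}
i \langle [K_{X\bar{X}},K_{Y \bar{Y}}] \rangle=\sum_{j\in X} \sum_{k\in \bar{X}} \sum_{l \in Y} \sum_{m \in \bar{Y}} \left( i\langle K_{jk} K_{lm}\rangle - i\langle K_{lm} K_{jk}\rangle \right),
\end{equation}
and after relabeling the summation indices the second term is precisely the $(X\leftrightarrow Y)$ contribution.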
The expression (\ref{sigmaHallK}) does not change if one modifies $X$ by adding or subtracting any compact region $\Gamma$. Indeed, for any compact $\Gamma$ and any $r>0$ one can always find some finite $\Gamma'$ such that the distance between $\Gamma$ and $(Y-\Gamma')$ is of order $r$. From the definition of the current $K$, we have $K_{(X+\Gamma) (\overline{X+\Gamma})} - K_{X \bar{X}} = Q_{\Gamma} - \tilde{Q}_{\Gamma}$. Therefore the change of the Hall conductance is
\begin{multline}
i \langle [Q_{\Gamma}-\tilde{Q}_{\Gamma}, K_{Y \bar{Y}}] \rangle = i \langle [Q_{\Gamma}, K_{Y \bar{Y}}] \rangle = \\ =
i \langle [Q_{\Gamma}, K_{(Y-\Gamma') (\overline{Y-\Gamma'})} + Q_{\Gamma'} - \tilde{Q}_{\Gamma'} ] \rangle = {O(r^{-\infty})} .
\end{multline}
Here in the last step we used $[Q_\Gamma,Q_{\Gamma'}]=0$ for any two finite regions $\Gamma,\Gamma'$. Taking the limit $r\rightarrow\infty$ we get the desired result.
In the same way one can show that $\sigma_{Hall}$ is not affected when one modifies $Y$ by a finite region. Modifying $X$ and $Y$ by adding or subtracting infinite regions which lie within non-overlapping cones also does not change $\sigma_{Hall}$ since one can replace them by finite regions of size $r$ up to ${O(r^{-\infty})}$ terms, and then take the limit $r \to \infty$. Therefore instead of $X$ and $Y$ being half-planes, one can take $X=(C+D)$ and $Y=(A+D)$ with the regions $A,B,C,D$ as shown on Fig. \ref{fig:XY}.
This configuration has a free parameter $L$ (the distance between two triple points, or equivalently between $B$ and $D$).
Then we have
\begin{multline}
2 \pi \sigma_{Hall} = 2 \pi i \langle ([K_{CA},K_{AB}] + [K_{AB},K_{BC}] + [K_{BC},K_{CA}]) + \\ + ([K_{AC},K_{CD}] + [K_{CD},K_{DA}] + [K_{DA},K_{AC}]) \rangle +{O(L^{-\infty})} .
\end{multline}
For any three regions $A$,$B$,$C$ as in Fig.~\ref{fig:magnetization} consider a disk $\Gamma_r$ of radius $r$ with the center at the triple point. Let $A'=A\cap \Gamma_r$, etc., as in Fig.~\ref{fig:Hall}. Then
\begin{multline}
\langle [K_{AB}+K_{AC},K_{BC}] \rangle = \langle [K_{A'B'}+K_{A'C'},K_{BC}] \rangle + {O(r^{-\infty})}= \\ = \langle [Q_{A'}-\tilde{Q}_{A'}-K_{A'D},K_{BC}] \rangle + {O(r^{-\infty})} = \langle [Q_{A'},K_{BC}] \rangle + {O(r^{-\infty})} = \\ = \langle [Q_{A'}+\tilde{Q}_{A'},K_{BC}] \rangle + {O(r^{-\infty})} = \langle [Q_{A}+\tilde{Q}_{A},K_{BC}] \rangle + {O(r^{-\infty})},
\end{multline}
and since $r$ can be arbitrary, we have
\begin{equation}
\langle [K_{AB},K_{BC}] \rangle + \langle [K_{BC},K_{CA}] \rangle = \langle [Q_{A}+\tilde{Q}_{A},K_{BC}] \rangle .
\end{equation}
Therefore
\begin{equation}
2 \pi i \langle ([K_{CA},K_{AB}] + [K_{AB},K_{BC}] + [K_{BC},K_{CA}]) \rangle = \langle M_{ABC}\rangle + {O(L^{-\infty})},
\end{equation}
and
\begin{equation}
2 \pi \sigma_{Hall} = \langle (M_{ABC} + M_{ACD}) \rangle + {O(L^{-\infty})} = \sigma + {O(L^{-\infty})}.
\end{equation}
Since $L$ can be arbitrary, we have $2 \pi \sigma_{Hall} = \sigma$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.5]
\draw[gray, very thick] (-1,-1) -- (-1,-3);
\draw[gray, very thick] (-1,-1) -- (-3,-1);
\draw[gray, very thick] (-1,-1) -- (1,1);
\draw[gray, very thick] (1,1) -- (1,3);
\draw[gray, very thick] (1,1) -- (3,1);
\node at (2,2) {$D$};
\node at (-1,1) {$A$};
\node at (-2,-2) {$B$};
\node at (1,-1) {$C$};
\end{tikzpicture}
\caption{A choice for the modified $X$ and $Y$.
}
\label{fig:XY}
\end{figure}
\subsection{Vortices}\label{vortices}
Let us consider three regions $A$,$B$,$C$ meeting at a point $p=ABC$ (see Fig. \ref{fig:vortexA}). As in the previous subsection, we assume that the paths $AB,BC,CA$ lie in non-overlapping open cones with vertex $ABC$. Let $\upsilon_{ABC}$ be an automorphism of the algebra of observables $\alpha_F(1)$ generated by the 0-chain $F=2\pi (Q_{A}-K_{AB})$. An equivalent way to define $\upsilon_{ABC}$ is to let $\Gamma_{r}$ be a disk $B_r(p)$, let $A'_r=A \cap \Gamma_{r}$, etc., as in Fig. \ref{fig:Hall}, and for any observable ${\mathcal O}$ define
\begin{equation}
\upsilon_{ABC}({\mathcal O})=\lim_{r\rightarrow\infty} \exp\left(2\pi i (Q_{A'_r}-K_{A'_r B'_r})\right) {\mathcal O} \exp\left(-2\pi i (Q_{A'_r}-K_{A'_r B'_r})\right).
\end{equation}
The automorphism $\upsilon_{ABC}$ is approximately localized on the path $AB$. This follows from Lemma \ref{lma:FGA} and the fact that $\alpha_{Q_A}(2\pi)$ is the identity automorphism. We will be using this observation many times in what follows.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (0,-4) to (0,0) -- (-0.866025*1.5,0.5*1.5) -- (-0.866025*1.5,-4) -- (0,-4);
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\node at (0,3) {$C$};
\node at (-2.59815,-1.5) {$A$};
\node at (2.59815,-1.5) {$B$};
\end{tikzpicture}
\caption{}
\label{fig:vortexA}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (0,-4) to (0,0) -- (0-0.866025*1.5,0.5*1.5) -- (0+0.866025*1.5,0.5*1.5) -- (0+0.866025*1.5,-4) -- (0,-4);
\draw[gray, very thick] (0,0) -- (0+3.4641,2);
\draw[gray, very thick] (0,0) -- (0-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\node at (0,3) {$C$};
\node at (0-2.59815,-1.5) {$A$};
\node at (0+2.59815,-1.5) {$B$};
\end{tikzpicture}
\caption{}
\label{fig:vortexB}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (0,-3) to (0+3.4641/3,-2/3-3) -- (0-3.4641/3,-2/3-3) -- (0-3.4641/3, 3+0.45*1.5) -- (0,3) -- (0,-3);
\draw[red, very thick, ->] (0-3.4641/6,-2) -- (0-3.4641/6,2);
\draw[gray, very thick] (0,-3) -- (0+3.4641,-2-3);
\draw[gray, very thick] (0,-3) -- (0-3.4641,-2-3);
\draw[gray, very thick] (0,-3) -- (0,6-3);
\draw[gray, very thick] (0,-3) -- (0,4-3);
\draw[gray, very thick] (0,3) -- (0+3.4641,2+3);
\draw[gray, very thick] (0,3) -- (0-3.4641,2+3);
\node at (0,-5) {$C$};
\node at (0,5) {$D$};
\node at (0-2.59815,0) {$A$};
\node at (0+2.59815,0) {$B$};
\end{tikzpicture}
\caption{}
\label{fig:vortexC}
\end{subfigure}
\caption{Creation, annihilation and transport of vortices. The shaded region covers the sites for which the operators $Q_j$ are involved.}
\label{fig:vortex}
\end{figure}
We will say that two states on ${\mathscr A}$ lie in the same superselection sector if one can be obtained from the other by conjugation with a unitary element of ${{\mathscr A}_{a\ell}}$. Note that this differs from both the Doplicher-Haag-Roberts definition and the Buchholz-Fredenhagen definition of superselection sectors as discussed for example in \cite{Haag}. This condition implies that the corresponding GNS representations are unitarily equivalent. Let $\psi_{ABC}$ be a state obtained from the ground state $\psi_0$ by the automorphism $\upsilon_{ABC}$. We claim that the superselection sector of $\psi_{ABC}$ does not depend on the precise location of the paths $AB$, $BC$ and $CA$. More precisely, suppose one has chosen three non-overlapping open cones with the vertex at $ABC$ which contain the paths $AB$, $BC$ and $CA$. Then changing the paths within these cones will change the state $\psi_{ABC}$ at most by conjugation with an element of ${{\mathscr A}_{a\ell}}$. Indeed, we can change the path $BC$ by adding an observable $2 \pi K_{AE}$ to $2\pi (Q_{A}-K_{AB})$, where $E$ is a (possibly non-compact) region inside the cone of the path $BC$. By Lemma \ref{lma:FA}, the superselection sector is not affected. Changing the path $CA$ corresponds to adding a 0-chain $2 \pi (Q_E-K_{E B})$, where $E$ is a (possibly non-compact) region inside the cone of the path $CA$. Since $[Q_{E}-K_{EB}, Q_{A}-K_{AB}]$ is an almost local observable, by Lemma \ref{lma:FpX} the superselection sector is not affected. Finally, we can modify the path $AB$ by adding a 0-chain $\tilde{Q}_{E}$, where $E$ is a (possibly non-compact) region inside the cone of the path $AB$. Since $2 \pi (Q_{A}-K_{AB})=2 \pi (\tilde{Q}_A + K_{AC})$ and $[K_{AC},\alpha_{\tilde{Q}_A}(s) (\tilde{Q}_E)]$ is an almost local observable, by Lemma \ref{lma:Ftilde} the superselection sector is unaffected.
The independence of the superselection sector on the choice of the paths has an important consequence. Let ${\mathcal A}$ be an almost local observable $a$-localized on some site $j$ which is at distance $r$ from the point $ABC$. Let us choose a cone $\Sigma$ with a vertex at $ABC$ and not containing $j$. Given any choice of regions $A,B,C$, we can re-arrange the paths and regions so that the new path $A'B'$ is inside $\Sigma$. Then
\begin{multline}
\langle {\mathcal A} \rangle_{\psi_{ABC}}=\langle {\mathcal U} \, {\mathcal A} \, {\mathcal U}^{-1} \rangle_{\psi_{A'B'C'}} + {O(r^{-\infty})} = \\ = \langle {\mathcal A} \rangle_{\psi_{A'B'C'}} + {O(r^{-\infty})} = \langle {\mathcal A} \rangle_{\psi_0} + {O(r^{-\infty})}.
\end{multline}
Here ${\mathcal U} \in {{\mathscr A}_{a\ell}}$, and we used the localization property of $\upsilon_{A'B'C'}$.
This implies that almost local observables localized in any cone with vertex $ABC$ and far from $ABC$ cannot detect the presence of a vortex at $ABC$.\ (This statement, however, might not be true if we consider local observables localized on a ring around $ABC$. In this case one cannot deform the paths such that there is no intersection between the ring and these paths.) In particular, this implies that the state $\psi_{ABC}$ has a finite energy. Such a state can be interpreted as a state with a vortex (unit of magnetic flux) at the point $ABC$.
\begin{remark}
The automorphism $\upsilon_{ABC}$ has ambiguities related to the choice of the current $K_{jk}$. However, the superselection sector of the state $\psi_{ABC}=\upsilon_{ABC}(\psi_0)$ is unambiguous. Indeed, modifying $K$ by some $\partial N$ leads to an addition of an almost local observable $N_{ABC}$, and by Lemma \ref{lma:FA} does not change the superselection sector. Another way to modify $K$ is to add some 1-current $K'$ that does not excite the ground state. That corresponds to addition of $2 \pi K'_{AB}$ to $2 \pi (Q_A-K_{AB}) = 2 \pi (\tilde{Q}_A+K_{AC})$, and since $[K_{AC},\alpha_{\tilde{Q}_A}(s)(K'_{AB})]$ is an almost local observable, by Lemma \ref{lma:Ftilde} the superselection sector is unchanged. By Remark \ref{rmk:Poincare}, these are the only ambiguities in the definition of $K$.
\end{remark}
Similarly, one can define an automorphism $\bar{\upsilon}_{ABC}$ generated by $2 \pi (Q_{B}+Q_{C}-K_{BA})$ with the same properties (see Fig. \ref{fig:vortexB}) and the state $\bar\psi_{ABC}=\bar\upsilon_{ABC}(\psi_0)$. Note that $(\bar{\upsilon}_{ABC} \circ \upsilon_{ABC})(\psi_{0}) = \psi_{0}$. This follows from $2 \pi (Q_B+Q_C-K_{BA})=2 \pi (Q-(Q_A-K_{AB}))$ and Lemma \ref{lma:FQ}. It is natural to interpret the state produced by the automorphism $\bar{\upsilon}_{ABC}$ as an anti-vortex. By applying automorphisms $\upsilon$ and $\bar{\upsilon}$ at different points and choosing the paths so that they do not overlap, one can create several vortices and anti-vortices at different points. The superselection sector of the resulting state does not depend on the choice of the paths, provided the paths are contained in non-overlapping cones.
In general a vortex state cannot be produced by an action of an almost local observable (or even any quasi-local observable) on the ground state. Thus a vortex state may belong to a different superselection sector than the ground state. However, one can create a vortex-anti-vortex pair by acting on the ground state with a unitary almost local observable. For example, suppose one wants to create a vortex at $ABD$ and an anti-vortex at $BCA$ (see Fig. \ref{fig:vortexC}). This can be accomplished using an automorphism generated by $2 \pi(Q_{A+C} - K_{AB})$. It can be obtained as a limit of automorphisms of the form
\begin{equation}
{\mathcal O}\mapsto \exp\left(2\pi i (Q_{A'_r+C'_r}-K_{A'_r B'_r})\right) {\mathcal O} \exp\left(-2\pi i (Q_{A'_r+C'_r}-K_{A'_r B'_r})\right).
\end{equation}
Since $K_{AB}\in{\mathscr A}$, by Lemma \ref{lma:FA} this automorphism is a conjugation by a unitary observable
\begin{equation}
\alpha_{Q_{A+C}-K_{AB}}(2\pi)\left(e^{-2\pi i K_{AB}}\right).
\end{equation}
In fact, since $K_{AB}\in{{\mathscr A}_{a\ell}}$, this observable is almost local. In the following we will be using the notation $e^{2 \pi i (Q_{A+C}-K_{AB})}$ for it.
\begin{figure}
\centering
\begin{tikzpicture}[scale=.5]
\filldraw[color=red!10, fill=red!10, ultra thick] (0,0) -- (2,-4) -- (-2,-4) -- cycle;
\filldraw[color=red!10, fill=red!10, ultra thick] (0,0) -- (3.4641-1,2+3.4641/2) -- (3.4641+1,2-3.4641/2) -- cycle;
\filldraw[color=red!10, fill=red!10, ultra thick] (0,0) -- (-3.4641+1,2+3.4641/2) -- (-3.4641-1,2-3.4641/2) -- cycle;
\draw[red, very thick] (0,0) -- (3.4641+1,2-3.4641/2);
\draw[red, very thick] (0,0) -- (3.4641-1,2+3.4641/2);
\draw[red, very thick] (0,0) -- (-3.4641-1,2-3.4641/2);
\draw[red, very thick] (0,0) -- (-3.4641+1,2+3.4641/2);
\draw[red, very thick] (0,0) -- (-2,-4);
\draw[red, very thick] (0,0) -- (2,-4);
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (0,0) -- (0,-4);
\draw[color=blue,ultra thick, ->] (0,2) arc (90:120:2);
\draw[color=blue,ultra thick, ->] (0,2) arc (90:60:2);
\draw[color=blue,ultra thick, ->] (3.4641/2,-2/2) arc (-30:0:2);
\draw[color=blue,ultra thick, ->] (3.4641/2,-2/2) arc (-30:-60:2);
\draw[color=blue,ultra thick, ->] (-3.4641/2,-2/2) arc (210:240:2);
\draw[color=blue,ultra thick, ->] (-3.4641/2,-2/2) arc (210:180:2);
\node at (0,0.9*3) {$\theta_{C}$};
\node at (-0.9*1.7321*3/2,-0.9*3/2) {$\theta_{A}$};
\node at (0.9*1.7321*3/2,-0.9*3/2) {$\theta_{B}$};
\node at (0,1.5*3) {$C$};
\node at (-1.5*1.7321*3/2,-1.5*3/2) {$A$};
\node at (1.5*1.7321*3/2,-1.5*3/2) {$B$};
\end{tikzpicture}
\caption{Admissible paths $AB$, $BC$ and $CA$ meeting at the point $ABC$.
}
\label{fig:cones}
\end{figure}
In Section 3 we considered sequences of almost local observables and defined sequences of approximate localization sets for them. In this section we will be dealing with more general infinite sets of almost local observables and it is convenient to generalize the notion of approximate localization sets to them. Let $\{{\mathcal A}_\alpha,\alpha \in{\mathcal I}\}$ be an infinite collection of observables labeled by an infinite set ${\mathcal I}$ and $\{p_\alpha,\alpha\in{\mathcal I}\}$ be an infinite collection of points of $\Lambda$ labeled by the same set. We will say that $\{{\mathcal A}_\alpha\}$ is approximately localized on $\{p_\alpha\}$ if there exists a positive function $f(r)={O(r^{-\infty})}$ such that for any $r>0$ and any $\alpha\in{\mathcal I}$ there is ${\mathcal A}_{\alpha}^{(r)} \in {\mathscr A}_{B_r(p_{\alpha})}$ such that
$\|{\mathcal A}_\alpha-{\mathcal A}_\alpha^{(r)}\|\leq \|{\mathcal A}_\alpha\| f(r).$ In particular, this condition implies that ${\mathcal A}_\alpha$ is almost local for all $\alpha$.
To study the transport of vortices along paths, we will first define the set of vortex configurations and paths of interest. We only consider configurations with a finite number of vortices, so the set of initial and final positions of vortices is always finite. These positions are vertices of a trivalent graph, some of whose edges connect the vertices and some go off to infinity. The paths needed to define vortex states are paths on this graph. For simplicity we will assume that the graph is a tree. The edges of the graph need not be straight lines or segments, but we need to assume that they do not come close to each other. One way to achieve this is the following recursive procedure. Let us fix an angle $\theta_c$ and a vertex $ABC$ (see Fig.~\ref{fig:cones}). Removing $ABC$ will cause the graph to fall into three components, each of which is itself a tree. We require that each component is contained in a cone with vertex $ABC$ such that the angles between adjacent boundaries of different cones are greater than $\theta_c$. Then for each component we take the vertex connected to $ABC$ as the basepoint and repeat the procedure. Any trivalent graph which satisfies these requirements will be called admissible.
Let us consider a process (see Fig. \ref{fig:transport}) in which we create a vortex at $(A-E)C(B-C)$ and an anti-vortex at $(A-E)(D-F)E$ and move the vortex to $(B-C)F(D-F)$ along the lines as shown in the figure. We assume that the graph formed by the lines and vertices is admissible. Let $L$ be the smallest distance between the triple points. Let $X_{A}=(Q_{A}-K_{(A-E)(B-C+D)})$ and $X_{B}=(Q_{B}-K_{(B-C)(A+D-F)})$ be 0-chains which generate automorphisms $\alpha_{X_A}(2 \pi)$ and $\alpha_{X_B}(2 \pi)$ corresponding to these movements. Note that $(X_A - Q_{A})$ and $(X_B - Q_{B})$ are almost local observables. Let us denote by $X^r_{A}$ and $X^r_{B}$ the regularized operators $(Q_{A'_r}-K_{(A-E)(B-C+D)})$ and $(Q_{B'_r}-K_{(B-C)(A+D-F)})$, respectively. Then we have an identity:
\begin{equation}\label{eq:composition of transport}
e^{2 \pi i X_B} e^{2 \pi i X_A} |0\rangle
= \left( e^{\pi i \langle M_{ABD} \rangle} + {O(L^{-\infty})} \right) e^{2 \pi i(X_A+X_B) } |0\rangle,
\end{equation}
where ${O(L^{-\infty})}$ denotes an observable whose norm is bounded by some ${O(L^{-\infty})}$ function, which can be chosen the same for any configuration under consideration, $|0\rangle$ is the vacuum vector in the ground-state GNS representation, and observables are interpreted as operators using the GNS representation.
\begin{proof}
Let ${\mathcal I}$ be the set of all labeled configurations of regions and admissible graphs as in Fig. \ref{fig:transport}. To any such configuration we can attach a self-adjoint almost local observable $2\pi i [X_B,X_A]$. It is easy to see that
\begin{equation}\label{eq:XXM}
2 \pi i [X_{B},X_{A}] = M_{ABD} + {O(L^{-\infty})} ,
\end{equation}
and thus this collection of observables is approximately localized on the points $ABD$. The Lieb-Robinson bound implies
\begin{multline}
-i \frac{d}{d \phi} \alpha_{X_{A,B}}(\phi)([X_B,X_A]) = \ad_{X_{A,B}} \left( \alpha_{X_{A,B}}(\phi)([X_B,X_A]) \right) = \\ =
\ad_{\tilde{Q}_{A,B}} \left( \alpha_{X_{A,B}}(\phi)([X_B,X_A]) \right) + {O(L^{-\infty})} ,
\end{multline}
and therefore
\begin{equation}
\alpha_{X_{A,B}}(\phi)([X_B,X_A]) = \alpha_{\tilde{Q}_{A,B}}(\phi)([X_B,X_A]) + {O(L^{-\infty})} .
\end{equation}
This implies that
\begin{multline}
\left( \alpha_{X_B}(\phi)(X_A) - X_A \right) = i \int_{0}^{\phi} ds \: \alpha_{X_B}(s)([X_B,X_A]) = \\ =
i \int_{0}^{\phi} ds \: \alpha_{\tilde{Q}_B}(s)([X_B,X_A]) + {O(L^{-\infty})}
\end{multline}
defines an infinite collection of almost local observables labeled by ${\mathcal I}$ which is approximately localized at the points $ABD$ and satisfies
\begin{equation}\label{eq:eadXX}
\left( \alpha_{X_B}(\phi)(X_A) - X_A \right) | 0 \rangle = \left( \frac{\phi}{2 \pi} \langle M_{ABD} \rangle + {O(L^{-\infty})} \right) |0\rangle.
\end{equation}
Let $V(\phi)$ and $W(\phi)$ be almost local unitaries
\begin{equation}
V(\phi) = \lim_{r \to \infty} e^{- i \phi (Q_{A'_r} + Q_{B'_r})} e^{i \phi (X^r_A + X^r_B)}
\end{equation}
\begin{equation}
W(\phi) = \lim_{r \to \infty} e^{- i \phi (Q_{A'_r} + Q_{B'_r})} e^{i \phi (X^r_B)} e^{i \phi (X^r_A)}
\end{equation}
which satisfy
\begin{equation}\label{eq:VdV}
V^{\dagger}(\phi) \left( -i \frac{d}{d \phi} \right) V(\phi) = \alpha^{-1}_{X_A+X_B}(\phi) \left( (X_A-Q_{A})+(X_B-Q_{B})\right),
\end{equation}
\begin{multline}\label{eq:WdW}
W^{\dagger}(\phi) \left( -i \frac{d}{d \phi} \right) W(\phi) = \\ = \alpha^{-1}_{X_A}(\phi) \circ \alpha^{-1}_{X_B}(\phi) \left( (X_A-Q_{A})+(X_B-Q_{B})+(\alpha_{X_B}(\phi)(X_A)-X_A) \right) .
\end{multline}
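Eq. (\ref{eq:VdV}) follows by direct differentiation at finite $r$: writing $V_r(\phi)=e^{-i\phi(Q_{A'_r}+Q_{B'_r})}e^{i\phi(X^r_A+X^r_B)}$, one finds
\begin{equation}
V_r^{\dagger}(\phi)\left(-i\frac{d}{d\phi}\right)V_r(\phi)=e^{-i\phi(X^r_A+X^r_B)}\left( (X^r_A+X^r_B)-(Q_{A'_r}+Q_{B'_r})\right)e^{i\phi(X^r_A+X^r_B)},
\end{equation}
and since $X^r_A-Q_{A'_r}=X_A-Q_A$ and $X^r_B-Q_{B'_r}=X_B-Q_B$ are almost local observables, the limit $r\rightarrow\infty$ gives eq. (\ref{eq:VdV}). Eq. (\ref{eq:WdW}) is obtained in the same way, with the extra term arising because the two exponentials are composed rather than merged.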
By comparing eq. (\ref{eq:WdW}) and eq. (\ref{eq:VdV}) and using
\begin{multline}
\left( \alpha^{-1}_{X_A}(\phi) \circ \alpha^{-1}_{X_B}(\phi) \left( \alpha_{X_B}(\phi)(X_A)-X_A \right) \right) |0\rangle = \\ = \left( \alpha^{-1}_{\tilde{Q}_A}(\phi) \circ \alpha^{-1}_{\tilde{Q}_B}(\phi) \left( \alpha_{X_B}(\phi)(X_A)-X_A \right) + {O(L^{-\infty})} \right) |0\rangle = \\ =
\left( \frac{\phi}{2 \pi} \langle M_{ABD} \rangle + {O(L^{-\infty})} \right) |0\rangle ,
\end{multline}
we conclude
\begin{multline}
W^{\dagger}(\phi) \left( -i \frac{d}{d \phi} \right) W(\phi) |0\rangle = \\ =
V^{\dagger}(\phi) \left( -i \frac{d}{d \phi} \right) V(\phi) |0\rangle + \left( \frac{\phi}{2 \pi} \langle M_{ABD}\rangle + {O(L^{-\infty})} \right) |0\rangle.
\end{multline}
Since $V(2 \pi) = e^{2 \pi i (X_A + X_B)}$ and $W(2 \pi ) = e^{2 \pi i X_{B}} e^{2 \pi i X_{A}}$, this implies eq. (\ref{eq:composition of transport}).
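Explicitly, comparing the last two displays shows that $W(\phi)|0\rangle$ and $V(\phi)|0\rangle$ satisfy the same first-order differential equation up to the extra term $\frac{\phi}{2\pi}\langle M_{ABD}\rangle$; integrating (a sketch of the final step), the two vectors differ at $\phi = 2\pi$ by the accumulated phase
\begin{equation*}
\exp\left( i \int_{0}^{2\pi} \frac{\phi}{2 \pi} \, \langle M_{ABD} \rangle \, d\phi \right) = e^{\pi i \langle M_{ABD} \rangle} ,
\end{equation*}
up to ${O(L^{-\infty})}$ corrections.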
\end{proof}
We see that vortex-transport operators for large enough paths without ``sharp'' turns compose in the expected way except for a phase $e^{\pi i \langle M_{ABD} \rangle}$. Similarly, one can show that if one transports a vortex so that the shaded regions intersect (see Fig. \ref{fig:transport2}), one gets a phase $e^{\pi i \langle M_{BAD} \rangle}=e^{-\pi i \langle M_{ABD}\rangle}$.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.4]
\draw [color=red!50, fill=red!20, very thick] (0,0) -- (-3.4641,2) -- (-3.4641,3) -- (-3.4641-0.86602,2-0.5) -- (-0.86602,-0.5) -- (-0.86602,-4-0.5) -- (0.86602,-4-0.5) -- (0.86602,-0.5) -- (3.4641+0.86602,2-0.5) -- (3.4641,2) ;
\draw[red, very thick, ->] (-3.4641/2-0.86602/2,2/2-0.5/2) -- (-0.86602/2,-0.5/2) -- (-0.86602/2,-0.5/2-2);
\draw[red, very thick, ->] (0.86602/2,-0.5/2-2) -- (0.86602/2,-0.5/2) -- (3.4641/2+0.86602/2,2/2-0.5/2) ;
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (3.4641,2) -- (3.4641,4);
\draw[gray, very thick] (3.4641,2) -- (1.5*3.4641, 2-1);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (-3.4641,2) -- (-3.4641,4);
\draw[gray, very thick] (-3.4641,2) -- (-1.5*3.4641,1);
\draw[gray, very thick] (0,0) -- (0,-4);
\draw[gray, very thick] (0,-4) -- (2*0.86602,-4-2*0.5);
\draw[gray, very thick] (0,-4) -- (-2*0.86602,-4-2*0.5);
\node at (0+0,1.5*2) {$D-F$};
\node at (0-1.5*1.7321,-1.5*1) {$A-E$};
\node at (0+1.5*1.7321,-1.5*1) {$B-C$};
\node at (0+0,-6) {$C$};
\node at (0+1.5*3.4641,3) {$F$};
\node at (0-1.5*3.4641,3) {$E$};
\draw [color=red!50, fill=red!20, very thick] (15,0) -- (15-3.4641,2) -- (15-3.4641,3) -- (15-3.4641-0.86602,2-0.5) -- (15,-1) -- (15+3.4641+0.86602,2-0.5) -- (15+3.4641,2) ;
\draw[red, very thick, ->] (15-3.4641/2-0.86602/2,2/2-0.5/2) -- (15,-0.5) -- (15+3.4641/2+0.86602/2,2/2-0.5/2);
\draw[gray, very thick] (15,0) -- (15+3.4641,2);
\draw[gray, very thick] (15+3.4641,2) -- (15+3.4641,4);
\draw[gray, very thick] (15+3.4641,2) -- (15+1.5*3.4641, 2-1);
\draw[gray, very thick] (15,0) -- (15-3.4641,2);
\draw[gray, very thick] (15-3.4641,2) -- (15-3.4641,4);
\draw[gray, very thick] (15-3.4641,2) -- (15-1.5*3.4641,1);
\draw[gray, very thick] (15+0,0) -- (15+0,-4);
\draw[gray, very thick] (15+0,-4) -- (15+2*0.86602,-4-2*0.5);
\draw[gray, very thick] (15+0,-4) -- (15-2*0.86602,-4-2*0.5);
\node at (15+0,1.5*2) {$D-F$};
\node at (15-1.5*1.7321,-1.5*1) {$A-E$};
\node at (15+1.5*1.7321,-1.5*1) {$B-C$};
\node at (15+0,-6) {$C$};
\node at (15+1.5*3.4641,3) {$F$};
\node at (15-1.5*3.4641,3) {$E$};
\end{tikzpicture}
\caption{ Transport of vortices.
}
\label{fig:transport}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.4]
\draw [color=red!50, fill=red!20, very thick] (0,0) -- (-3.4641,2) -- (-3.4641-0.86602,2-0.5) -- (-3.4641,3) -- (3.4641/4,2/4) -- (0.86602,-4-0.5) -- (0,-4) -- (0,0);
\draw [color=red!50, fill=red!20, very thick] (0,0) -- (0,-4) -- (0.86602,-4-0.5) -- (-0.86602,-4-0.5) -- (-3.4641/4,2/4) -- (3.4641,3) -- (3.4641,2) -- (0,0);
\draw[red, very thick, ->] (-3.4641/2+0.86602/2,1+2/2-0.5/2-1/2) -- (+0.86602/2,1-0.5/2-1/2) -- (+0.86602/2,1-0.5/2-2-1/2);
\draw[red, very thick, ->] (-0.86602/2,-0.5/2-2+1/2) -- (-0.86602/2,-0.5/2+1/2) -- (3.4641/2-0.86602/2,2/2-0.5/2+1/2) ;
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (3.4641,2) -- (3.4641,4);
\draw[gray, very thick] (3.4641,2) -- (1.5*3.4641, 2-1);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (-3.4641,2) -- (-3.4641,4);
\draw[gray, very thick] (-3.4641,2) -- (-1.5*3.4641,1);
\draw[gray, very thick] (0,0) -- (0,-4);
\draw[gray, very thick] (0,-4) -- (2*0.86602,-4-2*0.5);
\draw[gray, very thick] (0,-4) -- (-2*0.86602,-4-2*0.5);
\node at (0+0,1.5*2) {$D-F$};
\node at (0-1.5*1.7321,-1.5*1) {$A-E$};
\node at (0+1.5*1.7321,-1.5*1) {$B-C$};
\node at (0+0,-6) {$C$};
\node at (0+1.5*3.4641,3) {$F$};
\node at (0-1.5*3.4641,3) {$E$};
\draw [color=red!50, fill=red!20, very thick] (15,0) -- (15-3.4641,2) -- (15-3.4641-0.86602,2-0.5) -- (15-3.4641,3) -- (15,1) -- (15+3.4641,3) -- (15+3.4641,2) -- (15,0);
\draw[red, very thick, ->] (15-3.4641/2-0.86602/2,2/2-0.5/2+1) -- (15,-0.5+1) -- (15+3.4641/2+0.86602/2,2/2-0.5/2+1);
\draw[gray, very thick] (15,0) -- (15+3.4641,2);
\draw[gray, very thick] (15+3.4641,2) -- (15+3.4641,4);
\draw[gray, very thick] (15+3.4641,2) -- (15+1.5*3.4641, 2-1);
\draw[gray, very thick] (15,0) -- (15-3.4641,2);
\draw[gray, very thick] (15-3.4641,2) -- (15-3.4641,4);
\draw[gray, very thick] (15-3.4641,2) -- (15-1.5*3.4641,1);
\draw[gray, very thick] (15+0,0) -- (15+0,-4);
\draw[gray, very thick] (15+0,-4) -- (15+2*0.86602,-4-2*0.5);
\draw[gray, very thick] (15+0,-4) -- (15-2*0.86602,-4-2*0.5);
\node at (15+0,1.5*2) {$D-F$};
\node at (15-1.5*1.7321,-1.5*1) {$A-E$};
\node at (15+1.5*1.7321,-1.5*1) {$B-C$};
\node at (15+0,-6) {$C$};
\node at (15+1.5*3.4641,3) {$F$};
\node at (15-1.5*3.4641,3) {$E$};
\end{tikzpicture}
\caption{Transport of vortices along intersecting paths.
}
\label{fig:transport2}
\end{figure}
\begin{figure}
\centering
\begin{tikzpicture}[scale=.5]
\node[rotate=90] at (0,0) {
\begin{tikzpicture}[scale=.5]
\draw [color=red!50, fill=red!20, very thick] (0,3) -- (-3.4641,2+3) -- (-3.4641,2+3+1) -- (-1.25*3.4641,+2+3-0.5) -- (0, 3-1) -- (1.25*3.4641,+2+3-0.5) -- (3.4641,2+3);
\draw [color=red!50, fill=red!20, very thick] (0,-3) -- (-3.4641,-2-3) -- (-1.25*3.4641,-2-3+0.5) -- (-3.4641,-2-3-1) -- (0, -4) -- (3.4641,-2-3-1) -- (3.4641,-2-3);
\draw[red, very thick, ->] (-3.4641/2-0.86602/2, 3+2/2-0.5/2) -- (0,3-0.5) -- (3.4641/2+0.86602/2, 3+2/2-0.5/2);
\draw[red, very thick, ->] (-3.4641/2-0.86602/2, -4-2/2+0.5/2) -- (0,-4+0.5) -- (3.4641/2+0.86602/2, -4-2/2+0.5/2);
\draw [color=blue!50, fill=blue!20, very thick, opacity=.5] (0,0) -- (0,3) -- (0+3.4641,2+3) -- (5*0.866025,+2+3-0.5) -- (3.4641,+2+3+1) -- (-0.25*3.4641,3+.5) -- (-0.25*3.4641,-3+.5) -- (-1.25*3.4641,-2-3+0.5) -- (0-3.4641,-2-3) -- (0,-3);
\draw [color=blue!50, fill=blue!20, very thick, opacity=.5] (0,0) -- (0,-3) -- (0+3.4641,-2-3) -- (3.4641,-2-3-1) -- (5*0.866025,-2-3+0.5) -- (0.25*3.4641,-3+.5) -- (0.25*3.4641,3+.5) -- (-3.4641,+2+3+1) --(0-3.4641,2+3) -- (0,3);
\draw[blue, very thick, ->, opacity=.5] (3.4641/2+0.86602/2+3.4641/8, -4-2/2+0.5/2+1/2+2/8) -- (0+3.4641/8,-4+0.5+1/2+2/8) -- (0+3.4641/8,0) -- (0+3.4641/8,2/8+3) -- (0-3.4641/2,2/2+3+0.5);
\draw[blue, very thick, <-, opacity=.5] (-3.4641/2-0.86602/2-3.4641/8, -4-2/2+0.5/2+1/2+2/8) -- (0-3.4641/8,-4+0.5+1/2+2/8) -- (0-3.4641/8,0) -- (0-3.4641/8,2/8+3) -- (0+3.4641/2,2/2+3+0.5);
\draw[gray, very thick] (0,-3) -- (0+3.4641,-2-3);
\draw[gray, very thick] (0+3.4641,-2-3) -- (3.4641,-2-3-1);
\draw[gray, very thick] (0+3.4641,-2-3) -- (1.25*3.4641,-2-3+0.5);
\draw[gray, very thick] (0,-3) -- (0-3.4641,-2-3);
\draw[gray, very thick] (-3.4641,-2-3) -- (-3.4641,-2-3-1);
\draw[gray, very thick] (-3.4641,-2-3) -- (-1.25*3.4641,-2-3+0.5);
\draw[gray, very thick] (0,-3) -- (0,6-3);
\draw[gray, very thick] (0,-3) -- (0,4-3);
\draw[gray, very thick] (0,3) -- (0+3.4641,2+3);
\draw[gray, very thick] (3.4641,2+3) -- (3.4641,+2+3+1);
\draw[gray, very thick] (3.4641,2+3) -- (1.25*3.4641,+2+3-0.5);
\draw[gray, very thick] (0,3) -- (0-3.4641,2+3);
\draw[gray, very thick] (-3.4641,2+3) -- (-3.4641,2+3+1);
\draw[gray, very thick] (-3.4641,2+3) -- (-1.25*3.4641,+2+3-0.5);
\end{tikzpicture}
};
\node at (-5,0) {$C$};
\node at (5,0) {$D$};
\node at (0,2) {$A$};
\node at (0,-2) {$B$};
\node at (-5,5) {$2$};
\node at (5,5) {$4$};
\node at (5,-5) {$3$};
\node at (-5,-5) {$1$};
\end{tikzpicture}
\caption{One first creates vortex/anti-vortex pairs by operators $12$ and $34$ (shaded in red), and then annihilates them by first applying operator $23$, and then $41$ (shaded in blue).
}
\label{fig:c+cc+c}
\end{figure}
One can create vortices and anti-vortices at the corners of a rectangle by applying transport operators $12$ and $34$ to the vacuum vector $|0\rangle$ (see Fig. \ref{fig:c+cc+c}) and then annihilate them in a different order by applying transport operators $23$ followed by $41$. After the application of the operator $23$, one obtains the inverse of the operator $41$ times a phase factor $e^{\pi i (\langle M_{ABD} \rangle + \langle M_{ACB} \rangle)}$. Therefore the net result of this operation is multiplication of the vacuum vector by $e^{\pi i \sigma}$, up to ${O(L^{-\infty})}$ corrections.
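In operator form, the bookkeeping above can be summarized schematically (labeling each transport operator by its endpoints) as
\begin{equation*}
(41)\,(23)\,(34)\,(12)\, |0\rangle = \left( e^{\pi i \left( \langle M_{ABD} \rangle + \langle M_{ACB} \rangle \right)} + {O(L^{-\infty})} \right) |0\rangle = \left( e^{\pi i \sigma} + {O(L^{-\infty})} \right) |0\rangle .
\end{equation*}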
\subsection{Hall conductance for systems in an invertible phase}\label{HallInv}
In general, one does not expect that one can create a single vortex by applying some almost local or even quasi-local observable to the ground state vector, i.e. the single-vortex state and the ground state can belong to different superselection sectors. However, systems in an invertible phase are special in this regard.
Let $\left({\mathscr A}^{(+)},H^{(+)},\psi^{(+)}_0,Q^{(+)}\right)$ be a $U(1)$-invariant gapped lattice system in an invertible phase. Let $\left({\mathscr A}^{(-)},H^{(-)},\psi^{(-)}_0\right)$ be its inverse. Note that we do not require the inverse to have a non-trivial $U(1)$ symmetry. Formally, we may say that it has a $U(1)$ symmetry whose charge is zero. The composite system has a $U(1)$ symmetry whose charge is $Q=Q^{(+)}\otimes 1^{(-)}$. Vortex states for the composite system are defined in the usual manner. We will show that in the composite system vortex states can be obtained from the ground state by applying an almost local unitary. Using that and the relation between the transport properties of vortices and the Hall conductance, we will show that for the original system the Hall conductance is quantized, $\sigma \in {\mathbb Z}$. Moreover, we show that for a bosonic system in an invertible phase $\sigma \in 2 {\mathbb Z}$, while for a fermionic system in an invertible phase $\sigma$ is even (odd) if and only if vortices are bosons (fermions).
We start with the bosonic case. Recall that the composite system is defined as follows. Its algebra of observables is ${\mathscr A}={\mathscr A}^{(+)}\otimes{\mathscr A}^{(-)}$, its Hamiltonian is $H=H^{(+)}\otimes 1^{(-)}+1^{(+)}\otimes H^{(-)}$, and its ground state $\Psi$ is the state defined by $\Psi({\mathcal A}^{(+)}\otimes{\mathcal A}^{(-)})=\psi^{(+)}_0({\mathcal A}^{(+)})\psi^{(-)}_0({\mathcal A}^{(-)}).$ Let $({\mathscr A},H(s),\Psi(s))$, $s\in [0,1],$ be a path of bosonic gapped lattice systems connecting the composite system to a gapped system $({\mathscr A},H(1),\Psi(1)={\tilde\Psi})$ with a factorized ground state ${\tilde\Psi}$. As discussed in section \ref{QAevolution}, the ground state $\Psi$ for $H(0)$ is automorphically equivalent to ${\tilde\Psi}$ via an automorphism $\alpha_G$ locally generated by a 0-chain $G(s) = \sum_{j} G_j (s)$. We denote by $\Pi$ and ${\tilde\Pi}$ the GNS representations of ${\mathscr A}$ corresponding to the states $\Psi$ and ${\tilde\Psi}$.
Consider three regions $A$,$B$,$C$ meeting at a point $ABC$ (see Fig. \ref{fig:vortexA}) such that the boundaries $AB,BC,CA$ form an admissible graph. Let $\Upsilon_{ABC}$ be the vortex inserting automorphism for the composite system $({\mathscr A},H,\Psi,Q)$. It has the form $\Upsilon_{ABC}={\upsilon^{(+)}}_{ABC}\otimes 1^{(-)}$. Let ${\tilde\Upsilon}_{ABC}$ be the automorphism $\left( \alpha_G \circ \Upsilon_{ABC} \circ \alpha^{-1}_G \right)$.
Since the automorphism $\Upsilon_{ABC}$ is approximately localized on the path $AB$, the same is true about ${\tilde\Upsilon}_{ABC}$.
Since the superselection sector of the state $\Upsilon_{ABC}(\Psi)$ is invariant under re-arranging the paths $AB,BC,CA$, the same is true about the state ${\tilde\Upsilon}_{ABC}({\tilde\Psi}).$ Thus the state ${\tilde\Upsilon}_{ABC}({\tilde\Psi})$ is asymptotically locally indistinguishable from the vacuum state ${\tilde\Psi}$. By itself, this does not imply ${\tilde\Upsilon}_{ABC} ({\tilde\Psi})$ and ${\tilde\Psi}$ are in the same superselection sector. For example, topologically non-trivial excitations in a toric code are produced from the ground state precisely in this manner \cite{toriccode}. But since ${\tilde\Psi}$ is a factorized state, one expects that every pure state which is asymptotically locally indistinguishable from ${\tilde\Psi}$ is in the same superselection sector. For the states ${\tilde\Upsilon}_{ABC}({\tilde\Psi})$ and ${\tilde\Psi}$ this is shown in Appendix \ref{app:trivial} using a result of T. Matsui \cite{matsui2013boundedness}.
Automorphic equivalence of $\Psi$ and ${\tilde\Psi}$ (by means of a locally generated automorphism) implies that $\Upsilon_{ABC}(\Psi)$ is a vector state in the GNS representation for $\Psi$ which can be produced by an almost local unitary ${\mathcal V}_{ABC}$:
\begin{equation}
|\Upsilon_{ABC} \rangle = \Pi({\mathcal V}_{ABC}) |0 \rangle.
\end{equation}
Note that because all bounds on the operators used in the construction of vortex states are uniform, there is a function $f(r) = {O(r^{-\infty})}$, such that for any vortex state $\Upsilon_{ABC}(\Psi)$ constructed using an admissible graph there is an almost local unitary ${\mathcal V}_{ABC}$ which is $f$-localized.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (0,0) -- (-3.4641,2) -- (-3.4641-0.86602,2-0.5) -- (-0.86602,-0.5) -- (-0.86602,-4) -- (0, - 4) ;
\draw[red, very thick, <-] (-3.4641/2-0.86602/2,2/2-0.5/2) -- (-0.86602/2,-0.5/2) -- (-0.86602/2,-0.5/2-2);
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (3.4641,2) -- (3.4641,4);
\draw[gray, very thick] (3.4641,2) -- (1.5*3.4641, 2-1);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (-3.4641,2) -- (-3.4641,4);
\draw[gray, very thick] (-3.4641,2) -- (-1.5*3.4641,1);
\draw[gray, very thick] (0,0) -- (0,-4);
\node at (-1.4*3.4641,1.4*2) {$1$};
\node at (1.4*3.4641,1.4*2) {$2$};
\end{tikzpicture}
\caption{}
\label{fig:exchA}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (0,0) -- (0,-4) -- (-0.86602,-4) -- (-3.4641/4,2/4) -- (3.4641,3) -- (3.4641,2) -- (0,0);
\draw[red, very thick, ->] (-0.86602/2,-0.5/2-2+1/2) -- (-0.86602/2,-0.5/2+1/2) -- (3.4641/2-0.86602/2,2/2-0.5/2+1/2) ;
\draw[gray, very thick] (0,0) -- (3.4641,2);
\draw[gray, very thick] (3.4641,2) -- (3.4641,4);
\draw[gray, very thick] (3.4641,2) -- (1.5*3.4641, 2-1);
\draw[gray, very thick] (0,0) -- (-3.4641,2);
\draw[gray, very thick] (-3.4641,2) -- (-3.4641,4);
\draw[gray, very thick] (-3.4641,2) -- (-1.5*3.4641,1);
\draw[gray, very thick] (0,0) -- (0,-4);
\node at (-1.4*3.4641,1.4*2) {$1$};
\node at (1.4*3.4641,1.4*2) {$2$};
\end{tikzpicture}
\caption{}
\label{fig:exchB}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[scale=.3]
\draw [color=red!50, fill=red!20, very thick] (15,0) -- (15-3.4641,2) -- (15-3.4641-0.86602,2-0.5) -- (15-3.4641,3) -- (15,1) -- (15+3.4641,3) -- (15+3.4641,2) -- (15,0);
\draw[red, very thick, ->] (15-3.4641/2-0.86602/2,2/2-0.5/2+1) -- (15,-0.5+1) -- (15+3.4641/2+0.86602/2,2/2-0.5/2+1);
\draw[gray, very thick] (15,0) -- (15+3.4641,2);
\draw[gray, very thick] (15+3.4641,2) -- (15+3.4641,4);
\draw[gray, very thick] (15+3.4641,2) -- (15+1.5*3.4641, 2-1);
\draw[gray, very thick] (15,0) -- (15-3.4641,2);
\draw[gray, very thick] (15-3.4641,2) -- (15-3.4641,4);
\draw[gray, very thick] (15-3.4641,2) -- (15-1.5*3.4641,1);
\draw[gray, very thick] (15+0,0) -- (15+0,-4);
\node at (15-1.4*3.4641,1.4*2) {$1$};
\node at (15+1.4*3.4641,1.4*2) {$2$};
\end{tikzpicture}
\caption{}
\label{fig:exchC}
\end{subfigure}
\caption{The processes corresponding to ${\mathcal V}_1$, ${\mathcal V}_2$ and ${\mathcal W}_{\bar{1} 2}$.
}
\label{fig:exch}
\end{figure}
Let us consider an admissible graph depicted in Fig. \ref{fig:exch} with segments connecting triple points having length $L$. Let ${\mathcal V}_{1}$ and ${\mathcal V}_{2}$ be almost local unitary observables creating vortices at points $1$ and $2$ as shown in Fig. \ref{fig:exchA} and Fig. \ref{fig:exchB}, and let ${\mathcal W}_{\bar{1} 2}$ be the transport operator shown in Fig. \ref{fig:exchC}. Then, using the results from section \ref{vortices}, we have
\begin{multline}
|0 \rangle = \Pi \left( ({\mathcal V}_2^{-1} {\mathcal V}_1)( {\mathcal V}_2 {\mathcal V}_1^{-1}) \right) |0\rangle + {O(L^{-\infty})} = \\ = e^{-\pi i \sigma/2}
\Pi \left( {\mathcal V}_2^{-1} {\mathcal V}_1 {\mathcal W}_{\bar{1} 2} \right) |0\rangle + {O(L^{-\infty})} =
e^{-\pi i \sigma }
|0\rangle + {O(L^{-\infty})} .
\end{multline}
Therefore, for bosonic spin systems, $\sigma \in 2 {\mathbb Z}$.
For fermionic systems the arguments are the same, but the almost local unitary relating the vortex state and the ground state can either preserve or flip fermionic parity. In the former case, the almost local observable ${\mathcal V}_{ABC}$ has even fermionic parity, and the same arguments as above show that $\sigma\in 2{\mathbb Z}$. In the latter case, ${\mathcal V}_{ABC}$ has odd fermionic parity, and thus operators creating vortices at widely separated points approximately anti-commute. The above argument then shows that $\sigma$ is an odd integer. Thus vortices are bosons or fermions depending on whether $\sigma$ is even or odd, in agreement with \cite{LevinSenthil}.
\begin{remark} Let us say that a pure state $\psi^{(+)}$ on ${\mathscr A}^{(+)}$ is in an invertible phase if there is another pure state $\psi^{(-)}$ on ${\mathscr A}^{(-)}$ and a locally generated automorphism $\alpha_F$ of ${\mathscr A} = {\mathscr A}^{(+)} \otimes {\mathscr A}^{(-)}$ such that the state $\alpha_F(\psi^{(+)} \otimes \psi^{(-)})$ is factorized. For a factorized state we can always choose a gapped local Hamiltonian $H=\sum_j H_j$, such that it is a ground state of this Hamiltonian. Then $\alpha_F(H)$ is gapped and has $\psi^{(+)} \otimes \psi^{(-)}$ as a ground state. If $\psi^{(+)}$ is invariant under a $U(1)$ symmetry with charge $Q$ on ${\mathscr A}$, then $\psi^{(+)} \otimes \psi^{(-)}$ is also the ground state of a gapped Hamiltonian $(\alpha_Q(\phi) \circ \alpha_F)(H)$ for any $\phi\in {\mathbb R}/2\pi{\mathbb Z}$. Therefore $\psi^{(+)}$ is also the ground state of a $U(1)$-invariant gapped Hamiltonian
\begin{equation}
H' = \int_0^{2\pi} (\alpha_Q(\phi) \circ \alpha_F)(H) d \phi .
\end{equation}
Thus to any $U(1)$-invariant invertible state $\psi^{(+)}$ one can associate a quantized invariant, which is the Hall conductance of the composite system. In the case when $\psi^{(+)}$ satisfies the no local spontaneous symmetry breaking condition, this invariant coincides with the Hall conductance of $\psi^{(+)}$. Therefore one does not actually have to use the Hamiltonians $H^{(\pm)}$ of the original and the inverse system anywhere in this section.
\end{remark}
\section{Concluding remarks}
We have shown that both the zero-temperature Hall conductance and the Thouless pump invariant of gapped lattice systems are locally computable. Similar results were recently obtained in \cite{kapustin2019thermal,kapustin2020higherA}. This implies that a 2d gapped system with a nonzero Hall conductance cannot have a gapped interface with a trivial 2d gapped system (or equivalently, cannot have a gapped edge). Similarly, a 1d gapped system with a nonzero Thouless pump invariant cannot have a gapped interface with a trivial 1d system. (In the case of the Thouless pump, constructing the interface involves interpolating both the Hamiltonians and the locally-generated automorphism $\alpha$).
In this paper we adopted a definition of a gapped phase of matter (for a fixed lattice and an algebra of observables) as a homotopy equivalence class of gapped Hamiltonians and their ground states. Another attractive possibility is to keep track of just the states and declare two ground states to be equivalent (and thus in the same gapped phase) if they are related by a locally-generated automorphism of ${\mathscr A}$. Thanks to the results of \cite{bachmann2012automorphic,moon2020automorphic}, if $(H,\psi)$ and $(H',\psi')$ are equivalent in the former sense, then $\psi$ and $\psi'$ are equivalent in the latter sense. Note also that $\sigma_{Hall}$ is unaffected by locally generated $U(1)$ invariant automorphisms and so can be regarded as an invariant of a gapped phase with $U(1)$ symmetry in this new sense.
We can completely avoid the usage of the Hamiltonian if we restrict our attention to states in the invertible phase. By the results of Section \ref{HallInv}, if the lattice $\Lambda$ is two-dimensional, the Hall conductance of a pure $U(1)$-invariant state in an invertible phase is well-defined and quantized. Such a Hamiltonian-free definition of an invertible phase could be a useful alternative to the one based on finite-depth local unitary quantum circuits \cite{QImeets} since it guarantees that homotopies in the space of invertible states do not affect the phase to which the system belongs. \\
\noindent
{\bf Acknowledgements:}
This research was supported in part by the U.S.\ Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632. A.K. was also supported by the Simons Investigator Award. N.S. gratefully acknowledges the support of the Dominic Orr Fellowship at Caltech. \\
\noindent
{\bf Data availability statement:}
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
\section{Introduction}\label{Introduction}
Let $(M,g)$ be a closed Riemannian manifold of dimension $n\ge 3$, and let $L_g$ denote the Conformal Laplacian, defined by
\begin{equation}
L_g := -\Delta_g + c_nR_g.
\end{equation}
Here $c_n := \frac{n-2}{4(n-1)}$, and $\Delta_g$ is the negative Laplace-Beltrami operator. This operator is conformally invariant in the following sense: if $ g_u := u^{\frac{4}{n-2}}g\in [g]$, then for any $\phi\in C^\infty(M^n)$ it holds that
\begin{equation}\label{Conf-Invar}
L_{g_u}(\phi) = u^{-\frac{n+2}{n-2}}L_g(u\phi).
\end{equation}
This formula implies the following conformal invariance of the Dirichlet energy:
\begin{align} \label{cid}
\int_M \phi L_{g_u} \phi \, dv_{g_u} = \int_M (u \phi) \, L_g (u \phi)\, dv_g.
\end{align}
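For completeness, this is a one-line computation: since $dv_{g_u} = u^{\frac{2n}{n-2}} \, dv_g$, the invariance (\ref{Conf-Invar}) gives
\begin{equation*}
\int_M \phi \, L_{g_u} \phi \, dv_{g_u} = \int_M \phi \, u^{-\frac{n+2}{n-2}} L_g(u\phi) \, u^{\frac{2n}{n-2}} \, dv_g = \int_M (u \phi) \, L_g (u \phi) \, dv_g ,
\end{equation*}
because $-\frac{n+2}{n-2} + \frac{2n}{n-2} = 1$.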
Therefore, the Rayleigh quotient satisfies
\begin{align} \label{Ruu} \begin{split}
\dfrac{ \int_M \phi L_{g_u} \phi \, dv_{g_u} }{ \int_M \phi^2 \, dv_{g_u} } &= \dfrac{ \int_M (u \phi) \, L_g (u \phi)\, dv_g}{ \int_M (u \phi)^2 u^{\frac{4}{n-2}} \, dv_g} \\
&= \dfrac{ \int_M \psi \, L_g \psi \, dv_g}{ \int_M \psi^2 u^{\frac{4}{n-2}} \, dv_g } \\
&=: \mathcal{R}^u_g(\psi),
\end{split}
\end{align}
where $\psi = u \phi$.
Since $M^n$ is compact, the spectrum $\text{Spec}(L_g)$ of $L_g$ is discrete, and we denote it by
\begin{equation}
\lambda_1(L_g) < \lambda_2(L_g)\le \cdots \le \lambda_k(L_g) \longrightarrow \infty,
\end{equation}
where the eigenvalues are repeated according to their multiplicities. If we fix a choice of a conformal representative $g\in[g]$, then given $g_u = u^{\frac{4}{n-2}}g$ we denote
\begin{align} \label{lug}
\lambda_k(u) := \lambda_k(L_{g_u}).
\end{align}
In view of the conformal invariance of the Rayleigh quotient in (\ref{Ruu}), the min-max characterization of the $k^{th}$-eigenvalue can be expressed as
\begin{align} \label{Lku}
\lambda_k(u) = \inf_{\Sigma_k \subset W^{1,2}(M^n,g) } \sup_{\psi \in \Sigma_k \setminus \{ 0 \}} \mathcal{R}^u_g(\psi),
\end{align}
where $\Sigma_k \subset W^{1,2}:=W^{1,2}(M^n,g)$ denotes a $k$-dimensional subspace of $W^{1,2}$.
The conformal invariance of $L_g$ also implies the conformal invariance of various spectrally-defined quantities:
\begin{enumerate}[(i)]
\item The sign of $\lambda_1(L_g)$ is a conformal invariant (see \cite{Kazdan}), and agrees with the
sign of the Yamabe invariant
\begin{align*}
Y(M^n,[g]) := \inf_{u \in W^{1,2} \setminus \{ 0 \}} \dfrac{ \int_M u \, L_g u \, dv_g }{ \left( \int_M |u|^{\frac{2n}{n-2}} \, dv_g \right)^{\frac{n-2}{n}}}.
\end{align*}
\item The dimension of $\ker L_g$ is a conformal invariant. This is immediate from (\ref{Conf-Invar}).
\item The number of negative eigenvalues of $L_g$, $\nu([g])$, is also a conformal invariant, and its size is not topologically obstructed (see \cite{Canzani}).
\end{enumerate}
Our main focus in this work will be the existence of an extremal for the normalized eigenvalue functional $F_k$ defined by
\begin{equation}\label{E-Functional}
u \longmapsto F_k(u) := \lambda_k(u)\left(\int_M u^{\frac{2n}{n-2}}\;dv_g\right)^{\frac{2}{n}}.
\end{equation}
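Note that the exponent $\frac{2}{n}$ is exactly the one making $F_k$ invariant under constant rescalings of $u$: from (\ref{Lku}) one has $\lambda_k(cu) = c^{-\frac{4}{n-2}} \lambda_k(u)$ for any constant $c > 0$, and therefore
\begin{equation*}
F_k(cu) = c^{-\frac{4}{n-2}} \lambda_k(u) \left( c^{\frac{2n}{n-2}} \int_M u^{\frac{2n}{n-2}}\;dv_g \right)^{\frac{2}{n}} = F_k(u) ,
\end{equation*}
since $\frac{2n}{n-2} \cdot \frac{2}{n} = \frac{4}{n-2}$.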
As pointed out in \cite{Ammann}, if the Yamabe invariant $Y(M^n,[g]) \geq 0$, then
\begin{align} \label{infY}
\inf_{ g_u \in [g]} F_1(u) = Y(M^n,[g]).
\end{align}
When $Y(M^n,[g]) < 0$ the same argument shows
\begin{align} \label{supY}
\sup_{ g_u \in [g]} F_1(u) = Y(M^n,[g]).
\end{align}
In particular, by the resolution of the Yamabe problem, a minimizer (respectively, a maximizer) for $F_1(u)$ exists if $Y(M^n,[g]) \geq 0$ (resp., $< 0$).
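For the reader's convenience, here is a sketch of the H\"older-inequality argument behind (\ref{infY}), as in \cite{Ammann}. Writing $N = \frac{2n}{n-2}$, one has $\left( \int_M u^{N} \, dv_g \right)^{\frac{2}{n}} = \| u \|_{L^N}^{N-2}$, and
\begin{equation*}
\int_M \psi^2 u^{N-2} \, dv_g \leq \| \psi \|_{L^N}^2 \, \| u \|_{L^N}^{N-2} ,
\end{equation*}
with equality when $u$ is a constant multiple of $|\psi|$. Hence, whenever $\int_M \psi \, L_g \psi \, dv_g \geq 0$ (which holds for all $\psi$ when $Y(M^n,[g]) \geq 0$), the Rayleigh quotient satisfies $\mathcal{R}^u_g(\psi) \, \| u \|_{L^N}^{N-2} \geq \int_M \psi \, L_g \psi \, dv_g \, / \, \| \psi \|_{L^N}^2$.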
We will therefore be interested in the variational properties of $F_k$ when $k\geq 2$.
For conformal classes with $Y(M^n,[g]) \ge 0$, Ammann-Humbert \cite{Ammann} called
\begin{align} \label{infY2}
\mu_2(M^n,[g]) := \inf_{ g_u \in [g]} F_2(u)
\end{align}
the {\em second Yamabe invariant}. Under certain assumptions they were able to prove the existence of a ``generalized metric'' attaining the second Yamabe invariant; i.e., a metric of the form $g_u = u^{\frac{4}{n-2}}g$, with
\begin{align} \label{LNdef}
u \in L^{\frac{2n}{n-2}}_{+}(M^n,g) := \{ u \in L^{\frac{2n}{n-2}}(M^n,g)\, :\, u \geq 0 \ a.e. \}\setminus \{ 0 \}.
\end{align}
As we will explain below, the existence of minimizers for $F_2$ is related to the existence of nodal solutions for the Yamabe equation. We point out that, in the case of positive Yamabe invariant, the supremum of $F_k$ is always $+\infty$ for any $k\ge1$ (see \cite{Ammann2}).
If the Yamabe invariant $Y(M^n,[g]) < 0$ and $\nu([g]) = 1$, then $\lambda_2(u) \geq 0$ for all $g_u \in [g]$. In this case, El Sayed \cite{ElSayed} showed that the approach of Ammann-Humbert still works, and the second Yamabe invariant is attained. However, if $\nu([g]) > 1$, then it is not difficult to see that
\begin{align} \label{inf2neg}
\inf_{ g_u \in [g]} F_2(u) = -\infty.
\end{align}
Indeed, (\ref{supY}) suggests that in this setting one should seek to {\em maximize} $F_2$ in $[g]$. Our main result is that this is always possible, provided $\ker L_g$ is trivial:
\begin{theorem}\label{MainTheorem}
Let $(M^n,g)$ be a closed Riemannian manifold equipped with a conformal class $[g]$ satisfying $\nu([g])>1$ and $0\not \in \text{Spec}(L_g)$. Then there is a nonnegative and nontrivial function $\bar u\in C^{\alpha}(M^n)\cap C^\infty(M^n\setminus\{\bar u=0\})$, $\alpha \in (0,1)$, which is maximal for the normalized eigenvalue functional
\begin{equation}
F_2 : u\in L^{\frac{2n}{n-2}}_+(M^n,g) \longmapsto \lambda_2(u)\left(\int_M u^{\frac{2n}{n-2}}\;dv_g\right)^{\frac{2}{n}}.
\end{equation}
Moreover, for any maximizer $\bar{u} \in L^{\frac{2n}{n-2}}_+(M^n,g)$ there exists a collection $\{\bar \phi_i\}_{i=1}^k\subset C^{2,\alpha}(M^n)$ of second generalized eigenfunctions (see Section \ref{SetUp}) satisfying
\begin{equation}\label{limitEulerEq}
\bar u^2 - \sum_{i=1}^k \bar \phi_i^2 =0.
\end{equation}
Here $1 \leq k \leq \dim E_2(\bar{u})$, where $E_2(\bar{u})$ is the space of generalized eigenfunctions corresponding to $\lambda_2(\bar{u})$.
\end{theorem}
By Theorem 1.7 of \cite{HS}, the zero locus $\{ \bar{u} = 0 \}$ in Theorem \ref{MainTheorem} can be decomposed into a (possibly empty) $C^1$ submanifold of dimension $(n-1)$ and finite volume given by $\{ \bar{u} = 0 \} \cap \{ |\nabla \bar{u}| > 0 \}$, and the closed set $\{ \bar{u} = 0 \} \cap \{ |\nabla \bar{u}| = 0 \}$ of dimension $\leq n-2$.
As a consequence of Theorem \ref{MainTheorem}, in each conformal class satisfying the assumptions of the theorem, there is either a nodal solution of the Yamabe equation, or a harmonic map into a sphere, depending on the integer $k$ in the conclusion:
\begin{corollary}\label{NodalHarmonic}
Let $\bar u\in C^{\alpha}(M^n)\cap C^\infty(M^n\setminus\{\bar u=0\})$ be a maximal function provided by Theorem \ref{MainTheorem}. We have the following two cases:
\begin{enumerate}
\item If $k=1$, then $\bar u=|\bar \phi|$ on $M^n$, and $\bar \phi$ is a nodal solution of
\begin{equation}
L_g\bar \phi = \lambda_2(\bar u) |\bar \phi|^{\frac{4}{n-2}} \, \bar \phi.
\end{equation}
\item If $k>1$, then the map
\begin{equation}\label{Harmonic}
\bar U:=(\bar \phi_1/\bar u,\cdots, \bar \phi_k/\bar u): (M^n\setminus\{\bar u=0\}, \bar u^{\frac{4}{n-2}}g) \longrightarrow (\mathbb{S}^{k-1},g_{\text{round}})
\end{equation}
defines a harmonic map.
\end{enumerate}
\end{corollary}
As we now explain, it is possible to construct examples for both cases. For the first case, observe that by definition
\begin{align*}
1 \leq k \leq \nu([g]).
\end{align*}
In particular, if $\nu([g]) =2$, then $k=1$ (since $\lambda_1(\bar u)$ must be simple by Lemma \ref{simple}). Moreover, Corollary \ref{NodalHarmonic} implies that any maximal function $\bar u$ of the functional $F_2$ on $L^{\frac{2n}{n-2}}_+(M^n,g)$ is of the form $\bar u = |\bar \phi|$ for $\bar \phi\in E_2(\bar u)$. Since $\bar \phi$ changes sign, this means that $\bar u$ cannot be strictly positive and consequently there is no bona fide Riemannian metric in $[g]$ maximizing $F_2$.
By contrast, in Section \ref{Example} we will construct explicit examples for which $k \ge 2$ and the maximal metric induces a harmonic map:
\begin{theorem} \label{IntroExample} Let $(H,h)$ be a closed Riemannian manifold with constant negative scalar curvature, suitably normalized (see Section \ref{Example}). Then the product metric $(M,g) = (H \times S^1, h + d\theta^2)$ is maximal in its conformal class. In particular, eigenfunctions $\{ \psi_1, \psi_2 \}$ for the Laplacian on the $S^1$-factor are eigenfunctions for $\lambda_2(L_g)$, and define a harmonic map
\begin{align*}
\Psi = (\psi_1, \psi_2) : M \rightarrow S^1,
\end{align*}
given by projection onto the $S^1$-factor.
\end{theorem}
\medskip
\noindent {\bf Remarks.} \bigskip \begin{enumerate}
\item In the work of Ammann-Humbert, if $g_u \in [g]$ is a (generalized) metric that minimizes $F_2$, then $\lambda_2(u)$ is always simple\footnote{The simplicity of $\lambda_2$ at a minimizer in this case can also be shown by a first variation argument; see Remark \ref{Simplicity}.} (see Theorem 3.4 of \cite{Ammann}). When $k > 1$ in Corollary \ref{NodalHarmonic}, then $\lambda_2$ is not simple; for example, the multiplicity of $\lambda_2$ for the product metric on $H \times S^1$ is two. \medskip
\item In fact, it is easy to adapt the proof of Theorem \ref{IntroExample} to construct examples with $k$ arbitrarily large, by taking products with spheres of high enough dimension. \medskip
\item Theorem \ref{IntroExample} also shows that maximal metrics can be smooth. This is another significant contrast to the problem of minimizing $\lambda_2$: in the work of Ammann-Humbert, if $g_u \in [g]$ is a (generalized) metric minimizing $F_2$, then $\bar{u} = |\phi|$, where $\phi$ is a nodal solution of the Yamabe equation. In particular, $g_u$ is never a smooth Riemannian metric. \medskip
\item Another surprising aspect of Theorem \ref{IntroExample} is that the product metric is a Yamabe metric, hence (as we observed above) is simultaneously maximal for $\lambda_1(L)$. This is remarkably different from the case of the Laplace operator on surfaces, where it is known that metrics cannot maximize consecutive eigenvalues (\cite{ElSoufi2}). \medskip
\end{enumerate}
In the case of surfaces, the connection of maximal eigenvalues to harmonic maps is well known. If $\Sigma$ is a closed Riemannian surface, then Petrides \cite{Petrides} and Nadirashvili-Sire \cite{Nadirashvilli} have shown that for any conformal class $[g]$ on $\Sigma$, there exists a metric $\bar g \in [g]$, which is smooth except possibly at finitely many points, such that
\begin{equation}\label{ConformalEigenvalue}
\lambda_1(-\Delta_{\bar{g}}) \, \text{Area}(\Sigma,\bar{g}) = \Lambda_1(\Sigma,[g]) := \sup_{\tilde g \in [g]} \lambda_1(-\Delta_{\tilde g}) \, \text{Area}(\Sigma, \tilde g)
\end{equation}
(see also work by Kokarev \cite{Kokarev}). In addition, associated to this maximal metric there is a harmonic map into a higher-dimensional sphere (\cite{Fraser}). In a similar spirit, Fraser-Schoen have shown a connection between extremal properties of Steklov eigenvalues on surfaces with boundary and free boundary minimal surfaces. To our knowledge, the relation between extremal properties of the spectrum of the Conformal Laplacian and harmonic maps is new.
The outline of the paper is as follows. In Section \ref{SetUp} we go through some basic definitions and notation, and we properly set up our maximization problem. In particular, we define the $k^{th}$-eigenvalue associated to a function $u$ in $L^{\frac{2n}{n-2}}_+(M^n,g)$. Section \ref{VariationFormulas} is devoted to the derivation of the first variation formulas for the normalized eigenvalue functional $F_2$. The main result we prove is that along certain deformations $u_t$ in $L^{\frac{2n}{n-2}}_+(M^n,g)$, both one-sided derivatives of $F_2(u_t)$ exist at $t=0$. The techniques we use in this section follow closely those employed by Kokarev in \cite{Kokarev}. In Section \ref{RegularizedFunctional} we explain a way to regularize our problem. The main idea behind including such a regularizing term is to control the size of the zero set of extremal functions. We prove existence of maximizers for the regularized functional and derive an associated Euler-Lagrange equation via classical separation theorems, as was done in \cite{ElSoufi}, \cite{ElSoufi2} for Laplace eigenvalues, and in \cite{Fraser} for Laplace and Steklov eigenvalues. In Section \ref{Estimates} we obtain uniform estimates for the sequence of maximizers, and in Section \ref{TakingLimit} we prove our main result, Theorem \ref{MainTheorem}, and Corollary \ref{NodalHarmonic}. Finally, in Section \ref{Example} we provide an example showing that the second case ($k>1$) in Corollary \ref{NodalHarmonic} does occur.
\vskip.2in
\section{Preliminaries} \label{SetUp}
Assume we are given a conformal class $[g]$ for which $\nu([g])>1$ and $0\not \in \text{Spec}(L_g)$. As pointed out in Section \ref{Introduction}, these assumptions are conformally invariant. Since $\nu([g])>1$ implies that $Y(M^n,[g])<0$, we are allowed to select as reference metric some $g\in [g]$ for which $R_g<0$ everywhere on $M$. From now on these are the assumptions that we will be working with.
Some notation and definitions are necessary. For $p \neq 0$, let us denote
\begin{align*}
L^p_+:= L^p_{+}(M^n,g) = \{ u \in L^p(M^n,g)\, :\, u \geq 0 \ a.e.\}\setminus \{0\}.
\end{align*}
We denote by $N:= 2^*=\frac{2n}{n-2}$ the critical Sobolev exponent. Given $u \in C^{\infty}(M^n)$ with $u>0$, $g_u$ denotes the conformal metric $g_u = u^{N-2}g \in [g]$. As discussed in Section \ref{Introduction}, by conformal invariance and the min-max characterization, we have
\begin{align}
\lambda_k(u):=\lambda_k(L_{g_u}) = \inf_{\Sigma_k \subset W^{1,2} } \sup_{\psi \in \Sigma_k \setminus \{ 0 \}} \mathcal{R}^u_g(\psi).
\end{align}
We take this as motivation to define the $k^{th}$-eigenvalue $\lambda_k(u)$ associated to an arbitrary function $u\in L^{N}_{+}$. Since the zero set of a function $u\in L^{N}_{+}$ could be large, there is the possibility of having a $k$-dimensional subspace spanned by functions which are linearly independent only on $\{u=0\}$. To rule out this scenario we define $\lambda_k(u)$ via (\ref{Lku}), but using the $k^{th}$ modified Grassmannian $Gr_k^u(W^{1,2}):= Gr_k^u(W^{1,2}(M^n,g))$ instead (see Section 2.2. in \cite{Ammann}). Note that $\langle \phi_1,\cdots, \phi_k \rangle \in Gr_k^u(W^{1,2})$ if and only if $\{\phi_1,\cdots, \phi_k\}\subset W^{1,2}$ and the functions $\phi_1 u^{\frac{N-2}{2}},\cdots, \phi_k u^{\frac{N-2}{2}}$ are linearly independent.
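For instance, if $u$ vanishes on an open set $U \subset M$, then two linearly independent functions $\phi_1, \phi_2 \in W^{1,2}$ that coincide on $M \setminus U$ satisfy $\phi_1 u^{\frac{N-2}{2}} = \phi_2 u^{\frac{N-2}{2}}$ almost everywhere, so the plane they span does not belong to $Gr_2^u(W^{1,2})$.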
\begin{definition}\label{GeneralizedEigenvalue}
For $u\in L^{N}_{+}$, the {\em $k^{th}$-generalized eigenvalue} $\lambda_k(u)$ is defined via
\begin{equation}\label{Lkudef}
\lambda_k(u):=\inf_{\Sigma_k \subset Gr_k^u(W^{1,2}) } \sup_{\psi \in \Sigma_k \setminus \{ 0 \}} \mathcal{R}^u_g(\psi)
\end{equation}
\end{definition}
\noindent The function $u$ is called a {\em generalized conformal factor}, and the object $g_u=u^{N-2}g$ is called a {\em generalized conformal metric}. In this context, a {\em $k^{th}$ generalized eigenfunction} $\phi_k$ means a weak solution in $W^{1,2}$ of the equation
\begin{equation} \label{genev}
L_g\phi_k = \lambda_k(u)\phi_ku^{N-2}.
\end{equation}
Heuristically, the reason behind introducing the spaces $Gr_k^u(W^{1,2})$ is that we are only taking into account what happens on regions where the generalized conformal metric $g_u=u^{N-2}g$ does not vanish.
\vskip.2in
\section{First Variation Formulas}\label{VariationFormulas}
Throughout this section we assume that we are given a function $u\in L^{N}_{+}$ for which $\{x\in M:\; u(x)=0\}$ has zero Riemannian measure, that is, $u$ is positive almost everywhere. As shown by El Sayed in \cite{ElSayed}, this guarantees that $\lambda_1(u)>-\infty$, and, therefore, it provides us with the existence of generalized eigenfunctions. Now, for any $h \in L^\infty := L^\infty(M^n,g)$, consider
\begin{equation}
u_t = u + th u = u(1+th).
\end{equation}
We refer to $h$ as the generating function of the deformation $u_t$ of $u$; it will be fixed throughout this section. The goal is to understand the behavior of $\lambda_k(u_t)$ around $t=0$, so we may always assume that $t$ lies in a small neighborhood $(-\delta,\delta)$ of zero. In particular, we choose $\delta>0$ small enough such that
\begin{equation}
|1+th| \ge 1 - \delta \|h\|_{\infty} > 0.
\end{equation}
\begin{proposition}\label{T-functions} Test functions for $\lambda_k(u)$ are also test functions for $\lambda_k(u_t)$, and vice versa. Also, $\lambda_k(u_t)$ is finite for all $t\in (-\delta,\delta)$.
\end{proposition}
\begin{proof}
Let $w$ and $\tilde w$ be arbitrary functions in $L^{N}_+$. If $\{x\in M:\;w(x)>0\}\subseteq \{x\in M:\; \tilde w(x)>0\}$, then $Gr_k^w(W^{1,2})\subseteq Gr_k^{\tilde w}(W^{1,2})$. The proof follows after noticing that the positive sets of $u$ and $u_t$ are identical for $t\in(-\delta, \delta)$.
\end{proof}
In what follows the continuity of $\lambda_k(u_t)$ at $t=0$ is proven in the cases where $k=1,2$.
\begin{proposition}\label{cont-1} $\lim_{t\to 0} \lambda_1(u_t) = \lambda_1(u)$.
\end{proposition}
\begin{proof}
By Proposition \ref{T-functions}, $Gr_1^u(W^{1,2}) = Gr_1^{u_t}(W^{1,2})$ for all $t\in(-\delta,\delta)$. Since the sign of both $\lambda_1(u_t)$ and $\lambda_1(u)$ is negative, among these test functions it is enough to consider those $\phi \in Gr_1^u(W^{1,2})$ for which
\begin{equation}
\int_M \phi L_g \phi \;dv_g <0.
\end{equation}
Now, observe that
\begin{equation} \label{norm1}
\begin{split}
\int_M \phi^2 u_t^{N-2}\;dv_g & = \int_M \phi^2u^{N-2}(1+th)^{N-2}\;dv_g\\ &\le (1+|t| \|h\|_\infty)^{N-2} \int_M \phi^2u^{N-2}\;dv_g,
\end{split}
\end{equation}
and
\begin{equation}\label{norm2}
\begin{split}
\int_M \phi^2 u_t^{N-2}\;dv_g & = \int_M \phi^2u^{N-2}(1+th)^{N-2}\;dv_g \\ &\ge (1-|t|\|h\|_\infty)^{N-2}\int_M \phi^2u^{N-2}\;dv_g.
\end{split}
\end{equation}
Therefore,
\begin{equation}
\begin{split}
(1-|t|\|h\|_\infty)^{-(N-2)} \frac{\int_M \phi L_g \phi \;dv_g}{\int_M \phi^2u^{N-2}\;dv_g} & \le \frac{\int_M \phi L_g \phi \;dv_g}{\int_M \phi^2u_t^{N-2}\;dv_g} \\ &\le (1+|t|\|h\|_\infty)^{-(N-2)} \frac{\int_M \phi L_g \phi \;dv_g}{\int_M \phi^2u^{N-2}\;dv_g},
\end{split}
\end{equation}
and, after taking the infimum over all such $\phi$'s, we get
\begin{equation}
(1-|t|\|h\|_\infty)^{-(N-2)}\lambda_1(u) \le \lambda_1(u_t)\le (1+|t|\|h\|_\infty)^{-(N-2)} \lambda_1(u).
\end{equation}
This finishes the proof.
\end{proof}
To prove the continuity of $\lambda_2(u_t)$, we need a few technical lemmas. As above, let $u\in L^{N}_{+}$ and assume that $\lambda_1(u)>-\infty$. Then the first eigenvalue $\lambda_1(u)$ has multiplicity one. This follows from the fact that if $\psi_1$ is a generalized eigenfunction for $\lambda_1(u)$, then $\psi_1 \geq 0 \ a.e.$ and satisfies (\ref{genev}). However, since $u \in L^{N}$, $\psi_1$ is an eigenfunction for a Schr\"odinger operator with potential in $L^{n/2}$, and we could not find a precise reference for this setting. Therefore, we include a proof here:
\begin{lemma} \label{simple} Let $u \in L^N_{+}$, and assume
\begin{align} \label{Rayr}
\lambda_1(u) := \inf_{\phi \in W^{1,2}} \mathcal{R}^u_g(\phi) = \inf_{\phi \in W^{1,2}} \dfrac{ E_g(\phi) }{ \int \phi^2 \, u^{N-2} \, dv_g } > -\infty.
\end{align}
If $E_1(u)$ is the space of generalized first eigenfunctions, then $\dim E_1(u) = 1$. Moreover, the first generalized eigenfunction has constant sign.
\end{lemma}
\begin{proof} We first observe that if $\psi \in E_1(u)$ and $\psi^{+}(x) = \max \{ \psi(x), 0\}$ denotes the positive part of $\psi$, then $\psi^{+} \in E_1(u)$. This follows
from the fact that $\psi$ is a $W^{1,2}$-solution of
\begin{align} \label{E1def}
L_g \psi = \lambda_1(u) \psi u^{N-2}.
\end{align}
If we use $\psi^{+}$ as a test function, we easily find that $\mathcal{R}^u_g(\psi^{+}) = \lambda_1(u)$. By the variational characterization of $\lambda_1(u)$, it follows
that $\psi^{+} \in E_1(u)$.
Next, suppose $\psi \in E_1(u)$ with $\psi \geq 0 \ a.e.$, $\psi \not\equiv 0$. We claim that there is a $q_0 > 0$ such that
\begin{align} \label{JNE1}
\int_M \psi^{-q_0} \, dv_g < C_0,
\end{align}
where $C_0 = C_0(\| \psi \|_{q_0}^{-1})$. To see this, let $B(r) \subset M$ be a geodesic ball of radius $r > 0$, and $\eta \in C^{\infty}(M)$ a cut-off function
with $\eta \equiv 1$ in $B(r)$, $\eta \equiv 0$ on $M \setminus B(2r)$, and $|\nabla \eta | \leq C r^{-1}$. Also, for $\epsilon > 0$ let
\begin{align*}
\psi_{\epsilon} = \big( \psi^2 + \epsilon \big)^{1/2}.
\end{align*}
Then $\eta^2 \psi_{\epsilon}^{-1} \in W^{1,2}$, and using this as a test function in (\ref{E1def}),
\begin{align} \label{JN2}
\int_M \langle \nabla (\eta^2 \psi_{\epsilon}^{-1}), \nabla \psi \rangle \, dv_g + \int_M c_n R \psi \, (\eta^2 \psi_{\epsilon}^{-1}) \, dv_g = \lambda_1(u) \int_M \psi \, (\eta^2 \psi_{\epsilon}^{-1}) \, u^{N-2} \, dv_g.
\end{align}
Since $\psi \leq \psi_{\epsilon}$, we can expand and estimate in the standard way to obtain
\begin{align*}
\int_{B(r)} |\nabla \log \psi_{\epsilon}|^2 \, dv_g &\leq \int_M \eta^2 \, \psi_{\epsilon}^{-2} |\nabla \psi_{\epsilon}|^2 \, dv_g \\
&\leq C \int_M \big( \eta^2 + |\nabla \eta|^2 \big) \psi \, \psi_{\epsilon}^{-1} \, dv_g + C \int_M \psi \, \psi_{\epsilon}^{-1} \, u^{N-2} \, dv_g \\
&\leq C r^{n-2} + C \int_{B(2r)} u^{N-2} \, dv_g \\
&\leq C r^{n-2} + C \| u \|_{L^N}^{N-2} \, r^{n-2}.
\end{align*}
By the Monotone Convergence Theorem it follows that
\begin{align*}
\int_{B(r)} |\nabla \log \psi|^2 \, dv_g \leq C r^{n-2},
\end{align*}
where $C = C(|\lambda_1(u)|, \| u \|_{L^N} )$. By the John-Nirenberg inequality, there is a $q_0 > 0$ such that
\begin{align} \label{JN5}
\big( \int_M \psi^{q_0} \, dv_g \big) \big( \int_M \psi^{-q_0} \, dv_g \big) \leq C,
\end{align}
and (\ref{JNE1}) follows.
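Let us indicate the step hidden in the appeal to John-Nirenberg (a sketch, using the $L^2$ Poincar\'e inequality on small geodesic balls): the gradient bound above gives
\begin{equation*}
\frac{1}{|B(r)|} \int_{B(r)} \big| \log \psi - (\log \psi)_{B(r)} \big| \, dv_g \leq C r^{1-\frac{n}{2}} \left( \int_{B(r)} |\nabla \log \psi|^2 \, dv_g \right)^{1/2} \leq C ,
\end{equation*}
uniformly in $r$, where $(\log \psi)_{B(r)}$ denotes the average of $\log \psi$ over $B(r)$. Thus $\log \psi$ has bounded mean oscillation, and the John-Nirenberg inequality converts this into the two-sided exponential integrability recorded in (\ref{JN5}).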
An immediate consequence of (\ref{JNE1}) is that any $\psi \in E_1(u)$ must have fixed sign: if $\psi^{+} = 0$ ({\em resp.,} $\psi^{-} = 0$) on a set of positive measure, then since $\psi^{+}$ ({\em resp.,} $\psi^{-}$) is also an eigenfunction, (\ref{JNE1}) holds, which is obviously a contradiction.
Now suppose $\psi_1, \psi_2 \in E_1(u)$, normalized so that
\begin{align*}
\int_M \psi_1 \, dv_g = \int_M \psi_2 \, dv_g = 1.
\end{align*}
Since $\psi = \psi_1 - \psi_2 \in E_1(u)$, $\psi \geq 0$ or $\psi \leq 0 \ a.e.$. Since
\begin{align*}
\int_M \psi \, dv_g = \int_M \psi_1 \, dv_g - \int_M \psi_2 \, dv_g = 0,
\end{align*}
it follows that $\psi = 0 \ a.e.$, hence $\psi_1 = c \psi_2$ for some constant $c$.
\end{proof}
Denote by $E_1(u)$ the space of first generalized eigenfunctions with respect to $\lambda_1(u)$. By the previous lemma, $E_1(u)$ is one-dimensional. On the other hand, we remark that $\lambda_2(u)$ can be realized by
\begin{equation}\label{Char-e}
\lambda_2(u) = \inf \mathcal{R}_g^u(\phi),
\end{equation}
where the infimum is taken over functions $\phi\in W^{1,2}$ for which $\int_M \phi \psi_1 u^{N-2}\;dv_g = 0$. This is proven in the following lemma.
\begin{lemma}\label{Lambda2}
Let $u\in L^N_+$ be positive almost everywhere. Formula (\ref{Char-e}) holds for $\lambda_2(u)$.
\end{lemma}
\begin{proof}
Denote the quantity on the right hand side of (\ref{Char-e}) by $\tilde \lambda_2(u)$. For any two-dimensional subspace $\Sigma_2 \in Gr_2^u(W^{1,2})$, there is a function $\psi_2 \in \Sigma_2$ satisfying the orthogonality condition
\begin{equation}\label{orthogonality}
\int_M \psi_1\psi_2u^{N-2}\;dv_g = 0,
\end{equation}
where $\psi_1$ is the first generalized eigenfunction associated to $\lambda_1(u)$. Then we have
\begin{equation}
\sup_{\phi\in\Sigma_2}\mathcal{R}_g^u(\phi) \ge \mathcal{R}_g^u(\psi_2) \ge \tilde \lambda_2(u),
\end{equation}
where the second inequality follows from the definition of $\tilde \lambda_2(u)$. Taking the infimum over all such $\Sigma_2\in Gr_2^u(W^{1,2})$ yields
\begin{equation}
\lambda_2(u)\ge \tilde \lambda_2(u).
\end{equation}
To deduce the opposite inequality, we start with an arbitrary function $\psi_2\in W^{1,2}$ satisfying (\ref{orthogonality}). Define $\Sigma_2^{\psi_2} := \langle \psi_1,\psi_2\rangle$ and note that it belongs to $Gr_2^u(W^{1,2})$. Therefore,
\begin{equation}
\mathcal{R}_g^u(\psi_2) = \sup_{\phi\in \Sigma_2^{\psi_2}} \mathcal{R}_g^u(\phi) \ge \lambda_2(u),
\end{equation}
where the equality follows from the orthogonality condition (\ref{orthogonality}), while the inequality follows from Definition \ref{GeneralizedEigenvalue}. Taking the infimum over all such $\psi_2$ yields
\begin{equation}
\tilde \lambda_2(u)\ge \lambda_2(u).
\end{equation}
This concludes the proof of (\ref{Char-e}).
\end{proof}
We now proceed to show the continuity of $\lambda_2(u_t)$ at $t=0$. Since $E_1(u_t)$ can be considered as a subspace of either $L^2(u_t) := L^2(M^n,u_t^{N-2}dv_g)$ or $L^2(u) := L^2(M^n,u^{N-2}dv_g)$, there are at least two possible orthogonal projections that we could define. Let us denote by $P_t^*$ and $P_t$ the orthogonal projections onto $E_1(u_t)$ in $L^2(u_t)$ and $L^2(u)$, respectively.
\begin{proposition}\label{cont-2} $\lim_{t\to 0} \lambda_2(u_t) = \lambda_2(u)$.
\end{proposition}
\begin{proof}
Pick an arbitrary function $\phi\in E_2(u)$ normalized such that $\int_M \phi^2u^{N-2}\;dv_g = 1$. By (\ref{Char-e}) we have
\begin{equation}
\begin{split}
\lambda_2(u_t) &\le \frac{\int_M \{|\nabla_g (\phi - P_t^*\phi)|^2 + c_nR_g (\phi-P_t^*\phi)^2 \}\;dv_g}{\int_M (\phi - P_t^*\phi)^2u_t^{N-2}\;dv_g} \\ &= \frac{\lambda_2(u) -2\int_M \{\langle\nabla_g \phi, \nabla_g P_t^*\phi\rangle + c_nR_g \phi(P_t^*\phi) \}\;dv_g + \lambda_1(u_t)\int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g}{ \int_M \{\phi^2 - (P_t^*\phi)^2\}u_t^{N-2}\;dv_g } \\ &= \frac{\lambda_2(u) - \lambda_1(u_t)\int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g}{\int_M \phi^2 u_t^{N-2}\;dv_g - \int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g}.
\end{split}
\end{equation}
Notice that $\int_M \phi^2u_t^{N-2}\;dv_g \to 1$ as $t\to 0$ by estimates (\ref{norm1}) and (\ref{norm2}), and thus Lemma \ref{Projections} (see below) leads to
\begin{equation}\label{liminf1}
\limsup_{t\to0} \lambda_2(u_t) \le \lambda_2(u).
\end{equation}
To deduce the other inequality, pick $\phi_t\in E_2(u_t)$ normalized such that $\int_M \phi_t^2u^{N-2}\;dv_g=1$. It is important to note that the normalization is with respect to $u^{N-2}dv_g$ and not with respect to $u_t^{N-2}dv_g$. As above, and by means of (\ref{Char-e}),
\begin{equation}
\begin{split}
\lambda_2(u) &\le \frac{\int_M \{|\nabla_g (\phi_t - P_0\phi_t)|^2 + c_nR_g (\phi_t-P_0\phi_t)^2 \}\;dv_g}{\int_M (\phi_t - P_0\phi_t)^2u^{N-2}\;dv_g} \\ &= \frac{\lambda_2(u_t)\int_M \phi_t^2u_t^{N-2}\;dv_g - \lambda_1(u)\int_M (P_0\phi_t)^2u^{N-2}\;dv_g}{1 - \int_M (P_0\phi_t)^2u^{N-2}\;dv_g}.
\end{split}
\end{equation}
The goal is to take the limit as $t\to 0$ of the expression on the right-hand side. In order to do so, two more estimates are needed. First, notice that $\phi_t^2u_t^{N-2} = \phi_t^2u^{N-2}(1+th)^{N-2}$, therefore
\begin{equation}
\begin{split}
(1-|t|\|h\|_\infty)^{N-2} \underbrace{\int_M \phi_t^2 u^{N-2}\;dv_g}_{=1} & \le \int_M \phi_t^2u_t^{N-2}\;dv_g\\ &\le (1+|t|\|h\|_\infty)^{N-2}\int_M \phi_t^2 u^{N-2}\;dv_g
\end{split}
\end{equation}
Thus $\lim_{t\to 0}\int_M \phi_t^2u_t^{N-2}\;dv_g$ exists and equals $1$. The term involving $P_0\phi_t$ goes to zero as $t\to 0$ by Lemma \ref{Projections}.
Hence,
\begin{equation}\label{limsup1}
\lambda_2(u)\le \liminf_{t\to 0} \lambda_2(u_t).
\end{equation}
The result follows after combining (\ref{liminf1}) and (\ref{limsup1}).
\end{proof}
\begin{lemma}\label{Projections}
Let $P_t^*$ and $P_0$ be as before. For any $\phi\in E_2(u)$ normalized such that $\int_M \phi^2u^{N-2}\;dv_g = 1$,
\begin{equation} \label{Pr-1}
\|P_t^*\phi\|^2_{L^2(u_t)}=O(t^2),
\end{equation}
as $t\to 0$. Also, for any $\phi_t\in E_2(u_t)$ with $\int_M \phi_t^2u^{N-2}\;dv_g=1$, we have
\begin{equation}\label{Pr-2}
\|P_0\phi_t \|^2_{L^2(u)} = O(t^2),
\end{equation}
as $t\to 0$.
\end{lemma}
\begin{proof}
Let $\psi_{1,t}$ be the spanning eigenfunction of $E_1(u_t)$ normalized such that
\begin{equation}
\int_M \psi_{1,t}^2u_t^{N-2}\;dv_g = 1.
\end{equation}
As $E_1(u_t)$ is one dimensional, for any $\phi \in E_2(u)$ we deduce
\begin{equation}
\begin{split}
\|P^*_t\phi\|^2_{L^2(u_t)} & = |\langle \phi, \psi_{1,t} \rangle_{L^2(u_t)}|^2\\
& = \left|\int_M \phi \psi_{1,t}u_t^{N-2}\;dv_g\right|^2 .
\end{split}
\end{equation}
Since $\psi_{1,t} \in E_1(u_t)$ and $\phi \in E_2(u)$, the following two equations are satisfied:
\begin{equation} \label{weq}
L_g\phi = \lambda_2(u)\phi u^{N-2}
\end{equation}
and
\begin{equation}
L_g \psi_{1,t} = \lambda_1(u_t) \psi_{1,t} u_t^{N-2}.
\end{equation}
Testing the above equation against $\phi$ gives us
\begin{equation}
\begin{split}
\lambda_1(u_t) \int_M \phi \psi_{1,t} u_t^{N-2}\;dv_g & = \lambda_2(u) \int_M \phi \psi_{1,t} u^{N-2}\;dv_g \\ & = \lambda_2(u) \int_M \phi \psi_{1,t} (u^{N-2} - u_t^{N-2})\;dv_g\\ &\hspace{.15in}+ \lambda_2(u) \int_M \phi \psi_{1,t} u_t^{N-2}\;dv_g,
\end{split}
\end{equation}
where we have used (\ref{weq}) to get the first equality.
Rearranging the terms above we deduce
\begin{equation} \label{Pr-1-eq1}
(\lambda_2(u) - \lambda_1(u_t)) \int_M \phi \psi_{1,t} u_t^{N-2}\;dv_g = \lambda_2(u) \int_M \phi \psi_{1,t} (u_t^{N-2} - u^{N-2})\;dv_g .
\end{equation}
Recall that we are working in a neighborhood of $t=0$ on which $|th|<1$. Then, by the power series expansion of $(1+th)^{N-2}$,
\begin{equation}\label{Pr-1-eq2}
\begin{split}
u_t^{N-2} - u^{N-2} &= u^{N-2}[(1+th)^{N-2} - 1] \\ & = (N-2)th\cdot u^{N-2} + O(t^2)\cdot u^{N-2} .
\end{split}
\end{equation}
Also, by Proposition \ref{cont-1},
\begin{equation}
\lambda_1(u_t) = \lambda_1(u) + o(1),
\end{equation}
and since $|\lambda_2(u) - \lambda_1(u)|\ge \gamma > 0$ by Lemma \ref{simple}, this implies
\begin{equation}\label{Pr-1-eq3}
|\lambda_2(u) - \lambda_1(u_t)| \ge \frac{\gamma}{2}
\end{equation}
for small $t$. Combining (\ref{Pr-1-eq1}), (\ref{Pr-1-eq2}), and (\ref{Pr-1-eq3}), we conclude
\begin{equation}
\begin{split}
\frac{\gamma^2}{4} \left|\int_M \phi \psi_{1,t} u_t^{N-2} \;dv_g\right|^2 &\le Ct^2 \left(\int_M|\phi| \psi_{1,t} u^{N-2} \; dv_g\right)^2 \\ &\le Ct^2.
\end{split}
\end{equation}
This finishes the proof of (\ref{Pr-1}).
To prove (\ref{Pr-2}), note that
\begin{equation}
\|P_0\phi_t\|_{L^2(u)}^2 = \left|\int_M\phi_t\psi_{1,0}u^{N-2}\;dv_g\right|^2 .
\end{equation}
Using the equation that $\phi_t$ and $\psi_{1,0}$ satisfy weakly, we deduce
\begin{equation}
\begin{split}
\lambda_1(u)\int_M \phi_t\psi_{1,0}u^{N-2}\;dv_g &= \lambda_2(u_t)\int_M \psi_{1,0}\phi_tu_t^{N-2}\;dv_g\\ &= \lambda_2(u_t)\int_M \psi_{1,0}\phi_t(u_t^{N-2} - u^{N-2})\;dv_g\\ &\hspace{.15in} + \lambda_2(u_t)\int_M \psi_{1,0}\phi_tu^{N-2}\;dv_g .
\end{split}
\end{equation}
The remaining part of the argument is similar to the one used for (\ref{Pr-1}) and it is, hence, omitted.
\end{proof}
\begin{proposition}\label{Ines} The following two inequalities hold:
\begin{equation}\label{Ine1}
\lambda_2(u_t)\le \inf_{\phi \in E_2(u)} \mathcal{R}_g^{u_t}(\phi) + o(t)\;\;\;\; (t\to 0),
\end{equation}
and
\begin{equation}\label{Ine2}
\lambda_2(u)\le \inf_{\phi \in E_2(u_t)} \mathcal{R}_g^u(\phi) + o(t)\;\;\;\; (t\to 0).
\end{equation}
\end{proposition}
\begin{proof}
Let $\phi \in E_2(u)$ be arbitrary, and assume that $\int_M \phi ^2u^{N-2}\;dv_g=1$. We estimate the difference between $\mathcal{R}^{u_t}_g(\phi-P_t^*\phi)$ and $\mathcal{R}_g^{u_t}(\phi)$ as follows:
\[
\begin{split}
&\left|\frac{\int_M \{ |\nabla_g(\phi - P_t^*\phi)|^2 + c_nR_g (\phi-P_t^*\phi)^2\}\;dv_g}{\int_M (\phi-P_t^*\phi)^2u_t^{N-2}\;dv_g} - \frac{\int_M \{ |\nabla_g \phi|^2 + c_nR_g \phi^2\}\;dv_g}{\int_M \phi^2u_t^{N-2}\;dv_g}\right| \\
& = \left|\frac{\lambda_2(u) - \lambda_1(u_t) \int_M (P_t^*\phi)^2u_t^{N-2} \;dv_g}{ \int_M (\phi-P_t^*\phi)^2u_t^{N-2} \;dv_g} - \frac{\lambda_2(u)}{ \int_M \phi^2u_t^{N-2}\;dv_g}\right| \\
& = \Bigg{|}\frac{\lambda_2(u) \left\{\int_M \phi^2u_t^{N-2}\;dv_g - \int_M (\phi-P_t^*\phi)^2u_t^{N-2}\;dv_g\right\}}{ \int_M (\phi -P_t^*\phi)^2u_t^{N-2} \; dv_g \cdot \int_M \phi^2u_t^{N-2}\;dv_g} \\
& \hspace{.15in} -\frac{ \lambda_1(u_t) \int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g\cdot \int_M \phi^2u_t^{N-2}\;dv_g}{ \int_M (\phi -P_t^*\phi)^2u_t^{N-2} \; dv_g \cdot \int_M \phi^2u_t^{N-2}\;dv_g}\Bigg{|} \\
& =\left|\frac{\lambda_2(u)\int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g -\lambda_1(u_t) \int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g\cdot \int_M \phi^2u_t^{N-2}\;dv_g}{\int_M (\phi-P_t^*\phi)^2u_t^{N-2} \; dv_g \cdot \int_M \phi^2u_t^{N-2}\;dv_g}\right| \\
& = \left| \frac{ \int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g}{\int_M (\phi-P_t^*\phi)^2u_t^{N-2}\;dv_g} \cdot \frac{\lambda_2(u) - \lambda_1(u_t) \int_M \phi^2 u_t^{N-2}\;dv_g }{\int_M \phi^2u_t^{N-2}\;dv_g}\right| \\
& \le \frac{ \int_M (P_t^*\phi)^2u_t^{N-2}\;dv_g}{\int_M (\phi-P_t^*\phi)^2u_t^{N-2}\;dv_g}\cdot \left\{\frac{|\lambda_2(u)|}{\int_M \phi^2 u_t^{N-2}\;dv_g} + |\lambda_1(u_t)|\right\}
\end{split}
\]
It follows from Lemma \ref{Projections} that this last expression is of order $O(t^2)$ as $t\to 0$. From the above estimates we deduce,
\begin{equation}
\lambda_2(u_t)\le \frac{\int_M \{ |\nabla_g(\phi - P_t^*\phi)|^2 + c_nR_g (\phi-P_t^*\phi)^2\}\;dv_g}{\int_M (\phi-P_t^*\phi)^2u_t^{N-2}\;dv_g} \le \mathcal{R}_g^{u_t}(\phi) + O(t^2).
\end{equation}
Therefore, taking the infimum over all $\phi\in E_2(u)$,
\begin{equation}
\lambda_2(u_t) - \inf_{\phi\in E_2(u)} \mathcal{R}_g^{u_t}(\phi) \le O(t^2)\;\;\;\; (|t|\to0).
\end{equation}
As for inequality (\ref{Ine2}), we select an arbitrary function $\phi \in E_2(u_t)$, and estimate the difference between $\mathcal{R}_g^{u}(\phi -P_0\phi)$ and $\mathcal{R}_g^u(\phi)$. Since the argument is similar we omit the details of the proof.
\end{proof}
For any generating function $h\in L^\infty $, we define $L_h(\cdot, u)$ for functions in $L^2(u)$ or $L^2(u_t)$ by
\begin{equation}
L_h(\phi,u):= -(N-2)\mathcal{R}_g^u(\phi)\cdot \frac{ \int_M h\phi^2 u^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g}
\end{equation}
For functions $\phi \in E_2(u)$ with $\int_M \phi^2u^{N-2}\;dv_g=1$, $L_h(\phi,u)$ takes the simpler form
\begin{equation}-(N-2)\lambda_2(u) \int_M h\phi^2 u^{N-2}\;dv_g.\end{equation}
Note that if one thinks of $\mathcal{R}_g^{u_t}(\phi)$ as a function of $t\in (-\delta,\delta)$, where $\phi \in W^{1,2}$ is fixed, the functional $L_h(\phi,u)$ is simply the derivative of $\mathcal{R}_g^{u_t}(\phi)$ at $t=0$.
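Concretely, this is a one-line computation, differentiating the denominator of the Rayleigh quotient in $t$:
\begin{equation*}
\frac{d}{dt}\Big|_{t=0} \mathcal{R}_g^{u_t}(\phi) = \frac{d}{dt}\Big|_{t=0} \, \frac{\int_M \phi L_g \phi \;dv_g}{\int_M \phi^2 u^{N-2} (1+th)^{N-2}\;dv_g} = -(N-2)\, \mathcal{R}_g^{u}(\phi) \cdot \frac{\int_M h \phi^2 u^{N-2}\;dv_g}{\int_M \phi^2 u^{N-2}\;dv_g} = L_h(\phi,u) .
\end{equation*}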
\begin{proposition} \label{liminf-limsup-L} The following two limits hold:
\begin{equation}\label{Conv-Linf}
\lim_{t\to0} (\inf_{\phi\in E_2(u_t)} L_h(\phi,u)) = \inf_{\phi\in E_2(u)} L_h (\phi,u),
\end{equation}
and
\begin{equation}\label{Conv-Lsup}
\lim_{t\to0} (\sup_{\phi\in E_2(u_t)} L_h(\phi,u)) = \sup_{\phi\in E_2(u)} L_h (\phi,u).
\end{equation}
\end{proposition}
\begin{proof}
Let $\Pi_t$ be the orthogonal projection from $L^2(u)$ onto $E_2(u_t)$. Our first goal is to show that
\begin{align*}
A_t(\phi)& :=|L_h(\phi,u)-L_h(\Pi_t\phi,u)| \\ & =(N-2)\left|\mathcal{R}_g^u(\phi)\frac{\int_M h \phi^2 u^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g} - \mathcal{R}_g^u(\Pi_t\phi)\frac{\int_M h (\Pi_t\phi)^2 u^{N-2}\;dv_g}{\int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}\right| \to 0,
\end{align*}
uniformly for $\phi\in E_2(u)$. To see this, we estimate
\begin{align*}
(N-2)^{-1}A_t(\phi)& \le \underbrace{|\lambda_2(u)|\left| \frac{\int_M h\phi^2 u^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g} - \frac{\int_M h(\Pi_t\phi)^2 u^{N-2}\;dv_g}{\int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}\right|}_{:=B_t(\phi)} \\
&\hspace{.15in} + \|h\|_\infty\underbrace{\left|\lambda_2(u) - \lambda_2(u_t)\frac{\int_M (\Pi_t\phi)^2u_t^{N-2}\;dv_g}{\int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}\right|}_{:=C_t(\phi)}
\end{align*}
The term $B_t(\phi)$ is bounded from above by
\begin{align*}
B_t(\phi) &\le |\lambda_2(u)|\cdot\left|\frac{\left(\int_M h\phi^2 u^{N-2}\;dv_g - \int_M h (\Pi_t\phi)^2 u^{N-2}\;dv_g\right)\cdot \int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g\cdot\int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}\right| \\
&\hspace{.15in} + |\lambda_2(u)| \cdot\left|\frac{\left(\int_M(\Pi_t\phi)^2 u^{N-2}\;dv_g - \int_M \phi^2 u^{N-2}\;dv_g\right)\cdot \int_M h (\Pi_t\phi)^2 u^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g\cdot\int_M (\Pi_t\phi)^2u^{N-2}\;dv_g}\right| \\
& \le C\cdot\frac{\int_M |(\Pi_t\phi)^2-\phi^2|u^{N-2}\;dv_g}{ \int_M \phi^2u^{N-2}\;dv_g},
\end{align*}
which goes to zero uniformly in $\phi \in E_2(u)$ by Lemma \ref{Pr-3}. As for $C_t(\phi)$, we have
\begin{equation}
\begin{split}
C_t(\phi) &\le |\lambda_2(u) - \lambda_2(u_t)| + |\lambda_2(u_t)|\cdot \left|1 - \frac{\int_M (\Pi_t \phi)^2 u_t^{N-2}\;dv_g}{ \int_M (\Pi_t\phi)^2u^{N-2}\;dv_g} \right|\\
&= |\lambda_2(u) - \lambda_2(u_t)| + |\lambda_2(u_t)|\cdot \left|\frac{\int_M (\Pi_t \phi)^2(u^{N-2}- u_t^{N-2})\;dv_g}{ \int_M (\Pi_t\phi)^2u^{N-2}\;dv_g} \right|\\
&\le |\lambda_2(u) - \lambda_2(u_t)| + C|\lambda_2(u_t)|\,|t|.
\end{split}
\end{equation}
This last expression goes to zero by Proposition \ref{cont-2}. Therefore, our first step is finished, and we conclude that
\begin{equation}
\lim_{t\to 0} \inf_{\phi \in E_2(u)}L_h(\Pi_t\phi,u) = \inf_{\phi \in E_2(u)} L_h(\phi,u).
\end{equation}
An immediate consequence of the previous equality is
\begin{equation}
\limsup_{t\to 0} \inf_{\phi\in E_2(u_t)}L_h(\phi,u) \le \inf_{\phi\in E_2(u)} L_h(\phi,u) .
\end{equation}
To show the opposite inequality we proceed as follows. Take a sequence of eigenfunctions $\phi_t\in E_2(u_t)$ normalized such that $\int_M \phi_t^2u^{N-2}\;dv_g=1$, and satisfying
\begin{equation}
\inf_{\phi\in E_2(u_t)}L_h(\phi,u) + |t| \ge L_h(\phi_t,u)
\end{equation}
By Lemma \ref{conv-eigen} (see below), such a sequence is bounded in $W^{1,2}$, and thus, up to the extraction of a subsequence, there is a function $\phi_0$ such that $\phi_t\rightharpoonup \phi_0$ in $W^{1,2}$ and $\phi_t\to \phi_0$ in $L^2$. This function is in $E_2(u)$ and satisfies $\int_M \phi_0^2u^{N-2}\;dv_g=1$. Therefore,
\begin{equation}
\liminf_{t\to 0} \inf_{\phi\in E_2(u_t)}L_h(\phi,u) \ge L_h(\phi_0,u) \ge \inf_{\phi\in E_2(u)}L_h(\phi,u).
\end{equation}
Combining the two previous inequalities concludes the proof of (\ref{Conv-Linf}). The proof of (\ref{Conv-Lsup}) is similar and it is hence omitted.
\end{proof}
\begin{lemma}\label{conv-eigen} A sequence of generalized eigenfunctions $\phi_t\in E_k(u_t)$ $(k=1,2)$ with $\int_M \phi_t^2u^{N-2}\;dv_g=1$ is bounded in $W^{1,2}$ as $t\to 0$. Moreover, any $W^{1,2}$-weak limit $\phi_0$ of such a sequence belongs to $E_k(u)$ $(k=1,2)$ and satisfies
\begin{equation}
\int_M \phi_0^2u^{N-2}\;dv_g =1.
\end{equation}
\end{lemma}
\begin{proof}
Suppose, on the contrary, that $\{\phi_t\}_{t>0}$ is unbounded in the $W^{1,2}$-norm $\|\cdot\|$, and define a new sequence by
\begin{equation}
\bar \phi_t := \frac{\phi_t}{\|\phi_t\|}.
\end{equation}
This sequence has norm equal to 1. Therefore, by weak compactness and the Rellich--Kondrachov Theorem, there exists a function $\bar \phi\in W^{1,2}$ such that
\begin{equation}
\begin{cases}
\bar \phi_t &\rightharpoonup \bar \phi \text{ in }W^{1,2} \;\;\;\; (t\to0)\\
\bar \phi_t & \rightarrow \bar \phi \text{ in } L^2:= L^2(M^n,g),
\end{cases}
\end{equation}
along some subsequence. Since the eigenvalue equation is linear, the following holds for any $\varphi\in C^\infty(M)$:
\begin{equation}\label{weak-eq}
\int_M\{\langle\nabla_g \varphi,\nabla_g \bar \phi_t\rangle + c_nR_g\varphi \bar \phi_t\}\;dv_g = \lambda_k(u_t)\int_M \varphi \bar \phi_t u_t^{N-2}\;dv_g.
\end{equation}
By weak convergence in $W^{1,2}$, and strong convergence in $L^2$, the left-hand side of (\ref{weak-eq}) tends to
\begin{equation}
\lim_{t\to 0} \int_M\{\langle\nabla_g \varphi,\nabla \bar \phi_t\rangle + c_nR_g\varphi \bar \phi_t\}\;dv_g = \int_M\{\langle\nabla_g \varphi,\nabla_g \bar \phi\rangle + c_nR_g\varphi \bar \phi\}\;dv_g.
\end{equation}
As for the right-hand side of (\ref{weak-eq}), using that $\lambda_k(u_t)$ is finite by Proposition \ref{cont-1} and Proposition \ref{cont-2}, we deduce that it goes to zero. Indeed,
\begin{equation} \label{conv-eigen2}
\begin{split}
\left|\int_M\varphi \bar \phi_t u_t^{N-2}\;dv_g\right| &\le \|\varphi\|_\infty \int_M u_t^{\frac{N-2}{2}} |\bar \phi_t| u_t^{\frac{N-2}{2}}\;dv_g \\
&\le \|\varphi\|_\infty \left(\int_M u^{N-2}(1+th)^{N-2}\;dv_g\right)^{\frac{1}{2}} \cdot \frac{\left(\int_M \phi_t^2u_t^{N-2}\;dv_g\right)^{\frac{1}{2}}}{\|\phi_t\|}\\
&\le\|\varphi\|_\infty \underbrace{\left(\int_M u^{N-2}(1+th)^{N-2}\;dv_g\right)^{\frac{1}{2}}}_{\text{bounded as }t\to0\text{ by DCT}} \cdot \frac{\left(\int_M \phi_t^2u^{N-2}\;dv_g\right)^{\frac{1}{2}}\cdot (1+|t|\|h\|_\infty)^{\frac{N-2}{2}}}{\|\phi_t\|}\\ &\to 0.
\end{split}
\end{equation}
Our assumption that $0\not \in \text{Spec}(L_g)$ gives us $\bar \phi \equiv 0$.
Now, equation (\ref{weak-eq}) with $\varphi = \bar \phi_t$ implies
\begin{equation}
\lim_{t\to 0}\int_M |\nabla_g \bar \phi_t|^2\;dv_g = 0.
\end{equation}
However,
\begin{equation}
1 = \int_M |\nabla_g \bar \phi_t|^2\;dv_g + \int_M \bar \phi_t^2\;dv_g \to 0.
\end{equation}
This is a contradiction. Hence, the original sequence $\{\phi_t\}_{t>0}$ is bounded in $W^{1,2}$.
Let $\phi_0\in W^{1,2}$ be the limit function of the sequence $\{\phi_t\}_{t>0}$, that is, up to a subsequence,
\begin{equation}
\begin{cases}
\phi_t &\rightharpoonup \phi_0 \text{ in }W^{1,2}\;\;\;\; (t\to0) \\
\phi_t & \rightarrow \phi_0 \text{ in }L^2\;\;\;\; (t\to0)
\end{cases}
\end{equation}
This implies that $\phi_0$ satisfies weakly the equation
\begin{equation}
L_g\phi_0 = \lambda_k(u)\phi_0u^{N-2}.
\end{equation}
Notice here that we have utilized Proposition \ref{cont-1} and Proposition \ref{cont-2}. In order to conclude that $\phi_0\in E_k(u)$, it remains to show that $\phi_0$ is nontrivial on $\{u>0\}$. To this end, consider
\begin{equation}
\begin{split}
\left|\int_M \phi_0^2u^{N-2}\;dv_g - 1\right| &= \left|\int_M \phi_0^2u^{N-2}\;dv_g - \int_M \phi_t^2u^{N-2}\;dv_g\right|\\
&\le \underbrace{\int_M |\phi_t^2 - \phi_0^2|u^{N-2}\;dv_g}_{=: A_t}
\end{split}
\end{equation}
In order to show that $A_t\to 0$ as $t\to 0$, we introduce the auxiliary function $u_C:=\inf\{u,C\}$ for $C\in\mathbb{R}_+$ large. As shown in the following lines, $A_t$ approaches $0$ as $t\to 0$:
\begin{equation}
\begin{split}
A_t &\le \int_M |\phi_t^2-\phi_0^2||u_C^{N-2}-u^{N-2}|\;dv_g + \int_M|\phi_t^2 - \phi_0^2|u_C^{N-2}\;dv_g\\
&\le \left(\int_M (|\phi_t|^2+|\phi_0|^2)^{\frac{N}{2}}\;dv_g\right)^{\frac{2}{N}}\left(\int_M |u_C^{N-2}-u^{N-2}|^{\frac{N}{N-2}}\;dv_g\right)^{\frac{N-2}{N}}\\
&+ C^{N-2}\int_M|\phi_t^2 - \phi_0^2|\;dv_g.
\end{split}
\end{equation}
Since $\{\phi_t\}_{t>0}$ is bounded in $L^N$, and since
\begin{equation}
\lim_{C\to \infty} \left(\int_M |u_C^{N-2}-u^{N-2}|^{\frac{N}{N-2}}\;dv_g\right)^{\frac{N-2}{N}} = 0
\end{equation}
by the Dominated Convergence Theorem, the first term can be made arbitrarily small by fixing $C$ large enough. For fixed $C$, the second term tends to zero as $t\to 0$ thanks to the strong convergence $\phi_t\to\phi_0$ in $L^2$. We conclude that $A_t$ goes to zero as $t\to 0$. This completes the proof.
\end{proof}
\begin{lemma}\label{Pr-3}
Let $\Pi_t$ be defined as in the proof of Proposition \ref{liminf-limsup-L}. Then $\|\Pi_t\phi -\phi\|_{L^2(u)} \to 0$ for any $\phi\in E_2(u)$ with $\int_M \phi^2u^{N-2}\;dv_g=1$.
\end{lemma}
\begin{proof}
We start by selecting an $L^2(u)$-orthonormal basis $\{\psi^i_{2,t}\}$ of $E_2(u_t)$, i.e. a collection of functions $\{\psi^i_{2,t}\}$ that spans the subspace $E_2(u_t)$ and satisfies $\int_M\psi^i_{2,t}\psi^j_{2,t}u^{N-2}\;dv_g = \delta_{ij}$. By Lemma \ref{conv-eigen}, such a collection is bounded in $W^{1,2}$, and, therefore, there exists a corresponding collection of functions $\{\psi^i_{2,0}\}\subset E_2(u)$ satisfying $\int_M\psi^i_{2,0}\psi^j_{2,0}u^{N-2}\;dv_g = \delta_{ij}$. This is a basis for $E_2(u)$.
We utilize these two orthogonal bases to write
\begin{equation}
\Pi_t\phi - \phi = \sum_i a_i(\phi,t)\psi^i_{2,t} - \sum_j a_j(\phi,0)\psi^j_{2,0},
\end{equation}
where
\begin{equation}
a_i(\phi,t):= \int_M \phi\psi^i_{2,t}u^{N-2}\;dv_g
\end{equation}
Therefore,
\begin{equation}
\begin{split}
\int_M |\Pi_t\phi - \phi|^2u^{N-2}\;dv_g & = \sum_ia_i(\phi,t)^2 +\sum_i a_i(\phi,0)^2\\ &\hspace{.15in} - 2\sum_{i,j}a_i(\phi,t)a_j(\phi,0)\int_M\psi^i_{2,t}\psi^j_{2,0}u^{N-2}\;dv_g,
\end{split}
\end{equation}
where the orthogonality of the functions $\psi^i_{2,t}$'s with respect to the $L^2(u)$-norm has been used. From the conditions on the family $\{\psi^i_{2,t}\}$ we deduce that this last expression converges to zero. This ends the proof.
\end{proof}
\begin{proposition}\label{OSD} The one-sided derivatives of $\lambda_2(u_t)$ exist at $t=0$. Moreover,
\begin{equation}\label{OSD+}
\frac{d}{dt} \lambda_2(u_t)\Big{|}_{t=0^+} = \inf_{\phi\in E_2(u)} L_h(\phi,u),
\end{equation}
and
\begin{equation}\label{OSD-}
\frac{d}{dt} \lambda_2(u_t)\Big{|}_{t=0^-} = \sup_{\phi\in E_2(u)} L_h(\phi,u)
\end{equation}
\end{proposition}
\begin{proof}
For any smooth function $\phi$ for which $\int_M \phi L_g\phi\;dv_g<0$, we have
\[
\begin{split}
&\left|\frac{1}{t}\left(\int_M \phi^2u_t^{N-2}\;dv_g - \int_M \phi^2u^{N-2}\;dv_g\right) - (N-2)\int_M \phi^2h u^{N-2}\;dv_g\right| \\
=&\left|\frac{1}{t}\int_M \phi^2u^{N-2}\left\{(1+th)^{N-2} - \left(1 +(N-2)th\right)\right\}\; dv_g\right| \\
\le &\int_M \phi^2u^{N-2}\;dv_g\cdot C|t|,
\end{split}
\]
where $C>0$ is a constant depending on $h$ alone. The last inequality in the previous estimates follows from arguments explained in (\ref{Pr-1-eq2}).
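For the reader's convenience, we record the pointwise bound behind this step. For $|t|\,\|h\|_\infty\le \frac{1}{2}$, Taylor's theorem applied to $s\mapsto (1+s)^{N-2}$ gives
\begin{equation*}
\left|(1+th)^{N-2} - 1 - (N-2)th\right| \le \frac{|(N-2)(N-3)|}{2}\,\sup_{|s|\le\frac{1}{2}}(1+s)^{N-4}\; t^2h^2 \le C t^2,
\end{equation*}
with $C$ depending only on $N$ and $\|h\|_\infty$. With this at hand, we compute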
\begin{equation}\label{EstimateRL}
\begin{split}
&\left|\frac{1}{t}(\mathcal{R}^{u_t}_g(\phi) - \mathcal{R}_g^u(\phi)) - L_h(\phi,u) \right|\\
&= |\mathcal{R}_g^{u_t}(\phi)|\left| \frac{1}{t}\left( 1 - \frac{\int_M \phi^2u_t^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g}\right) +(N-2) \frac{\int_M \phi^2u_t^{N-2}\;dv_g}{\int_M \phi^2u^{N-2}\;dv_g}\cdot \frac{\int_M h \phi^2 u^{N-2}\;dv_g}{\int_M \phi^2 u^{N-2}\;dv_g}\right|\\
&\le \frac{|\mathcal{R}_g^{u_t}(\phi)|}{ \int_M \phi^2u^{N-2}\;dv_g} \cdot \left|\frac{1}{t}\left(\int_M \phi^2 u_t^{N-2}\;dv_g - \int_M \phi^2u^{N-2}\;dv_g\right) - (N-2)\int_M h \phi^2 u^{N-2}\;dv_g \right| \\
&\hspace{.15in} + \frac{|\mathcal{R}_g^{u_t}(\phi)|}{ \int_M \phi^2u^{N-2}\;dv_g} \cdot \left| (N-2) \int_M h \phi^2 u^{N-2}\;dv_g\left(1 - \frac{\int_M \phi^2u_t^{N-2}\;dv_g}{ \int_M \phi^2u^{N-2}\;dv_g}\right)\right| \\
&\le |\lambda_1(u_t)| \left\{C|t|+ \tilde C\cdot(N-2)\|h\|_\infty |t| \right\} = o(1),
\end{split}
\end{equation}
where the constant $\tilde C$ is independent of $\phi$.
Now, for $t>0$, and taking the infimum over functions $\phi\in E_2(u)$, we deduce
\begin{equation}
\frac{1}{t} \left(\inf_{\phi\in E_2(u)} \mathcal{R}_g^{u_t}(\phi) - \lambda_2(u)\right) \le \inf_{\phi\in E_2(u)} L_h(\phi,u) + o(1).
\end{equation}
Inequality (\ref{Ine1}) (see Proposition \ref{Ines}) gives us
\begin{equation}
\limsup_{t\to0^+}\frac{\lambda_2(u_t) - \lambda_2(u)}{t} \le \inf_{\phi\in E_2(u)} L_h(\phi,u).
\end{equation}
On the other hand, by plugging into $(\ref{EstimateRL})$ a function $\phi\in E_2(u_t)$ we get
\begin{equation}
L_h(\phi,u) - \frac{1}{t}\left(\lambda_2(u_t) - \mathcal{R}_g^u(\phi)\right)\le o(1).
\end{equation}
Therefore, after taking the infimum over such functions and applying (\ref{Ine2}) with $t>0$,
\begin{equation}
\inf_{\phi\in E_2(u_t)} L_h(\phi,u) \le \frac{1}{t} (\lambda_2(u_t) - \lambda_2(u)) + o(1)
\end{equation}
By means of Proposition \ref{liminf-limsup-L} we finally obtain
\begin{equation}
\inf_{\phi\in E_2(u)} L_h(\phi,u) \le \liminf_{t\to0^+} \frac{\lambda_2(u_t)-\lambda_2(u)}{t}.
\end{equation}
This finishes the proof for (\ref{OSD+}). The proof of (\ref{OSD-}) is similar and it is, hence, omitted.
\end{proof}
\vskip.2in
\section{Regularized Functional}\label{RegularizedFunctional}
For each $\epsilon>0$, denote by $\mathcal{D}_\epsilon$ the set
\begin{equation}
\mathcal{D}_\epsilon: = \left\{u\in L^N_+:\; u^{-\epsilon}\in L^1\right\}.
\end{equation}
We define the functional $F_{2,\epsilon}: \mathcal{D}_\epsilon \rightarrow \mathbb{R}$ by
\begin{align} \label{Fdef}
F_{2,\epsilon} (u) &=\lambda_2(u) \left(\int_M u^N\;dv_g\right)^{\frac{N-2}{N}} - \left( \int_M u^{-\epsilon} \, dv_g \right) \left(\int_M u^N\;dv_g\right)^{\frac{\epsilon}{N}}.
\end{align}
By construction, $F_{2,\epsilon}(u) < 0$ for each $u\in \mathcal{D}_\epsilon$. Note that a function $u\in L^N_+$ may a priori have $\lambda_2(u)=-\infty$, in which case our functional would fail to be well defined. However, as El Sayed showed in \cite{ElSayed} (see Proposition 2.2), if the nodal set $\{x\in M: u(x)=0\}$ of a function $u\in L^N_+$ has zero measure, then $\lambda_2(u)> \lambda_1(u)>-\infty$. Since every function $u\in \mathcal{D}_\epsilon$ satisfies this condition, we conclude that the regularized functional $F_{2,\epsilon}: \mathcal{D}_\epsilon \rightarrow \mathbb{R}$ is well defined. For future reference, we state this as a lemma.
\begin{lemma} \label{finiteLemma} If $u\in \mathcal{D}_\epsilon$, then
\begin{align} \label{finite}
\lambda_1(u) > -\infty.
\end{align}
\end{lemma}
The following result is a consequence of Proposition \ref{OSD}.
\begin{proposition} \label{OSDProp} Suppose $u \in \mathcal{D}_\epsilon$ with
\begin{align} \label{unormal}
\int_M u^{N} \, dv_g = 1.
\end{align}
For $h \in L^{\infty}$, let
\begin{align} \label{ut}
u_t = u(1 + t h).
\end{align}
Then the one-sided derivatives of $F_{2,\epsilon}(u_t)$ at $t=0$ exist, and are given by
\[ \begin{split}
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{+}} &= (N-2) \lambda_2(u) \Bigg\{ \Big( 1 - (N-2)^{-1} \frac{\epsilon}{\lambda_2(u)} \int_M u^{-\epsilon} \, dv_g \Big) \int_M h u^{N} \, dv_g \\
& \ \ \ \ - \inf_{\phi \in E_2(u)} \dfrac{ \int_M h \phi^2 u^{N-2} \, dv_g}{ \int_M \phi^2 u^{N-2} \, dv_g} + (N-2)^{-1}\frac{\epsilon}{\lambda_2(u)} \int_M h u^{-\epsilon} \, dv_g \Bigg\} , \\
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{-}} &= (N-2) \lambda_2(u) \Bigg\{ \Big( 1 - (N-2)^{-1} \frac{\epsilon}{\lambda_2(u)} \int_M u^{-\epsilon} \, dv_g \Big) \int_M h u^{N} \, dv_g \\
& \ \ \ \ - \sup_{\phi \in E_2(u)} \dfrac{ \int_M h \phi^2 u^{N-2} \, dv_g }{ \int_M \phi^2 u^{N-2} \, dv_g} + (N-2)^{-1}\frac{\epsilon}{\lambda_2(u)} \int_M h u^{-\epsilon} \, dv_g\Bigg\}.
\end{split}
\]
\end{proposition}
\begin{proof} Let
\begin{align} \label{Idef}
\mathcal{I}_{\epsilon}(u) = \left( \int_M u^{-\epsilon} \, dv_g \right) \left(\int_M u^N\;dv_g\right)^{\frac{\epsilon}{N}},
\end{align}
so that
\begin{align*}
F_{2,\epsilon}(u) =\lambda_2(u) \left(\int_M u^N\;dv_g\right)^{\frac{N-2}{N}} - \mathcal{I}_{\epsilon}(u).
\end{align*}
If $u$ is normalized as in (\ref{unormal}) and $u_t$ is given by (\ref{ut}), then a simple calculation gives
\begin{align} \label{Idot}
\frac{d}{dt} \mathcal{I}_{\epsilon}(u_t) \big|_{t=0} = -\epsilon \int_M h u^{-\epsilon} \, dv_g + \epsilon \left( \int_M u^{-\epsilon} \, dv_g \right) \int_M h u^{N} \, dv_g.
\end{align}
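For completeness, we record the computation. Differentiating under the integral sign and using (\ref{unormal}),
\begin{align*}
\frac{d}{dt}\Big|_{t=0}\int_M u^{-\epsilon}(1+th)^{-\epsilon}\,dv_g &= -\epsilon\int_M hu^{-\epsilon}\,dv_g,\\
\frac{d}{dt}\Big|_{t=0}\left(\int_M u^{N}(1+th)^{N}\,dv_g\right)^{\frac{\epsilon}{N}} &= \frac{\epsilon}{N}\cdot N\int_M hu^{N}\,dv_g = \epsilon\int_M hu^{N}\,dv_g,
\end{align*}
and (\ref{Idot}) follows from the product rule.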
Therefore,
\begin{align*}
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{+}} &= \frac{d}{dt}\left( \lambda_2(u_t)\left(\int_M u_t^{N}\;dv_g\right)^{\frac{N-2}{N}}\right)\big|_{t = 0^{+}} - \frac{d}{dt} \mathcal{I}_{\epsilon}(u_t) \big|_{t=0} \\
&= (N-2) \lambda_2(u) \Bigg\{ \int_M h u^{N} \, dv_g - \inf_{\phi\in E_2(u)} \dfrac{ \int_M h \phi^2 u^{N-2} \, dv_g }{ \int_M \phi^2 u^{N-2} \, dv_g }\Bigg\} \\
&\ \ \ \ + \epsilon \int_M h u^{-\epsilon} \, dv_g - \epsilon \left( \int_M u^{-\epsilon} \, dv_g \right) \int_M h u^{N} \, dv_g \\
&= (N-2) \lambda_2(u) \Bigg\{ \Big( 1 - (N-2)^{-1} \frac{\epsilon}{\lambda_2(u)} \int_M u^{-\epsilon} \, dv_g \Big) \int_M h u^{N} \, dv_g \\
& \ \ \ \ - \inf_{\phi\in E_2(u)} \dfrac{ \int_M h \phi^2 u^{N-2} \, dv_g }{ \int_M \phi^2 u^{N-2} \, dv_g} + (N-2)^{-1}\frac{\epsilon}{\lambda_2(u)} \int_M h u^{-\epsilon} \, dv_g\Bigg\},
\end{align*}
as claimed. The computation for the one-sided derivative from the left is analogous and it is, therefore, omitted.
\end{proof}
\begin{definition} We say that $u \in \mathcal{D}_\epsilon$ is {\em extremal} for the regularized functional $F_{2,\epsilon}$ if the one-sided derivatives, $\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{+}}$ and $\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{-}}$, have opposite signs along $u_t:=u(1+th)$ for any generating function $h\in L^\infty$.
An extremal function $u \in \mathcal{D}_\epsilon$ is said to be {\em maximal} if
\begin{equation}\label{comp}
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{-}}\ge 0 \ge \frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{+}}.
\end{equation}
\end{definition}
We remark that this last condition is equivalent to
\begin{equation}
F_{2,\epsilon}(u_t) \le F_{2,\epsilon}(u) + o(t) \;\;\;\; (t\to 0),
\end{equation}
for any deformation $u_t$ of $u$: indeed, for $t>0$ (resp. $t<0$) one has $F_{2,\epsilon}(u_t) = F_{2,\epsilon}(u) + t\,\frac{d}{dt}F_{2,\epsilon}(u_t)\big|_{t=0^{+}} + o(t)$ (resp. with $t=0^{-}$), and (\ref{comp}) says precisely that both first-order contributions are nonpositive. Hence, our definition of maximality is equivalent to the more standard one found in \cite{Nadirashvilli} for Laplace eigenvalues. In the following proposition, we derive the Euler--Lagrange equation for maximal functions of the regularized functional $F_{2,\epsilon}$. This will be key to obtaining uniform estimates in Section \ref{Estimates}.
\begin{proposition} \label{EulerProp} Suppose $u \in \mathcal{D}_\epsilon$ is maximal and normalized as in (\ref{unormal}). Then there is a set of eigenfunctions $\{\phi_i \}_{i=1}^k$ associated to $\lambda_2(u)$, normalized by
\begin{equation}
\int_M \phi_i^2u^{N-2}\;dv_g=1, \quad i=1,\dots,k,
\end{equation}
and a set of real numbers $c_1,\dots,c_k \geq 0$ with $\sum_{i=1}^k c_i = 1$, such that
\begin{align} \label{Euler}
\gamma_1 u^{N} - u^{N-2} \sum_{i=1}^k c_i \phi_i^2 - \gamma_2 u^{-\epsilon} = 0,
\end{align}
where
\begin{align} \label{gammas} \begin{split}
\gamma_1 &= 1 - (N-2)^{-1} \frac{\epsilon}{\lambda_2(u)} \int_M u^{-\epsilon} \, dv_g > 1, \\
\gamma_2 &= (N-2)^{-1} \frac{\epsilon}{ |\lambda_2(u)|} > 0.
\end{split}
\end{align}
\end{proposition}
\begin{proof}
Consider the subset $K\subseteq L^1 = L^1(M,g)$ defined by
\begin{equation}
K:= \left\{\phi^2\cdot u^{N-2}: \phi\in E_2(u),\|\phi\|_{L^2(u)}=1\right\}.
\end{equation}
Since $E_2(u)$ is finite dimensional, this set is compact and lies in a finite dimensional subspace of $L^1$. Carath\'eodory's Theorem for Convex Hulls implies that the convex hull of $K$,
\begin{align*}
\text{Conv}(K) &= \left\{\sum_{\text{finite}}c_j\psi_j: c_j\ge 0, \sum c_j =1, \psi_j\in K\right\} \\ & =\left\{u^{N-2}\sum_{\text{finite}}c_j\phi_j^2: c_j\ge 0, \sum c_j =1, \phi_j\in E_2(u), \|\phi_j\|_{L^2(u)}=1\right\},
\end{align*}
is compact as well.
Notice that the proof will be completed if we can show that $\gamma_1u^{N} - \gamma_2u^{-\epsilon}\in \text{Conv}(K)$. Assume, on the contrary, that
\begin{equation}
\{\gamma_1u^{N} - \gamma_2u^{-\epsilon}\}\cap\text{Conv}(K)=\emptyset.
\end{equation}
This gives us two disjoint convex sets, the first of which is closed and the second compact. By the Hahn--Banach Separation Theorem, there exists a functional $\Psi\in(L^1)^*$ separating these two sets:
\begin{equation}\label{HBS1}
\Psi(\gamma_1u^{N} - \gamma_2u^{-\epsilon}) > 0,
\end{equation}
and
\begin{equation}\label{HBS2}
\Psi(\varphi)\le 0\text{ for all }\varphi\in\text{Conv}(K).
\end{equation}
Furthermore, the Riesz Representation Theorem provides us with the existence of a function $h\in (L^1)^* = L^\infty$ so that $\Psi$ is given by integration against $h\; dv_g$. In particular, inequalities (\ref{HBS1}) and (\ref{HBS2}) can be rewritten as
\begin{equation}\label{HBS3}
\int_M h\cdot(\gamma_1u^{N} - \gamma_2u^{-\epsilon})\;dv_g > 0,
\end{equation}
and
\begin{equation}\label{HBS4}
\int_M h\cdot \phi^2u^{N-2}\;dv_g \le 0,
\end{equation}
where in (\ref{HBS4}) we have taken $\varphi\in\text{Conv}(K)$ to be a trivial convex combination, that is, an element of $K$ itself. Estimates (\ref{HBS3}) and (\ref{HBS4}) imply that for any $\phi\in E_2(u)$ with $\|\phi\|_{L^2(u)}=1$, we have
\begin{equation}\label{contradiction}
\int_M h(\gamma_1u^{N} - \gamma_2u^{-\epsilon})\;dv_g - \int_M h\phi^2u^{N-2}\;dv_g > 0.
\end{equation}
Now, we use $h$ as a generating function, that is, we consider $u_t = u(1+th)$, and select an interval $(-\delta,\delta)$ on which $1-|t|\|h\|_\infty>0$. Since $h\in L^\infty$, Proposition \ref{OSDProp} holds, and the maximality of $u$ implies, in particular,
\begin{equation}
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{-}} \ge 0.
\end{equation}
However, since the unit sphere of $E_2(u)$ is compact, (\ref{contradiction}) yields
\begin{equation*}
\int_M h(\gamma_1u^{N} - \gamma_2u^{-\epsilon})\;dv_g - \sup_{\phi \in E_2(u)}\int_M h\phi^2u^{N-2}\;dv_g > 0,
\end{equation*}
where the supremum is taken over unit $L^2(u)$-norm eigenfunctions. Hence, as $(N-2)\lambda_2(u)<0$,
\begin{align*}
\frac{d}{dt} F_{2,\epsilon}(u_t) \big|_{t = 0^{-}} & = (N-2)\lambda_2(u)\left\{ \int_M h(\gamma_1u^{N} - \gamma_2u^{-\epsilon})\;dv_g - \sup_{\phi \in E_2(u)}\int_M h\phi^2u^{N-2}\;dv_g\right\} \\ & <0.
\end{align*}
This is a contradiction, and hence $\gamma_1u^{N} - \gamma_2u^{-\epsilon}\in \text{Conv}(K)$, as claimed.
\end{proof}
In Proposition \ref{RegEx} we will show the existence of a maximizer for the regularized functional $F_{2,\epsilon}$. We mentioned that one of the reasons for introducing such a regularizing term,
\begin{equation}
u \longmapsto \int_M u^{-\epsilon}\;dv_g,
\end{equation}
was to control the size in measure of the zero set of possible extremal functions. By Lemma \ref{finiteLemma}, this gives us the finiteness of $\lambda_1(u)$, and, with this, the existence of generalized eigenfunctions. Another crucial reason is that this integral functional is weakly lower semicontinuous on $\mathcal{D}_\epsilon$. In general, the weak lower semicontinuity of integral functionals on Sobolev spaces depends on some kind of convexity of the integrand. See Chapter 2 in \cite{Rindler} for a complete discussion.
\begin{proposition} \label{RegEx} For each $\epsilon > 0$, there is a $u_{\epsilon} \in \mathcal{D}_\epsilon$, normalized by
\begin{align} \label{Vol}
\int_M u_{\epsilon}^N \, dv_g = 1,
\end{align}
that is maximal for $F_{2,\epsilon}$. Moreover, there is a constant $C= C(g)$, independent of $\epsilon > 0$, such that
\begin{align} \label{ue}
\int_M u_{\epsilon}^{-\epsilon} \, dv_g \leq C.
\end{align}
\end{proposition}
\begin{proof} Fix $\epsilon > 0$, and let $\{ u_i \}_{i=1}^\infty \subset \mathcal{D}_\epsilon$ be a normalized maximizing sequence for $F_{2,\epsilon}$, i.e. a sequence such that
\begin{align} \label{seq} \begin{split}
&\int_M u_i^N \, dv_g = 1, \\
&F_{2,\epsilon}(u_i) \rightarrow \sup_{ u \in \mathcal{D}_\epsilon } F_{2,\epsilon}(u).
\end{split}
\end{align}
The sequence $\{ u_i \}_{i=1}^\infty$ is bounded in $L^N$ and, by weak compactness, there is a subsequence, not explicitly denoted, that converges weakly in $L^N$ to some $\bar{u} \in L^N$, with
\begin{align} \label{bun}
\int_M \bar{u}^N \, dv_g \leq 1.
\end{align}
The above inequality is a consequence of the weak lower semicontinuity of the $L^N$-norm. Also, by the second condition in (\ref{seq}), we may assume for $i \gg 1$ that $F_{2,\epsilon}(u_i) \geq F_{2,\epsilon}(1)$, and therefore
\begin{equation}\label{EigenBound}
\lambda_2(u_i) - \int_M u_i^{-\epsilon} \, dv_g \geq \lambda_2(L_g) - 1.
\end{equation}
Here we have used the assumption $\text{Vol}(M,g) = 1$. As $\lambda_2(u_i) < 0$, it follows that
\begin{align} \label{Leb}
\int_M u_i^{-\epsilon}\;dv_g \leq C(g).
\end{align}
Since $f(t) = t^{-\epsilon}$ is convex, the functional
\begin{align*}
u \longmapsto \int_M u^{-\epsilon} \, dv_g
\end{align*}
is weakly lower semicontinuous on $L^N_{+}$, thus
\begin{align} \label{wlsc}
\int_M \bar{u}^{-\epsilon} \, dv_g \leq \liminf_{i \to \infty} \int_M u_i^{-\epsilon} \, dv_g.
\end{align}
In particular,
\begin{align} \label{ube}
\int_M \bar{u}^{-\epsilon} \, dv_g \leq C.
\end{align}
Hence $\bar{u} \in \mathcal{D}_\epsilon$. It remains to show that $\bar u$ is maximal.
By further restricting to a subsequence if necessary, we may assume
\begin{align} \label{lamlim}
\lambda_2(u_i) \rightarrow \bar \lambda_2 := \limsup_{i \to \infty} \lambda_2(u_i) < 0.
\end{align}
That $\bar \lambda_2$ is strictly negative will follow from showing that it is an eigenvalue associated to $\bar u$. In particular, we claim that
\begin{equation}\label{maxat}
\bar\lambda_2 = \lambda_2(\bar u).
\end{equation}
We proceed now to prove (\ref{maxat}). Note that by Lemma \ref{finiteLemma}, $\lambda_1(u_i)$ is finite for all $i\in \mathbb{N}$, and thus we have existence of generalized eigenfunctions for the negative part of the spectrum: for each $l\in\{1,\cdots,\nu([g])\}$, there exists $\phi_i^l\in W^{1,2}$ such that
\begin{align} \label{seq2} \begin{split}
& L_g\phi_i^l = \lambda_l(u_i) \phi_i^lu_i^{N-2} \\
& \int_M (\phi_i^l)^2u_i^{N-2}\;dv_g = 1.
\end{split}
\end{align}
On the other hand, by further restricting our subsequence if necessary, we may assume
\begin{equation}
\lambda_l(u_i) \rightarrow \bar \lambda_l := \limsup_{i\to\infty} \lambda_l(u_i).
\end{equation}
The key observation is that, for each $l\in\{1,\cdots,\nu([g])\}$, the sequence $\{\phi_i^l\}_{i=1}^\infty$ of generalized eigenfunctions is bounded in the $W^{1,2}$-norm, and therefore there exists a function $\bar \phi^l\in W^{1,2}$ such that $\phi_i^l \rightharpoonup \bar \phi^l$ in $W^{1,2}$, and $\phi_i^l\rightarrow \bar\phi^l$ in $L^2$ (up to the extraction of a subsequence). The boundedness of $\{\phi_i^l\}_{i=1}^\infty$ follows from arguments similar to those found in the proof of Lemma \ref{conv-eigen}; see also the discussion in Lemma \ref{EFs}. After appropriately taking limits in $(\ref{seq2})$, we conclude that each $\bar \lambda_l$ is a generalized eigenvalue for $\bar u$, and thus $\bar\lambda_l\in\{\lambda_1(\bar u),\cdots, \lambda_{\nu([g])}(\bar u)\}$ for each $l=1,\cdots,\nu([g])$. Since $\bar\lambda_1 \le \cdots \le \bar \lambda_{\nu([g])}$, we conclude that $\bar\lambda_2 = \lambda_2(\bar u)$, as claimed. This concludes the proof of (\ref{maxat}).
Now, by (\ref{bun}) and (\ref{maxat}),
\begin{align} \label{one}
\limsup_{i \to \infty} \lambda_2(u_i) = \lambda_2(\bar{u}) \leq \lambda_2(\bar{u}) \left( \int_M \bar{u}^N \, dv_g \right)^{\frac{N-2}{N}}.
\end{align}
Also, by (\ref{bun}) and (\ref{wlsc}),
\begin{align} \label{two}
\left( \int_M \bar{u}^N \, dv_g \right)^{\frac{\epsilon}{N}} \left( \int_M \bar{u}^{-\epsilon} \, dv_g \right) \leq \liminf_{i \to \infty} \int_M u_i^{-\epsilon} \, dv_g,
\end{align}
and thus
\begin{align} \label{twop}
- \left( \int_M \bar{u}^N \, dv_g \right)^{\frac{\epsilon}{N}} \left( \int_M \bar{u}^{-\epsilon} \, dv_g \right) \geq \limsup_{i \to \infty} \left\{-\int_M u_i^{-\epsilon} \, dv_g \right\}.
\end{align}
Combining (\ref{one}) and (\ref{twop}), we conclude
\begin{align*}
F_{2,\epsilon}(\bar{u}) &= \lambda_2(\bar{u}) \left( \int_M \bar{u}^N \, dv_g \right)^{\frac{N-2}{N}} - \left( \int_M \bar{u}^N \, dv_g \right)^{\frac{\epsilon}{N}} \left( \int_M \bar{u}^{-\epsilon} \, dv_g \right) \\
&\geq \limsup_{i \to \infty} \left\{ \lambda_2(u_i) - \int_M u_i^{-\epsilon} \, dv_g \right\} \\
&= \sup_{ u \in \mathcal{D}_\epsilon } F_{2,\epsilon}(u),
\end{align*}
and it follows that $\bar{u}$ is maximal. The estimate (\ref{ue}) follows from (\ref{ube}). This ends the proof.
\end{proof}
\vskip.2in
\section{Estimates}\label{Estimates}
In the following we suppose that $u_{\epsilon} \in \mathcal{D}_\epsilon$ is maximal for $F_{2,\epsilon}$, and that it is normalized as in (\ref{unormal}):
\begin{align*}
\int_M u_{\epsilon}^N \, dv_g = 1.
\end{align*}
By Proposition \ref{EulerProp} this means that there is a set of second generalized eigenfunctions $\{ \phi_{i,\epsilon}\}_{i=1}^{k(u_\epsilon)} $ associated to $\lambda_2(u_{\epsilon})$, normalized to have unit $L^2(u_\epsilon)$-norm,
\begin{align} \label{phiL2}
\int_M \phi^2_{i,\epsilon} u_{\epsilon}^{N-2} \, dv_g = 1,
\end{align}
and a set of real numbers $c_{1,\epsilon},\dots,c_{k(u_\epsilon),\epsilon} \geq 0$ with $\sum_{i=1}^{k(u_\epsilon)} c_{i,\epsilon} = 1$, for which
\begin{align} \label{Euler-eps}
\gamma_{1,\epsilon} u_{\epsilon}^N - u_{\epsilon}^{N-2} \sum_{i=1}^{k(u_\epsilon)} c_{i,\epsilon} \phi^2_{i,\epsilon} - \gamma_{2,\epsilon} u_{\epsilon}^{-\epsilon} = 0
\end{align}
holds. Here $\gamma_{1,\epsilon}$ and $\gamma_{2,\epsilon}$ denote the real numbers
\begin{align} \label{gammas-eps} \begin{split}
\gamma_{1,\epsilon} &= 1 - (N-2)^{-1} \frac{\epsilon}{\lambda_2(u_{\epsilon})} \int u_{\epsilon}^{-\epsilon} \, dv_g > 1, \\
\gamma_{2,\epsilon} &= (N-2)^{-1} \frac{\epsilon}{ |\lambda_2(u_{\epsilon})|} > 0.
\end{split}
\end{align}
Denote by $m(u_\epsilon)$ the multiplicity of $\lambda_2(u_\epsilon)$, that is, the dimension of $E_2(u_\epsilon)$. Note that for any such extremal function, we have
\begin{equation}
1\le k(u_\epsilon) \le m(u_\epsilon) \le \nu([g]).
\end{equation}
Since these are bounded sequences of positive integers, they take only finitely many values. Up to the extraction of a subsequence, without loss of generality we can assume that both $k(u_\epsilon)$ and $m(u_\epsilon)$ are independent of $\epsilon$, and simply write $k$ and $m$.
Our ultimate goal is to take the limit as $\epsilon$ goes to $0$, $\epsilon\to 0^+$, and, in doing so, obtain an extremal function for the eigenvalue functional itself. To this end, all quantities which depend on $\epsilon$ need to be controlled. In what follows, and for the remaining part of this section, we obtain uniform estimates and deduce important results which will allow us to take the limit $\epsilon \to 0^+$ in Section \ref{TakingLimit}.
To the sequence of maximal functions $\{u_\epsilon\}_{\epsilon>0}$, there is a corresponding sequence of eigenvalues which we denote by $\{\lambda_2(u_\epsilon)\}_{\epsilon>0}$. It is straightforward to see that there are uniform bounds for this sequence. Indeed, from $F_{2,\epsilon}(u_\epsilon)\ge F_{2,\epsilon}(1)$, we deduce
\begin{equation} \label{EV1}
0 < |\lambda_2(u_\epsilon)| \le |\lambda_2(L_g) - 1|.
\end{equation}
This simple observation allows us to get uniform estimates on the eigenfunctions $\{\phi_{i,\epsilon}\}_{\epsilon>0}$.
\begin{lemma} \label{EFs} There is a bound
\begin{align} \label{W12p}
\| \phi_{i,\epsilon} \|_{W^{1,2}} \leq C,
\end{align}
where $C$ is independent of $\epsilon$.
\end{lemma}
\begin{proof} For each $i\in\{ 1,\cdots,k\}$, we find uniform $W^{1,2}$-estimates for the sequence $\{\phi_{i,\epsilon}\}_{\epsilon>0}$. The proof is almost identical to that of Lemma \ref{conv-eigen}, but with two main differences. First, the normalization of the eigenfunctions is slightly different as it is with respect to $u_\epsilon$ rather than with respect to a fixed function:
\begin{equation}
\int_M \phi_{i,\epsilon}^2u_\epsilon^{N-2}\;dv_g = 1.
\end{equation}
Secondly, we have not discussed yet any convergence of the sequence of eigenvalues $\lambda_2(u_\epsilon)$ as $\epsilon \to 0^+$. However, the fact that the sequence $\{u_\epsilon\}_{\epsilon>0}$ has unit $L^N$-norm, and that $\{\lambda_2(u_\epsilon)\}_{\epsilon>0}$ is uniformly bounded, see (\ref{EV1}), make the arguments in (\ref{conv-eigen2}) work in this case as well. The remaining part of the argument is similar and it is, hence, omitted.
\end{proof}
It is important to note that our assumption on the spectrum of $L_g$ provides us with a way to better understand the limiting quantities. The triviality of $\text{Ker}(L_g)$ was used in Lemma \ref{conv-eigen} and Lemma \ref{EFs}. It also allows us to obtain stronger estimates on the sequence $\{\lambda_2(u_\epsilon)\}_{\epsilon>0}$ of second generalized eigenvalues.
\begin{lemma} \label{EVbound}
There is a constant $C$, independent of $\epsilon$, such that
\begin{equation}
0<C\le |\lambda_2(u_\epsilon)|\le |\lambda_2(L_g)-1|.
\end{equation}
\end{lemma}
\begin{proof}
If there is no such constant, then we can find a sequence $\{\epsilon_l\}_{l\ge1}$ with $\epsilon_l\to 0^+$ such that $\lambda_2(u_{\epsilon_l})$ increases to zero. For each element in this sequence, there is a corresponding eigenfunction $\phi_{\epsilon_l}\in E_2(u_{\epsilon_l})$ normalized by
\begin{equation}
\int_M \phi_{\epsilon_l}^2u_{\epsilon_l}^{N-2}\;dv_g =1.
\end{equation}
We claim that $\{\phi_{\epsilon_l}\}_{l\ge1}$ is bounded in $W^{1,2}$. The arguments for this are similar to those found in the proofs of Lemma \ref{conv-eigen} and Lemma \ref{EFs}, and are, therefore, omitted. This provides us with the existence of a function $\phi\in W^{1,2}$, which is the weak $W^{1,2}$-limit and the strong $L^2$-limit of a subsequence of $\{\phi_{\epsilon_l}\}_{l\ge 1}$, and which therefore satisfies, in a weak sense, the equation $L_g\phi = 0$. Since $0\not\in\text{Spec}(L_g)$, we conclude that $\phi \equiv 0$.
On the other hand, from
\begin{equation}
\int_M \{|\nabla_g\phi_{\epsilon_l}|^2 + c_nR_g\phi_{\epsilon_l}^2\}\;dv_g = \lambda_2(u_{\epsilon_l}) < 0,
\end{equation}
we deduce
\begin{equation}
0\le\int_M |\nabla_g \phi_{\epsilon_l}|^2\;dv_g\le -c_n\int_MR_g\phi_{\epsilon_l}^2\;dv_g \to 0.
\end{equation}
This implies that $\|\phi_{\epsilon_l}\|_{W^{1,2}}\to 0$. However,
\begin{equation}
\begin{split}
1 = \int_M \phi_{\epsilon_l}^2u_{\epsilon_l}^{N-2}\;dv_g & \le \left(\int_M u_{\epsilon_l}^N\;dv_g\right)^\frac{N-2}{N}\left(\int_M \phi_{\epsilon_l}^N\;dv_g\right)^{\frac{2}{N}} \\ & = \|\phi_{\epsilon_l}\|_{L^N}^2 \to 0,
\end{split}
\end{equation}
where the convergence follows by the Sobolev Embedding Theorems. This is a contradiction, and the proof is complete.
\end{proof}
We recall that by Proposition \ref{RegEx}, the sequence of extremal functions $\{u_\epsilon\}_{\epsilon>0}$ satisfies:
\begin{align} \label{ube2}
\int_M u_{\epsilon}^{-\epsilon} \, dv_g \leq C,
\end{align}
where $C$ is independent of $\epsilon$. This together with Lemma \ref{EVbound} gives us the estimates
\begin{align} \label{gams} \begin{split}
1 &\leq \gamma_{1,\epsilon} \leq c_1, \\
c_2^{-1} \epsilon &\leq \gamma_{2,\epsilon} \leq c_2 \epsilon,
\end{split}
\end{align}
for constants $c_1, c_2 > 1$ that only depend on the background metric $g$.
Let us now return to discussing estimates for the family of eigenfunctions. If we define a new function $f_{i,\epsilon} := \sqrt{ 1 + \phi^2_{i,\epsilon} }$, then $f_{i,\epsilon}$ satisfies
\begin{align*}
\Delta_g f_{i,\epsilon} \geq c_n R_g f_{i,\epsilon}.
\end{align*}
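For the reader's convenience, here is a sketch of the computation behind this differential inequality; we drop the subscripts and write $f = f_{i,\epsilon}$, $\phi = \phi_{i,\epsilon}$. From $\nabla_g f = f^{-1}\phi\nabla_g\phi$ we obtain
\begin{equation*}
\Delta_g f = \frac{\phi\Delta_g\phi + |\nabla_g\phi|^2}{f} - \frac{\phi^2|\nabla_g\phi|^2}{f^3}\ \ge\ \frac{\phi\Delta_g\phi}{f},
\end{equation*}
since $\phi^2\le f^2$. The eigenvalue equation together with $\lambda_2(u_\epsilon)<0$ gives $\phi\Delta_g\phi = c_nR_g\phi^2 - \lambda_2(u_\epsilon)\phi^2u_\epsilon^{N-2}\ge c_nR_g\phi^2$, and, since $\phi^2 = f^2-1$ and $f\ge 1$, we conclude $\Delta_g f \ge c_nR_gf - c_nR_g/f$; where $R_g\le 0$ this is precisely the stated inequality, and in general the bounded error term is harmless for the iteration.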
Furthermore, Lemma \ref{EFs} implies $\| f_{i,\epsilon} \|_{W^{1,2}} \leq C$. It then follows from a basic Moser iteration argument that
\begin{align} \label{psup}
\| \phi_{i,\epsilon} \|_{L^{\infty}} \leq C.
\end{align}
The first consequence of this estimate is the following:
\begin{lemma}\label{u-upper} There is a constant $C = C(g)$, independent of $\epsilon$, such that
\begin{align} \label{sup}
\emph{ess sup } u_{\epsilon} \leq C.
\end{align}
\end{lemma}
\begin{proof} Let $x \in M$ be such that $u_{\epsilon}(x) \geq 1$. Then at $x$, the Euler equation (\ref{Euler-eps}) implies
\begin{align*}
\gamma_{1,\epsilon} u_{\epsilon}^N(x) &= u_{\epsilon}^{N-2}(x) \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon}(x) + \gamma_{2,\epsilon} u_{\epsilon}^{-\epsilon}(x) \\
&\leq u_{\epsilon}^{N-2}(x) \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon}(x) + \gamma_{2,\epsilon} \ \ \ \ \mbox{(since $u_{\epsilon}(x) \geq 1$)} \\
&\leq C (u_{\epsilon}^{N-2}(x) + \epsilon). \ \ \ \ \mbox{ (by (\ref{psup}) and (\ref{gams}))}
\end{align*}
Therefore, $u_{\epsilon}(x) \leq C$, and we are done.
\end{proof}
As an immediate consequence of Lemma \ref{u-upper}, we have
\begin{corollary} \label{efCor} For any $\alpha \in (0,1)$ and $p > 1$, there is a $C = C(\alpha, p)$ such that
\begin{align} \label{Hold}
\| \phi_{i,\epsilon} \|_{C^{1,\alpha}(M^n)} + \| \phi_{i,\epsilon}\|_{W^{2,p}(M^n)} \leq C.
\end{align}
\end{corollary}
\begin{proof}
The uniform bound on the $W^{2,p}$-norm, for any $p>1$, follows from standard elliptic regularity theory. The bound on the first term then follows from the Sobolev Embedding Theorems (Morrey's embedding $W^{2,p}\hookrightarrow C^{1,\alpha}(M^n)$ for $p$ large).
\end{proof}
As we have seen in Lemma \ref{u-upper}, there is a uniform, $\epsilon$-independent, upper bound for the family of extremal functions $\{u_\epsilon\}_{\epsilon>0}$. Additionally, having $u_\epsilon\in \mathcal{D}_\epsilon$ means, in particular, that the sets $\{x\in M: u_\epsilon(x) = 0\}_{\epsilon>0}$ have zero Riemannian measure. This does not guarantee that the zero set of the limit function is also small in measure. However, the following $\epsilon$-dependent lower bound is still possible thanks to the Euler equation (\ref{Euler-eps}).
\begin{lemma} \label{u-lower} There is a constant $C = C(g)$, independent of $\epsilon$, such that
\begin{align} \label{inf}
\emph{ess inf } u_{\epsilon} \geq (C \epsilon)^{\frac{1}{\epsilon +N}}.
\end{align}
\end{lemma}
\begin{proof} Let $x \in M$ be such that $u_{\epsilon}(x)>0$. Then at $x$, the Euler equation (\ref{Euler-eps}) implies
\begin{align*}
\gamma_{1,\epsilon} u_{\epsilon}^N(x)
&= u_{\epsilon}^{N-2}(x) \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon}(x) + \gamma_{2,\epsilon} u_{\epsilon}^{-\epsilon}(x) \\
&\geq \gamma_{2,\epsilon} u_{\epsilon}^{-\epsilon}(x) \\
&\geq C \epsilon u_{\epsilon}^{-\epsilon}(x).
\end{align*}
Since $\gamma_{1,\epsilon} \leq C$ by (\ref{gams}), we conclude
\begin{align*}
u_{\epsilon}^{\epsilon + N}(x) \geq C \epsilon,
\end{align*}
as claimed.
\end{proof}
Combining the preceding results, we now deduce that $u_{\epsilon} \in C^{1,\alpha}(M^n)$. To obtain this, divide through the Euler equation (\ref{Euler-eps}) by $u_{\epsilon}^{N-2}$ and rearrange terms to obtain
\begin{align} \label{NewEuler}
\gamma_{1,\epsilon} u_{\epsilon}^2 - \gamma_{2,\epsilon} u_{\epsilon}^{-\epsilon - N + 2} = \Phi_{\epsilon},
\end{align}
where
\begin{align} \label{Phi}
\Phi_{\epsilon} = \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon}.
\end{align}
If we define $f : (0,\infty) \rightarrow \mathbb{R}$ by
\begin{align*}
f(t) := \gamma_{1,\epsilon} t^2 -\gamma_{2,\epsilon} t^{-\epsilon - N + 2},
\end{align*}
then $f \in C^{\infty}(\mathbb{R}^{+})$, and by (\ref{NewEuler}),
\begin{align} \label{new2}
f(u_\epsilon) = \Phi_{\epsilon}.
\end{align}
Since $f$ is strictly increasing on $(0,\infty)$, its inverse exists and we can write
\begin{align*}
u_\epsilon = f^{-1} \circ \Phi_{\epsilon}.
\end{align*}
This means that $u_\epsilon$ can be written as the composition of two $C^{1,\alpha}$-functions. Note that we cannot obtain uniform estimates for the $C^{1,\alpha}$-norm of $u_{\epsilon}$, since the derivative of $f^{-1}$ can be quite large near small values of its argument; that is, on a set where $\Phi_{\epsilon}$ is small, the $C^{1,\alpha}$-norm of $u_\epsilon$ may be large. However, we can prove that $u_{\epsilon}$ satisfies a uniform gradient bound, that is, $u_\epsilon$ is Lipschitz.
\begin{lemma} \label{C1Lemma} There is a constant $C = C(g)$, independent of $\epsilon$, such that
\begin{align} \label{C12}
|\nabla_g u_{\epsilon}| \leq C.
\end{align}
\end{lemma}
\begin{proof} Differentiating both sides of (\ref{new2}) we find
\begin{align} \label{new3}
f'(u_\epsilon)\nabla_g u_\epsilon = \nabla_g \Phi_{\epsilon},
\end{align}
and, therefore,
\begin{equation}
f'(u_\epsilon) |\nabla_g u_\epsilon| = |\nabla_g \Phi_{\epsilon}|
\end{equation}
Now, from (\ref{Phi}) and Cauchy-Schwarz inequality,
\begin{equation}
|\nabla_g \Phi_{\epsilon}|\le 2\Phi_\epsilon^{\frac{1}{2}}\left(\sum_{i=1}^k|\nabla_g\phi_{i,\epsilon}|^2\right)^{\frac{1}{2}} \le C \Phi_\epsilon^{\frac{1}{2}},
\end{equation}
where the last inequality follows from Corollary \ref{efCor}. Using (\ref{NewEuler}) we deduce
\begin{equation}
|\nabla_g\Phi_\epsilon|\le C u_\epsilon
\end{equation}
On the other hand, for $t > 0$,
\begin{align*}
f'(t) = 2 \gamma_{1,\epsilon} t + \gamma_{2,\epsilon} (\epsilon + N - 2) t^{-\epsilon - N + 1} \geq 2 t,
\end{align*}
and therefore it follows that
\begin{align*}
2u_\epsilon|\nabla_g u_\epsilon| \le f'(u_\epsilon)|\nabla_gu_\epsilon| = |\nabla_g \Phi_\epsilon| \le C u_\epsilon.
\end{align*}
This finishes the proof.
\end{proof}
The following lemma is needed in the proof of Corollary \ref{keycor} below. This corollary will be key when taking the limit $\epsilon \to 0^+$ in the next section.
\begin{lemma} \label{Int2}
\begin{align} \label{int2}
\epsilon \int_M u_{\epsilon}^{-\epsilon - N} \, dv_g \leq C,
\end{align}
where $C$ only depends on $g$.
\end{lemma}
\begin{proof} Once more, we utilize the Euler--Lagrange equation for the extremal function $u_\epsilon$ to deduce this result. Divide through on both sides of (\ref{Euler-eps}) by $u_{\epsilon}^N$ and integrate to obtain
\begin{align*}
\gamma_{2,\epsilon} \int_M u_{\epsilon}^{-\epsilon- N} \, dv_g = \gamma_{1,\epsilon} \int_M \, dv_g - \int_M u_{\epsilon}^{-2} \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon} \, dv_g \leq \gamma_{1,\epsilon} \leq C.
\end{align*}
Since $\gamma_{2,\epsilon} \geq c_2^{-1} \epsilon$ by (\ref{gams}), the result follows.
\end{proof}
\begin{corollary} \label{keycor} Let $\eta \in C^{\infty}(M^n)$. Then
\begin{align} \label{asy}
\int_M \eta \left\{ \gamma_{1,\epsilon} u_{\epsilon}^2 - \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon} \right\} \, dv_g = O(\epsilon^{\beta_\epsilon}) \| \eta \|_{L^{\infty}(M^n)}, \ \ \ \epsilon \to 0^+,
\end{align}
where
\begin{align} \label{beta}
\beta_\epsilon:= \frac{2}{\epsilon + N} > 0.
\end{align}
Moreover,
\begin{align} \label{gammalim}
\gamma_{1,\epsilon} = 1 + O(\epsilon)
\end{align}
as $\epsilon \rightarrow 0^+$.
\end{corollary}
\begin{proof} We multiply through the Euler equation (\ref{Euler-eps}) by $\eta u_{\epsilon}^{2-N}$ and integrate to get
\begin{align} \label{f1}
\int_M \eta \left\{ \gamma_{1,\epsilon} u_{\epsilon}^2 - \sum_{i=1}^k c_{i,\epsilon} \phi^2_{i,\epsilon} \right\} \, dv_g = \gamma_{2,\epsilon} \int_M \eta u_{\epsilon}^{- \epsilon - (N-2)} \, dv_g.
\end{align}
Now, the term on the right can be estimated via H\"older's inequality as follows:
\begin{align*}
\left| \gamma_{2,\epsilon} \int_M \eta u_{\epsilon}^{2 - (\epsilon + N)} \, dv_g \right| &\leq \gamma_{2,\epsilon} \| \eta \|_{L^{\infty}} \int_M u_{\epsilon}^{- \epsilon - (N-2)} \, dv_g \\
&\leq C \epsilon \| \eta \|_{L^{\infty}} \left( \int_M u_{\epsilon}^{- \epsilon - N} \, dv_g \right)^{\frac{ \epsilon + N - 2}{\epsilon + N}} \left( \int_M \, dv_g \right)^{\frac{2}{\epsilon + N}} \\
&\leq C \epsilon^{\frac{2}{\epsilon + N}} \| \eta \|_{L^{\infty}}\left( \epsilon \int_M u_{\epsilon}^{- \epsilon - N} \, dv_g \right)^{\frac{ \epsilon + N - 2}{\epsilon + N}}.
\end{align*}
To finish the proof of the first part of the statement, note that the integral in parentheses is bounded by Lemma \ref{Int2}. Therefore,
\begin{align*}
\left| \gamma_{2,\epsilon} \int_M \eta u_{\epsilon}^{2 - (\epsilon + N)} \, dv_g \right| \leq C \epsilon^{\beta_\epsilon} \| \eta \|_{L^{\infty}},
\end{align*}
where $\beta_\epsilon$ is given by (\ref{beta}). The estimate for $\gamma_{1,\epsilon}$ is immediate from (\ref{ube2}) and (\ref{gams}).
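Explicitly, since $\lambda_2(u_\epsilon)<0$ we may write
\begin{equation*}
0\ \le\ \gamma_{1,\epsilon} - 1\ =\ \gamma_{2,\epsilon}\int_M u_\epsilon^{-\epsilon}\,dv_g\ \le\ c_2\,C\,\epsilon,
\end{equation*}
by (\ref{gams}) and (\ref{ube2}).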
\end{proof}
\vskip.2in
\section{Taking the limit $\epsilon \rightarrow 0$}\label{TakingLimit}
Choose a sequence $\epsilon_l \rightarrow 0$, and let $u_{\epsilon_l}$ be maximal for $F_{2,\epsilon_l}$ as in the previous section. By Lemma \ref{u-upper}, Lemma \ref{C1Lemma}, and the Arzel\`a--Ascoli Theorem, we may assume that, along some subsequence,
\begin{align*}
u_{\epsilon_l} \rightarrow \bar{u} \ \ \mbox{in } C^{\alpha_0}(M^n),
\end{align*}
for some $\alpha_0\in(0,1)$. If we let $\{ \phi_{i, \epsilon_l} \}_{i=1}^k$ be the corresponding set of eigenfunctions associated to $\lambda_2(u_{\epsilon_l})$, then each eigenfunction satisfies
\begin{align} \label{efef}
L_g \phi_{i,\epsilon_l} = \lambda_2(u_{\epsilon_l}) \phi_{i,\epsilon_l} u_{\epsilon_l}^{N-2},
\end{align}
with the normalization
\begin{align} \label{normlast}
\int_M \phi^2_{i,\epsilon_l} u_{\epsilon_l}^{N-2} \, dv_g = 1.
\end{align}
Thanks to Corollary \ref{efCor}, by further restricting our subsequence if necessary, we may assume
\begin{equation} \label{eflim}
\phi_{i,\epsilon_l} \rightarrow \bar{\phi}_i \ \ \mbox{in } C^{1,\alpha_0}(M^n).
\end{equation}
By Corollary \ref{keycor}, for any $\eta \in C^{\infty}(M^n)$ we have
\begin{equation} \label{asy2}
\lim_{l \to \infty} \int_M \eta \left\{ \gamma_{1,\epsilon_l} u_{\epsilon_l}^2 - \sum_{i=1}^k c_{i,\epsilon_l} \phi^2_{i,\epsilon_l} \right\} \, dv_g = 0.
\end{equation}
Hence, after passing to a further subsequence so that $c_{i,\epsilon_l} \rightarrow \bar{c}_i \in [0,1]$ for each $i$, we obtain
\begin{align} \label{NewEuler2}
\bar{u}^2 - \sum_{i=1}^k \bar{c}_i \bar{\phi}_i^2 = 0.
\end{align}
Some remarks are in order. First, we claim that the collection of functions $\{\bar \phi_i\}_{i=1}^k$ lies in $E_2(\bar u)$. To see this, choose a subsequence of $\{\epsilon_l\}_{l=1}^\infty$, not explicitly denoted, such that $\lambda_2(u_{\epsilon_l})\to \bar\lambda_2:=\limsup_{l\to\infty}\lambda_2(u_{\epsilon_l})$. Then the collection $\{\bar\phi_i\}_{i=1}^k$ consists of eigenfunctions associated to the eigenvalue $\bar \lambda_2$, and $\bar \lambda_2 = \lambda_2(\bar u)$. The arguments for showing this are similar to those used in proving (\ref{maxat}) and are, therefore, omitted. Since $\bar u\in C^{\alpha_0}(M^n)$ and each $\bar \phi_i \in C^{1,\alpha_0}(M^n)$, global elliptic regularity gives us that $\bar\phi_i \in C^{2,\alpha_0}(M^n)$.
Secondly, equation (\ref{NewEuler2}) implies that the limit function $\bar u\in C^{\alpha_0}(M^n)$, which is nonnegative and nontrivial, is extremal for the normalized eigenvalue functional
\begin{equation}
u\in L^N_+ \longmapsto \lambda_2(u)\left(\int_M u^N\;dv_g\right)^{\frac{N-2}{N}}.
\end{equation}
Indeed, let $h\in L^\infty$ be arbitrary, and multiply equation (\ref{NewEuler2}) through by $h \bar u^{N-2}$ to get
\begin{equation}
h\left\{\bar u^N - \bar u^{N-2}\sum_{i=1}^k \bar c_i \bar \phi_i^2\right\}=0
\end{equation}
From this, after integrating, $\int_M h\bar u^N\;dv_g$ is a convex combination of the numbers $\int_M h\bar\phi_i^2\bar u^{N-2}\;dv_g$; we therefore deduce the existence of two unit $L^2(\bar u)$-norm eigenfunctions $\bar \phi_+, \bar \phi_-\in E_2(\bar u)$, which may coincide, satisfying
\begin{equation}
\int_M h\left\{\bar u^N - \bar u^{N-2} \bar \phi_+^2\right\}\;dv_g \ge 0,
\end{equation}
and
\begin{equation}
\int_M h\left\{\bar u^N - \bar u^{N-2} \bar \phi_-^2\right\}\;dv_g \le 0.
\end{equation}
The result now follows from the formulas of the one-sided derivatives for $F_2$ established in (\ref{OSD+}) and (\ref{OSD-}).
What remains to be discussed, in order to finish the proof of Theorem \ref{MainTheorem}, is the maximality of $\bar u$ and its regularity outside its zero set. We address these aspects below.
\begin{proof}[Proof of Theorem \ref{MainTheorem}]
The only aspect that remains to be proven is the maximality of $\bar u$. Let $\bar{u} \in L^N_{+}$ be the conformal factor constructed above, normalized so that $\| \bar{u} \|_{L^N} = 1$. The proof is by contradiction: if $\bar{u}$ is not maximal, then there is a
$w \in L^N_{+}$ with $\| w \|_{L^N} = 1$ and
\begin{align*}
\lambda_2(w) = \lambda_2(\bar{u}) + \eta_0
\end{align*}
for some $\eta_0 > 0$ small.
Let $w_{\delta} := \sup \{ w , \delta \}$ for $0<\delta<1$. We claim that we can fix $\delta > 0$ small enough such that
\begin{align} \label{claim1}
\lambda_2(w_{\delta}) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{N-2}{N}} \geq \lambda_2(\bar{u}) + \frac{1}{2} \eta_0.
\end{align}
To prove (\ref{claim1}), we first observe that, since $w_{\delta} \geq w$, a comparison of Rayleigh quotients gives
\begin{equation}
\mathcal{R}_g^{w_\delta}(\phi)\ge \mathcal{R}_g^{w}(\phi)
\end{equation}
as we are allowed to take as test functions those $\phi\in W^{1,2}$ for which
\begin{equation}
\int_M\{|\nabla_g\phi|^2+c_nR_g\phi^2\}\;dv_g <0.
\end{equation}
The min-max characterization then yields
\begin{align*}
\lambda_2(w_{\delta}) \geq \lambda_2(w).
\end{align*}
As for the volume factor, we have
\begin{align*}
\int_M w_{\delta}^N\;dv_g &= \int_{\{ w \leq \delta\} } w_{\delta}^N\;dv_g + \int_{\{w > \delta\}} w_{\delta}^N\;dv_g
= \int_{\{w \leq \delta\}} \delta^N\;dv_g + \int_{\{w > \delta\}} w^N\;dv_g \\
& \leq \delta^N + 1.
\end{align*}
Since $( 1 + \delta^N)^{\frac{N-2}{N}} \leq 1 + c_n \delta^N$ for some $c_n > 0$, it follows that
\begin{align*}
\left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{N-2}{N}} \leq 1 + c_n \delta^N.
\end{align*}
Therefore,
\begin{align*}
\lambda_2(w_{\delta}) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{N-2}{N}} &\geq ( 1 + c_n \delta^N) \big( \lambda_2(\bar{u}) + \eta_0 \big) \\
&\geq \lambda_2(\bar{u}) + \eta_0 - c_n \delta^N |\lambda_2(\bar{u})|.
\end{align*}
Inequality (\ref{claim1}) follows after choosing $\delta=\delta(|\lambda_2(\bar u)|)>0$ small enough.
Notice that $w_\delta$ is in $\mathcal{D}_\epsilon$. By (\ref{claim1}), for each $\epsilon > 0$, we have
\begin{align} \label{Fw} \begin{split}
F_{2,\epsilon}(u_{\epsilon}) &\geq F_{2,\epsilon}(w_{\delta}) \\
&= \lambda_2(w_{\delta}) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{N-2}{N}} - \left( \int_M w_{\delta}^{-\epsilon}\;dv_g \right) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{\epsilon}{N}} \\
&\geq \lambda_2(\bar{u}) + \frac{1}{2} \eta_0 - \left(\int_M w_{\delta}^{-\epsilon}\;dv_g \right) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{\epsilon}{N}}.
\end{split}
\end{align}
Now, choose a sequence $\epsilon_l \rightarrow 0$ such that
\begin{equation}\label{Fep}
\lim_{l \to \infty} \lambda_2(u_{\epsilon_l}) = \lambda_2(\bar{u}),
\end{equation}
and
\begin{equation}\label{Fep2}
\lim_{l \to \infty} \int_M u_{\epsilon_l}^{-\epsilon_l}\;dv_g = 1.
\end{equation}
In what follows we show why (\ref{Fep2}) holds. By H\"older's inequality and Lemma \ref{Int2}, we deduce
\begin{align*}
\int_M u_{\epsilon_l}^{-\epsilon_l}\;dv_g &\leq \left( \int_M u_{\epsilon_l}^{-N-\epsilon_l}\;dv_g \right)^{\frac{\epsilon_l}{\epsilon_l + N}} \text{Vol}(M^n,g)^{\frac{N}{N + \epsilon_l}} \\
&\leq \left( C \epsilon_l^{-1} \right)^{\frac{\epsilon_l}{\epsilon_l + N}} \ \ \ \ \mbox{ (since $\text{Vol}(M^n,g) = 1$)} \\
&= C^{\frac{\epsilon_l}{\epsilon_l + N}} \epsilon_l^{\frac{- \epsilon_l}{\epsilon_l + N}} \\
&\rightarrow 1
\end{align*}
as $l \to \infty$, and, therefore,
\begin{align} \label{lequ}
\limsup_{l\to\infty} \int_M u_{\epsilon_l}^{-\epsilon_l}\;dv_g \leq 1.
\end{align}
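The convergence to $1$ in the computation above uses the elementary limit
\begin{equation*}
\lim_{\epsilon\to 0^+}\epsilon^{-\frac{\epsilon}{\epsilon+N}} = \exp\left(-\lim_{\epsilon\to 0^+}\frac{\epsilon\ln\epsilon}{\epsilon+N}\right) = e^{0} = 1,
\end{equation*}
together with $C^{\frac{\epsilon_l}{\epsilon_l+N}}\to 1$.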
Also,
\begin{align*}
1 = \int_M 1 \, dv_g = \int_M u_{\epsilon_l}^{-\frac{\epsilon_l}{2}} u_{\epsilon_l}^{\frac{\epsilon_l}{2}} \;dv_g \leq \left( \int_M u_{\epsilon_l}^{-\epsilon_l}\;dv_g \right)^{\frac{1}{2}} \left( \int_M u_{\epsilon_l}^{\epsilon_l}\;dv_g \right)^{\frac{1}{2}},
\end{align*}
and since $u_{\epsilon_l}$ is uniformly bounded, this implies
\begin{align} \label{gequ}
\liminf_{l\to \infty}\int_M u_{\epsilon_l}^{-\epsilon_l}\;dv_g \geq 1.
\end{align}
Formula (\ref{Fep2}) follows then from (\ref{lequ}) and (\ref{gequ}).
An immediate consequence of (\ref{Fep}) and (\ref{Fep2}) is that
\begin{align} \label{conc}
F_{2, \epsilon_l}(u_{\epsilon_l}) \rightarrow \lambda_2(\bar{u}) - 1.
\end{align}
Recall that by (\ref{Fw}),
\begin{align} \label{limF2}
F_{2,\epsilon_l}(u_{\epsilon_l}) \geq \lambda_2(\bar{u}) + \frac{1}{2} \eta_0 - \left( \int_M w_{\delta}^{-\epsilon_l}\;dv_g \right) \left( \int_M w_{\delta}^N\;dv_g \right)^{\frac{\epsilon_l}{N}}.
\end{align}
Taking the limit as $\epsilon_l \rightarrow 0$, using the fact that $w_{\delta} \geq \delta > 0$, and plugging in (\ref{conc}), we conclude
\begin{align}
\lambda_2(\bar{u}) - 1 \geq \lambda_2(\bar{u}) + \frac{1}{2} \eta_0 - 1.
\end{align}
This is a contradiction.
The regularity of $\bar u$ outside its zero set follows from the fact that each $\bar \phi_i$ is in $C^{2,\alpha_0}(M^n)$, together with a standard bootstrap argument. This finishes the proof of our main result.
\end{proof}
Finally, it is important to discuss the behavior of the sequences of real numbers $\{c_{i,\epsilon} \}_{i=1}^k$ as $\epsilon\to 0^+$. These are the coefficients of the elements in $\text{Conv}(K)$ found while proving Proposition \ref{EulerProp}. In particular, for each $\epsilon > 0$, we have $c_{i,\epsilon}\in [0,1]$ for all $i\in\{1,\cdots,k\}$, and $\sum_{i=1}^kc_{i,\epsilon} = 1$. This last condition rules out the possibility of all of them collapsing to zero in the limit. The difference between only one of the $c_{i,\epsilon}$'s surviving in the limit, versus more than one, is the difference between having a nodal solution to a Yamabe-type equation and having a harmonic map into a sphere. This is summarized in Corollary \ref{NodalHarmonic}, which we now proceed to prove.
\begin{proof}[Proof of Corollary \ref{NodalHarmonic}]
Let $\bar u\in C^{\alpha_0}(M^n)\cap C^\infty(M^n\setminus\{\bar u=0\})$ be the maximal function given by Theorem \ref{MainTheorem}. In the case where $k=1$, from the Euler equation (\ref{NewEuler2}) we deduce that $\bar u = |\bar \phi|$ for some $\bar \phi \in E_2(\bar u)\cap C^{2,\alpha_0}(M^n)$. Therefore,
\begin{equation}
L_g\bar\phi = \lambda_2(\bar u) \bar \phi |\bar \phi|^{N-2}.
\end{equation}
We remark that in the case where $u\in C(M)$, Lemma \ref{simple} can be modified to show that first eigenfunctions are strictly positive. Therefore, since $\bar\phi$ is a second eigenfunction, it has to change sign.
We now assume that $k>1$ and work on the open set $M^n\setminus\{\bar u =0\}$, where the symmetric 2-tensor $g_{\bar u}:= \bar u^{N-2}g$ is a smooth Riemannian metric. If we define $\psi_i$ by
\begin{equation}
\psi_i := \frac{\bar c_i^{\frac{1}{2}}\bar\phi_i}{\bar u},
\end{equation}
then from the Euler equation (\ref{NewEuler2}) we get
\begin{equation}\label{NewEuler3}
1 = \sum_{i=1}^k \psi_i^2.
\end{equation}
Also, by conformal invariance (\ref{Conf-Invar}), these new functions satisfy
\begin{equation}\label{NewEigenvalueEq}
L_{g_{\bar u}}(\psi_i) = \lambda_2(\bar u) \psi_i,
\end{equation}
for each $i\in\{1,\cdots, k\}$.
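The computation below uses the pointwise identity, valid for any smooth function $\psi$ and following from $\Delta_{g_{\bar u}}(\psi^2) = 2\psi\Delta_{g_{\bar u}}\psi + 2|\nabla_{g_{\bar u}}\psi|^2$:
\begin{equation*}
L_{g_{\bar u}}(\psi^2) = 2\psi L_{g_{\bar u}}\psi - 2|\nabla_{g_{\bar u}}\psi|^2 - c_nR_{g_{\bar u}}\psi^2.
\end{equation*}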
We apply $L_{g_{\bar u}}$ on both sides of (\ref{NewEuler3}) to obtain
\begin{equation}
\begin{split}
c_nR_{g_{\bar u}} & = L_{g_{\bar u}}(1) = \sum_{i=1}^kL_{g_{\bar u}}(\psi_i^2) \\ & = \sum_{i=1}^k\left(2\psi_i L_{g_{\bar u}}\psi_i - 2|\nabla_{g_{\bar u}}\psi_i|^2 - c_nR_{g_{\bar u}}\psi_i^2\right) \\ & = 2 \lambda_2(\bar u)\left(\sum_{i=1}^k\psi_i^2\right) - 2\sum_{i=1}^k|\nabla_{g_{\bar u}}\psi_i|^2 - c_nR_{g_{\bar u}}\left(\sum_{i=1}^k\psi_i^2\right) \\ & = 2\lambda_2(\bar u) - 2\sum_{i=1}^k|\nabla_{g_{\bar u}}\psi_i|^2 - c_nR_{g_{\bar u}},
\end{split}
\end{equation}
therefore,
\begin{equation}
\lambda_2(\bar u) = \sum_{i=1}^k|\nabla_{g_{\bar u}}\psi_i|^2 + c_n R_{g_{\bar u}}.
\end{equation}
Plugging this expression back into (\ref{NewEigenvalueEq}) gives us
\begin{equation}
-\Delta_{g_{\bar u}} \psi_j = \left(\sum_{i=1}^k|\nabla_{g_{\bar u}}\psi_i|^2\right)\psi_j
\end{equation}
for each $j\in\{1,\cdots,k\}$. Therefore, the map $U$ defined in (\ref{Harmonic}) is weakly harmonic; see \cite{Helein} and references therein for further details. Since $U$ is $C^2$ on $M^n\setminus\{\bar u=0\}$, the proof is complete.
\end{proof}
\begin{remark}\label{Simplicity} As a concluding remark, we note that the multiplicity of $\lambda_2(\bar u)$ could be, in general, bigger than one. To explain this, we discuss what happens in the case of positive Yamabe invariant. Let $u_e\in L^N_+$ be an extremal function for $F_2$, normalized such that
\begin{equation}
\int_M u_e^N\; dv_g = 1.
\end{equation}
For any $h\in L^\infty$, we deform $u_e$ as usual by $u_{e,t} = u_e(1+th)$. Proposition \ref{OSD} still holds in this case and can be proven by similar techniques. Since $-(N-2)\lambda_2(u_e) <0$, we obtain
\begin{equation}\label{min1}
\frac{d}{dt} F_2(u_{e,t}) \Big{|}_{t=0^+}= (N-2)\lambda_2(u_e)\left\{\int_M hu_e^N\;dv_g -\sup_{\phi}\int_M h\phi^2 u_e^{N-2}\;dv_g\right\},
\end{equation}
and
\begin{equation}\label{min2}
\frac{d}{dt} F_2(u_{e,t})\Big{|}_{t=0^-} = (N-2)\lambda_2(u_e)\left\{\int_M hu_e^N\;dv_g - \inf_{\phi}\int_M h\phi^2 u_e^{N-2}\;dv_g\right\},
\end{equation}
where the supremum and infimum are being taken over all $\phi \in E_2(u_e)$ with unit $L^2(u_e)$-norm.
Now, if $u_e$ is, in particular, minimal, meaning that
\begin{equation}
\frac{d}{dt} F_2(u_{e,t}) \Big{|}_{t=0^-} \le 0 \le \frac{d}{dt} F_2(u_{e,t})\Big{|}_{t=0^+}
\end{equation}
for any deformation $u_{e,t}$, then from (\ref{min1}), (\ref{min2}), and the fact that $\lambda_2(u_e)>0$, we get
\begin{equation}\label{min3}
\sup_{\phi}\int_M h\phi^2 u_e^{N-2}\;dv_g \le \inf_{\phi}\int_M h\phi^2 u_e^{N-2}\;dv_g.
\end{equation}
This implies that $m(u_e)=1$; that is, in the case where $u_e$ is minimal for $F_2$ under the assumption $Y(M^n,[g])>0$, the eigenvalue $\lambda_2(u_e)$ is simple. On the other hand, if we assume that $\nu([g])>1$ and that $u_e$ is maximal for $F_2$, as in Theorem \ref{MainTheorem}, then simplicity is not forced by the same techniques.
\end{remark}
\section{Example: $k>1$}\label{Example}
Let $(H,h)$ be a compact Riemannian manifold of dimension $n \geq 2$ with constant negative scalar curvature $R_H$. By scaling down $h$ if necessary, we can assume that $(H,h)$ satisfies the following two properties:
\begin{enumerate}[(i)]
\item The scalar curvature
\begin{align} \label{RH}
R_H < - \frac{4n}{n-1},
\end{align}
\item The first non-zero eigenvalue $\lambda_1(-\Delta_h)$ satisfies
\begin{align} \label{Lbig}
\lambda_1(-\Delta_h) > 1.
\end{align}
\end{enumerate}
Let $(M,g) = (H \times S^1(1), h \oplus dt^2)$. Here $S^1=S^1(1)$ denotes the circle of radius $1$. Then $(M,g)$ is a Riemannian manifold of dimension $m = n+1$ with the following properties:
\begin{enumerate}[(i)]
\item The scalar curvature $R_g = R_H$ is constant.
\item If $L_g$ denotes the conformal laplacian, then
\begin{align} \label{Lform} \begin{split}
L_g &= -\Delta_g + \frac{(m-2)}{4(m-1)} R_g \\
&= -\Delta_g + \frac{n-1}{4n}R_H.
\end{split}
\end{align}
Consequently, the first eigenvalue of $L_g$ is $\lambda_1(L_g) = \frac{(n-1)}{4n}R_H$, with eigenfunctions given by constants.
\item The {\em second} eigenvalue of $L_g$ is given by
\begin{align} \label{L2}
\lambda_2(L_g) = \lambda_1(-\Delta_{S^1(1)}) + \frac{(n-1)}{4n}R_H = 1 + \frac{(n-1)}{4n}R_H < 0,
\end{align}
with eigenfunctions given by the first eigenfunction on the circle factors
\begin{align} \label{trigszed}
\psi_1 = \psi_1(t) = \cos t, \ \ \ \ \psi_2 = \psi_2(t) = \sin t.
\end{align}
In particular, $\lambda_2(L_g)$ has multiplicity two. Note that assumption (\ref{Lbig}) is used here: since $\lambda_1(-\Delta_h) > 1 = \lambda_1(-\Delta_{S^1})$, the smallest non-zero eigenvalue of $-\Delta_g$ on the product comes from the circle factor, so $\lambda_2(L_g)$ is as claimed.
\end{enumerate}
\begin{theorem}\label{example} The product metric $g$ is maximal in its conformal class. In other words, if $\tilde{g} \in [g]$, then
\begin{align} \label{gmax}
\lambda_2(L_{\tilde{g}}) \text{Vol}(M,\tilde{g})^{\frac{2}{m}} \leq \lambda_2(L_g) \text{Vol}(M,g)^{\frac{2}{m}}.
\end{align}
Moreover, if equality holds then the conformal factor $u$, where $\tilde{g} = u^{\frac{4}{m-2}} g$, is constant.
Consequently, $(M,g) = (H \times S^1, h \oplus dt^2)$ is a maximal metric for which the eigenfunctions corresponding to $\lambda_2(g)$ define a harmonic map
\begin{align} \label{S1}
\Psi = (\psi_1, \psi_2) : M \rightarrow S^1,
\end{align}
given by projection onto the $S^1$-factor.
\end{theorem}
\medskip
\begin{remark} As noted in the introduction, the argument below can be easily adapted to metrics on products of $H$ with spheres of any dimension, producing examples of maximal metrics for which $\lambda_2(L)$ has high multiplicity.
\end{remark}
\medskip
\begin{proof} Let $\tilde{g} = u^{\frac{4}{m-2}}g \in [g]$ with $u \in C^{\infty}(M)$ and $u > 0$, where $m = n+1 = \dim M$. We want to show
that
\begin{align} \label{goal1}
\lambda_2( \tilde{g}) \text{Vol}(M,\tilde{g})^{\frac{2}{m}} \leq \lambda_2(g) \text{Vol}(M,g)^{\frac{2}{m}}.
\end{align}
If we normalize $u$ so that
\begin{align} \label{bv}
\mbox{Vol}(M,\tilde{g}) = \int_M u^N \, dv_g = 1,
\end{align}
where $N = \frac{2m}{m-2}$, then (\ref{goal1}) is equivalent to
\begin{align} \label{goal2}
\lambda_2( \tilde{g}) \leq \lambda_2(g) \text{Vol}(M,g)^{\frac{2}{m}}.
\end{align}
Let $w_2$ be an eigenfunction associated to $\tilde{\lambda}_2 = \lambda_2(\tilde{g})$:
\begin{align*}
L_{\tilde{g}} w_2 = \tilde{\lambda}_2 w_2.
\end{align*}
By conformal invariance of $L$, the function
\begin{align} \label{p2}
\phi_2 = \dfrac{w_2}{u}
\end{align}
is an eigenfunction satisfying
\begin{align} \label{Lphi}
L_g \phi_2 = \tilde{\lambda}_2 \phi_2 u^{N-2}.
\end{align}
We normalize $\phi_2$ so that
\begin{align} \label{p2norm}
\int_M \phi_2^2 \, u^{N-2} \, dv_g = 1,
\end{align}
hence
\begin{align} \label{Ep}
\int_M \phi_2 \, L_g \phi_2 \, dv_g = \tilde{\lambda}_2.
\end{align}
Likewise, let $\phi_1$ be the generalized eigenfunction associated to $\tilde{\lambda}_1 = \lambda_1(\tilde{g})$:
\begin{align} \label{Lphi1}
L_g \phi_1 = \tilde{\lambda}_1 \phi_1 u^{N-2}.
\end{align}
We also normalize $\phi_1$ so that
\begin{align} \label{p1norm}
\int_M \phi_1^2 \, u^{N-2} \, dv_g = 1.
\end{align}
It follows from the strong maximum principle that $\phi_1 > 0$.
Let $t \in [0,2\pi)$ be the coordinate on $S^1$, then
\begin{align} \label{trigs}
\psi_1 = \psi_1(t) = \cos t, \ \ \ \ \psi_2 = \psi_2(t) = \sin t,
\end{align}
are first eigenfunctions for the laplacian on the $S^1$-factor. Since these are also eigenfunctions for $\lambda_2(g)$, they satisfy
\begin{align} \label{Lpsi}
L_g \psi_i = \lambda_2(g) \psi_i, \ \ \ \ i = 1,2.
\end{align}
We also have the integral formulas
\begin{align} \label{Ints}
\int_M \psi_1^2 \, dv_g = \int_M \psi_2^2 \, dv_g = \text{Vol}(H,h)\cdot \pi = \frac{1}{2} \text{Vol}(M,g).
\end{align}
Since
\begin{align} \label{circle}
\psi_1^2 + \psi_2^2 = 1,
\end{align}
it follows that
\begin{align} \label{bal}
\int_M \left( \psi_1^2 + \psi_2^2 \right) \, u^{N-2} \, dv_g = \int_M u^{N-2} \, dv_g.
\end{align}
Therefore, we may assume (after relabeling if necessary) that $\psi_1$ satisfies
\begin{align} \label{half}
\int_M \psi_1^2 \, u^{N-2} \, dv_g \leq \frac{1}{2} \int_M u^{N-2} \, dv_g.
\end{align}
The following Lemma reduces the proof of the theorem to a key inequality:
\begin{lemma} Suppose
\begin{align} \label{key}
\tilde{\lambda}_2 \leq \dfrac{ \frac{1}{2} \text{Vol}(M,g) \lambda_2(g)}{ \int_M \psi_1^2 \, u^{N-2} \, dv_g }.
\end{align}
Then (\ref{goal2}) holds. Moreover, if equality holds in (\ref{key}) then $u \equiv const.$ and equality holds in (\ref{goal2}). \end{lemma}
\begin{proof} Note first that $\tilde{\lambda}_2 < 0$: by (\ref{L2}) we have $\lambda_2(L_g) < 0$, and the sign of $\lambda_2$ is a conformal invariant. If (\ref{key}) holds, then combining it with (\ref{half}), which reverses upon multiplication by the negative number $\tilde{\lambda}_2$, we obtain
\begin{align*}
\frac{1}{2} \text{Vol}(M,g) \lambda_2(g) &\geq \tilde{\lambda}_2 \int_M \psi_1^2 \, u^{N-2} \, dv_g \geq \frac{1}{2} \tilde{\lambda}_2 \int_M u^{N-2} \, dv_g.
\end{align*}
By H\"older's inequality,
\begin{align} \label{Hold} \begin{split}
\frac{1}{2} \text{Vol}(M,g) \lambda_2(g) &\geq \frac{1}{2} \tilde{\lambda}_2 \int_M u^{N-2} \, dv_g \\
&\geq \frac{1}{2} \tilde{\lambda}_2 \left( \int_M u^N \, dv_g \right)^{\frac{2}{m}} \left( \int_M \, dv_g \right)^{\frac{m-2}{m}}\\ &= \frac{1}{2} \tilde{\lambda}_2 \text{Vol}(M,g)^{\frac{m-2}{m}},
\end{split}
\end{align}
and we conclude
\begin{align*}
\lambda_2(g) \, \text{Vol}(M,g)^{\frac{2}{m}} \geq \tilde{\lambda}_2,
\end{align*}
and we see that (\ref{goal2}) holds.
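In the H\"older step of (\ref{Hold}), the conjugate exponents are $\frac{m}{2}$ and $\frac{m}{m-2}$: since $N - 2 = \frac{4}{m-2}$, we have $(N-2)\cdot\frac{m}{2} = N$, so
\begin{align*}
\int_M u^{N-2} \cdot 1 \, dv_g \leq \left( \int_M u^{N} \, dv_g \right)^{\frac{2}{m}} \left( \int_M \, dv_g \right)^{\frac{m-2}{m}},
\end{align*}
and multiplying by the negative number $\frac{1}{2}\tilde{\lambda}_2$ reverses this inequality, giving the second line of (\ref{Hold}).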
If equality holds then we must have equality in H\"older's inequality in (\ref{Hold}), which forces $u \equiv const.$
\end{proof}
To prove (\ref{key}) holds, we will use the min-max characterization of $\lambda_2(L)$:
\begin{align} \label{mmax}
\tilde{\lambda}_2 = \inf_{\Sigma^2 \subset W^{1,2}(M,g) } \sup_{w \in \Sigma^2 \setminus \{ 0 \}} \dfrac{ E_g(w)}{\int w^2 u^{N-2} \, dv_g },
\end{align}
where $\Sigma^2 \subset W^{1,2}:=W^{1,2}(M,g)$ denotes a $2$-dimensional subspace of $W^{1,2}$ and
\begin{align} \label{Energy}
E_g(w) = \int_M w L_g w \, dv_g.
\end{align}
In particular, for any two-dimensional subspace $\Sigma^2 \subset W^{1,2}$,
\begin{align} \label{mmax2}
\tilde{\lambda}_2 \leq \sup_{w \in \Sigma^2 \setminus \{ 0 \}} \dfrac{ E_g(w)}{\int_M w^2 u^{N-2} \, dv_g }.
\end{align}
Let
\begin{align*}
\Sigma^2 = \Sigma_0 = \mbox{span}\{ \psi_1, \phi_2 \}.
\end{align*}
By homogeneity of the Rayleigh quotient,
\begin{align} \label{Rtrig0}
\sup_{w \in \Sigma_0 \setminus \{ 0 \}} \dfrac{ E_g(w)}{\int_M w^2 u^{N-2} \, dv_g } = \max_{ \theta \in [0,2\pi]} \dfrac{ E_g( \cos \theta \, \psi_1 + \sin \theta \, \phi_2) }{ \int_M ( \cos \theta \, \psi_1 + \sin \theta \, \phi_2)^2 u^{N-2} \, dv_g },
\end{align}
hence
\begin{align} \label{Rtrig}
\tilde{\lambda}_2 \leq \max_{\theta \in [0,2\pi]} \dfrac{ E_g( \cos \theta \, \psi_1 + \sin \theta \, \phi_2) }{ \int_M ( \cos \theta \, \psi_1 + \sin \theta \, \phi_2)^2 u^{N-2} \, dv_g }.
\end{align}
By (\ref{Energy}),
\begin{align} \label{E} \begin{split}
E_g(& \cos \theta \, \psi_1 + \sin \theta \, \phi_2) \\
&= \int_M (\cos \theta \, \psi_1 + \sin \theta \, \phi_2) \, L_g (\cos \theta \, \psi_1 + \sin \theta \, \phi_2) \, dv_g \\
&= \cos^2 \theta \int_M \psi_1 L_g \psi_1 \, dv_g + \sin \theta \cos \theta \int_M \psi_1 \, L_g \phi_2 \, dv_g \\
& \ \ \ \ + \sin \theta \cos \theta \int_M \phi_2 \, L_g \psi_1 \, dv_g + \sin^2 \theta \int_M \phi_2 L_g \phi_2 \, dv_g.
\end{split}
\end{align}
If we define
\begin{align} \label{alpha}
\alpha = \int_M \psi_1 \, \phi_2 \, u^{N-2} \, dv_g,
\end{align}
then since $L$ is self-adjoint,
\begin{align} \label{SA} \begin{split}
\sin \theta \cos \theta \int_M \psi_1 \, L_g \phi_2 \, dv_g &+ \sin \theta \cos \theta \int_M \phi_2 \, L_g \psi_1 \, dv_g \\
&= 2 \sin \theta \cos \theta \int_M \psi_1 \, L_g \phi_2 \, dv_g \\
&= 2 \tilde{\lambda}_2 \sin \theta \cos \theta \int_M \psi_1 \, \phi_2 \, u^{N-2} \, dv_g \\
&= 2 \tilde{\lambda}_2 \alpha \, \sin \theta \cos \theta.
\end{split}
\end{align}
By (\ref{Ints}),
\begin{align} \label{E1}
\int_M \psi_1 L_g \psi_1 \, dv_g = \lambda_2(g) \int_M \psi_1^2 \, dv_g = \frac{1}{2} \text{Vol}(M,g) \lambda_2(g).
\end{align}
Substituting (\ref{SA}), (\ref{E1}), and (\ref{Ep}) into (\ref{E}), we obtain
\begin{align} \label{Efull}
E_g( \cos \theta \, \psi_1 + \sin \theta \, \phi_2) = \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) \cos^2 \theta + \tilde{\lambda}_2 \Big[ 2 \alpha \sin \theta \cos \theta + \sin^2 \theta\Big].
\end{align}
Turning to the denominator in (\ref{Rtrig}), by the definition of $\alpha$ and the normalization of $\phi_2$,
\begin{align} \label{D1} \begin{split}
\int_M ( \cos \theta \, & \psi_1 + \sin \theta \, \phi_2)^2 u^{N-2} \, dv_g \\
& = \int_M \big( \cos^2 \theta \, \psi_1^2 + 2 \sin \theta \cos \theta \psi_1 \phi_2 + \sin^2 \theta \phi_2^2 \big) u^{N-2} \, dv_g \\
&= \cos^2 \theta \int_M \psi_1^2 u^{N-2} \, dv_g + 2 \sin \theta \cos \theta \int_M \psi_1 \phi_2 u^{N-2} \, dv_g + \sin^2 \theta \int_M \phi_2^2 u^{N-2} \, dv_g \\
&= \cos^2 \theta \int_M \psi_1^2 u^{N-2} \, dv_g + \Big[ 2 \alpha \sin \theta \cos \theta + \sin^2 \theta\Big].
\end{split}
\end{align}
Therefore, the Rayleigh quotient of $w_{\theta} = (\cos \theta) \psi_1 + (\sin \theta) \phi_2$ is
\begin{align} \label{Rw1}
f(\theta) := \mathcal{R}[w_{\theta}] = \dfrac{ \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) \cos^2 \theta + \tilde{\lambda}_2 \Big[ 2 \alpha \sin \theta \cos \theta + \sin^2 \theta\Big] }{ \cos^2 \theta \int_M \psi_1^2 u^{N-2} \, dv_g + \Big[ 2 \alpha \sin \theta \cos \theta + \sin^2 \theta\Big]}.
\end{align}
We need to consider two cases: \medskip
\noindent {\bf Case 1.} First, suppose $f$ attains its maximum at $\theta_c \in [0,2\pi]$ for which $\cos \theta_c = 0$. Then
\begin{align} \label{case1}
\max_{ \theta \in [0, 2\pi] } f = \tilde{\lambda}_2.
\end{align}
In this case, we need the following lemma:
\begin{lemma} Suppose $\Sigma \subset W^{1,2}$ is a two-dimensional subspace such that
\begin{align} \label{sat}
\sup_{w \in \Sigma^2 \setminus \{ 0 \}} \dfrac{ E_g(w)}{\int w^2 u^{N-2} \, dv_g } = \tilde{\lambda}_2.
\end{align}
Then
\begin{align} \label{split}
\Sigma \subset E_1 \oplus E_2,
\end{align}
where $E_1$ and $E_2$ are the spaces of (generalized) eigenfunctions corresponding to $\tilde{\lambda}_1$ and $\tilde{\lambda}_2$ respectively.
\end{lemma}
\begin{proof} For classical eigenvalues this result is a simple consequence of the minimax principle. As above let $\phi_1 \in E_1$ be a generalized eigenfunction associated to $\tilde{\lambda}_1$, and
\begin{align*}
E_1^{\bot} = \{ v \in W^{1,2} \, : \, \int_M v \, \phi_1 \, u^{N-2} \, dv_g = 0 \}.
\end{align*}
Let $w_0 \in \Sigma \cap E_1^{\bot}$ with $w_0 \neq 0$. Such a $w_0$ must exist, since the kernel of the linear functional $v \mapsto \int_M v \, \phi_1 \, u^{N-2} \, dv_g$ restricted to the two-dimensional space $\Sigma$ is at least one-dimensional. By the variational characterization of $\tilde{\lambda}_2$,
\begin{align*}
\dfrac{ E_g(w_0)}{\int w_0^2 u^{N-2} \, dv_g } \geq \tilde{\lambda}_2,
\end{align*}
with equality if and only if $w_0 \in E_2$. However, by assumption,
\begin{align*}
\tilde{\lambda}_2 \geq \dfrac{ E_g(w_0)}{\int w_0^2 u^{N-2} \, dv_g }.
\end{align*}
It follows that $w_0 \in E_2$, and (\ref{split}) must hold.
\end{proof}
Applying the lemma to $\Sigma_0 = \mbox{span}\{ \psi_1, \phi_2 \}$, we conclude that
\begin{align} \label{sub1}
\psi_1 = c_1 \phi_1 + c_2 \phi_2,
\end{align}
where $c_1, c_2$ are constants. We claim that $c_1 = 0$. If not, then since $\phi_1 > 0$, there cannot be a point $x_0 \in M$ at which
\begin{align*}
\psi_1(x_0) = \phi_2(x_0) = 0.
\end{align*}
Applying $L_g$ to both sides of (\ref{sub1}) and using the fact that $\phi_1$ and $\phi_2$ are generalized eigenfunctions for $\tilde{\lambda}_1$ and $\tilde{\lambda}_2$ respectively, we get
\begin{align} \label{Lk} \begin{split}
L_g \psi_1 &= c_1 L_g \phi_1 + c_2 L_g \phi_2 \\
&= c_1 \tilde{\lambda}_1 \phi_1 u^{N-2} + c_2 \tilde{\lambda}_2 \phi_2 u^{N-2} \\
&= \tilde{\lambda}_1 \left( \psi_1 - c_2 \phi_2 \right)u^{N-2} + c_2 \tilde{\lambda}_2 \phi_2 u^{N-2} \\
&= \tilde{\lambda}_1 \psi_1 u^{N-2} + c_2 \left( \tilde{\lambda}_2 - \tilde{\lambda}_1 \right) \phi_2 u^{N-2}.
\end{split}
\end{align}
Since $\psi_1$ is an eigenfunction for $\lambda_2(g)$, this implies
\begin{align} \label{Lk2}
\lambda_2(g) \psi_1 = \tilde{\lambda}_1 \psi_1 u^{N-2} + c_2 \left( \tilde{\lambda}_2 - \tilde{\lambda}_1 \right) \phi_2 u^{N-2}.
\end{align}
Let $x_0$ be a point at which $\psi_1(x_0) = 0$, then $\phi_2(x_0) \neq 0$. But by (\ref{Lk2}),
\begin{align}
0 = c_2 \big( \tilde{\lambda}_2 - \tilde{\lambda}_1 \big) \phi_2(x_0) u(x_0)^{N-2},
\end{align}
hence $c_2 = 0$, using $\tilde{\lambda}_1 < \tilde{\lambda}_2$ (the first eigenvalue is simple). However, this would imply $\psi_1 = c_1 \phi_1$. Since $\phi_1 > 0$ but $\psi_1$ obviously changes sign, we get a contradiction. Therefore, $c_1 = 0$.
Since $c_1 = 0$, we have
\begin{align*}
\psi_1 = c_2 \phi_2.
\end{align*}
From this it immediately follows that
\begin{align*}
\frac{1}{2} \text{Vol}(M,g) \lambda_2(g) &= E_g(\psi_1) = c_2^2 E_g(\phi_2) = c_2^2 \tilde{\lambda}_2, \\
\int_M \psi_1^2 \, u^{N-2} \, dv_g &= c_2^2,
\end{align*}
hence
\begin{align*}
\dfrac{ \frac{1}{2} \text{Vol}(M,g) \lambda_2(g)}{ \int_M \psi_1^2 \, u^{N-2} \, dv_g } = \tilde{\lambda}_2,
\end{align*}
so equality holds in (\ref{key}). In particular, in this case equality holds in (\ref{goal2}). \medskip
\noindent {\bf Case 2.} The final case to consider is when the maximum of $f$ occurs at some $\theta \in [0,2\pi]$ for which $\cos \theta \neq 0$. If $\cos \theta \neq 0$, we can rewrite $f$ as
\begin{align} \label{fdef}
f(\theta) = \dfrac{ \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) + \tilde{\lambda}_2 \left[ 2 \alpha \tan \theta + \tan^2 \theta \right] }{ \int_M \psi_1^2 u^{N-2} \, dv_g + \left[ 2 \alpha \tan \theta + \tan^2 \theta \right]},
\end{align}
with domain $(-\frac{\pi}{2}, \frac{\pi}{2})$. If we take the derivative of $f$, we see that $\theta_c \in (-\frac{\pi}{2},\frac{\pi}{2})$ is a critical point of $f$ if and only if
\begin{align*}
2 \left( \tilde{\lambda}_2 \int_M \psi_1^2 \, u^{N-2} \, dv_g - \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) \right) \left(\alpha + \tan \theta_c \right) \sec^2 \theta_c = 0.
\end{align*}
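Indeed, writing $A = \frac{1}{2}\text{Vol}(M,g)\lambda_2(g)$, $B = \int_M \psi_1^2 \, u^{N-2}\, dv_g$, and $s(\theta) = 2\alpha\tan\theta + \tan^2\theta$, so that $f = \frac{A + \tilde{\lambda}_2\, s}{B + s}$, the quotient rule gives
\begin{align*}
f'(\theta) = \frac{\left(\tilde{\lambda}_2 B - A\right)s'(\theta)}{\left(B + s(\theta)\right)^2}, \qquad s'(\theta) = 2\left(\alpha + \tan\theta\right)\sec^2\theta,
\end{align*}
whose numerator is precisely the expression displayed above.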
Therefore, we have two possibilities at the point where $f$ attains its maximum. First, if
\begin{align*}
\left( \tilde{\lambda}_2 \int_M \psi_1^2 \, u^{N-2} \, dv_g - \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) \right) = 0,
\end{align*}
then equality holds in (\ref{key}) and we are done. The second possibility is that $\tan \theta_c = -\alpha$. Plugging this into the formula for $f$, we find
\begin{align} \label{pen}
\tilde{\lambda}_2 \leq \max f = \dfrac{ \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) - \tilde{\lambda}_2 \alpha^2 }{ \int_M \psi_1^2 \, u^{N-2} \, dv_g - \alpha^2}.
\end{align}
Clearing the denominator, which is positive ($\int_M \psi_1^2 \, u^{N-2}\, dv_g - \alpha^2 \geq 0$ by the Cauchy--Schwarz inequality, and equality would force $\psi_1$ proportional to $\phi_2$, putting us in the first possibility), gives
\begin{align} \label{pen2}
\tilde{\lambda}_2 \int_M \psi_1^2 \, u^{N-2} \, dv_g - \alpha^2 \tilde{\lambda}_2 \leq \frac{1}{2} \text{Vol}(M,g) \lambda_2(g) - \tilde{\lambda}_2 \alpha^2,
\end{align}
hence
\begin{align} \label{ult}
\tilde{\lambda}_2 \int_M \psi_1^2 \, u^{N-2} \, dv_g \leq \frac{1}{2} \text{Vol}(M,g) \lambda_2(g),
\end{align}
and once again (\ref{key}) holds.
Now suppose equality holds in (\ref{goal2}). Then we have equality in (\ref{ult}), and therefore equality in (\ref{key}). It follows that $u$ is constant.
\end{proof}
Intelligence, surveillance and reconnaissance (ISR) are critical missions within military operations, and modern-day combat zones pose important challenges for ISR \cite{chgral, zaloga2011unmanned, krishnamoorthy2012uav}. ISR operations are maintained through effective and efficient information collection. Unmanned ground vehicles (UGVs) are an important asset for ISR, target engagement, convoy operations for resupply missions, search and rescue, environmental mapping, and disaster-area surveying and mapping. Depending upon the nature of the mission, UGVs are preferred over other collection assets; instances where UGVs are preferred include terrain unsuitable for humans or unmanned aerial vehicles (UAVs), harsh and hostile environments, and information collection processes that are tedious for humans.
Despite the numerous advantages of UGVs, their size and limited payload capacity lead to fuel constraints, and therefore they are required to make one or more refueling stops in a long mission. Moreover, these operations encounter unknown terrain or obstacles, resulting in uncertainty in the fuel (or time) required to travel among different points of interest (POIs); for example, in hostile terrain with improvised explosive devices (IEDs), conducting anti-IED sweeps and explosive ordnance disposal can lead to unexpected delays for UGVs. In fact, in many applications, even the locations of the POIs are uncertain due to an inaccurate a priori map, imperfect and noisy exteroceptive sensory information, or perturbations; for example, in a fire monitoring application, the POIs of UGVs change based on the random propagation of the fire \cite{casbeer2005forest}. Other types of system uncertainty include the availability of UGVs with specific attributes such as sensors or terrain-compatible vehicle dynamics.
Due to these challenges, to successfully harness the benefits of UGVs, it is critical to efficiently solve the UGV path-planning problem (UGVPP). Note that NP-hard problems such as the multiple traveling salesman problem (TSP) and the distance-constrained vehicle routing problem are special cases of the UGVPP. In this paper, we consider extensions of the UGVPP with the aforementioned uncertainties, and refer to this class of problems as the stochastic UGVPP (S-UGVPP). The motion planning literature \cite{dadkhah2012survey} for autonomous vehicles (AVs) classifies uncertainties into four categories: vehicle dynamics, knowledge of the environment, operational environment, and pose information. Uncertainties in the operational environment, such as wind and atmospheric turbulence, pertain mainly to UAVs, and pose information concerns the localization of UGVs. This paper focuses on the UGVPP with uncertainties in vehicle dynamics and knowledge of the environment. Previous works include an analysis of the robustness of a modular vehicle fleet under uncertain vehicle demand \cite{li2017robustness}; heuristic path planning for multiple UGVs with deterministic data \cite{bellingham2003multi}; and single-vehicle path planning for UGVs under environmental uncertainties \cite{evers2014online, evers2014robust}. Path planning for multiple UGVs that simultaneously considers uncertainties in vehicle dynamics and in the environment within an algebraic modeling framework is new to the literature. Furthermore, these algorithms are also applicable to similar challenges arising in path planning for UAVs and underwater vehicles, which are used for crop monitoring, ocean bathymetry, forest fire monitoring, border surveillance, and disaster management.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[scale=0.4]{vehPaths1.pdf}
\end{center}
\caption{An illustration of treating the availability of UGVs as uncertain in the UGVPP. (a) Optimal solution for a deterministic UGVPP instance, which is sub-optimal when the availability of the UGVs is uncertain. (b-c) Optimal solutions for stochastic UGVPP instances with different probabilities of availability for UGV2. Note that as the probability that UGV2 is available decreases, the number of POIs assigned to UGV2 also decreases.}
\label{Fig1}
\end{figure}
\section{Notation} \label{sec:notations}
Let $T= \{t_1,\dots,t_n\}$ denote the set of points of interest (POIs), let $d_0$ denote the depot where a set of heterogeneous unmanned ground vehicles (UGVs) $M:=\{1,\ldots,|M|\}$, each with fuel capacity $F_m, m \in M$, are initially stationed, let $\overbar{D}=\{d_1,\dots, d_k\}$ denote the set of additional $k$ depots or refueling sites, and let $D = \overbar D \bigcup \{d_0\}$. All of the $|M|$ UGVs stationed at the depot $d_0$ are assumed to be fueled to capacity. The model formulations are defined on a directed graph $G = (V, E)$, where $V = T\cup D$ denotes the set of vertices and $E$ denotes the set of edges joining any pair of vertices. We assume that $G$ does not contain any self-loops. For each edge $(i,j) \in E$, we let $c_{ij}$ and $\hat{f}_{ij}^m$ represent the travel cost and the nominal fuel that will be consumed by UGV $m \in M$ while traversing the edge $(i,j)$. We remark that $\hat{f}_{ij}^m\,$ is directly computed using the length of the edge $(i,j)$ and the fuel economy of the UGV. Additional notation that will be used in the mathematical formulation is as follows: for any set $S \subset V$, $\delta^+(S)=\{(i,j) \in E: i\in S, j\notin S\}$ and $\delta^-(S)=\{(i,j) \in E: i\notin S, j\in S\}$. When $S = \{i\}$, we shall simply write $\delta^+(i)$ and $\delta^-(i)$ instead of $\delta^+(\{i\})$ and $\delta^-(\{i\})$, respectively.
The notation introduced next describes the uncertainty associated with the UGVs' fuel consumption. Let $\bm f$ denote a discrete random vector representing the fuel consumed by the UGVs when traversing the edges in $E$. The vector $\bm f$ has $|E|\times |M|$ components, one for each edge-vehicle pair, and the random variable in $\bm f$ corresponding to edge $(i,j)$ and UGV $m$ is denoted by $f_{ij}^m$. Let $\Omega$ denote the set of scenarios for $\bm f$, where $\omega \in \Omega$ represents a random event or realization of the random vector $\bm f$ with probability of occurrence $p(\omega)$. We use $f_{ij}^m(\omega)$ to denote the fuel consumed by UGV $m$ when traversing the edge $(i,j)$, and $\bm f(\omega) = \bigg(\big\{f^1_{ij}(\omega)\big\}_{(i,j)\in E}, \ldots, \big\{f^{|M|}_{ij}(\omega)\big\}_{(i,j)\in E} \bigg)$ to denote the realization $\omega \in \Omega$. Finally, we use $\mathbb E$ to denote the expectation operator, i.e., $\mathbb{E}_\Omega[\alpha] = \sum_{\omega \in \Omega}p(\omega)\alpha(\omega)$. Table \ref{tab:notations} lists all the notation introduced in this section for ease of reading. In the next section, we present two-stage stochastic program formulations using this notation.
\begin{table}[htbp]
\centering
\scalebox{0.6}{
\begin{tabular}{ll}
\toprule
Symbol & explanation \\
\midrule
$T = \{t_1, \dots, t_n\}$ & set of $n$ POIs \\
$d_0$ & depot where the $|M|$ UGVs are initially stationed \\
$\overbar{D} = \{d_1, \dots, d_k\}$ & additional depots/refueling sites \\
$D = \overbar D \bigcup \{d_0\}$ & set of depots \\
$F_m$ & fuel capacity of UGV $m$ \\
$G = (V, E)$ & directed graph with $V = T\cup D$ \\
$c_{ij}$ & travel cost for the edge $(i,j) \in E$ \\
$\hat{f}_{ij}^m$ & nominal fuel consumed by UGV $m$ to traverse the edge $(i,j) \in E$ \\
$\bm f$ & random vector of the fuel consumed by the UGVs \\
$p_j^m$ & profit or incentive collected when UGV $m$ visits POI $j$ \\
$\Omega$ & set of scenarios for $\bm f$ \\
$\omega \in \Omega$ & realization of the random vector $\bm f$ \\
$p(\omega)$ & probability of occurrence of $\omega$ \\
$\mathbb E$ & expectation operator \\
\bottomrule
\end{tabular}}
\caption{Table of notations}
\label{tab:notations}
\end{table}
\section{Mathematical formulation} \label{sec:formulation}
The first-stage decision variables represent `here-and-now' decisions that are determined before the realization of randomness, and second-stage decisions are determined after scenarios representing the uncertainties are presented. The first-stage decision variables in the stochastic program are used to compute the initial set of routes for each of the UGVs such that either each POI is visited by only one of the UGVs or all the UGVs collect maximum incentives from the POIs, while ensuring that no UGV ever runs out of fuel as it traverses its route. The fuel constraint for each UGV in the first-stage is enforced using the nominal fuel consumption value $\hat f_{ij}^m$ for each edge $(i,j) \in E$. For a realization $\omega \in \Omega$, the second-stage decision variables are used to compute the recourse costs that must be added to the first-stage routes based on the realized values of $f_{ij}^m(\omega)$ for all $(i,j) \in E$ and $m \in M$.
Specifically, the first-stage decision variables are as follows: each edge $(i,j)\in E$ and UGV $m \in M$ is associated with a variable $x_{ij}^m$ that equals $1$ if the edge $(i,j)$ is traversed by UGV $m$, and $0$ otherwise. We let $\bm x \in \{0,1\}^{|E| \times |M|}$ denote the vector of all decision variables $x_{ij}^m$. There is also a flow variable $z_{ij}$ associated with each edge $(i,j) \in E$ that denotes the cumulative nominal fuel consumed by a UGV when it reaches vertex $j$ via the edge $(i,j)$, measured from its most recent visit to a depot. Additionally, for any $A \subseteq E$, we let $x^m(A) = \sum_{(i,j)\in A} x_{ij}^m$. Analogous to the variable $x_{ij}^m$ in the first stage, we define a binary variable $y_{ij}^m(\omega)$ for each edge $(i,j)\in E$. The variables $y_{ij}^m(\omega)$ are used to define the refueling trips needed for any vehicle when the route defined by the first-stage feasible solution $\bm x$ is not feasible for the realization $\omega \in \Omega$.
\section{Formulation 1} \label{sec:for2}
\textit{Given a team of heterogeneous UGVs (each UGV with a different fuel capacity and different travel times between POIs), multiple refueling depots, a set of target POIs to visit, and stochastic travel times or fuel consumption, find a path for each UGV such that each POI is visited by exactly one UGV and the overall distance traveled by the UGVs is minimized}.
\subsection{Objective function} \label{subsec:obj2} The objective function for the two-stage stochastic programming model is the sum of the first-stage travel cost and the expected second-stage recourse cost. The second-stage recourse cost for a realization $\omega \in \Omega$ of the fuel consumption of the vehicles is the cost of the additional refueling trips that are required for the realization $\omega$. The recourse cost is a function of the first-stage routing decision $\bm x$ and the realization $\omega$. Letting the recourse cost be denoted by $\beta(\bm x, \bm f(\omega))$, the objective function for the two-stage stochastic optimization problem is given by:
\begin{flalign}
\min \,\, C, \text{ where } C \triangleq \sum\limits_{\substack{(i,j) \in E \\ m \in M}} \hat f_{ij}^m x_{ij}^m + \mathbb{E}_{\Omega} \left[ \beta(\bm x,\bm f) \right] \nonumber \\ = \sum\limits_{\substack{(i,j) \in E \\ m \in M}} \hat f_{ij}^m x_{ij}^m + \sum_{\omega \in \Omega} p(\omega) \beta(\bm x, \bm f(\omega)). \label{eq:obj2}
\end{flalign}
\subsection{First-stage routing constraints} \label{subsec:2_stage}
The constraints for the first-stage enforce the routing constraints, i.e., the requirements that each POI $i\in T$ should be visited by only one of the UGVs and that each UGV never runs out of fuel as it traverses its route. In the first-stage, the fuel constraint is enforced using the nominal value of fuel consumed by any UGV to traverse any edge $(i,j) \in E$. The first-stage routing constraints are as follows:
\begin{subequations}
\begin{flalign}
&x^m(\delta^+(d)) = x^m(\delta^-(d)) \quad \forall d\in D\setminus\{d_0\}, m \in M, \label{f2_1} & \\
&x^m(\delta^+(d_0)) = 1, \quad \forall m \in M, \label{f2_2} &\\
& x^m(\delta^-(d_0)) = 1, \quad \forall m \in M, \label{f2_3} &\\
&\sum_{m \in M} x^m(\delta^+(S)) \geqslant 1 \quad \forall S\subset V\setminus\{d_0\} : S\cap T \neq \emptyset, \label{f2_4} & \\
&\sum_{m \in M} x^m(\delta^+(i)) = 1 \text{ and } \sum_{m \in M} x^m(\delta^-(i)) = 1 \quad \forall i \in T, \label{f2_5} &\\
& x^m(\delta^+(i)) = x^m(\delta^-(i)) \quad \forall i \in T, m \in M, \label{f2_61} &
\end{flalign}
\label{eq:1stage}
\end{subequations}
\begin{subequations}
\begin{flalign}
& \sum_{j\in V}z_{ij}^m - \sum_{j\in V}z_{ji}^m = \sum_{j\in V}\hat f_{ij}^mx_{ij}^m \quad \forall i \in T, \forall m \in M, \label{f2_6} & \\
&z_{di}^m = \hat f_{di}^mx_{di}^m \quad \forall i\in V,\, d \in D, m \in M, \label{f2_7} & \\
&0 \leqslant z_{ij}^m \leqslant F_mx_{ij}^m \quad \forall (i,j) \in E, m \in M, \label{f2_8} & \\
&x_{ij}^m \in \{0,1\} \quad \forall (i,j) \in E, m \in M. \label{f2_9} &
\end{flalign}
\label{eq:1stage-fuel}
\end{subequations}
Constraint \eqref{f2_1} forces the in-degree and out-degree of each refueling station to be equal. Constraints \eqref{f2_2} and \eqref{f2_3} ensure that all the UGVs leave and return to depot $d_0$. Constraint \eqref{f2_4} ensures that a feasible solution is connected. For each POI $i$, the pair of constraints in \eqref{f2_5} require that exactly one UGV visits POI $i$. Constraint \eqref{f2_61} forces the in-degree and out-degree of each POI to be equal. Constraint \eqref{f2_6} eliminates sub-tours of the targets and defines the flow variables $z_{ij}^m$ for each edge $(i,j) \in E$ using the nominal fuel consumption values $\hat f_{ij}^m$. Constraints \eqref{f2_7}--\eqref{f2_8} together impose the fuel constraints on the routes of all the UGVs. Finally, constraint \eqref{f2_9} imposes binary restrictions on the decision variables $x_{ij}^m$.
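To make the flow-based fuel bookkeeping in \eqref{f2_6}--\eqref{f2_8} concrete, the following sketch builds these constraints for a toy instance using the PuLP modeling library; the instance data, variable names, and the choice of PuLP are ours for illustration and are not part of the formulation above.
\begin{verbatim}
# Minimal sketch of the first-stage fuel-flow constraints (f2_6)-(f2_8)
# on a toy instance; all data and names here are illustrative only.
from itertools import permutations
import pulp

depots = ["d0", "d1"]
targets = ["t1", "t2", "t3"]
V = depots + targets
E = list(permutations(V, 2))          # complete directed graph, no self-loops
M = ["ugv1", "ugv2"]                  # two heterogeneous UGVs
F = {"ugv1": 60.0, "ugv2": 45.0}      # fuel capacities F_m
fhat = {(i, j, m): 10.0 for (i, j) in E for m in M}   # nominal fuel

prob = pulp.LpProblem("first_stage", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, j, m) for (i, j) in E for m in M],
                          cat="Binary")
z = pulp.LpVariable.dicts("z", [(i, j, m) for (i, j) in E for m in M],
                          lowBound=0)
prob += pulp.lpSum(fhat[i, j, m] * x[i, j, m] for (i, j) in E for m in M)

for m in M:
    for i in targets:  # (f2_6): fuel used accumulates along the route
        prob += (pulp.lpSum(z[i, j, m] for j in V if j != i)
                 - pulp.lpSum(z[j, i, m] for j in V if j != i)
                 == pulp.lpSum(fhat[i, j, m] * x[i, j, m]
                               for j in V if j != i))
    for d in depots:   # (f2_7): the fuel count restarts at every depot
        for i in V:
            if i != d:
                prob += z[d, i, m] == fhat[d, i, m] * x[d, i, m]
    for (i, j) in E:   # (f2_8): accumulated fuel never exceeds the tank F_m
        prob += z[i, j, m] <= F[m] * x[i, j, m]

print("constraints built:", len(prob.constraints))
\end{verbatim}
The same pattern extends directly to the remaining first-stage constraints \eqref{f2_1}--\eqref{f2_9}.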
\subsection{Second-stage constraints} \label{f2-ss}
The second-stage model for a fixed $\bm x$ and $\bm f(\omega)$ is given as follows:
\begin{subequations}
\begin{flalign}
&\beta(\bm x,\bm f(\omega)) = \min \sum\limits_{\substack{(i,j) \in E \\ m \in M}} f_{ij}^m(\omega)\bar{y}_{ij}^m(\omega) \label{ss-f2-1}& \\
&\text{subject to: } \notag & \\
&y^m(\delta^+(d))(\omega) = y^m(\delta^-(d))(\omega) \quad \forall d\in D\setminus\{d_0\}, & \nonumber \\
& \quad \forall m \in M, \label{ss-f2-2} & \\
&y^m(\delta^+(d_0)) (\omega) = 1, \quad \forall m \in M, \label{ss-f2-3} &\\
&y^m(\delta^-(d_0)) (\omega) = 1, \quad \forall m \in M, \label{ss-f2-4} &\\
&\sum_{m \in M} y^m(\delta^+(S))(\omega) \geqslant 1 \quad \forall S\subset V\setminus\{d_0\} & \nonumber \\
& \quad : S\cap T \neq \emptyset, \label{ss-f2-5} & \\
&\sum_{m \in M} y^m(\delta^+(i)) (\omega) = 1 \text{ and } \sum_{m \in M} y^m(\delta^-(i)) (\omega) = 1, & \nonumber \\
& \quad \forall i \in T, \label{ss-f2-6} & \\
& y^m(\delta^+(i))(\omega) = y^m(\delta^-(i))(\omega) \quad \forall i \in T, m \in M, \label{ss-f2-61} &\\
& \sum_{j\in V}\bar{z}_{ij}(\omega) - \sum_{j\in V}\bar{z}_{ji}(\omega) = \sum_{j\in V} f_{ij}^m(\omega)y_{ij}^m (\omega) \quad \forall i \in T, & \nonumber \\
& \quad \forall m \in M, \label{ss-f2-7} &
\end{flalign}
\label{eq:1stage1}
\end{subequations}
\begin{subequations}
\begin{flalign}
&\bar{z}_{di}(\omega) = f_{di}^m(\omega) y_{di}^m (\omega) \quad \forall i\in V,\, d \in D, m \in M, \label{ss-f2-8} & \\
&0 \leqslant \bar{z}_{ij}(\omega) \leqslant F_my_{ij}^m(\omega) \quad \forall (i,j) \in E, \label{ss-f2-9} & \\
& \bar{y}_{ij}^m(\omega) \geqslant y_{ij}^m(\omega) - x_{ij}^m \quad \forall (i,j) \in E, m \in M, \label{ss-f2-10} & \\
&y_{ij}^m(\omega) \in \{0,1\}, \bar{y}_{ij}^m(\omega) \geqslant 0 \quad \forall (i,j) \in E, m \in M \label{ss-f2-11}. &
\end{flalign}
\label{f2-ssc}
\end{subequations}
The decision variables $y_{ij}^m(\omega)$ play the role of the first-stage variables $\bm x$ and yield the paths for each UGV under the realization $\bm f(\omega)$, while the variables $\bar{y}_{ij}^m(\omega)$ capture the edges traveled by UGV $m$ that differ from the first-stage decision $\bm x$. The objective function \eqref{ss-f2-1} minimizes the total swapped and additional travel of the UGVs for a given scenario relative to the corresponding first-stage decisions. Constraints \eqref{ss-f2-2}--\eqref{ss-f2-9} mirror the first-stage constraints. Constraint \eqref{ss-f2-10} measures the swapped or additional travel of each UGV for the given scenario. Finally, constraint \eqref{ss-f2-11} imposes the restrictions on the second-stage variables.
\section{Formulation 2} \label{sec:for3}
\textit{Given a team of UGVs of which only a random subset is available for the mission, and a set of POI sites to visit, find a path for each UGV such that each POI is visited by at most one UGV and an objective based on the incentives of the POIs visited by the UGVs is maximized}.
\subsection{Objective function} \label{subsec:obj3} The objective function for the two-stage stochastic programming model is the sum of the first-stage profit and the expected second-stage profit. The second-stage profit for a realization $\omega \in \Omega$ of the availability of the UGVs is the change in profit for the realization $\omega$. The reduction in profit is a function of the first-stage routing decisions $\bm x$ and the realization $\omega$. Letting the recourse term be denoted by $\beta(\bm x, \bm z, \bm f(\omega))$, the objective function for the two-stage stochastic optimization problem is given by:
\begin{flalign}
\max \,\, C, \text{ where } C \triangleq \sum\limits_{\substack{(i,j) \in E: j \in T \\ m \in M}} p_{j}^m x_{ij}^m + \mathbb{E}_{\Omega} \left[ \beta(\bm x, \bm z, \bm f) \right] \nonumber \\ = \sum\limits_{\substack{(i,j) \in E: j \in T \\ m \in M}} p_{j}^m x_{ij}^m + \sum_{\omega \in \Omega} p(\omega) \beta(\bm x, \bm z, \bm f(\omega)). \label{eq:obj}
\end{flalign}
\subsection{First-stage routing constraints} \label{subsec:3_stage}
The constraints for the first-stage enforce the routing requirements, namely that each POI in $T$ may be visited at most once by the UGVs and that no UGV ever runs out of fuel as it traverses its route. In the first-stage, the fuel constraint is enforced using the nominal value of the fuel consumed by any UGV to traverse any edge $(i,j) \in E$. The first-stage routing constraints are as follows:
\begin{subequations}
\begin{flalign}
&x^m(\delta^+(d)) = x^m(\delta^-(d)) \quad \forall d\in D\setminus\{d_0\}, m \in M, \label{eq:degree_d_1} & \\
&x^m(\delta^+(d_0)) = 1, \quad \forall m \in M, \label{eq:degree_d0_1_1} &\\
& x^m(\delta^-(d_0)) = 1, \quad \forall m \in M, \label{eq:degree_d0_2_1} &\\
&x^m(S) \leqslant |S|-1 \quad \forall S\subset V\setminus\{d_0\} \nonumber &\\
&: S\cap T \neq \emptyset, \quad \forall m \in M, \label{eq:sec_1} & \\
&\sum_{i\in V, m \in M} x_{ij}^m \leqslant 1 \text{ and } \sum_{i\in V, m \in M} x_{ji}^m \leqslant 1 \quad \forall \, j \in T, \label{eq:5} &\\
& x^m(\delta^+(i)) = x^m(\delta^-(i)) \quad \forall i \in T, m \in M, \label{eq:51} &\\
& \sum_{j\in V}z_{ij}^m - \sum_{j\in V}z_{ji}^m = \sum_{j\in V}f_{ij}^mx_{ij}^m \quad \forall i \in T, m \in M, \label{eq:7} & \\
&z_{di}^m = f_{di}^mx_{di}^m \quad \forall \, i\in V, \, d \in D, m \in M, \label{eq:11} & \\
&0 \leqslant \sum_{(i,j) \in E} f_{ij}^mx_{ij}^m \leqslant F_m\quad \forall m \in M, \label{eq:fuel_3_1} & \\
&x_{ij}^m \in \{0,1\} \, , \,\, 0 \leqslant z_{ij}^m \leqslant F_m \quad \forall (i,j) \in E, m \in M. \label{eq:bin_1} &
\end{flalign}
\label{eq:1stage-f2}
\end{subequations}
Constraint \eqref{eq:degree_d_1} forces the in-degree and out-degree of each refueling station to be equal. Constraints \eqref{eq:degree_d0_1_1} and \eqref{eq:degree_d0_2_1} ensure that every UGV leaves and returns to depot $d_0$. Constraint \eqref{eq:sec_1} ensures that a feasible solution is connected. For each POI $j$, the pair of constraints in \eqref{eq:5} state that at most one UGV visits POI $j$, and it does so at most once. Constraint \eqref{eq:51} forces the in-degree and out-degree of each POI to be equal. Constraints \eqref{eq:7}--\eqref{eq:11} eliminate sub-tours of the POIs and define the flow variables $z_{ij}^m$ for each edge $(i,j) \in E$ and UGV $m$. Constraint \eqref{eq:fuel_3_1} imposes the fuel capacity constraints on the routes of all the UGVs. Finally, constraint \eqref{eq:bin_1} imposes the restrictions on the decision variables.
\subsection{Second-stage constraints} \label{subsec:second-stage}
The second-stage model for a fixed $\bm x$, {$\bm z$}, and $\bm f(\omega)$ is given as follows:
\begin{subequations}
\begin{flalign}
&\beta(\bm x, \bm z, \bm f(\omega)) = \max \sum\limits_{\substack{(i,j) \in E \\ m \in M}} -p_{j}^m v_{ij}^m(\omega) \label{eq:recourse}& \\
&\text{subject to: } \notag & \\
& \sum_{j\in V}f_{ij}^mv_{ij}^m(\omega) = \sum_{j\in V}z_{ij}^m - \sum_{j\in V}z_{ji}^m - \alpha_{m}(\omega)\sum_{j\in V} f_{ij}^m x_{ij}^m & \nonumber \\
& \quad \forall i \in T, m \in M, \label{eq:14} & \\
& v_{ij}^m(\omega) \leq x_{ij}^m \quad \forall (i,j) \in E, m \in M, \label{eq:15a} & \\
&f_{di}^m v_{di}^m(\omega) = z_{di}^m - \alpha_{m}(\omega) f_{di}^m x_{di}^m \quad \forall \, i\in V, \, d \in D, & \nonumber \\
& \quad \forall m \in M, \label{eq:11a} & \\
& v_{ij}^m(\omega) \in \{0,1\} \quad \forall \, (i,j) \in E, m \in M.\label{eq:16} &
\end{flalign}
\label{eq:2stage}
\end{subequations}
In the second-stage, $\alpha_{m}(\omega)$ takes the value $1$ or $0$ according to whether UGV $m$ is available in scenario $\omega$. Variable $v_{ij}^m(\omega)$ maintains the feasibility of the constraints \eqref{eq:14}--\eqref{eq:11a} for the given first-stage values $\bm x$ and $\bm z$. Constraint \eqref{eq:15a} couples $x_{ij}^m$ and $v_{ij}^m(\omega)$, and the binary restrictions on $v_{ij}^m(\omega)$ are stated in \eqref{eq:16}. Let the relaxed recourse problem corresponding to $\beta(\bm x, \bm z, \bm f(\omega))$ be denoted by $\beta_r(\bm x, \bm z, \bm f(\omega))$; in $\beta_r(\bm x, \bm z, \bm f(\omega))$, the constraints \eqref{eq:16} are replaced by $0 \leq v_{ij}^m(\omega) \leq 1$.
\begin{thm}\label{31}
The objective values of $\beta(\bm x, \bm z, \bm f(\omega))$ and $\beta_r(\bm x, \bm z, \bm f(\omega))$ are the same.
\end{thm}
\begin{proof}
It suffices to show that every feasible solution of $\beta_r(\bm x, \bm z, \bm f(\omega))$ is binary. Fix a UGV $m \in M$ and a scenario $\omega$, and assume, as in our instances, that $f_{ij}^m > 0$ for every edge $(i,j) \in E$. If $\alpha_{m}(\omega) = 1$, the right-hand side of \eqref{eq:14} vanishes by the first-stage flow balance \eqref{eq:7}, and the right-hand side of \eqref{eq:11a} vanishes by \eqref{eq:11}; since $v_{ij}^m(\omega) \geq 0$, this forces $v_{ij}^m(\omega) = 0$ for all $(i,j) \in E$. If $\alpha_{m}(\omega) = 0$, then \eqref{eq:7} and \eqref{eq:11} reduce \eqref{eq:14} and \eqref{eq:11a} to $\sum_{j\in V} f_{ij}^m v_{ij}^m(\omega) = \sum_{j\in V} f_{ij}^m x_{ij}^m$ and $v_{di}^m(\omega) = x_{di}^m$, respectively. By \eqref{eq:15a}, every term satisfies $f_{ij}^m v_{ij}^m(\omega) \leq f_{ij}^m x_{ij}^m$, so equality of the sums forces $v_{ij}^m(\omega) = x_{ij}^m \in \{0,1\}$ for every edge. Hence the relaxation is exact, and the objective values of $\beta$ and $\beta_r$ coincide.
\end{proof}
\section{Algorithm}\label{sec:algorithm}
The constraints \eqref{eq:fuel_3_1} are typical knapsack constraints, and with them the formulation resembles an orienteering problem. We refer to the formulations with and without the knapsack constraints as TS-OP and TS, respectively. In this section, we present a decomposition algorithm to solve TS and TS-OP. The formulation TS and its variants can be given to any commercial branch-and-cut solver to obtain an optimal solution. However, observe that the formulations contain constraints \eqref{eq:sec_1} to ensure that any feasible solution is connected. The number of such constraints is exponential, and it may not be computationally efficient to enumerate all of them and provide them upfront to the solver. Additionally, stochastic integer programs are large in scale due to the variables and constraints introduced by the scenarios, and they call for decomposition algorithms that exploit the special structure of the problem. These challenges and opportunities motivated us to design a decomposition algorithm to solve instances of TS and its variants.
\subsection{Decomposition algorithm}\label{sec:decomposition}
The decomposition algorithm is a variant of the L-shaped algorithm: the deterministic parameters are used to obtain first-stage solutions, the second-stage programs are solved for the obtained first-stage solutions, and optimality cuts generated from the dual solutions of all realizations of the random data are added to the first-stage program to approximate the value function of the second-stage cost. The use of the L-shaped method for TS is possible only because of Theorem \ref{31}; otherwise, due to the binary restrictions on the second-stage variables, the value function would in general be non-convex and only lower semi-continuous, and a direct use of the L-shaped method would not be possible. The first-stage problem is solved as a mixed-integer program with binary restrictions on the $x$ variables and, by Theorem \ref{31}, the second-stage programs are solved as linear programs.
\subsubsection{Problem reformulation}\label{decomp-model}
For the sake of decomposition, the first-stage problem \eqref{eq:obj}--\eqref{eq:bin_1} is reformulated as the following master problem (TS-MP), in which an unrestricted variable $\theta$ is added to the first stage to approximate the expected second-stage objective. In TS-MP, the binary restrictions $v_{ij}^m(\omega) \in \{0,1\}$ of the second stage are replaced by $0 \leq v_{ij}^m(\omega) \leq 1$, which is valid by Theorem \ref{31}; the notation for the dual vectors and coefficient matrices is introduced after the formulation. The first-stage master program TS-MP is given as follows:
\begin{subequations}
\begin{flalign}
& z^k = \max \sum\limits_{\substack{(i,j) \in E: j \in T \\ m \in M}} p_{j}^m x_{ij}^m + \theta \label{eq:mp} & \\
& \text{ Subject to:} \nonumber & \\
& \eqref{eq:degree_d_1} - \eqref{eq:bin_1}, \label{eq:12} &
\end{flalign}
\label{eq:1MP}
\end{subequations}
\begin{subequations}
\begin{flalign}
& \sum \limits_{\substack{\omega \in \Omega}} p(\omega) \sum\limits_{\substack{(i,j) \in E, \\ m \in M}} \Big( \big( \pi_1(\omega)^{t \top}T_1 + \pi_2(\omega)^{t \top}T_2 \nonumber &\\
& + \pi_3(\omega)^{t \top}T_3 \big) x_{ij}^m + \big( \pi_1(\omega)^{t \top}S_1 + \pi_3(\omega)^{t \top}S_3 \big) z_{ij}^m \Big) \nonumber &\\
& + \theta \leq \sum \limits_{\substack{\omega \in \Omega}} p(\omega)\, \pi(\omega)^{t \top} h(\omega), \quad t = 1,\ldots,k, \label{eq-master-1a} &\\
&x_{ij}^m \in \{0,1\} \, , \,\, 0 \leq z_{ij}^m \leq F_m \quad \forall (i,j) \in E, m \in M, \,\,\,\theta \in \mathbb{R}. \label{eq:bin_12} &
\end{flalign}
\label{eq:1MPcuts}
\end{subequations}
In the master problem \eqref{eq:1MP}, for a scenario $\omega$, $\pi_1(\omega)$, $\pi_2(\omega)$, and $\pi_3(\omega)$ are the dual vectors of the constraints \eqref{eq:14}, \eqref{eq:15a}, and \eqref{eq:11a}, respectively. Similarly, $T_1$, $T_2$, and $T_3$ denote the coefficient matrices of the variables $x_{ij}^m$ in the constraints \eqref{eq:14}, \eqref{eq:15a}, and \eqref{eq:11a}, and $S_1$ and $S_3$ denote the coefficient matrices of the variables $z_{ij}^m$ in the constraints \eqref{eq:14} and \eqref{eq:11a}, respectively. Finally, $\pi(\omega)$ and $h(\omega)$ denote the full dual vector and right-hand side of the constraints \eqref{eq:14}--\eqref{eq:16}, and $\theta$ is an unrestricted decision variable. Constraints \eqref{eq-master-1a} are the \textit{optimality} cuts, computed from the optimal dual solutions of the subproblems $\beta_r(\bm x, \bm z, \bm f(\omega))$ weighted by the scenario probabilities $p(\omega)$; they approximate the value function of the second-stage subproblems. It should be noted that the model TS-MP has the relatively complete recourse property, i.e., $\beta_r(\bm x, \bm z, \bm f(\omega)) < \infty$ for any $x_{ij}^m$ and $z_{ij}^m$.
We emphasize that the optimality cuts generated from the second-stage dual values can be aggregated into a single cut or kept as multiple cuts. In the single-cut approach, one cut aggregates all the second-stage problems; in the multi-cut approach, each second-stage program is approximated by its own cut in the first-stage program. In our computational experiments, we adopted the single-cut approach, since we do not want to burden the first-stage problem, which already contains binary variables and the sub-tour elimination constraints \eqref{eq:sec_1}. The presented algorithm also extends to instances of TS-OP, since the changes occur only in the first-stage constraints.
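For completeness, we sketch why \eqref{eq-master-1a} is valid; the notation is schematic, with $T$ and $S$ collecting the blocks $T_1, T_2, T_3$ and $S_1, S_3$. For fixed $(\bm x, \bm z)$ and scenario $\omega$, weak duality for the (maximization) linear program $\beta_r$ gives, for every dual feasible $\pi(\omega)$,
\begin{align*}
\beta_r(\bm x, \bm z, \bm f(\omega)) \leq \pi(\omega)^{\top}\left(h(\omega) - T\bm x - S\bm z\right),
\end{align*}
with equality at the optimal dual solution $\pi(\omega)^t$ of iteration $t$. Multiplying by $p(\omega)$, summing over $\omega \in \Omega$, and recalling that $\theta$ stands for $\mathbb{E}_\Omega[\beta_r]$ in TS-MP yields the cut \eqref{eq-master-1a}.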
\begin{algorithm}[H]
\caption{Branch and Cut Algorithm}\label{algo:improvement}
\begin{algorithmic}[1]
\doublespacing
\State \textbf{Step 0.} Initialize.\\ $n \leftarrow 0$, $lb \leftarrow -\infty$, $ub\leftarrow \infty$, $\epsilon$ is a user-defined tolerance, and $x^0, z^0$ are obtained as follows: argmax$ \{ \sum_{(i,j) \in E, m \in M} p_{j}^m x_{ij}^m | x, z \in \eqref{eq:degree_d_1} -\eqref{eq:bin_1}\}.$ \Comment{Initial solution}
\State \textbf{Step 1.} Solve second-stage programs. \\ Solve $\beta_r(\bm x, \bm z, \bm f(\omega))$ for each $\omega \in \Omega$, and obtain dual solution $\pi(\omega)$ for each second-stage program. \Comment{Solve sub-problems for given first-stage solution}
\State \textbf{Step 2.} Optimality cut.
Based on the dual values $\pi(\omega)$s from the sub-problems, generate an optimality cut \eqref{eq-master-1a} and add it to the master problem. \Comment{Sub-problems' objective function are approximated}
\State \textbf{Step 3.} Obtain upper bounds.\\
Solve the master program TS-MP with the new optimality cut and let the objective function value be $u^n$. Set $ub \leftarrow \min\{u^n, ub\}$. \Comment{Solve the master problem to update the upper bound}
\State \textbf{Step 4.} Add sub-tour elimination constraints. \\ Check for strongly connected components and if there are any sub-tours, add the constraint \eqref{eq:sec_1}. Go to Step 3. \Comment{Check whether the paths are connected}
\State \textbf{Step 5.} Update bounds. \\
$v^n \leftarrow \sum_{(i,j)\in E, m \in M} p_{j}^m (x^n)_{ij}^m + \mathbb{E}_{\Omega} \left[ \beta(\bm x^n, \bm z^n, \bm f) \right]$, and set $lb \leftarrow \max\{v^n, lb\}$. If $lb$ is updated, set the incumbent solution to $x^{*} \leftarrow x^n$ and $z^{*}\leftarrow z^n$.
\Comment{Update the incumbent solution}
\State \textbf{Step 6.} Termination. If $ub-lb < \epsilon|ub|$ then stop, $x^{*},$ and $z^{*}$ are the optimal solutions, else set $n \leftarrow n+1$ and return to step 1. \Comment{termination condition}
\vspace{1ex}
\end{algorithmic}
\end{algorithm}
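To make the interplay of Steps 0--6 concrete, the following self-contained toy illustrates the single-cut loop on a generic two-stage problem with a closed-form subproblem dual. It is stated for minimization (the maximization case used above is symmetric, with the roles of the bounds exchanged), and all data and helper names (\texttt{dual}, \texttt{beta}) are hypothetical stand-ins rather than the TS model.
\begin{verbatim}
# Toy single-cut L-shaped loop mirroring Steps 0-6 of Algorithm 1, stated
# for a minimization problem. All data below are illustrative stand-ins.
import itertools

scenarios = [(0.5, [3.0, 1.0]), (0.5, [1.0, 4.0])]  # (p(w), demand d(w))
c = [2.0, 2.0]                                      # first-stage costs
q = [5.0, 5.0]                                      # recourse unit costs

def dual(x, d):
    # beta(x,w) = min{ q.y : y >= d - x, y >= 0 } has the closed-form
    # optimal dual pi_i = q_i if d_i > x_i, else 0 (with 0 <= pi <= q).
    return [qi if di - xi > 1e-9 else 0.0 for qi, di, xi in zip(q, d, x)]

def beta(x, d):
    return sum(qi * max(di - xi, 0.0) for qi, di, xi in zip(q, d, x))

cuts = []                    # each cut: theta >= rhs - coef . x
lb, ub, incumbent = -1e18, 1e18, None
while ub - lb > 1e-6:
    # Step 3: solve the master min c.x + theta by brute force; theta >= 0
    # is a valid initial bound here because beta >= 0.
    best = None
    for x in itertools.product([0, 1], repeat=len(c)):
        theta = max([r - sum(a * xi for a, xi in zip(co, x))
                     for co, r in cuts] + [0.0])
        val = sum(ci * xi for ci, xi in zip(c, x)) + theta
        if best is None or val < best[0]:
            best = (val, x)
    lb, x = best             # the master value is a lower bound
    # Steps 1-2: solve all subproblems, aggregate one expected-value cut.
    coef, rhs = [0.0] * len(c), 0.0
    for p, d in scenarios:
        pi = dual(x, d)
        rhs += p * sum(pi_i * di for pi_i, di in zip(pi, d))
        coef = [a + p * pi_i for a, pi_i in zip(coef, pi)]
    cuts.append((coef, rhs))
    # Step 5: evaluate the candidate to update the incumbent bound.
    obj = sum(ci * xi for ci, xi in zip(c, x)) \
        + sum(p * beta(x, d) for p, d in scenarios)
    if obj < ub:
        ub, incumbent = obj, x
print("optimal x:", incumbent, "objective:", ub)
\end{verbatim}
On this toy instance the loop converges in two iterations to $x = (1,1)$ with objective $16.5$.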
\subsection{Sub-tour elimination constraints}\label{sec:subtour-sep}
In our algorithm, we relax the constraints \eqref{eq:sec_1} from the formulation, and whenever the first-stage problem obtains an integer feasible solution to this relaxed problem, we check whether any of the constraints \eqref{eq:sec_1} are violated by that solution. If so, we add the violated constraints to the first-stage problem. This process of adding constraints sequentially has been observed to be computationally efficient for the TSP, the VRP, and a large number of their variants.
Now, we detail the procedure used to find a constraint \eqref{eq:sec_1} that is violated by a given integer feasible solution to the relaxed problem. A violated constraint \eqref{eq:sec_1} corresponds to a subset of vertices $S \subset V\setminus\{d_0\}$ with $S\cap T \neq \emptyset$ such that $x^m(S) = |S|$ for some UGV $m \in M$. To find such subsets, we construct the support graph of the current integer solution of each UGV and compute its strongly connected components. Every strongly connected component that does not contain the depot $d_0$ is a subset $S$ of $V\setminus\{d_0\}$ that violates the constraint \eqref{eq:sec_1}. We add all these violated constraints and continue solving the original problem. Many off-the-shelf commercial solvers provide a feature called ``solver callbacks'' to embed such a separation routine into their branch-and-cut framework.
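A minimal sketch of this separation routine, using the strongly connected components implementation of the \texttt{networkx} package, is shown below; the \texttt{support\_edges} list is an illustrative stand-in for the support graph of one UGV's integer solution.
\begin{verbatim}
# Sketch of the sub-tour separation: given the support graph of one UGV's
# integer solution, every strongly connected component that misses the
# depot yields a violated connectivity constraint. Data are illustrative.
import networkx as nx

depot = "d0"
# edges (i, j) with x_ij^m = 1 in the current integer solution
support_edges = [("d0", "t1"), ("t1", "d0"), ("t2", "t3"), ("t3", "t2")]

def violated_subsets(edges, depot):
    g = nx.DiGraph(edges)
    cuts = []
    for comp in nx.strongly_connected_components(g):
        if depot not in comp:
            cuts.append(set(comp))   # S with x^m(S) = |S|
    return cuts

for S in violated_subsets(support_edges, depot):
    print("add cut: x(S) <= |S| - 1 for S =", sorted(S))
\end{verbatim}
Here the cycle \texttt{t2 -> t3 -> t2} is detected as a component disconnected from the depot, and the corresponding constraint \eqref{eq:sec_1} is added.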
\section{Computational Experiments}\label{sec:experiments}
In this section, we discuss the computational performance of the branch-and-cut algorithm for the formulations presented in Sec. \ref{sec:formulation}. The mixed-integer linear programs were implemented in Java, using the traditional branch-and-cut framework and the solver callback functionality of CPLEX version 12.6.2. All the simulations were performed on a Dell Precision T5500 workstation (Intel Xeon E5630 processor @2.53 GHz, 12 GB RAM). The computation times reported are expressed in seconds, and we imposed a time limit of 3,600 seconds for each run of the algorithm. The performance of the algorithm was tested with randomly generated test instances. \\
\noindent {\it Instance generation}
The problem instances were randomly generated in a square grid of size [100,100]. The number of refueling stations was set to 4, and the locations of the depot and all the refueling stations were fixed a priori for all the test instances. The number of POIs varies from $10$ to $40$ in steps of five, while their locations were uniformly distributed in the square grid; for each $|T| \in \{10,15,20,25,30,35,40\}$, we generated five random instances. For each of the generated instances, the number of UGVs in the depot was $3$, and the fuel capacity of the UGVs, $F$, was varied linearly with a parameter $\lambda$, defined as the maximum distance between the depot and any POI. The fuel capacity $F$ was assigned a value from the set $\{2.25 \lambda, 2.5 \lambda, 2.75 \lambda, 3\lambda \}$. The travel costs and the fuel consumed to travel between any pair of POI vertices were assumed to be directly proportional to the Euclidean distance between the pair and rounded down to the nearest integer. \\
The utility of the stochastic programming approach can be evaluated by estimating the value of the stochastic solution (VSS) introduced by \cite{birge1982value}. The objective value of the recourse problem (RP) can be stated as in \eqref{eq:obj2}; then we take the expected value of the random variable and solve the \textit{expected value problem} (EV), where $\bm f(\omega)$ in \eqref{eq:obj2} is replaced by $\bm f(\bar{\omega})$, the mean of the random variable $\bm f(\omega)$. Considering $\bar{x}_{ij}^m$ as the solution of the EV problem, the expected result of using the expected value solution $(\bar{x})$ is the EEV, given in \eqref{eq:eev}. The VSS is then defined as the difference between the objective values of the EEV and the recourse problem, i.e., VSS $=$ EEV $-$ RP.
\begin{flalign}
\min \,\, C, \text{ where } C \triangleq \sum\limits_{\substack{(i,j) \in E \\ m \in M}} \hat f_{ij}^m \bar{x}_{ij}^m + \mathbb{E}_{\Omega} \left[ \beta(\bar{\bm x},\bm f) \right] \nonumber \\ = \sum\limits_{\substack{(i,j) \in E \\ m \in M}} \hat f_{ij}^m \bar{x}_{ij}^m + \sum_{\omega \in \Omega} p(\omega) \beta(\bar{\bm x}, \bm f(\omega)). \label{eq:eev}
\end{flalign}
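A minimal sketch of this computation, again with hypothetical helpers (\texttt{solve\_recourse\_problem}, \texttt{solve\_ev\_problem}, \texttt{first\_stage\_value}, \texttt{solve\_second\_stage}) wrapping the models above, is:
\begin{verbatim}
def value_of_stochastic_solution(scenarios, probs, f_bar):
    rp = solve_recourse_problem(scenarios, probs)     # objective RP
    x_bar = solve_ev_problem(f_bar)                   # EV solution
    eev = first_stage_value(x_bar) + sum(             # objective EEV
        p * solve_second_stage(x_bar, f_w)
        for p, f_w in zip(probs, scenarios))
    return eev - rp                                   # VSS = EEV - RP
\end{verbatim}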
Figures \ref{Fig2} and \ref{Fig3} present the results for the formulation of section \ref{sec:for2}. In the computational experiments, instances 1 to 5 use 10 POIs and instances 6 to 10 use 20 POIs. Figure \ref{Fig2} shows the two-stage model when uncertainty is considered in the travel time between the depot and the points of interest. A gamma distribution is used to characterize the uncertainty in travel time and a continuous beta distribution the uncertainty in fuel consumption. The results of the stochastic model are compared with those of the deterministic model. Overall, under travel time uncertainty, the average improvement indicated by the VSS is between 6\% and 20\%.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[scale=0.8]{Comp1.JPG}
\end{center}
\caption{Performance of VSS while considering uncertainties for travel time among depots and points of interests.}
\label{Fig2}
\end{figure}
Figure \ref{Fig3} demonstrates the two-stage model when uncertainty is considered in the fuel capacity. As before, a gamma distribution is used to characterize the uncertainty in travel time and a continuous beta distribution the uncertainty in fuel consumption. The results of the stochastic model are compared with those of the deterministic model. Overall, under fuel capacity uncertainty, the average improvement indicated by the VSS is between 20\% and 40\%.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[scale=0.8]{Comp2.JPG}
\end{center}
\caption{Performance of VSS while considering uncertainties for fuel capacity.}
\label{Fig3}
\end{figure}
Figures \ref{Fig4} and \ref{Fig5} present the results for the formulation of section \ref{sec:for3}; they use 10 and 20 POIs, respectively. Each POI has a reward and is visited by at most one UGV, and the objective is to maximize the total reward collected by the UGVs. UGV 3 is available with probability 1, 0.75, 0.25, and 0 in Cases 1, 2, 3, and 4, respectively. As the figures show, the contribution of UGV 3 decreases monotonically with its probability of availability. This kind of marginal decrease in the contribution of UGV 3 is not possible with a deterministic model, since availability must then be treated as binary (0 or 1). Hence, the deterministic model can only handle the extreme cases, whereas a stochastic model can capture the marginal increase or decrease of an asset's utilization through the probability distribution of its availability.
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[scale=0.5]{Comp3.JPG}
\end{center}
\caption{Uncertainty in availability of UGVs - 10 POIs.}
\label{Fig4}
\end{figure}
\begin{figure}[!hbtp]
\begin{center}
\includegraphics[scale=0.5]{Comp4.JPG}
\end{center}
\caption{ Uncertainty in availability of UGVs - 20 POIs.}
\label{Fig5}
\end{figure}
\section{Conclusion}
The path planning problem for UGVs is an important area of research for the efficient use of such vehicles. This paper presents two different stochastic programming models to address uncertainties in travel time and in the availability of UGVs. To overcome the computational complexity, a decomposition algorithm is presented. Computational experiments demonstrate the usefulness of the stochastic models over their deterministic counterparts. A potential future study is to evaluate the robustness of the solutions under cost minimization and profit maximization, and to choose an appropriate objective based on the uncertainties in the environment.
\section{Acknowledgement}
The authors wish to acknowledge the technical and financial support of the Automotive Research Center (ARC) in accordance with Cooperative Agreement W56HZV-19-2-0001 U.S. Army CCDC Ground Vehicle Systems Center (GVSC) Warren, MI.
\section{Legal Statement}
DISTRIBUTION A. Approved for public release: distribution unlimited.
\section{Introduction}
The lepton asymmetry of the Universe, carried by neutrinos and antineutrinos, is nowadays one of the most weakly constrained cosmological parameters.
Although the baryon number asymmetry is well measured
from cosmic microwave background (CMB) constraints on the baryon density, the lepton asymmetry could be larger by many orders of magnitude than expected
from Big Bang Nucleosynthesis (BBN) considerations \citep{2017PhRvD..95d3506B}. A large lepton asymmetry corresponds to an excess of neutrinos over antineutrinos, or vice versa,
which may be required by the charge neutrality of the Universe; it is possibly hidden in the cosmic neutrino background ($C\nu B$) and can be imprinted on cosmological observations,
for instance on the CMB anisotropy \citep{Castorina,Dominik02}. Large neutrino asymmetries have consequences for early-Universe phase transitions, cosmological magnetic fields
and the dark matter relic density (see \citep{Schwarz,Semikoz,Stuke,2017JCAP...04..048B} for more details). Other effects of the lepton asymmetry include changes in the decoupling
temperature of the $C\nu B$ \citep{Freese,Kang}, the time of equality between the energy densities of radiation and matter, the production of primordial light elements at BBN \citep{Sarkar},
an excess in the contribution of the total radiation energy density and the expansion rate of the Universe \citep{Giusarma,2015PhRvD..92l3535A}, photon decoupling \citep{xi7}, among others. These changes
can affect the evolution of the matter density perturbations in the Universe, which has effects not only on the CMB anisotropies, but also on the formation, evolution and distribution of
the large-scale structure (LSS) of the Universe \citep{book,2015PhRvD..92l3535A}. The effects of the cosmological neutrinos on both the CMB and LSS are only gravitational, since they are decoupled
(free streaming particles)
at the time of recombination and structure formation. The LSS formation is more sensitive to the neutrino masses than the CMB. The growth of structure is driven by the cosmic expansion and
the self-gravity of matter perturbations, both affected by massive neutrinos. Nevertheless, relic neutrinos slow down the growth of structure due to their high thermal speeds, leading
to a suppression of the total matter power spectrum \citep{Ali}. On the other hand, the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect are also modified by the presence
of massive neutrinos \citep{Abazajian}. The effect of massive neutrinos in the non-linear growth structure regime has been recently studied by \cite{Zeng}.\\
The neutrino properties are very important in determining the dynamics of the Universe, with direct effects on cosmological observables
and consequently on
the estimation of cosmological parameters (see \citep{Dolgov,Lesgourgues,Abazajian,Wang2,Lorenz,Vagnozzi1,Yang,DiVal2,Li,WangLF,Wang}).
The parameters that characterize the neutrino effects on cosmological probes
are the total neutrino mass $\Sigma m_{\nu}$ and the effective number of species $N_{\rm eff}$.
Altogether, the updated constraint on the neutrino mass scale is $\Sigma m_{\nu} < 0.12$ eV at 95 percent C.L. and $N_{\rm eff}=2.99\pm 0.17$ at 95 percent C.L.
from the final full-mission Planck measurements of the CMB anisotropies \citep{Planck2018}.
In the case of the three active neutrino flavors with zero asymmetries and a standard thermal history,
the value of effective number of species is the well-known $N_{\rm eff} = 3.046$ \citep{book}
and an improved calculation $N_{\rm eff} = 3.045$ \citep{de Salas}, but the presence of neutrino asymmetries can increase this number without
the need to introduce new relativistic species. In general terms, any excess over this value can be parameterized through
$\Delta N_{\rm eff} = N_{\rm eff} - 3.046$, which in principle is attributed to some excess in the number of relativistic relic degrees of freedom,
known in the literature as dark radiation
(see \cite{Nunes} for recent constraints on $\Delta N_{\rm eff}$).\\
Finally, and more important to our work, is to consider the aforementioned cosmological lepton symmetry, which is another natural extension on the neutrino physics properties.
This property is usually parameterized by the so-called degeneracy parameter $\xi_{\nu} = \mu_{\nu}/T_{\nu 0}$, where $\mu_{\nu}$ is the neutrino chemical potential and $T_{\nu 0}$ is the
current temperature of the relic neutrino spectrum, $T_{\nu 0}\approx 1.9\,$K.
We can label the chemical potentials by the neutrino mass eigenstates, writing $\lbrace \xi_i \rbrace$ for neutrinos and $\lbrace - \xi_i \rbrace$ for antineutrinos.
If neutrinos are Majorana particles, then they must have $\xi_i = 0$, while $\xi_i \neq 0$ requires neutrinos to be Dirac fermions; thus, evidence on the null hypothesis is necessary to help settle this
question \citep{Mangano}. The difference between $\lbrace \xi_i \rbrace$ and $\lbrace - \xi_i \rbrace$ determines the asymmetry between the densities of neutrinos and antineutrinos.
Then, the presence of a relevant and non-zero $\xi_{\nu}$ has some cosmological implications \citep{xi6,xi10,xi7,xi5,xi4,xi8,xi1,xi9,xi3,xi2,Dominik01,xi11,Dominik02}.
From the particle physics point of view, measuring the lepton asymmetry of the Universe is crucial to understand some of the particle physics processes that might have taken place in the early Universe
at high energies, including better constraints on models for the creation of the matter-antimatter asymmetry of the Universe \citep{Affleck,Casas,Canetti}. The tightest constraints on the lepton asymmetry
at present are commonly based on a combination of CMB data, via constraints on the baryon density, and measurements of the primordial abundances of light elements \citep{xi2,Mangano,Cooke}.\\
In this work, our main target is to obtain new and precise limits on the cosmological lepton asymmetry, measured in terms of the degeneracy parameter $\xi_{\nu}$, as well as on
the neutrino mass scale, based on the configurations of future CMB experiments such as CMB-CORE and CMB-S4.\\
This paper is organized as follows. In the next section, we briefly comment on the $C\nu B$ and the cosmological lepton asymmetry. In Section \ref{methodology},
we present the methodology used to obtain the forecasts for the CMB-CORE and CMB-S4 experiments. In Section \ref{Results}, we present our results and discussions. Finally, in Section \ref{conclusions} we give our final considerations.
\section{Cosmic neutrino background and neutrino asymmetry}
\label{CNB}
The current contribution of neutrinos to the energy density of the Universe is given by,
\begin{eqnarray}
\rho_{\nu} = 10^4 \, h^2 \, \Omega_{\nu}\ \mathrm{eV\,cm^{-3}},
\end{eqnarray}
where $\Omega_{\nu}=\rho_{\nu}/\rho_{cri}$ is the neutrino energy density in units of the critical density. As usual, relativistic neutrinos contribute to the total radiation energy density $\rho_{r}$, typically parametrized as
\begin{eqnarray}
\rho_{r} = \left( \rho_{\gamma} + \rho_{\nu} \right) = \left( 1 + \frac{7}{8}\Big(\frac{4}{11} \Big)^{4/3} N_{\rm eff} \right) \rho_{\gamma},
\end{eqnarray}
where $\rho_{\gamma}$ is the energy density of photons, the factor $7/8$ arises because neutrinos are fermions, and $N_{\rm eff}=3.046$ is the value
of the effective number of neutrino species in the standard case,
with zero asymmetries and no extra relativistic degrees of freedom.
Neutrinos become nonrelativistic when their average momentum falls below their mass.
At the very early Universe, neutrinos and antineutrinos
of each flavor $\nu_i$ ($i = e,\mu,\tau$) behave like relativistic particles.
Both the energy density and pressure of one species of massive degenerate neutrinos and antineutrinos are
described by (let us use here the unit system where $\hbar = c = k_B = 1$)
\begin{eqnarray}
\rho_{\nu_i} + \rho_{\bar{\nu_i}} = T^4_{\nu} \int \frac{d^3q}{(2\pi)^3} E_{\nu_i} (f_{\nu_i}(q) + f_{\bar{\nu_i}}(q))
\end{eqnarray}
and
\begin{eqnarray}
3 (p_{\nu_i} + p_{\bar{\nu_i}}) = T^4_{\nu} \int \frac{d^3q}{(2\pi)^3} \frac{q^2}{E_{\nu_i}}(f_{\nu_i}(q) + f_{\bar{\nu_i}}(q)),
\end{eqnarray}
where $E_{\nu_i}$, with $E^2_{\nu_i} = q^2 + a^2 m^2_{\nu_i}$, is the energy of one neutrino/antineutrino flavor and $q = a p$ is the comoving momentum.
The functions $f_{\nu_i}$, $f_{\bar{\nu_i}}$ are the Fermi-Dirac phase space distributions given by
\begin{eqnarray}
f_{\nu_i}(q) = \frac{1}{e^{E_{\nu_i}/T_{\nu} - \xi_{\nu}} + 1}, \qquad f_{\bar{\nu_i}}(q) = \frac{1}{e^{E_{\bar{\nu_i}}/T_{\nu} + \xi_{\nu}} + 1},
\end{eqnarray}
where $\xi_{\nu} = \mu_{\nu}/T_{\nu0}$ is the neutrino degeneracy parameter and $\mu_{\nu}$ is the neutrino chemical potential. In the early Universe, we assume that neutrinos and antineutrinos are produced in
thermal and chemical equilibrium. Their equilibrium distribution functions have been frozen from the time of decoupling to the present. Then, as the chemical potential $\mu_{\nu}$ scales as $T_{\nu}$, the degeneracy parameter $\xi_{\nu}$ remains constant, and it is different from zero if a neutrino-antineutrino asymmetry was produced before decoupling. The energy of neutrinos changes according to the cosmological redshift
after decoupling, a moment when they are still relativistic. The neutrino degeneracy parameter is conserved, and the presence of a significant and non-null $\xi_{\nu}$ has cosmological implications
throughout the evolution of the Universe, for instance at BBN, photon decoupling and LSS formation
\citep*[see][]{xi6,xi10,xi7,xi5,xi4,xi8,xi1,xi9,xi3,xi2,xi11}. If $\xi_{\nu}$ remains constant, finite and non-zero after decoupling, then it leads to an asymmetry between neutrinos and antineutrinos given by
\begin{eqnarray}
\label{asymmetry}
\eta_{\nu} \equiv \sum_i \frac{n_{\nu_{i}}-n_{\bar{\nu}_{i}}}{n_{\gamma}} = \frac{1}{12 \zeta (3)}\sum_{i} y_{\nu 0} \left( \pi^2 \xi_{\nu_i} + \xi_{\nu_i}^3 \right),
\end{eqnarray}
where $n_{\nu_{i}}$ ($n_{\bar{\nu}_{i}}$) is the neutrino (antineutrino) number density, $n_{\gamma}$ is the photon number density, $\zeta (3) \approx 1.20206$, and $y_{\nu 0}^{1/3}=T_{\nu 0}/T_{\gamma 0}$ is the present ratio of the neutrino and photon temperatures, $T_{\gamma 0}$ being the temperature of the CMB ($T_{\gamma 0} = 2.726\,$K).\\
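As a quick numerical illustration of equation \eqref{asymmetry} (a sketch in Python, assuming a common degeneracy parameter $\xi$ for the three flavours and the standard value $y_{\nu 0} = 4/11$):
\begin{verbatim}
from math import pi

zeta3 = 1.20206
y_nu0 = 4.0 / 11.0

def eta_nu(xi, flavours=3):
    return flavours * y_nu0 * (pi**2 * xi + xi**3) / (12 * zeta3)

print(eta_nu(0.05))   # ~0.037 for xi = 0.05
\end{verbatim}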
\noindent As we have mentioned above, the neutrino asymmetry can produce changes in the expansion rate of the Universe at early times, which can be expressed as an excess in $N_{\rm eff}$ in the form
\begin{eqnarray}
\label{Delta_Neff}
\Delta N_{\rm eff} = \frac{15}{7} \sum_i \left[ 2 \left( \frac{\xi_{\nu_i}}{\pi} \right)^2 + \left( \frac{\xi_{\nu_i}}{\pi}\right)^4 \right].
\end{eqnarray}
In what follows, let us derive the expected sensitivities on $\xi_{\nu}$ from the predictions of some future CMB experiments.
\section{Methodology}
\label{methodology}
Let us now assess the ability of future CMB experiments to constrain the neutrino lepton asymmetry as well as the neutrino mass scale.
We follow the common approach already used elsewhere \citep*[see e.g.][]{DiVal,Finelli}, based on mock data for some possible future experimental
configurations, assuming a fiducial flat $\Lambda$CDM model compatible with the
Planck 2018 results. We have used the publicly available Boltzmann code CLASS \citep{class} to compute the theoretical CMB angular power spectra
$C_l^{TT}$, $C_l^{TE}$, $C_l^{EE}$ for temperature, cross temperature-polarization and polarization. Together with the primary anisotropy signal,
we have also taken into account information from CMB weak lensing,
considering the power spectrum of the CMB lensing potential $C_l^{PP}$. These missions are clearly also sensitive to the BB lensing polarization signal,
but here we take the conservative approach of not including it in the forecasts.\\
In our simulations, we have used an instrumental noise given by the usual expression
\begin{eqnarray}
N_l = w^{-1} \exp \left( l(l+1)\theta^2/(8 \ln 2) \right),
\end{eqnarray}
where $\theta$ is the experimental FWHM angular resolution and $w^{-1}$ is the experimental power noise expressed in
$\mu$K-arcmin. The total variance of the multipoles $a_{lm}$ is therefore given by the sum of the fiducial $C_l$'s with the instrumental noise $N_{l}$.\\
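A minimal sketch of this noise model in Python, using the CORE and S4 specifications of Table \ref{tab1} (assuming the beam is given in arcmin and the noise in $\mu$K-arcmin), is:
\begin{verbatim}
import numpy as np

def noise_spectrum(ells, theta_arcmin, w_muK_arcmin):
    arcmin = np.pi / (180.0 * 60.0)        # arcmin to radians
    theta = theta_arcmin * arcmin
    w_inv = (w_muK_arcmin * arcmin) ** 2   # power noise w^{-1}
    return w_inv * np.exp(ells * (ells + 1.0) * theta**2
                          / (8.0 * np.log(2.0)))

ells = np.arange(2, 3001)
n_core = noise_spectrum(ells, 6.0, 2.5)
n_s4   = noise_spectrum(ells, 3.0, 1.0)
\end{verbatim}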
The simulated experimental data are then compared with a theoretical model assuming a Gaussian likelihood $\mathcal{L}$ given by
\begin{eqnarray}
- 2 \ln \mathcal{L} = \sum_l (2l + 1) f_{sky} \Big( \frac{D}{|\bar{C}|} + \ln \frac{|\bar{C}|}{|\hat{C}|} -3 \Big),
\end{eqnarray}
where $\bar{C_l}$ and $\hat{C_l}$ are the assumed fiducial and theoretical spectra plus noise, respectively, and $|\bar{C}|$ and $|\hat{C}|$
are the determinants of the corresponding data covariance matrices, given by
\begin{eqnarray}
|\bar{C}| = \bar{C_l}^{TT} \bar{C_l}^{EE}\bar{C_l}^{PP} - (\bar{C_l}^{TE})^2 \bar{C_l}^{PP} - (\bar{C_l}^{TP})^2 \bar{C_l}^{EE},
\end{eqnarray}
\begin{eqnarray}
|\hat{C}| = \hat{C}^{TT} \hat{C}^{EE} \hat{C}^{PP} - (\hat{C}^{TE})^2 \hat{C}^{PP} - (\hat{C}^{TP})^2 \hat{C}^{EE},
\end{eqnarray}
$D$ is defined as
\begin{eqnarray}
D = \hat{C}^{TT} \bar{C_l}^{EE} \bar{C_l}^{PP} + \bar{C_l}^{TT}\hat{C}^{EE}\bar{C_l}^{PP} + \bar{C_l}^{TT}\bar{C_l}^{EE}\hat{C}^{PP} \nonumber \\
- \bar{C_l}^{TE}( \bar{C_l}^{TE} \hat{C}^{PP} + 2 \hat{C}^{TE} \bar{C_l}^{PP}) \nonumber \\
- \bar{C_l}^{TP} (\bar{C_l}^{TP} \hat{C}^{EE} + 2 \hat{C}^{TP} \bar{C_l}^{EE}),
\end{eqnarray}
and finally $f_{sky}$ is the sky fraction sampled by the experiment after foreground removal.\\
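A sketch of the per-multipole likelihood evaluation in Python (the dictionaries \texttt{barC} and \texttt{hatC} are assumed to hold the fiducial and theoretical spectra plus noise for the keys TT, EE, PP, TE, TP) is:
\begin{verbatim}
import numpy as np

def minus2lnL_ell(ell, fsky, barC, hatC):
    detbar = (barC['TT']*barC['EE']*barC['PP']
              - barC['TE']**2 * barC['PP'] - barC['TP']**2 * barC['EE'])
    dethat = (hatC['TT']*hatC['EE']*hatC['PP']
              - hatC['TE']**2 * hatC['PP'] - hatC['TP']**2 * hatC['EE'])
    D = (hatC['TT']*barC['EE']*barC['PP']
         + barC['TT']*hatC['EE']*barC['PP']
         + barC['TT']*barC['EE']*hatC['PP']
         - barC['TE']*(barC['TE']*hatC['PP'] + 2*hatC['TE']*barC['PP'])
         - barC['TP']*(barC['TP']*hatC['EE'] + 2*hatC['TP']*barC['EE']))
    return (2*ell + 1) * fsky * (D/detbar + np.log(detbar/dethat) - 3.0)
\end{verbatim}
The total $-2\ln\mathcal{L}$ is then the sum of this quantity over the multipole range of Table \ref{tab1}.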
\noindent In Table \ref{tab1} we have summarized the experimental specifications for CMB-CORE and CMB-S4 data.
Forecasts based on future CMB experiments to probe neutrino properties were also investigated in
\cite{Capparelli,Brinckmann,Mishra}. Specifically for CMB-S4, we use an experimental specification different from that presented in
\cite{Mishra} and \cite{S4} ($l_{\rm max} = 5000$): we avoid the systematic errors at the highest angular resolutions ($l>3000$)
produced by noise in the CMB weak lensing information, which could otherwise yield more optimistic results than previous analyses.
\begin{table*}
\centering
\caption{Experimental specifications for CORE and S4: beam width, power noise sensitivity in temperature and polarization, multipole range, and sky fraction.}
\label{tab1}
\begin{tabular}{lccccr}
\hline
Experiment & Beam [arcmin] & Power noise [$\mu$K-arcmin] & $l_{\rm min}$ & $l_{\rm max}$ & $f_{\rm sky}$ \\
\hline
CMB-CORE & 6.0 & 2.5 & 2 & 3000 & 0.7 \\
CMB-S4 & 3.0 & 1.0 & 50 & 3000 & 0.4 \\
\hline
\end{tabular}
\end{table*}
\section{Results}
\label{Results}
We have used the publicly available CLASS \citep{class} and Monte Python \citep{monte} codes for the model considered in the present work, where we introduced the $\xi_{\nu}$ corrections to $N_{\rm eff}$ defined in equation (\ref{Delta_Neff}) into the CLASS code. We have considered one massive and two massless neutrino states, as is standard in the literature, and we fixed the mass ordering to the normal hierarchy with the minimum mass $\sum m_{\nu} = 0.06$ eV; the expected sensitivities on the total neutrino mass are essentially independent of the neutrino mixing parameters, as concluded in \cite{Castorina}.\\
\noindent The individual neutrino flavour asymmetries can in principle be different if we take into account the effect of oscillations and collisions around the epoch of neutrino decoupling, which means that equations (\ref{asymmetry}) and (\ref{Delta_Neff}) are not necessarily valid \citep{Pastor,Castorina}. However, following \cite{Dominik02}, we assume a single value of $\xi_{\nu}$: for the values of the neutrino mixing parameters preferred by global fits of oscillation data, and in particular that of $\sin^2 \theta_{13}$, the impact of a lepton asymmetry can be approximated by choosing a common value $\xi_{\nu}$ for the degeneracy parameters \citep{2002NuPhB.632..363D,xi4,Mangano}.\\
\noindent On the other hand, \cite{Castorina} shows how the addition of flavor oscillations produces strong constraints on the total neutrino asymmetry, whose bounds are dominated mainly by the limits imposed by BBN tests. However, although the impact of the combined effect of BBN and flavor oscillations on these limits is evident, we are more interested in showing the impact of the improved sensitivity of future CMB experiments.\\
\noindent In our forecasts, we have assumed the set of the cosmological parameters:
\[
\{100 \omega_{\rm b}, \, \omega_{\rm cdm}, \, \ln10^{10}A_{s}, \,
n_s, \, \tau_{\rm reio}, \, H_0, \, \sum m_{\nu}, \, \xi_{\nu} \}.
\]
where the parameters are: the baryon density, the cold dark matter density, the amplitude
and slope of the primordial spectrum of metric fluctuations, the optical depth to reionization, the Hubble constant,
the neutrino mass scale, and the degeneracy parameter characterizing the degree of lepton asymmetry, respectively.
In the forecast, we assume fiducial values of \{ 2.22, 0.119, 3.07, 0.962, 0.05, 68.0, 0.06, 0.05
\footnote{This value is in accordance with the results obtained in \citep{Nunes} (Table III)
and \cite{Castorina}.}\}, which are obtained from our analysis performed for Planck 2018.\\
\begin{table*}
\centering
\caption{Summary of the observational constraints from the CORE and S4 experiments. The notation $\sigma{\rm (CORE)}$ and $\sigma{\rm (S4)}$
represents the 68 percent CL estimates around the fiducial values from
CORE and S4, respectively. The parameter $H_0$ is in km s${}^{-1}$ Mpc${}^{-1}$ units and $\sum m_{\nu}$ is in eV units.}
\label{results}
\begin{tabular} { l l l l l l }
\hline
Parameter & Fiducial value & $\sigma{\rm (CORE)}$ & $\sigma{\rm (S4)}$ \\
\hline
{$10^{2}\omega_{b }$} & 2.22 & 0.000057 & 0.00012 \\\\
{$\omega_{cdm } $} & 0.11919 & 0.00037 & 0.0000093 \\\\
{$H_0$} & 68.0 & 0.32 & 0.0088 \\\\
{$\ln10^{10}A_{s}$} & 3.0753 & 0.0056 & 0.0035 \\\\
{$n_{s} $} & 0.96229 & 0.0022 & 0.0054 \\\\
{$\tau_{\rm reio} $} & 0.055 & 0.0028 & 0.00025 \\\\
{$\sum m_{\rm \nu}$} & 0.06 & 0.024 & 0.00053 \\\\
{$\xi_{\nu}$} & 0.05 & 0.071 & 0.027 \\\\
\hline
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=9cm]{Planck_Core.pdf}
\caption{One-dimensional marginalized distribution and 68 percent CL and 95 percent CL regions for some selected parameters taking into
account Planck and CORE experiments.}
\label{Planck_Core}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{Core_S4.pdf}
\caption{One-dimensional marginalized distribution and 68 percent CL and 95 percent CL regions for some selected parameters taking
into account CORE and S4 experiments.}
\label{Core_S4}
\end{figure}
\noindent Table \ref{results} shows the constraints on the baseline model imposed by the CORE and S4 experiments. Figs \ref{Planck_Core} and \ref{Core_S4} show the parameter space for some parameters of
interest in this work, from the Planck/CORE and CORE/S4 constraints, respectively. From the Planck data, we note that the degeneracy parameter is constrained to $\xi_{\nu} = 0.05 \pm 0.20$ ($\pm 0.33$) at 68 percent CL and 95 percent CL, a result compatible with the null hypothesis even at 1$\sigma$ CL. In \cite{Dominik02}, the authors obtain $\xi_{\nu} = -0.002^{+0.114}_{-0.11}$ at 95 percent CL from Planck data. Evidence for a cosmological lepton asymmetry from CMB data has been reported by \cite{xi11}.\\
\noindent On the other hand, the constraints on the degeneracy parameter are close to the null value also within the accuracy achieved by CORE data, $\xi_{\nu}=0.05\pm 0.071$ ($\pm 0.11$) at 68 percent CL and 95 percent CL, being compatible with the null hypothesis even at 1$\sigma$ CL, as in the case of the Planck data used in this work. However, with the accuracy expected from CMB-S4, we find $\xi_{\nu}=0.05\pm 0.027$ ($\pm 0.043$) at 68 percent CL (95 percent CL), respectively. These constraints can rule out the null hypothesis at up to 2$\sigma$ CL on $\xi_{\nu}$. In principle, this last result opens the door to the possibility of unveiling the physical nature of neutrinos: the neutrinos could be Dirac particles, against the null hypothesis, and not Majorana particles as established by that hypothesis. However, these results must be firmly established from the particle physics point of view, for example by ground-based experiments such as PandaX-III (Particle And Astrophysical Xenon Experiment III), which will explore the nature of neutrinos, including physical properties such as the absolute scale of the neutrino masses and the aforementioned violation of lepton number conservation, through Neutrinoless Double Beta Decay (NLDBD)\footnote{In nuclear physics, Double Beta Decay (DBD) is a second-order weak-interaction radioactive decay process observed experimentally in several isotopes, in which two electrons (positrons) and two antineutrinos (neutrinos) are emitted simultaneously from the decaying nucleus (protons turning into neutrons or vice versa). If neutrinos are Majorana particles, a second DBD mode is possible, where a nucleus decays by emitting just two electrons (positrons) without antineutrinos (neutrinos), which are exchanged between the decaying nucleons.}, whose observation would be a clear signal that neutrinos are their own antiparticles \citep*[for more details see][]{Chen}. These results could be available within the first 5 - 10 years of the next decade.\\
\noindent In Fig. \ref{Planck_Core} we note a strong anti-correlation between the neutrino masses and $H_0$, which will increase the tension between the local and global measurements of $H_0$ if the neutrino masses increase (thereby decreasing the value of $H_0$); constraints on these parameters must therefore be interpreted cautiously until this tension is better understood. Within the standard base-$\Lambda$CDM cosmology, the Planck Collaboration \citep{Planck2018} reports $H_0=67.36\pm 0.54\,km\,s^{-1}Mpc^{-1}$, which is in tension at more than 99 percent CL with the locally measured value $H_0=72.24\pm 1.74\,km\,s^{-1}Mpc^{-1}$ reported in \cite{riess}. We obtain $H_0 = 68.00 \pm 2.32$ ($\pm 3.78$) $km\,s^{-1}Mpc^{-1}$ at 68 percent CL and 95 percent CL for our model with Planck data, which can reduce the tension between the global and local values of $H_0$ to within $2\sigma$. The difference between our results and Planck 2018 is due to our extended parameter space. On the other hand, from the Planck data analysis we note that the neutrino mass scale is constrained to $\sum m_{\rm \nu} < 0.36 $ eV at 95 percent CL, which is in good agreement with the value obtained by the Planck Collaboration, i.e., $\sum m_{\rm \nu} < 0.24 $ eV \citep{Planck2018}. From the $\sum m_{\rm \nu}-H_0$ plane, we note that no relevant changes are obtained with respect to the mass splitting, which requires $\sum m_{\rm \nu} < 0.1 $ eV to rule out the inverted mass hierarchy ($m_2\gtrsim m_1 \gg m_3$). However, these results start to favor the normal hierarchy scheme ($m_1 < m_2 \ll m_3$), which becomes evident in our results with the CORE and S4 predictions. The results from CORE and S4 present considerable improvements with respect to Planck data; see Figs \ref{Planck_Core} and \ref{Core_S4}. With respect to the neutrino mass scale bounds imposed by the CORE and S4 data, we find the limits $ 0.021 < \sum m_{\rm \nu} \lesssim 0.1$ eV and $0.05913 < \sum m_{\rm \nu} \lesssim 0.061$ eV at 95 percent CL, for CORE and S4, respectively, thus disfavoring the inverted mass hierarchy scheme at least at 95 percent CL in both cases.\\
\noindent In the standard scenario of three active neutrinos, including the effects of non-instantaneous decoupling, we have $N_{\rm eff}=3.046$; we emphasize that in all the analyses this value was kept fixed. It is well known that the lepton asymmetry increases the radiation energy density in the form $N_{\rm eff} = 3.046 + \Delta N^{\xi_{\nu}}_{\rm eff}$, where $\Delta N^{\xi_{\nu}}_{\rm eff}$ is induced by the lepton asymmetry via equation (\ref{Delta_Neff}):
\begin{eqnarray}
\label{Delta_Neff_2}
\Delta N^{\xi_{\nu}}_{\rm eff} = \frac{60}{7} \left( \frac{\xi_{\nu}}{\pi} \right)^2 + \frac{30}{7}\left( \frac{\xi_{\nu}}{\pi}\right)^4,
\end{eqnarray}
where the sum in equation (\ref{Delta_Neff}) now runs over $i = 1, 2$ only (the two massless neutrino states). Without loss of generality, we can evaluate the contribution $\Delta N^{\xi_{\nu}}_{\rm eff}$ via standard error propagation. We find $\Delta N^{\xi_{\nu}}_{\rm eff}=0.002\pm 0.019$ ($\pm0.030$) for Planck data, $\Delta N^{\xi_{\nu}}_{\rm eff} = 0.0022 \pm 0.0083$ ($\pm0.013$) for CORE data and $\Delta N^{\xi_{\nu}}_{\rm eff} = 0.0022\pm 0.0045$ ($\pm0.0059$) for S4 data, all limits at 68 percent and 95 percent CL. Therefore, in general terms, we can assert that the contribution of $\xi_{\nu}$ to $N_{\rm eff}$ is very small. But in the case of CMB-S4, even though this contribution is very small, it can be non-null.
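A quick numerical check of equation \eqref{Delta_Neff_2} (a sketch in Python): for the fiducial $\xi_{\nu} = 0.05$, the induced excess indeed matches the central value $0.0022$ quoted above.
\begin{verbatim}
from math import pi

def delta_neff(xi):
    return (60.0/7.0) * (xi/pi)**2 + (30.0/7.0) * (xi/pi)**4

print(delta_neff(0.05))   # ~0.0022
\end{verbatim}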
\section{Conclusions}
\label{conclusions}
In this work, we have derived new constraints on the lepton asymmetry through the degeneracy parameter, using the CMB angular power spectrum from the Planck data and from future CMB experiments such as CORE
and CMB-S4. We have analyzed the impact of a lepton asymmetry on $N_{\rm eff}$ where, as expected, we find very small corrections $\Delta N_{\rm eff}$, but corrections that cannot be neglected at the level of CMB-S4 experiments, although it should be kept in mind that the sensitivity results obtained for CMB-S4 do not include all expected systematic errors, as mentioned previously \citep*[for similar considerations and more detailed information see][]{2015PhRvD..92l3535A}. Within this cosmological scenario, we have also investigated the neutrino mass scale in combination with the cosmological lepton asymmetry. We have found strong limits on $\sum m_{\rm \nu}$, where the mass scale for both the CORE and CMB-S4 configurations is well bounded as $\sum m_{\rm \nu} < 0.1$ eV at 95 percent CL,
therefore favoring a normal hierarchy scheme within the perspective adopted here.\\
\noindent As a future perspective, it would be interesting to consider a neutrino asymmetry interacting with the dark sector of the Universe, and to see how this coupling can affect the neutrino and dark matter/dark energy properties, as well as possible new corrections to $\Delta N_{\rm eff}$ due to such an interaction, properly including the effect of flavor oscillations and the galaxy bias due to neutrinos, which have recently been addressed in the literature. For more details on these topics and a deeper discussion, we cordially refer the reader to \cite{Pastor,Castorina}, where it can be seen that a proper implementation of the oscillations can significantly improve the constraints on the main physical properties of massive neutrinos, and, concerning the neutrino-induced galaxy bias, to \cite{Vagnozzi2, Giusarma2}, where it is suggested that proper modeling of the bias parameter is necessary in order to reduce the impact of non-linearities
and to minimize systematics, in addition to the need to correct for the neutrino-induced scale-dependent bias, whose correct implementation is still under construction within the cosmology community.
\section{Acknowledgments}
The authors thank the referee for his/her valuable comments and suggestions. E.M.C. Abreu thanks CNPq (Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico), Brazilian scientific support federal agency,
for partial financial support, grant number 302155/2015-5.
\section{Introduction}
\indent In 1997, El Karoui, Kapoudjian, Pardoux, Peng and Quenez \cite{KKPPQ} introduced reflected backward stochastic differential equations (RBSDEs) as follows:
\begin{enumerate}[label=\roman{*}), ref=(\roman{*})]
\item $Y_t = \xi + \int_t^T f(s, Y_s, Z_s ) ds + K_T -K_t -\int_t^T (Z_s, dB_s)$, $~~~0 \leq t \leq T$;
\item $Y_t \geq S_t $, $~~~0 \leq t \leq T$;
\item $\{K_t, t \in [0,T]\}$ is continuous and increasing, moreover, $K_0=0$ and \\
$\int_0^T (Y_s - S_s) dK_s = 0$.
\end{enumerate}
Here $B$ is a Brownian motion. The solution $\{(Y_t,Z_t,K_t),~t\in[0,T]\}$ consists of $\mathcal{F}_t$-progressively measurable processes, and $Y$ is forced to stay above a process $S$ called an obstacle. To do so, a continuous increasing process $K$ is introduced in the dynamics.\\
\indent El Karoui, Pardoux and Quenez \cite{KPQ1} gave an application of RBSDEs, driven by a Brownian motion, to the optimal stopping time problem and American options. It has been shown that the price of an American option, as well as a superhedging strategy for the option, are solutions to RBSDEs. \\
\indent Hamad{\`{e}}ne and Ouknine \cite{ham1} extended continuous RBSDEs to RBSDEs with jumps. They investigated an RBSDE driven by a Brownian motion and an independent Poisson process. Moreover, instead of being continuous, the obstacle is just right continuous with left limits. They provided another solution of the problem, in Hamad{\`{e}}ne and Ouknine \cite{ham2}, using Snell envelope theory. A similar result was also obtained by Essaky \cite{essaky}. Other significant results on BSDEs and RBSDEs with jumps are the works of Crepey and Matoussi \cite{crepy1} and Bouchard and Elie \cite{bouch}; Crepey and Matoussi \cite{crepy1} deal with more general dynamics.\\
\indent None of the above works used a Markov chain to model the jumps. Moreover, diffusions can be approximated by Markov chains; see the work of Kushner \cite{kushner84}. Consequently, there is some motivation for discussing Markov chain models. van der Hoek and Elliott \cite{RE1} introduced a market model where uncertainties are modeled by a finite state Markov chain, rather than by Brownian motion or related jump diffusions. In this paper uncertainty is modeled using a Markov chain. Another tool used in van der Hoek and Elliott \cite{RE1} is the presence of a stochastic discount function (SDF), which implies no-arbitrage pricing. Kluge and Rogers \cite{rogers1}, Rogers \cite{rogers2}, and Rogers and Zane \cite{rogers3} use the term \enquote{potential} for stochastic discount functions modelled by Markov processes. Rogers and Yousaf \cite{rogers4} combined Markov chain models and the potential approach to model interest rates and exchange rates. It is stated in the work of Rogers and co-authors that taking the Markov process to be a finite state Markov chain gives better results. Moreover, the computation of the pricing formula in the potential approach is reduced to a finite weighted sum. In \cite{RE1}, stock prices are determined by the model, given the dividend paid by the stock, which in turn depends, at each time, on the state of the Markov chain. SDFs are used to give the current price of future cashflows. Current prices of financial products such as bonds, foreign currencies, futures and European options were also derived in van der Hoek and Elliott \cite{RE1}. Later, van der Hoek and Elliott \cite{RE2} proved that the price of an American option in the Markov chain model with an SDF is a solution of a variational inequality driven by a system of ordinary differential equations.\\
\indent In the present work, we shall discuss American options in van der Hoek and Elliott's framework using an RBSDE approach. BSDEs in this framework were introduced by Cohen and Elliott \cite{Sam1} as
\[Y_t = \xi + \int_t^T f(u, Y_u,Z_u) du - \int_t^T Z'_{u-} dM_u,~~~ t \in [0,T],\]
where, $f$ is the driver, $\xi$ is the terminal condition and $M$ is a vector martingale given by the dynamics of the Markov chain. \\
\indent An, Cohen and Ji \cite{An} discuss American options using the theory of RBSDEs for the Markov chain in discrete time. This approach, as well as the above results on RBSDEs for Brownian motion, has not been investigated in a finite state Markov chain framework with an SDF in continuous time. Also, in the American option problem, as the holder of the option has the freedom to exercise at any time prior to maturity, most studies focus on determining the optimal exercise time for the holder and the associated optimal price. Instead of determining the option price, we consider the other party's side of the contract and show the existence of a superhedging strategy which covers the option's payoff at any time prior to maturity, in case the holder exercises the option. \\
\indent The sections of the paper are as follows: In Section 2, we present the Markov chain model and some preliminary results. Section 3 establishes the existence and uniqueness of solutions for RBSDEs under the Markov chain model, and in section 4, we discuss an application to American options, where we show that a superhedging strategy exists as the solution to an RBSDE with the Markov chain noise.
\section{The Model and Some Preliminary Results.}\label{prelim}
\subsection{The Markov Chain}
\indent Consider a continuous time financial market where randomness is modeled by a finite state Markov chain. Following van der Hoek and Elliott \cite{RE1, RE2}, we assume the finite state Markov chain $X=\{X_t: t\geq 0 \}$ is defined on the probability space $(\Omega,\mathcal{F},P)$ and the state space of $X$ is identified with the set $\{e_1,e_2\cdots,e_N\}$ in $\mathbb{R}^N$, where $e_i=(0,\cdots,1\cdots,0) ' $ with 1 in the $i$-th position. Then the Markov chain has the semimartingale representation:
\begin{equation}\label{semimartingale}
X_t=X_0+\int_{0}^{t}A_uX_udu+M_t.
\end{equation}
Here, $A=\{A_t, t\geq 0 \}$ is the rate matrix of the chain $X$ and $M$ is a vector martingale (see Elliott, Aggoun and Moore \cite{RE4}). We assume the elements $A_{ij}(t)$ of $A$ are bounded. Then the martingale $M$ is square integrable.
Take $\mathcal{F}_t=\sigma\{X_u | 0\leq u \leq t\}$ to be the $\sigma$-algebra generated by the Markov process $X=\{X_t\}$ and $\{\mathcal{F}_t\}$ to be the filtration on $(\Omega,\mathcal{F},P)$. Since $X$ is right continuous and has left limits (written RCLL), the filtration $\{\mathcal{F}_t\}$ is also right-continuous. \\
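A minimal simulation of the representation \eqref{semimartingale} in Python (a sketch, assuming a constant rate matrix $A$ whose columns sum to zero, so that the martingale increment is $dM = dX - A X\,dt$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def simulate_chain(A, i0, T, dt):
    N = A.shape[0]
    X = np.eye(N)[i0]                  # start in state e_{i0}
    M = np.zeros(N)
    path = [(X.copy(), M.copy())]
    for _ in range(int(T / dt)):
        i = int(np.argmax(X))
        X_new = X.copy()
        if rng.random() < -A[i, i] * dt:        # leave state i?
            off = np.delete(np.arange(N), i)
            w = A[off, i] / (-A[i, i])          # conditional jump law
            X_new = np.eye(N)[rng.choice(off, p=w)]
        M += (X_new - X) - (A @ X) * dt         # dM = dX - A X dt
        X = X_new
        path.append((X.copy(), M.copy()))
    return path

path = simulate_chain(np.array([[-1.0, 2.0], [1.0, -2.0]]), 0, 5.0, 1e-3)
\end{verbatim}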
\indent We refer the reader to Buchanan and Hildebrandt \cite{Egg} for the proof of the following lemma.
\begin{lemma} \label{polya}
If a sequence $f_n(x)$ of monotonic functions converges to a continuous function $f(x)$ in $[a,b]$, then this convergence is uniform.
\end{lemma}
\indent The following is given in Elliott \cite{elliott} as Lemma 2.21:
\begin{lemma}\label{indistinguish}
Suppose $V$ and $Y$ are real valued processes defined on the same probability space $(\Omega, \mathcal{F},P)$ such that for every $t \geq 0$, $V_t = Y_t$, a.s. If both processes are right continuous, then $V$ and $Y$ are indistinguishable, that is:
$$P(V_t = Y_t,~ \text{for any}~ t\geq 0)=1. $$
\end{lemma}
Denote by $P'$ the transpose of any $\mathbb{R}^{n\times p}$ matrix $P$ for any $p,n \in \mathbb{N}$, $\text{diag} (x)$ for any $x\in \mathbb{R}^n$, the matrix whose diagonal components are the entries of the vector $x$ and the remaining components are zero and similarly $\text{diag} (M)$ for any $M \in \mathbb{R}^{n\times n}$ the square matrix whose diagonal components are those of $M$ and the remaining components are zero.\\
\indent For our Markov chain $X_t \in \{e_1,\cdots,e_N\}$, note that $X_t X'_t = \text{diag}(X_t)$. Also, from \eqref{semimartingale}
$dX_t = A_t X_t dt + dM_t$. Then,
\begin{align}\label{1}
\nonumber X_tX'_t &= X_0X'_0 + \int_0^t X_{u-} dX'_u + \int_0^t (dX_{u}) X'_{u-} + \sum_{0 < u \leq t} \Delta X_u \Delta X'_u \\
\nonumber &= \text{diag}(X_0) + \int_0^t X_u (A_uX_u)' du + \int_0^t X_{u-} dM'_u \\
\nonumber& + \int_0^t A_u X_u X'_{u-} du + \int_0^t (dM_u) X'_{u-} + [X,X]_t\\
\nonumber
& = \text{diag} (X_0) + \int_0^t X_u X'_u A'_u du + \int_0^t X_{u-} dM'_u \\
& + \int_0^t A_u X_u X'_{u-} du + \int_0^t (dM_u) X'_{u-} + [X,X]_t - \left\langle X,X\right\rangle_t + \left\langle X,X \right\rangle_t.
\end{align}
Here, $\left\langle X, X\right\rangle$ is the unique predictable process such that $[X,X]-\left\langle X,X \right\rangle$ is a martingale, and we write
\begin{equation}\label{L_t}
L_t = [X,X]_t - \left\langle X,X\right\rangle_t, \quad t \in [0,T].
\end{equation}
However, we also have:
\begin{equation}\label{2}
X_tX'_t = \text{diag} (X_t) = \text{diag}(X_0) + \int_0^t \text{diag} (A_u X_u) du + \int_0^t \text{diag}(dM_u).
\end{equation}
Equating the predictable terms in \eqref{1} and \eqref{2}, we have
\begin{equation}\label{3}
\left\langle X, X\right\rangle_t = \int_0^t \text{diag}(A_uX_u) du - \int_0^t \text{diag}({X_u}) A'_u du - \int_0^t A_u \text{diag}(X_u) du.
\end{equation}
\indent Let $\Psi$ be the matrix
\begin{equation}\label{Psi}\Psi_t = \text{diag}(A_tX_t)- \text{diag}(X_t)A'_t - A_t \text{diag}(X_t).
\end{equation}
Then $d \left\langle X,X \right\rangle_t = \Psi_t dt$. For any $t>0$, Cohen and Elliott \cite{Sam1, Sam3} define the semi-norm $\|.\|_{X_t}$, for
$C, D \in \mathbb{R}^{N\times K}$, as:
\begin{align*}
\left\langle C, D\right\rangle_{X_t} & = Tr(C' \Psi_t D), \\
\|C\|^2_{X_t} & = \left\langle C, C\right\rangle_{X_t}.
\end{align*}
We only consider the case where $C \in \mathbb{R}^N$, hence we
introduce the semi-norm $\|.\|_{X_t}$ as:
\begin{align}\label{normC}
\nonumber
\left\langle C, D\right\rangle_{X_t} & = C' \Psi_t D, \\[2mm]
\|C\|^2_{X_t} & = \left\langle C, C\right\rangle_{X_t}.
\end{align}
It follows from equation \eqref{3} that
\[\int_t^T \|C\|^2_{X_s} ds = \int_t^T C' d\left\langle X, X\right\rangle_s C.\]
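A small numerical sketch of $\Psi_t$ and the semi-norm \eqref{normC} in Python (an illustration with an assumed two-state rate matrix, using the same convention as in \eqref{semimartingale}):
\begin{verbatim}
import numpy as np

def psi_matrix(A, X):
    # Psi_t = diag(A X) - diag(X) A' - A diag(X)
    return np.diag(A @ X) - np.diag(X) @ A.T - A @ np.diag(X)

def seminorm_sq(C, A, X):
    return float(C @ psi_matrix(A, X) @ C)   # ||C||_{X_t}^2 = C' Psi_t C

A = np.array([[-1.0, 2.0], [1.0, -2.0]])     # columns sum to zero
X = np.array([1.0, 0.0])                     # chain in state e_1
C = np.array([1.0, -1.0])
print(seminorm_sq(C, A, X))                  # 4.0, nonnegative as expected
\end{verbatim}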
For $n \in \mathbb{N}$, denote by $|\cdot|_n$ the Euclidian norm in $\mathbb{R}^n$ and by $\|\cdot\|_{n\times n}$ the norm in $\mathbb{R}^{n \times n}$ such that $\|\Psi\|_{n\times n}= \sqrt{Tr(\Psi' \Psi)}$ for any $\Psi \in \mathbb{R}^{n \times n}$.\\
\indent The following lemma is Lemma 3.5 in \cite{zhedim}.
\begin{lemma}\label{normbound}
For any $C \in \mathbb{R}^N$,
$$ ~~~~\|C\|_{X_t} \leq \sqrt{3m} |C|_N, ~~\text{ for any }t\in[0,T],$$
where $m>0$ is the bound of $\|A_t\|_{N\times N}$, for any $t\in[0,T]$.
\end{lemma}
The proof of the following lemma is found in \cite{Sam3}:
\begin{lemma}\label{Z2}
For $Z$, a predictable process in $\mathbb{R}^N$, verifying:
\[E \left[ \int_0^t \|Z_u\|^2_{X_u} du\right] < \infty,\]
we have:
\begin{equation*}
E \left[\left(\int_0^t Z'_{u} dM_u \right)^2\right] = E \left[ \int_0^t \|Z_u\|^2_{X_u} du\right].
\end{equation*}
\end{lemma}
Denote by $\mathcal{P}$, the $\sigma$-field generated by the predictable processes defined on $(\Omega, P, \mathcal{F})$ and with respect to the filtration $\{\mathcal{F}_t\}_{t \in [0,\infty)}$. For $t\in[0,\infty)$, consider the following spaces: \\[2mm]
$ L^2(\mathcal{F}_t): =\{\xi;~\xi$ is a $ \mathbb{R} \text{-valued}~ \mathcal{F}_t $-measurable random variable such that $ E[|\xi|^2]< \infty\};$\\[2mm]
$L^2_{\mathcal{F}}(0,t;\mathbb{R}): =\{\phi:[0,t]\times\Omega\rightarrow\mathbb{R};~ \phi$ is an adapted and RCLL process with $E[\int^t_0|\phi(s)|^2ds]<+\infty\}$;\\[2mm]
$P^2_{\mathcal{F}}(0,t;\mathbb{R}^N): =\{\phi:[0,t]\times\Omega\rightarrow\mathbb{R}^N;~ \phi $ is a predictable process with $E[\int^t_0\|\phi(s)\|_{X_s}^2ds]<+\infty\}.$
\subsection{BSDEs for the Markov Chain Model.}\label{bsdeMC}
\indent Consider a one-dimensional BSDE with the Markov chain noise
as follows:
\begin{equation}\label{BSDEMC}
Y_t = \xi + \int_t^T f(u, Y_u, Z_u ) du -\int_t^T Z'_{u} dM_u
,~~~~~t\in[0,T].
\end{equation}
Here the terminal condition $\xi$ and the coefficient $f$ are known. \\
\indent Lemma \ref{existence} (Theorem 6.2 in Cohen and Elliott \cite{Sam1}) gives the existence and uniqueness result of solutions for BSDEs
driven by Markov chains.
\begin{lemma}\label{existence}
Assume $\xi\in L^2(\mathcal{F}_T)$ and the predictable
function $f: \Omega \times [0, T] \times \mathbb{R} \times
\mathbb{R}^N \rightarrow \mathbb{R}$ satisfies a Lipschitz
condition, in the sense that there exists some constants $l_1, l_2>0$ such
that for each $y_1,y_2 \in \mathbb{R}$ and $z_1,z_2 \in
\mathbb{R}^{N}$, $t\in[0,T]$,
\begin{equation}\label{Lipchl}
|f(t,y_1,z_1) - f(t, y_2, z_2)| \leq l_1 |y_1-y_2| + l_2 \|z_1
-z_2\|_{X_t}.
\end{equation}
We also assume $f$ satisfies
\begin{equation}\label{finite}
E [ \int_0^T |f^2(t,0,0)| dt] <\infty.
\end{equation} Then there exists a solution $(Y, Z)\in L^2_{\mathcal{F}}(0,T;\mathbb{R})\times P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$
to BSDE (\ref{BSDEMC}). Moreover, this solution is
unique up to indistinguishability for $Y$ and equality $d\langle
X,X\rangle_t$ $\times\mathbb{P}$-a.s. for $Z$.
\end{lemma}
The following lemma as an extension of the above lemma to stopping times can be found in Cohen and Elliott \cite{Sam3}.
\begin{lemma}\label{BSDEST} Let $\tau >0$ be a stopping time such that there exists a real value $T$ such that $P(\tau > T)=0$.
Under the assumptions of Lemma \ref{existence}, with $T$ replaced by $\tau$, the BSDE for the Markov chain with stopping time
\begin{equation}
Y_t = \xi + \int_{t\wedge\tau}^{\tau} f(s, Y_s, Z_s ) ds -\int_{t\wedge\tau}^{\tau} Z'_{s} dM_s
,~~~~~t\geq 0.
\end{equation}
has a solution $(Y, Z)\in L^2_{\mathcal{F}}(0,\tau;\mathbb{R})\times P^2_{\mathcal{F}}(0,\tau;\mathbb{R}^N)$. Moreover, this solution is
unique up to indistinguishability for $Y$ and equality $d\langle
X,X\rangle_t$ $\times\mathbb{P}$-a.s. for $Z$.
\end{lemma}
See Campbell and Meyer \cite{campbell} for the following definition:
\begin{definition}[Moore-Penrose pseudoinverse]\label{defMoore}
The Moore-Penrose pseudoinverse of a square matrix $Q$ is the matrix $Q^{\dagger}$ satisfying the properties:\\[2mm]
1) $QQ^{\dagger}Q = Q$ \\[2mm]
2) $Q^{\dagger}QQ^{\dagger} = Q^{\dagger}$ \\[2mm]
3) $(QQ^{\dagger})' = QQ^{\dagger}$ \\[2mm]
4) $(Q^{\dagger}Q)'=Q^{\dagger}Q.$
\end{definition}
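The four properties can be checked numerically with \texttt{numpy}'s pseudoinverse; a quick sketch for a singular matrix $Q$ is:
\begin{verbatim}
import numpy as np

Q = np.array([[1.0, 2.0], [2.0, 4.0]])    # rank one, not invertible
Qd = np.linalg.pinv(Q)

assert np.allclose(Q @ Qd @ Q, Q)         # property 1)
assert np.allclose(Qd @ Q @ Qd, Qd)       # property 2)
assert np.allclose((Q @ Qd).T, Q @ Qd)    # property 3)
assert np.allclose((Qd @ Q).T, Qd @ Q)    # property 4)
\end{verbatim}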
\begin{ass}\label{ass0}
Assume the Lipschitz constant $l_2$ of the driver $f$ given in \eqref{Lipchl} satisfies $$~~~~~~l_2\|\Psi_t^{\dagger}\|_{N \times N} \sqrt{6m}< 1, ~~~\text{ for any }~t \in [0,T],$$ where $\Psi$ is given in \eqref{Psi} and $m>0$ is the bound of $\|A_t\|_{N\times N}$, for any $t\in[0,T]$.
\end{ass}
\indent The following lemma, which is a comparison result for BSDEs driven by a Markov chain, is found in Yang, Ramarimbahoaka and Elliott \cite{zhedim}.\\
\begin{lemma} \label{CT} For $i=1,2,$ suppose $(Y^{(i)},Z^{(i)})$ is the solution of the
BSDE:
$$Y^{(i)}_t = \xi_i + \int_t^T f_i(s, Y^{(i)}_s, Z^{(i)}_s ) ds
- \int_t^T (Z_{s}^{(i)})' dM_s,\hskip.4cmt\in[0,T].$$
Assume $\xi_1,\xi_2\in L^2(\mathcal{F}_T)$, and $f_1,f_2:\Omega \times [0,T]\times \mathbb{R}\times \mathbb{R}^N \rightarrow \mathbb{R}$ satisfy conditions such that the above two BSDEs have unique solutions. Moreover, assume $f_1$ satisfies \eqref{Lipchl} and Assumption \ref{ass0}.
If $\xi_1 \leq \xi_2 $, a.s. and $f_1(t,Y_t^{(2)}, Z_t^{(2)}) \leq f_2(t,Y_t^{(2)}, Z_t^{(2)})$, a.e., a.s., then
$$P( Y_t^{(1)}\leq Y_t^{(2)},~~\text{ for any } t \in [0,T])=1.$$
\end{lemma}
\section{RBSDEs driven by the Markov Chains}\label{section4}
\indent We now introduce an RBSDE for the Markov Chain:
\begin{enumerate}[label=\roman{*}), ref=(\roman{*})]
\item $V_t = \xi + \int_t^T f(u, V_u, Z_u ) du + K_T -K_t -\int_t^T Z'_u dM_u$, $~~~0 \leq t \leq T$ ;
\item $V_t \geq G_t $, $0 \leq t \leq T$;
\item $\{K_t, t \in [0,T]\}$ is continuous and increasing, moreover, $K_0=0$ and \\
$\int_0^T (V_u - G_u) dK_u = 0$.
\end{enumerate}
\indent We want to show the existence and uniqueness of the solution $(V,Z,K)$ of the above equation under suitable conditions on $\xi$, $f$ and $G$.
\begin{thm}
Suppose we have:
\begin{enumerate}
\item $\xi\in L^2(\mathcal{F}_T)$,
\item a $\mathcal{P} \times \mathcal{B}(\mathbb{R}^{1+N})$ measurable function $f: \Omega \times [0, T] \times \mathbb{R} \times \mathbb{R}^N \rightarrow \mathbb{R}$ which is Lipschitz continuous, with constants $c'$ and $c''$, in the sense that, for any $t \in [0,T]$, $v_1,v_2 \in \mathbb{R}$ and $z_1,z_2 \in \mathbb{R}^N$,
\begin{equation}\label{Lipch}
|f(t,v_1,z_1) - f(t, v_2, z_2)| \leq c'|v_1-v_2| + c''\|z_1 -z_2\|_{X_t}
\end{equation}
and $c''$ satisfies
\begin{equation}\label{c''}c''\|\Psi_t^{\dagger}\|_{N \times N} \sqrt{6m}< 1, ~~~\text{ for any }~t \in [0,T],\end{equation}
where $\Psi$ is given in \eqref{Psi} and $m>0$ is the bound of $\|A_t\|_{N\times N}$, for any $t\in[0,T]$.
\item \begin{equation}\label{Con_f}
E \left[ \int_0^T |f^2(t,0,0)| dt\right] < \infty,
\end{equation}
\item a process $G$ called an \enquote{obstacle} which satisfies
\begin{equation}\label{Con_g}
E \left[ \sup_{0\leq t \leq T} (G_t^+)^2 \right] < \infty.
\end{equation}
\end{enumerate}
Then there exists a solution $(V,Z,K)$ of the RBSDE i), ii), iii) above, with $V$ adapted and RCLL and $Z$ predictable, such that $V \in L^2_{\mathcal{F}}(0,T;\mathbb{R})$, $K_T\in L^2(\mathcal{F}_T)$ and $Z \in P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$; moreover, this solution is
unique up to indistinguishability for $V$ and $K$ and equality $d\langle
X,X\rangle_t$ $\times\mathbb{P}$-a.s. for $Z$.
\end{thm}
\subsection{Proof of Uniqueness}
\indent In this section, we first suppose that solutions of the RBSDE exist, then we prove that they are unique, almost surely.\\[2mm]
\noindent {\bf Proof.}
Suppose $\xi \in L^2(\mathcal{F}_T)$, $f$ satisfies \eqref{Lipch}, \eqref{c''} and \eqref{Con_f} and $G$ satisfies \eqref{Con_g}. Let $(V^{(1)}, Z^{(1)},K^{(1)})$ and $(V^{(2)}, Z^{(2)},K^{(2)})$ be two solutions of the RBSDE, that is, both $(V^{(1)}, Z^{(1)},K^{(1)})$ and $(V^{(2)}, Z^{(2)},K^{(2)})$ satisfy i) - iii), $V^{(1)}, V^{(2)} \in L^2_{\mathcal{F}}(0,T;\mathbb{R})$, $K^{(1)}_T,K^{(2)}_T\in L^2(\mathcal{F}_T)$ and $Z^{(1)}, Z^{(2)} \in P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$. Applying the product rule to $|V_t^{(1)}- V_t^{(2)}|^2$, we have
\begin{align}\label{v1v22}
\nonumber
&|V_t^{(1)} - V_t^{(2)}|^2 \\
\nonumber
& = -2 \int_t^T (V_{u-}^{(1)} - V_{u-}^{(2)}) d(V_{u}^{(1)} - V_{u}^{(2)}) - \sum_{t \leq u \leq T} \Delta (V_u^{(1)}-V_u^{(2)})\Delta (V_u^{(1)}-V_u^{(2)}) \\
\nonumber
&= -2 \int_t^T (V^{(1)}_{u}- V_{u}^{(2)} ) [f(u,V_u^{(2)}, Z_u^{(2)}) - f(u,V_u^{(1)}, Z_u^{(1)})] du \\
\nonumber
& \quad -2 \int_t^T (V^{(1)}_{u}- V_{u}^{(2)}) dK_u^{(2)} + 2 \int_t^T (V^{(1)}_{u}- V_{u}^{(2)} ) dK_u^{(1)} \\
\nonumber
& \quad -2 \int_t^T (V^{(1)}_{u-}- V_{u-}^{(2)}) (Z_{u}^{(1)}- Z_{u}^{(2)})' dM_u \\
& \quad - \sum_{t \leq u \leq T} \Delta (V_u^{(1)}-V_u^{(2)})\Delta (V_u^{(1)}-V_u^{(2)}).
\end{align}
We derive
\begin{align}\label{deltav1v2}
\nonumber
&\sum_{t\leq u \leq T} \Delta (V_u^{(1)}-V_u^{(2)} ) \Delta (V_u^{(1)}-V_u^{(2)} ) \\
\nonumber
& = \sum_{t\leq u \leq T}( ( Z_{u}^{(1)}-Z_{u}^{(2)})' \Delta X_u)( ( Z_{u}^{(1)}-Z_{u}^{(2)})'\Delta X_u ) \\
\nonumber
& = \sum_{t\leq u \leq T} (Z_{u}^{(1)}-Z_{u}^{(2)} )' \Delta X_u \Delta X_u' (Z_{u}^{(1)}-Z_{u}^{(2)} ) \\
\nonumber
& = \int_t^T ( Z_{u}^{(1)}-Z_{u}^{(2)} )' (dL_u + d\left\langle X,X\right\rangle_u) (Z_{u}^{(1)}-Z_{u}^{(2)} )\\
& = \int_t^T (Z_{u}^{(1)}-Z_{u}^{(2)})' dL_u(Z_{u}^{(1)}-Z_{u}^{(2)} ) + \int_t^T \|Z_{u}^{(1)}-Z_{u}^{(2)} \|_{X_u}^2 du.
\end{align}
From ii) and iii), we know
\begin{align}\label{k1k2}
\nonumber& - \int_t^T (V^{(1)}_{u}- V_{u}^{(2)}) dK_u^{(2)} + \int_t^T (V^{(1)}_{u}- V_{u}^{(2)} ) dK_u^{(1)}\\
\nonumber& = - \int_t^T (V_u^{(1)} - G_u) dK^{(2)}_u + \int_t^T (V_u^{(2)} - G_u) dK_u^{(2)}\\
\nonumber& + \int_t^T (V_u^{(1)} - G_u) dK^{(1)}_u - \int_t^T (V_u^{(2)} - G_u) dK_u^{(1)}\\
\nonumber
& = -\int_t^T (V_u^{(1)} - G_u) dK^{(2)}_u - \int_t^T (V_u^{(2)} - G_u) dK_u^{(1)}\\
&\leq 0.
\end{align}
Therefore, writing $c=\max\{c',c''\}$, taking expectations in \eqref{v1v22} (the stochastic integrals with respect to $M$ and $L$ have zero expectation), and using \eqref{deltav1v2}, \eqref{k1k2} and the Lipschitz condition, we deduce for any $t\in[0,T],$
\begin{align}\label{star}
\nonumber
& E \left[ |V_t^{(1)}- V_t^{(2)}|^2 \right]+ E \left[ \int_t^T \|Z_u^{(1)} - Z_u^{(2)}\|^2_{X_u} du \right]\\
\nonumber
& \leq 2 E \left[ \int_t^T |(V^{(1)}_{u}- V_{u}^{(2)} ) (f(u,V_u^{(2)}, Z_u^{(2)}) - f(u,V_u^{(1)}, Z_u^{(1)}))| du \right] \\
\nonumber
&\leq E \left[ 2 \int_t^T c (|V_u^{(1)}-V_u^{(2)}|^2+ |V_u^{(1)}-V_u^{(2)}| \cdot\|Z_{u}^{(1)}- Z_{u}^{(2)}\|_{X_u}) du \right] \\
& \leq E \left[ (2c+2c^2) \int_t^T |V_u^{(1)}-V_u^{(2)}|^2 du + \frac{1}{2}\int_t^T \|Z_{u}^{(1)}- Z_{u}^{(2)}\|^2_{X_u} du \right].
\end{align}
That is,
\begin{align*}
E \left[ |V_t^{(1)}- V_t^{(2)}|^2 \right] \leq (2c+2c^2) E \left[ \int_t^T |V_u^{(1)}-V_u^{(2)}|^2 du \right].
\end{align*}
From Gronwall's lemma, we know $E \left[ |V_t^{(1)}- V_t^{(2)}|^2 \right]=0$ for any $t\in [0,T]$. So for each $t \in [0,T]$, $V_t^{(1)}- V_t^{(2)} =0$, a.s. Since $V^{(1)}$ and $V^{(2)}$ are RCLL, it follows from Lemma \ref{indistinguish} that
$$P( V_t^{(1)}= V_t^{(2)}, \text{ for any } t\in
[0,T])=1.$$
Also,
$$E \left[ \int_0^T |V_u^{(1)}-V_u^{(2)}|^2 du \right]= \int_0^T E \left[|V_u^{(1)}-V_u^{(2)}|^2 \right]du=0.$$
By (\ref{star}), we obtain
$$E \left[ \int_0^T \|Z_u^{(1)} - Z_u^{(2)}\|^2_{X_u} du \right]=0.$$
Hence $Z_t^{(1)}=Z_t^{(2)}$, $d\left\langle X,X\right\rangle_t \times \mathbb{P}$-a.s., and from Lemma \ref{Z2}, we derive for any $t\in[0,T]$, $\int_t^T (Z_{u}^{(1)}-Z_{u}^{(2)})' dM_u=0$, a.s. Using i), we have for any $t\in[0,T],$
\begin{align*}
V_t^{(1)}-V_t^{(2)} & = \int_t^T (f(u, V_u^{(1)}, Z_u^{(1)})-f(u, V_u^{(2)}, Z_u^{(2)})) du \\
& + (K_T^{(1)}-K_T^{(2)})-(K_t^{(1)}-K_t^{(2)}) - \int_t^T (Z_{u}^{(1)}-Z_{u}^{(2)})' dM_u.
\end{align*}
Setting $t=0$ and noticing $K_0^{(1)}=K_0^{(2)}=0$, we deduce
$$\begin{array}{ll}
|K_T^{(1)}-K_T^{(2)}| \\[2mm]
\leq |V_0^{(1)}-V_0^{(2)}|+|\int_0^T (f(u, V_u^{(1)}, Z_u^{(1)})-f(u, V_u^{(2)},
Z_u^{(2)}))du | \\[2mm]
+ |\int_0^T (Z_{u}^{(1)}-Z_{u}^{(2)})' dM_u|\\[2mm]
\leq \int_0^T c (|V_u^{(1)}-V_u^{(2)}| + \|Z_u^{(1)}-Z_u^{(2)}\|_{X_u}) du \\[2mm]
=0, ~ \text{ a.s.}
\end{array}$$
Then, similarly, we conclude for any $t\in[0,T]$, $K_t^{(1)}=K_t^{(2)}$, a.s. Since $K$ is continuous, we derive $$P(K_t^{(1)}=K_t^{(2)}, \text{ for any } t \in [0,T])=1.$$
$\mbox{} \hfill \Box$
\subsection{Proof of Existence}
\indent Following \cite{KKPPQ}, which treats RBSDEs driven by a Brownian motion, we proceed with the proof of existence using approximation via penalization.\\
\noindent {\bf Proof of existence.} Set $c=\max\{c',c''\}$.
For each $n$, consider the following BSDE driven by the Markov chain:
\begin{equation}\label{V_n}
V_t^n = \xi + \int_t^T f(u, V_u^n, Z_u^n) du + n \int_t^T (V_u^n -G_u)^- du - \int_t^T (Z_{u}^n)' dM_u.
\end{equation}
For $(u,v,z)\in [0,T]\times \mathbb{R}\times \mathbb{R}^N$, define a map:
\[f_n(u, v, z) = f(u, v, z) + n (v -G_u)^-.\]
For any $u \in [0,T]$ and $(v_1,z_1),(v_2,z_2) \in \mathbb{R}\times \mathbb{R}^N$, we have
\begin{align}\label{fnlip}
\nonumber
&|f_n(u, v_1, z_1)-f_n(u, v_2, z_2)|\\[2mm]
\nonumber
&\leq |f(u, v_1, z_1)-f(u, v_2, z_2)|+n| (v_1 -G_u)^- -(v_2 -G_u)^-|\\[2mm]
\nonumber
&\leq c'|v_1-v_2|+c''\|z_1-z_2\|_{X_u}
+ n| v_1 -v_2|\\[2mm]
&\leq (c'+n)|v_1-v_2|+c''\|z_1-z_2\|_{X_u}.
\end{align}
So $f_n$ is a Lipschitz continuous function in $v$ and $z$. Hence by Lemma \ref{existence}, there exists a unique pair $(V^n, Z^n)\in L^2_{\mathcal{F}}(0,T;\mathbb{R})\times P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$ which satisfies \eqref{V_n}.
We define:
\[K_t^n = n \int_0^t (V_u^n - G_u)^- du, \quad 0 \leq t \leq T. \]
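By construction each $K^n$ is continuous, increasing and satisfies $K_0^n = 0$: indeed, for $0 \leq s \leq t \leq T$,
\begin{equation*}
K_t^n - K_s^n = n \int_s^t (V_u^n - G_u)^- du \geq 0.
\end{equation*}
These are exactly the properties required of the limit process $K$ in condition iii).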
\begin{lemma} \label{lemma_ito_V}
\begin{align*}
|V_t^n|^2 & = |\xi|^2 + 2 \int_t^T V_u^n f(u, V_u^n, Z_u^n) du +2\int_t^T V_u^n dK_u^n \\
& -2 \int_t^T V_{u-}^n (Z_{u}^n)'dM_u- \int_t^T (Z_{u}^n)' dL_{u} Z_{u}^n - \int_t^T \|Z_u^n\|^2_{X_u} du.
\end{align*}
\end{lemma}
\begin{proof}
Similar calculations to those in \eqref{deltav1v2} yield the result.
\end{proof}
We now establish a priori estimates for $(V^n, Z^n, K^n)$ which are independent of $n$.
\begin{lemma}\label{estimateAO}
There exists a constant $C_0>0$, such that for any $n \in \mathbb{N}$:
\begin{equation*}
\sup_{0\leq t \leq T} E [|V_t^n|^2] + E \left[ \int_0^T \|Z_t^n\|^2_{X_t}dt \right]+E [|K_T^n|^2] \leq C_0.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\beta >0$ be an arbitrary constant. Since
\begin{align*}&E \left[ \int_t^T e^{\beta u} (V_u^n -G_u) dK_u^n\right]\\
&= E \left[\int_t^T n e^{\beta u} ((V_u^n -G_u)^+ (V_u^n -G_u)^- - ((V_u^n -G_u)^-)^2) du \right]\\ & \leq 0,\end{align*}
we use Lemma \ref{lemma_ito_V} to derive, for any $t \in [0,T]$,
\begin{align*}
&E \left[ e^{\beta t} |V_t^n|^2 \right]+ E \left[ \int_t^T \beta |V_u^n|^2 e^{\beta u} du \right] + E \left[ \int_t^T e^{\beta u}\|Z_u^n\|^2_{X_u} du \right] \\
& = E \left[ e^{\beta T} |\xi|^2 \right] + 2 E \left[ \int_t^T e^{\beta u} V_u^n f(u, V_u^n, Z_u^n) du \right]+ 2 E \left[ \int_t^T e^{\beta u} V_u^n dK_u^n\right]\\
& \leq E [e^{\beta T}|\xi|^2] + 2 E\left[ \int_t^T e^{\beta u} (|f(u, 0, 0)| + c |V_u^n| + c \|Z^n_u\|_{X_u}) |V_u^n|du\right] \\
& + 2 E \left[ \int_t^T e^{\beta u} G_u dK_u^n \right]
\end{align*}
\begin{align}\label{apriory1}
\nonumber
& \leq E [e^{\beta T}|\xi|^2] + E \left[ \int_t^Te^{\beta u} |f(u, 0, 0)|^2 du \right] + (1+ 2c+3c^2) E \left[\int_t^T e^{\beta u}|V_u^n|^2 du \right] \\
\nonumber
& + \frac{1}{3} E \left[ \int_t^T e^{\beta u}\|Z_u^n\|^2_{X_u} du\right]+2 e^{\beta T}E \left[K_T^n \sup_{0 \leq t\leq T} (G_t^+) \right] \\
\nonumber
& \leq E [e^{\beta T}|\xi|^2] + E \left[ \int_t^Te^{\beta u} |f(u, 0, 0)|^2 du \right] + (1+ 2c+3c^2) E \left[ \int_t^Te^{\beta u} |V_u^n|^2 du \right] \\
& + \frac{1}{3} E \left[\int_t^T e^{\beta u}\|Z_u^n\|^2_{X_u} du\right]+ \frac{e^{2\beta T}}{\alpha}E \left[ \sup_{0 \leq t \leq T} (G_t^+)^2\right] + \alpha E \left[ (K^n_T)^2\right],
\end{align}
where $\alpha>0$ is an arbitrary constant. Therefore, there exists a constant $C_1 >0$ such that for any $t \in [0,T]$,
\begin{align}\label{apriory2}
\nonumber
& E \left[e^{\beta t} |V_t^n|^2 \right] +E \left[ \int_t^T \beta |V_u^n|^2 e^{\beta u} du \right] + \frac{2}{3} E \left[ \int_t^T e^{\beta u}\|Z_u^n\|^2_{X_u} du \right] \\
& \leq C_1(1+ E \left[ \int_t^T e^{\beta u}|V_u^n|^2 du \right]) + \frac{e^{2\beta T}}{\alpha}E \left[ \sup_{0 \leq t \leq T} (G_t^+)^2\right] + \alpha E \left[ (K^n_T)^2\right].
\end{align}
We now give an estimate for $ E \left[ (K^n_T)^2\right]$. From \eqref{V_n}, we have
\[K_T^n = V_0^n - \xi - \int_0^T f(u, V_u^n, Z_u^n) du + \int_0^T (Z_{u}^n)' dM_u.\]
Then
\begin{align*}
& E \left[ |K_T^n |^2\right] \\
& \leq 4 E \left[ |V_0^n|^2 + |\xi|^2 + |\int_0^T f(u, V_u^n, Z_u^n) du|^2 + |\int_0^T (Z_{u}^{n})'dM_u|^2 \right]\\
& \leq 4 E \left[ |V_0^n|^2 + |\xi|^2\right] \\
& + 4 T E \left[ \int_0^T (|f(u,0,0)|+ c |V_u^n| + c \|Z_u^n\|_{X_u})^2du\right] + 4 E \left[\int_0^T \|Z_u^n\|^2_{X_u} du\right]\\
& \quad \quad \text{(the last integral is obtained using Lemma \ref{Z2})} \\
& \leq 4 \left( E[|\xi|^2] + |V_0^n|^2 + 3T E \left[\int_0^T (|f(u,0,0)|^2+c^2 |V_u^n|^2 + c^2\|Z_u^n\|^2_{X_u}) du \right]\right)\\
& + 4 E \left[ \int_0^T \|Z_u^n\|^2_{X_u} du\right].
\end{align*}
So, there is a constant $C_2>C_1$ such that
\begin{equation}\label{estimate_K}
E [(K_T^n)^2] \leq C_2 \left( 1+ |V_0^n|^2 + E \left[ \int_0^T (|V_u^n|^2 + \| Z_u^n\|^2_{X_u}) du\right]\right).
\end{equation}
Therefore, in \eqref{apriory2}, set $\alpha = 1/(3C_2)$ to obtain
\begin{align*}
&E[e^{\beta t}|V_t^n|^2] + E \left[ \int_t^T \beta |V_u^n|^2 e^{\beta u} du \right]+\frac{2}{3} E \left[ \int_t^T e^{\beta u}\|Z_u^n\|^2_{X_u}du \right]\\
& \leq C_2 (1+ E \left[ \int_t^T e^{\beta u} |V_u^n|^2 du \right]) + 3C_2 e^{2\beta T}E \left[ \sup_{0 \leq t \leq T} (G_t^+)^2 \right] \\
& + \frac{1}{3} \left(1 + \sup_{0 \leq t \leq T} E \left[e^{\beta t}|V_t^n|^2\right] + E\left[ \int_0^T e^{\beta u} (|V_u^n|^2 + \|Z_u^n\|^2_{X_u})du \right]\right).
\end{align*}
Taking the supremum over $t$, we know
\begin{align*}
& \frac{2}{3} \sup_{0\leq t \leq T}E[e^{\beta t} |V_t^n|^2] + (\beta - C_2 -\dfrac{1}{3}) E \left[ \int_0^T e^{\beta u}|V_u^n|^2 du\right] +\frac{1}{3} E \left[ \int_0^T e^{\beta u} \|Z_u^n\|^2_{X_u} du\right] \\
& \leq C_2+\frac{1}{3} + 3C_2 e^{2\beta T}E \left[ \sup_{0 \leq t \leq T} (G_t^+)^2 \right].
\end{align*}
Set $\beta = C_2 +\dfrac{1}{3}$, so that the middle term on the left-hand side vanishes. Then, there are two constants $C_3>0$ and $C_4>0$ such that
\[\sup_{0\leq t \leq T}E[e^{\beta t} |V_t^n|^2] \leq C_3\]
and
\[E \left[ \int_0^T e^{\beta u} \|Z_u^n\|^2_{X_u} du\right] \leq C_4.\]
Hence, from \eqref{estimate_K}, we derive
\begin{equation}\label{C_Kn}
E [|K_T^n|^2] \leq C_5,
\end{equation}
for some constant $C_5>0$. Therefore, there exists a constant $C_0>0$ such that for any $n \in \mathbb{N}$,
\begin{equation*}
\sup_{0\leq t \leq T} E [|V_t^n|^2] + E \left[ \int_0^T \|Z_u^n\|^2_{X_u}du \right]+E [|K_T^n|^2] \leq C_0.
\end{equation*}
\end{proof}
We prove the following:
\begin{lemma}\label{lemma_sup}
There is a constant $C>0$ such that for any $n \in \mathbb{N}$,
\begin{equation*}
E \left[ \sup_{0\leq t \leq T} |V_t^n|^2\right] < C.
\end{equation*}
\end{lemma}
\begin{proof}
We know for any $n \in \mathbb{N}$,
\begin{align*}
|V_t^n|^2 & \leq 4|\xi|^2 + 4|\int_t^T f(u, V_u^n, Z_u^n) du|^2 + 4|K_T^n|^2 + 4|\int_t^T( Z_{u}^n)' dM_u|^2 \\
& \leq 4|\xi|^2 + 12T \int_t^T (|f(u,0,0)|^2 + c^2 |V_u^n|^2 + c^2 \|Z_u^n\|^2_{X_u}) du \\
& + 4 |K_T^n|^2 + 4|\int_t^T (Z_{u}^n)' dM_u|^2.
\end{align*}
Taking the supremum over $t$, we deduce
\begin{align}\label{2star}
\nonumber
\sup_{0\leq t \leq T} |V_t^n|^2 & \leq 4 |\xi|^2 + 12 T \int_0^T |f(u,0,0)|^2 du \\
\nonumber
& + 12Tc^2\int_0^T |V_u^n|^2 du + 12Tc^2 \int_0^T \|Z_u^n\|^2_{X_u} du \\
& +4 |K_T^n|^2+ 4 \sup_{0\leq t \leq T}|\int_t^T (Z_{u}^n)'dM_u|^2.
\end{align}
Using Doob's inequality and Lemma \ref{Z2}, we obtain
\begin{align*}
&E \left[ \sup_{0\leq t \leq T}|\int_t^T ( Z_{u}^n)' dM_u|^2 \right]\\
& = E \left[ \sup_{0\leq t \leq T}|\int_0^T ( Z_{u}^n)' dM_u-\int_0^t ( Z_{u}^n)' dM_u|^2 \right] \\
& \leq 2 E \left[ |\int_0^T ( Z_{u}^n)' dM_u|^2 \right]+2 E \left[ \sup_{0\leq t \leq T}|\int_0^t ( Z_{u}^n)' dM_u|^2 \right] \\
& \leq 10 E \left[ |\int_0^T( Z_{u}^n)' dM_u|^2\right]=10 E \left[ \int_0^T \|Z_u^n\|^2_{X_u}du\right].
\end{align*}
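(The factor $10$ comes from Doob's $L^2$ maximal inequality, $E[\sup_{0\leq t \leq T}|N_t|^2] \leq 4 E[|N_T|^2]$ for a square-integrable right-continuous martingale $N$, so the two terms above contribute $2 + 2\cdot 4 = 10$.)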
Also,
\begin{align*}
E \left[ \int_0^T |V_u^n|^2 du \right] = \int_0^T E [|V_u^n|^2] du \leq T \sup_{0\leq t \leq T} E [|V_t^n|^2].
\end{align*}
By Lemma \ref{estimateAO}, there is a constant $C>0$ such that the result holds.
\end{proof}
Now, we prove:
\begin{lemma}\label{limit_V}
There is a process $\{V_t,~ t \in [0,T]\}$ such that $V\in L^2_{\mathcal{F}}(0,T;\mathbb{R})$,
\[E \left[ \int_0^T (V_t - V_t^n)^2 dt \right] \rightarrow 0,~~ \text{ as } n \rightarrow \infty,\]
and
\[E \left[ \sup_{0\leq t\leq T} |V_t|^2 \right] \leq C.\]
\end{lemma}
\begin{proof} Since $f_n(\cdot, \cdot, \cdot)$ is increasing in $n$, that is, for any $(t,y,z)\in [0,T]\times \mathbb{R} \times \mathbb{R}^N$, $n\in\mathbb{N}$,
\[f_n(t, y, z) \leq f_{n+1}(t, y, z),\]
moreover, $f_n$ satisfies \eqref{fnlip} and the constant $c''$ satisfies \eqref{c''},
by Lemma \ref{CT} we derive for any $n \in \mathbb{N}$,
$$P(V_t^n \leq V_t^{n+1}, ~\text{for any}~ t \in [0,T])=1.$$
That is, for any $n \in \mathbb{N}$, there exists a subset $B_n \subseteq \Omega$ with $P(B_n)=1$ such that, setting $\hat{B} = \bigcap\limits_{n=1}^{\infty} B_n$, we have $P(\hat{B}) =1$ and, for any $\omega \in \hat{B}$, $V_t^n(\omega) \leq V_t^{n+1}(\omega)$ for all $t \in [0,T]$.
For any $\omega \in \Omega$, define:
\[V_t(\omega) = \sup_{n \in \mathbb{N}} V_t^n (\omega), ~ \quad t \in [0,T].\]
So $P(V_t^n \uparrow V_t, \text{ for any } t \in [0,T])=1$. Therefore,
\[P (\mathbb{I}_{\{V_t > 0\}}|V_t^n|\uparrow \mathbb{I}_{\{V_t > 0\}} |V_t|, ~~t \in [0,T]) =1\]
and
\[P (\mathbb{I}_{\{V_t \leq 0\}}|V_t^n| \downarrow \mathbb{I}_{\{V_t \leq 0\}} |V_t|, ~~ t \in [0,T]) =1.\]
By Levi's Lemma and Lemma \ref{lemma_sup}, we deduce
\[E \left[ \int_0^T |V_t|^2 dt \right] = \lim_{n \rightarrow \infty } E \left[ \int_0^T |V_t^n|^2 dt \right] \leq \lim_{n\rightarrow \infty}(\sup_{0 \leq t \leq T} E [|V_t^n|^2] T) \leq CT.\]
Then $V\in L^2_{\mathcal{F}}(0,T;\mathbb{R})$ and $|V_t| < \infty$, a.e., a.s. So $V_t^n - V_t \uparrow 0$, a.e., a.s. Again, by Levi's lemma, we have
\[E \left[ \int_0^T |V_t^n -V_t|^2 dt \right]\rightarrow 0, \text{ as } n \rightarrow \infty.\]
Since $\{\sup\limits_{0 \leq t \leq T} V_t^n, n\in \mathbb{N}\}$ is also an increasing sequence, we know there exists a random variable $H$ such that for any $\omega \in \Omega$:
\[\sup_{n \in \mathbb{N}}\sup_{0\leq t \leq T} V_t^n (\omega) = H(\omega),\]
so
\[\sup_{0\leq t \leq T} V_t^n \uparrow H, \text{ a.s.}\]
Also, by Levi's lemma, we obtain
\[\lim_{n \rightarrow \infty} E[\sup_{0 \leq t \leq T}|V_t^n|^2] = E [|H|^2].\]
By Lemma \ref{lemma_sup}, we deduce that $E [|H|^2] \leq C$. Hence,
\begin{align*}
E \left[ \sup_{0 \leq t \leq T} |V_t|^2\right] & = E [\sup_{0 \leq t \leq T} (\lim_{n \rightarrow \infty} |V_t^n|^2)] \\
& \leq E \left[ \sup_{0 \leq t \leq T} (\lim_{n \rightarrow \infty} (\sup_{0 \leq t \leq T} |V_t^n|^2))\right]\\
& \leq E [\sup_{0\leq t \leq T }|H|^2] = E [|H|^2]\leq C.
\end{align*}
Hence, we proved Lemma \ref{limit_V}.
\end{proof}
Now, consider the same set $\hat{B}$ as in the proof of Lemma \ref{limit_V}. Also, by Lemma \ref{limit_V}, $\sup\limits_{0\leq t \leq T}|V_t| < \infty$, a.s., that is, there is a subset $\bar{B} \subseteq \Omega$ with $P(\bar{B})=1$ such that for any $\omega \in \bar{B}$, $|V_t(\omega)| < \infty$ for any $t\in [0,T]$. Then, for $\omega \in \hat{B}\cap\bar{B}$,
\[V_t^n(\omega)-V_t(\omega) \uparrow 0 , \quad t \in [0,T].\] By Lemma \ref{polya}, we derive for any $\omega \in \hat{B}\cap \bar{B}$,
\[\lim_{n\rightarrow \infty} \sup_{0\leq t \leq T} |V_t^n(\omega) - V_t(\omega)|^2 =0 .\]
Since $P(\hat{B}\cap\bar{B}) =1$, it follows that:
\[\lim_{n\rightarrow \infty} E \left[ \sup_{0 \leq t \leq T} |V_t^n-V_t|^2\right] =0.\]
Hence, $\{V^n\}_{n\in \mathbb{N}}$ is a uniform Cauchy sequence, that is:
\begin{equation} \label{cauchyV}
E \left[ \sup_{0 \leq t \leq T} |V_t^n - V_t^p|^2 \right] \rightarrow 0, \text{ as } n,p \rightarrow \infty.
\end{equation}
Now, applying Lemma \ref{lemma_ito_V} to $|V_t^n - V_t^p|^2$ and taking the expectation gives:
\begin{align*}
& E[|V_t^n - V_t^p|^2] + E \left[ \int_t^T \|Z_u^n - Z_u^p\|_{X_u}^2 du\right]\\
& = 2 E \left[ \int_t^T (f(u, V_u^n, Z_u^n) - f(u,V_u^p,Z_u^p)) (V_u^n - V_u^p) du\right] \\
& + 2 E \left[ \int_t^T (V_u^n - V_u^p) d(K_u^n -K_u^p)\right].
\end{align*}
Noting that $$dK_u^n = n (V_u^n-G_u)^- du,$$ we obtain
\begin{align*}
& E \left [ \int_t^T (V_u^n -V_u^p) d(K_u^n -K_u^p) \right] \\
& = E \left[ \int_t^T (V_u^n -G_u) dK_u^n \right] - E \left[ \int_t^T (V_u^n -G_u) dK_u^p \right] \\
& - E \left[ \int_t^T (V_u^p -G_u) dK_u^n \right] + E \left[ \int_t^T (V_u^p -G_u) dK_u^p \right]\\
& \leq E \left[ \int_t^T (V_u^n -G_u)^- dK_u^p \right] + E \left[ \int_t^T (V_u^p-G_u)^- dK_u^n\right].
\end{align*}
Thus,
\begin{align*}
& E[|V_t^n - V_t^p|^2] + E \left[ \int_t^T \|Z_u^n - Z_u^p\|_{X_u}^2 du\right]\\
& \leq 2c E \left[ \int_t^T \left( |V_u^n-V_u^p|^2 + |V_u^n-V_u^p|\cdot \|Z_u^n - Z_u^p\|_{X_u}\right) du\right] \\
& + 2E \left[ \int_t^T (V_u^n - G_u)^- dK_u^p \right] + 2 E \left[ \int_t^T (V_u^p -G_u)^- dK_u^n \right]\\
\nonumber
& \leq (2c+2c^2) E \left[ \int_t^T |V_u^n-V_u^p|^2 du \right] + \frac{1}{2} E \left[ \int_t^T \|Z_u^n - Z_u^p\|_{X_u}^2 du \right] \\
&\quad + 2E \left[ \int_t^T (V_u^n - G_u)^- dK_u^p \right] + 2 E \left[ \int_t^T (V_u^p -G_u)^- dK_u^n \right].
\end{align*}
That is,
\begin{align}\label{Z_n_p}
\nonumber
& E \left[ \int_t^T \|Z_u^n - Z_u^p\|_{X_u}^2 du\right] \\
\nonumber
& \leq (4c+4c^2) E \left[ \int_t^T |V_u^n-V_u^p|^2 du \right] \\
& + 4E \left[ \int_t^T (V_u^n - G_u)^- dK_u^p \right] + 4 E \left[ \int_t^T (V_u^p -G_u)^- dK_u^n \right].
\end{align}
\begin{lemma}\label{V_G}
\[E \left[ \sup_{0 \leq t \leq T} |(V_t^n - G_t)^-|^2 \right] \rightarrow 0, \text{ as } n\rightarrow \infty.\]
\end{lemma}
\begin{proof}
Since $V_t^n \geq V_t^0$ for all $n$, we may replace $G_t$ by $G_t \vee V^0_t$ without loss of generality. Since
$E [\sup\limits_{0 \leq t\leq T} |V^0_t|^2] < \infty,$ we have \[E [\sup_{0 \leq t\leq T} |G_t \vee V^0_t|^2] < \infty.\]We shall compare $V_t$ and $G_t$. For $n \in \mathbb{N}$, consider the following BSDE for the Markov chain:
\[\tilde{V}_t^n = \xi + \int_t^T F_n(u, \tilde{V}_u^n, \tilde{Z}_u^n)du - \int_t^T( \tilde{Z}_{u}^n)' dM_u,\]
where $F_n(u, v, z) = f(u, V_u^n, Z_u^n) + n (G_u - v)$. Then, by Lemma \ref{existence}, for each $n \in \mathbb{N}$, there exists a unique solution $(\tilde{V}^n,\tilde{Z}^n)\in L^2_{\mathcal{F}}(0,T;\mathbb{R})\times P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$ to the above BSDE. As $ (G_u - v) \leq (v-G_u)^- $ for any $u \in [0,T]$, it follows that $F_n(u, v, z) \leq f_n(u,v,z)$ for any $u\in[0,T]$, $(v, z) \in\mathbb{R}\times \mathbb{R}^N$. Hence, from Lemma \ref{CT}, $$P(\tilde{V}_t^n \leq V_t^n, \text{ for any } t\in [0,T])=1.$$
Let $\tau \in [0,T]$ be a stopping time. By Lemma \ref{BSDEST}, the following BSDE for the Markov chain with stopping time $\tau$
\[\tilde{V}_{\tau}^n = \xi + \int_{\tau}^T F_n(u, \tilde{V}_u^n, \tilde{Z}_u^n)du - \int_{\tau}^T( \tilde{Z}_{u}^n)' dM_u\]
has a unique solution.
Then, applying Itô's formula to $e^{-nt} \tilde{V}_{t}^n$ between $\tau$ and $T$, we have
\begin{align*}
e^{-nT}\xi - e^{-n\tau} \tilde{V}_{\tau}^n & = \int_{\tau}^T e^{-nu} \left( - f(u, V_u^n, Z_u^n) - n (G_u - \tilde{V}_u^n)\right) du \\
& \quad + \int_{\tau}^T e^{-nu}( \tilde{Z}_{u}^n)'dM_u -n \int_{\tau}^T \tilde{V}_u^n e^{-nu} du.
\end{align*}
Rearranging and taking the expectation given $\mathcal{F}_{\tau}$, we derive
\begin{equation}\label{vto}
\tilde{V}^n_{\tau} = E \left[ e^{-n(T - \tau)}\xi + \int_{\tau}^T e^{-n(u-\tau)} f(u,V_u^n, Z_u^n) du + n \int_{\tau}^T e^{-n(u-\tau)}G_u du| \mathcal{F}_\tau\right].
\end{equation}
It is easy to see that as $n \rightarrow \infty$,
\[e^{-n(T - \tau)}\xi + n \int_{\tau}^T e^{-n(u-\tau)}G_u du \rightarrow \xi 1_{\{\tau=T\}} + G_{\tau}1_{\{\tau<T\}},\]
a.s.\ and in mean square. So
\begin{equation}\label{E1_vto}
E \left[ e^{-n(T - \tau)}\xi + n \int_{\tau}^T e^{-n(u-\tau)}G_u du| \mathcal{F}_{\tau}\right] \rightarrow E\left[\xi 1_{\{\tau=T\}} + G_{\tau}1_{\{\tau<T\}}|\mathcal{F}_{\tau} \right]
\end{equation}
in mean square.
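(To verify this elementary limit -- a standard computation, using the right-continuity of $G$ and dominated convergence: on $\{\tau = T\}$ the integral vanishes and the exponential term equals $\xi$, while on $\{\tau < T\}$ we have $e^{-n(T-\tau)}\xi \rightarrow 0$ and the substitution $s=n(u-\tau)$ gives
\begin{equation*}
n \int_{\tau}^T e^{-n(u-\tau)}G_u du = \int_0^{n(T-\tau)} e^{-s} G_{\tau + s/n}\, ds \rightarrow G_{\tau} \int_0^{\infty} e^{-s} ds = G_{\tau},
\end{equation*}
as $n \rightarrow \infty$.)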
Also, by Hölder's inequality, we know
\begin{align*}
\left\lvert \int_{\tau}^T e^{-n(u-\tau)} f(u,V_u^n, Z_u^n) du \right\rvert & \leq \left( \int_{\tau}^T e^{-2n(u-\tau)} du \right)^{1/2} \left( \int_{\tau}^T |f(u,V_u^n, Z_u^n)|^2 du\right)^{1/2}\\
& \leq \left( \int_{\tau}^T e^{-2n(u-\tau)} du \right)^{1/2} \left( \int_{0}^T |f(u,V_u^n, Z_u^n)|^2 du\right)^{1/2} \\
& \leq (\frac{1}{2n} (1- e^{-2n(T-\tau)}))^{1/2} \left( \int_{0}^T |f(u,V_u^n, Z_u^n)|^2 du\right)^{1/2} \\
& \leq \frac{1}{\sqrt{2n}} \left( \int_{0}^T |f(u,V_u^n, Z_u^n)|^2 du\right)^{1/2}.
\end{align*}
Hence,
\begin{equation}\label{E2_vto}
E \left[ \int_{\tau}^T e^{-n(u-\tau)} f(u,V_u^n, Z_u^n) du | \mathcal{F}_{\tau}\right] \rightarrow 0
\end{equation}
in mean square, as $n \rightarrow \infty.$ Therefore, from \eqref{vto}, \eqref{E1_vto} and \eqref{E2_vto},
\[\tilde{V}_{\tau}^n \rightarrow \xi 1_{\{\tau=T\}} + G_{\tau}1_{\{\tau<T\}}\]
in mean square. Since $V_{\tau}^n \leq V_{\tau}$, a.s., and $V_{\tau}^n \geq \tilde{V}_{\tau}^n$, we obtain
\[V_{\tau} \geq \xi 1_{\{\tau=T\}} + G_{\tau}1_{\{\tau<T\}},\]
and it follows that $V_{\tau} \geq G_{\tau}$, a.s., that is, $(V_{\tau} - G_{\tau})^- =0$, a.s. Therefore, by the Section Theorem (\cite{Della} page 220 or \cite{elliott} Corollary 6.25), we have
$$P ( (V_t - G_t)^- =0, ~~ t\in [0,T]) =1.$$
So
$$P ( (V_t^n - G_t)^- \downarrow 0, ~~ t \in [0,T])=1 .$$
Noting that, for almost every $\omega \in \Omega$,
\begin{align*}
(V_t^n - G_t)^- & = \frac{1}{2} (|V_t^n -G_t|- (V_t^n -G_t))\\
& \leq \frac{1}{2} (|V_t^n -V_t|+ |V_t-G_t|- (V_t^n-V_t) -(V_t -G_t)) \\
& = \frac{1}{2} (|V_t^n-V_t|+ V_t-G_t - (V_t^n -V_t) - (V_t-G_t)) \\
& \leq |V_t^n -V_t|, \text{ for any } t \in [0,T],
\end{align*}
we deduce
\[0 \leq \sup_{0 \leq t \leq T} (V_t^n - G_t)^- \leq \sup_{0 \leq t \leq T} |V_t^n -V_t|.\]
Since $\lim\limits_{n \rightarrow \infty}\sup\limits_{0 \leq t \leq T} |V_t^n -V_t| = 0, \text{ a.s.}$, we obtain
$$ \lim_{n \rightarrow \infty} \sup_{0\leq t \leq T} (V_t^{n} - G_t)^- = 0, \text{ a.s.}$$
Since
\[(V_t^n -G_t)^- \leq (G_t - V_t^0)^+ \leq |G_t| + |V_t^0|,\]
$E \left[ \sup\limits_{0 \leq t \leq T} G_t^2 \right] <\infty$ and $E \left[ \sup\limits_{0 \leq t \leq T} (V^0_t)^2 \right] <\infty$, the result follows from the dominated convergence theorem.
\end{proof}
\indent Returning to \eqref{Z_n_p}, we have
\begin{align*}
E \left[ \int_t^T (V_u^p - G_u)^- dK_u^n \right] & \leq E \left[ \int_0^T \sup_{0 \leq u\leq T}(V_u^p - G_u)^- dK_u^n \right] \\
& \leq E \left[ \sup_{0 \leq t\leq T}(V_t^p - G_t)^- K^n_T\right] \\
& \leq \left( E \left[ \sup_{0 \leq t\leq T}|(V_t^p - G_t)^-|^2\right]\right)^{1/2}\left( E \left[ |K_T^n|^2\right]\right)^{1/2}.
\end{align*}
Hence, from \eqref{C_Kn} and Lemma \ref{V_G}, we deduce as $n,p \rightarrow \infty$,
\begin{equation} \label{v_g_k_0}
E \left[ \int_t^T (V_u^p - G_u)^- dK_u^n \right] + E \left[ \int_t^T (V_u^n - G_u)^- dK_u^p \right] \rightarrow 0.
\end{equation}
It follows from \eqref{Z_n_p}, \eqref{v_g_k_0} and Lemma \ref{limit_V} that as $n,p\rightarrow \infty$:
\begin{align} \label{v_z_0} E \left[ \int_0^T \|Z_u^p - Z_u^n\|^2_{X_u} du \right] \rightarrow 0 .
\end{align}
Consider the quotient space of equivalence classes of processes in $P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$. An equivalence class consists of all processes which differ by a null process. On that space the seminorm is actually a norm, and so the space is complete. Then there exists a process $Z\in P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$ such that as $n\rightarrow \infty$,
$ E \left[ \int_0^T \|Z_u^n- Z_u\|^2_{X_u} du \right] \rightarrow 0 .$
Now
\begin{align*}
& K_t^n - K_t^p \\
& = (K_T^n-K_T^p) + \int_t^T( f(u,V_u^n,Z_u^n) -f(u,V_u^p, Z_u^p))du \\
& - (V_t^n - V_t^p) - \int_t^T( Z_{u}^n-Z_{u}^p )'dM_u
\end{align*}
\begin{align*}
& = (V_0^n-V_0^p) - \int_0^t( f(u, V_u^n, Z_u^n)- f(u, V_u^p, Z_u^p)) du + \int_0^t ( Z_{u}^n - Z_{u}^p)'dM_u \\
& - (V_t^n - V_t^p).
\end{align*}
Using Doob's inequality and Lemma \ref{Z2} on the last equation, we derive:
\begin{align*}
\nonumber
&E[\sup_{t\in[0,T]}|K_t^n - K_t^p|^2]\\
\nonumber
& \leq 4 E[|V_0^n-V_0^p|^2 ]+4 E[\sup_{t\in[0,T]}|\int_0^t( f(u, V_u^n, Z_u^n)- f(u, V_u^p, Z_u^p)) du |^2]\\
\nonumber
& + 4E[\sup_{t\in[0,T]}|\int_0^t ( Z_{u}^n - Z_{u}^p)'dM_u|^2] + 4E[\sup_{t\in[0,T]}|V_t^n - V_t^p|^2]\\
\nonumber
& \leq 8E[\sup_{t\in[0,T]}|V_t^n - V_t^p|^2]+4 E[(\int_0^T| f(u, V_u^n, Z_u^n)- f(u, V_u^p, Z_u^p)| du )^2]\\
\nonumber
&+ 16E[|\int_0^T ( Z_{u}^n - Z_{u}^p)'dM_u|^2]\\
& \leq 8E[\sup_{t\in[0,T]}|V_t^n - V_t^p|^2]+8c^2T E[\int_0^T| V_u^n- V_u^p|^2 du ]\\
&+(16+8c^2T)E[\int_0^T\| Z_{u}^n - Z_{u}^p\|^2_{X_u}du].
\end{align*}
Therefore by \eqref{cauchyV} and \eqref{v_z_0}:
\begin{equation*}
E\left[\sup_{0\leq t\leq T}|K_t^n -K_t^p|^2 \right] \rightarrow 0, \text{ as } n,p \rightarrow \infty.
\end{equation*}
Hence, $\{K^n\}_{n \in \mathbb{N}}$ is a Cauchy sequence which converges uniformly to some limit $K$ in mean square. Since $\{V^n\}_{n \in \mathbb{N}}$ and $\{Z^n\}_{n \in \mathbb{N}}$ are Cauchy sequences which converge to $V$ and $Z$, we know $V, Z, K$ satisfy i). Moreover, $K$ is continuous and increasing. Condition ii) follows from the proof of Lemma \ref{V_G}. Next we prove the remaining part of Condition iii). We know $(V^n,K^n)$ converges uniformly in $t$ to $(V,K)$ in probability. Therefore the measure $dK^n$ converges to $dK$ weakly in probability. It follows that:
\[\int_0^T (V_t^n-G_t)dK_t^n \rightarrow \int_0^T (V_t-G_t) dK_t\]
in probability. Using Lemma \ref{V_G} we deduce that:
\[\int_0^T (V_t - G_t) dK_t \geq 0, \text{ a.s.}\]
However,
\[\int_0^T (V^n_t - G_t) dK^n_t = n \int_0^T (V^n_t - G_t) (V^n_t - G_t)^- dt \leq 0, \quad n \in \mathbb{N}.\]
Hence,
\[\int_0^T (V_t-G_t) dK_t =0,\text{ a.s.}\]
Finally, we conclude that $(V, Z,K)$ solves the RBSDE.~~~~~ $\mbox{} \hfill \Box$
\section{Application to American Options}
\subsection{The Stochastic Discount Function (SDF)}
\indent As in \cite{RE1} and \cite{RE2}, we give the following definition:
\begin{definition}
A stochastic discount process is an adapted stochastic process $\pi =\{\pi_t, t\geq 0 \}$ such that for any asset price process $\{\mathcal{A}_t,t\geq 0\}$ and any $s \geq t$,
\[\pi_t\mathcal{A}_t=E[\pi_s\mathcal{A}_s|\mathcal{F}_t].\]
Here, $E$ is expectation with respect to the real world probability $P$.
\end{definition}
We suppose the stochastic discount function is modeled as follows:
\[\pi_t=\exp \left[ -\int_0^t X'_{u-}C_udX_u -\int_0^t D'_uX_udu \right],\]
where $C_u$ is an $N\times N$ matrix and $D_u$ is a vector in $\mathbb{R}^N$ for each $u\geq 0$.\\
The following lemma is Theorem 3.1 in \cite{RE1}.
\begin{lemma}\label{lemma_pi}
\begin{equation*}
d\pi_t = \pi_t [-D'_tX_t+X'_t\sigma_tA_tX_t] dt + \pi_{t_{-}} X'_{t_{-}}\sigma_{t_{-}}dM_t,
\end{equation*}
where $\sigma_t = (\sigma_t^{ij})$ is the $N \times N$ matrix with:
\begin{equation*}
\sigma_t^{ij} = \exp (C_t^{ii}- C_t^{ij}) -1, \quad \quad 1 \leq i,j\leq N.
\end{equation*}
\end{lemma}
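Note in particular that the diagonal entries of $\sigma_t$ vanish:
\begin{equation*}
\sigma_t^{ii} = \exp (C_t^{ii}- C_t^{ii}) -1 = 0, \quad \quad 1 \leq i \leq N,
\end{equation*}
an observation used repeatedly in the computations below.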
Denote by $\Gamma_t$ the matrix whose components are:
\begin{align*}
\Gamma_t^{ii}&=A_t^{ii}-D_t^{i} \text{ and } \\
\Gamma_{t}^{ij}&=A_{t}^{ij}\exp (C_t^{jj}-C_t^{ji}) \text{ if } i \neq j.
\end{align*}
\subsection{The Market}
\indent We consider a market consisting of $n$ stocks with price process $S^j = \{S^j_t, t \in [0,T]\}$, $j=1,2,\cdots,n$ and a bond with price $B = \{B_t, t \in [0,T] \}$, where $T <\infty$ will be the maturity time.
Suppose each stock $S^j$ pays, at any time $t\in [0,T]$, a dividend~$\mathcal{D}^j_ t$ of the form $\mathcal{D}^j_ t = \delta'_{j,t} X_t$ for some vector function $\delta_{j,t} \in \mathbb{R}^N$. The stock price is the discounted value of all future dividends. It is shown in \cite{RE1} that, for any $t \in [0,T]$, $S_t^j$ can be written in the form $S^j_t =s'_{j,t}X_t$, where $s_{j,t} \in \mathbb{R}^N$ is a function satisfying the vector ordinary differential equation:
\begin{equation}\label{stock_ode}
\frac{ds_{j,t}}{dt} + \Gamma'_t s_{j,t} = - \delta_{j,t} ~~ \text{ and } ~ s_{j,t} \rightarrow 0 \text{ as } t \rightarrow \infty.
\end{equation}
Note that, for each $j=1,2,\cdots,n$, the vector function $s_{j,t} \in \mathbb{R}^N$ is the solution of the ordinary differential equation \eqref{stock_ode}, hence its $i$-th component $s_{j,t}^i$ is continuous on the domain $[0, + \infty)$. Therefore, on the interval $[0,T]$, for each $i$, $s_{j,t}^i$ is bounded. Moreover, we suppose stock prices are strictly positive, hence $s_{j,t}^i$ is strictly positive for each $i$ and $j$. Therefore there are constants $c_2 >0$ and $c_3 >0$ such that:
\begin{equation} \label{sbound}
c_2 \leq s_{j,t}^i \leq c_3 \quad \text{ for any } i=1,\cdots,N;~j=1,\cdots,n.
\end{equation}
Lemma 6.1 in \cite{RE1} gives the dynamics of the stock prices $S^j$ as:
\begin{lemma}\label{stockS}
\begin{equation*}
S^j_t = S^j_0 + \int_0^t ((A'_u -\Gamma'_u) s_{j,u})'X_u du - \int_0^t \delta'_{j,u} X_u du + \int_0^t s'_{j,u}dM_u ,
\end{equation*}
$j=1,\cdots,n$ and $t \in [0,T]$.
\end{lemma}
Let $r_t \in \mathbb{R}$ be the interest rate at any time $t \in [0,T]$, so the bond price has the dynamics:
\begin{equation*}
dB_t = r_t B_t dt,
\end{equation*}
It is shown in \cite{RE1} that:
\begin{lemma}
For any $t \in [0,T]$,
\begin{equation*}
r_t = D'_t X_t - X'_t\sigma_t A_t X_t.
\end{equation*}
\end{lemma}
Hence, the dynamics of the stochastic discount function $\pi$ in Lemma \ref{lemma_pi} becomes:
\begin{equation}\label{pi}
d \pi_t = - \pi_t r_t dt + \pi_{t_{-}} X'_{t_{-}}\sigma_{t_{-}} dM_t ~~ \text{for any}~ t \in [0,T].
\end{equation}
It is known that a market admitting a positive stochastic discount factor has no arbitrage opportunities.
\subsection{The Self-financing Super-hedging Strategy}
\indent We state the following definitions.
\begin{definition}[Self-financing strategy]
Let $V$ be the portfolio value, let $h^0_t \in \mathbb{R}$ be the number of bonds $B$ held at time $t$, and let $h_t = (h_t^1,\cdots,h_t^n)'$, where $h^j_t \in \mathbb{R}$ is the number of shares of stock $S^j$ held at time $t$, $j=1, \cdots,n$. Then
\begin{equation}\label{portfolio}
V_t = h_t^0B_t +\sum\limits_{j=1}^n h_t^jS_t^j, ~~ t \in [0,T].
\end{equation}
Let $K$ be the cumulative consumption process with $K_0= 0$. Then, a self-financing strategy, is a vector process $(V, h, K)$ such that:
\begin{equation}\label{self_financing}
dV_t = h_t^0 dB_t + \sum_{j=1}^n ( h_t^j dS^j_t + h_t^j d\mathcal{D}^j_t) - dK_t, ~~ t \in [0,T].
\end{equation}
\end{definition}
For American options, the portfolio value should dominate the payoff at any time $t$ to cover any exercise action. This leads to the following definition:
\begin{definition}
Given a payoff process $\{G_t\}$, a self-financing strategy is called a superhedging strategy if:
\[V_t \geq G_t, \quad t \in [0,T) ~\text{ and }~~ V_T = G_T.\]
\end{definition}
We shall discuss whether we can find such a strategy. The theory of RBSDEs driven by Brownian motions ensures the existence of such a strategy in the classical Black--Scholes model. We shall show a similar result for the Markov chain model. \\
\indent It follows from Lemma \ref{stockS}, \eqref{portfolio} and \eqref{self_financing} that:
\begin{align}\label{dV}
\nonumber
dV_t & = h_t^0 r_t B_t dt + \sum_{j=1}^n h_t^j X'_t (A'_t - \Gamma'_t)s_{j,t} dt + \sum_{j=1}^n h_{t-}^j s'_{j,t} dM_t - dK_t \\
\nonumber
& = r_t V_t dt - r_t (\sum_{j=1}^n h_t^j X_t's_{j,t})dt+ \sum_{j=1}^n h_t^j X'_t (A'_t - \Gamma'_t)s_{j,t} dt + \sum_{j=1}^n h_{t-}^j s'_{j,t} dM_t - dK_t \\
\nonumber
& = r_tV_t dt + \sum_{j=1}^n h_t^j X_t' (-r_t + (A'_t -\Gamma'_t))s_{j,t}dt + \sum_{j=1}^n h^j_{t-}s'_{j,t} dM_t -dK_t \\
& = r_tV_t dt + X_t' (-r_t + (A'_t -\Gamma'_t))(\sum_{j=1}^n h_t^j s_{j,t})dt + (\sum_{j=1}^n h^j_{t-}s'_{j,t}) dM_t -dK_t.
\end{align}
Now, consider the
function $f: [0,T] \times \mathbb{R} \times \mathbb{R}^N \rightarrow \mathbb{R}$ such that
\begin{equation}\label{f}
f(t, v, z) = -r_t v - X'_t( -r_t + (A'_t-\Gamma'_t)) z,
\end{equation}
and the RBSDE:
\begin{equation}\label{R}\begin{cases}
\text{i})& V_t = G_T + \int_t^T f(u, V_u , Z_u) du + K_T - K_t - \int_t^T Z_{u-}'dM_u ;\\
\text{ii})& V_t \geq G_t;\\
\text{iii})& \{K_t, t \in [0,T]\} \text{ is continuous and increasing},~ K_0=0 \\
& ~ \text{ and } \int_0^T (V_u - G_u) dK_u = 0.
\end{cases} \end{equation}
\begin{prop}[Lipschitz condition] \label{lipf}
Let $f$ be given by \eqref{f}. We suppose that there is a constant $c_1 >0$ such that:
\begin{equation}\label{normGA}
| (A_t - \Gamma_t )X_t|_N \leq c_1,
\end{equation}
for any $t \in [0,T]$.
Then, for $z_1, z_2 \in \mathbb{R}^N$ and for $v_1, v_2 \in \mathbb{R}$, there is a constant $c_6>0$ such that:
\[|f(t, v_1, z_1 ) - f(t, v_2, z_2)| \leq c_6 (\|z_1 - z_2\|_{X_t}+ |v_1 - v_2|),\]
for any $t \in [0,T]$.
\end{prop}
\begin{proof}
The interest rate $r_t$ is, in practice, positive and bounded, so there is $c_4 >0$ such that
\begin{equation}\label{normr}
r_t \leq c_4.
\end{equation}
From \eqref{normr} and \eqref{normGA}, there is a constant $c_5 >0$ such that
\begin{equation} \label{normS}|(-r_t + (A_t - \Gamma_t))X_t|_N \leq c_5.\end{equation}
Now, for $z_1,z_2 \in \mathbb{R}^N$ and $v_1, v_2 \in \mathbb{R}$, we have
\begin{align*}
& |f(t, v_1, z_1) - f(t, v_2, z_2)| \\
&= |(v_1-v_2)r_t + X'_t(-r_t + (A'_t - \Gamma'_t))(z_1-z_2)| \\
& \leq |v_1-v_2|r_t + |(-r_t + (A_t - \Gamma_t))X_t |_N \times |z_1-z_2|_N.
\end{align*}
From Lemma \ref{normbound}, there is a constant $\beta>0$ such that $|z_2 - z_1 |_N \leq \sqrt{3\beta} \|z_2 - z_1 \|_{X_t}$. Hence, with \eqref{normS}, there is a constant $c_6$, such that
\[|f(t, v_1, z_1) - f(t, v_2, z_2)| \leq c_6 ( \|z_2 - z_1 \|_{X_t} + |v_1-v_2|).\]
\end{proof}
Therefore, by the results of the previous section, RBSDE \eqref{R} has a unique solution $(V,Z,$ $K)$ such that $V \in L^2_{\mathcal{F}}(0,T;\mathbb{R})$, $K_T\in L^2(\mathcal{F}_T)$ and $Z \in P^2_{\mathcal{F}}(0,T;\mathbb{R}^N)$.
Now, if $(V,Z,K)$ is the unique solution to RBSDE \eqref{R} and if there exists a non-zero vector $h=(h_t^1, \cdots, h_t^n)'$ such that $\sum\limits_{j=1}^n h_t^j s_{j,t} = Z_t$, then $(V,h,K)$ solves \eqref{dV}. The equation $\sum\limits_{j=1}^n h_t^j s_{j,t} = Z_t$ has a solution $h_t$, $t \in [0,T]$ if $Z_t$ belongs to the linear subspace of $\mathbb{R}^N$ spanned by the vectors $s_{1,t},\cdots,s_{n,t}$, which holds only if $n \geq N$. Moreover, the decomposition of $Z_t$ into a linear combination of $s_{j,t}$'s is unique if $s_{j,t}$'s are linearly independent, in which case, $n$ cannot be greater than $N$, hence $n=N$. This leads to the following proposition:
\begin{prop}\label{complete}
Suppose $f$, in equation \eqref{f} and Proposition \ref{lipf}, satisfies $c_6\|\Psi^{\dagger}_t\|_{N\times N} \sqrt{6m} < 1$. A unique super-hedging strategy $(V,h,K)$ exists for the American option with payoff $G$ only if the market is composed of $N$ linearly independent stocks.
\end{prop}
The condition in Proposition \ref{complete} is fulfilled by supposing that the vectors $\delta_{j,t}$'s representing the dividends are linearly independent.
\subsection{The Discounted Super-hedging Portfolio Value}
\indent Suppose $(V, h, K)$ is the unique superhedging strategy for the American option with payoff $G$. Let $\varphi_t$, $t \in [0,T]$, be the matrix whose $i$-th columns are $s_{i,t}$, $i=1,\cdots,N$. Then, from \eqref{dV}, $(V,h,K)$ satisfies:
\begin{equation}\label{dV2}
dV_t = r_tV_t dt + X_t' (-r_t + (A'_t -\Gamma'_t))\varphi_t h_tdt + h'_{t-} \varphi'_{t-} dM_t -dK_t.
\end{equation}
We shall write the equation for the discounted portfolio $\pi V$. Using the product rule for semimartingales, we have:
\begin{equation*}
V_t\pi_t = V_T\pi_T - \int_t^T V_{u_{-}} d\pi_u - \int_t^T \pi_{u_{-}}dV_u - \sum_{t < u \leq T} \Delta \pi_u \Delta V_u,
\end{equation*}
and we recall from Chapter 1 that $\sum\limits_{t < u \leq T} \Delta \pi_u \Delta V_u$ is the optional covariation of $\pi_t$ and $V_t$. Note again $dX_t = \Delta X_t$ and $\Delta X_t = \Delta M_t$.
We have from \eqref{pi} and \eqref{dV2} that $$\Delta \pi_t = \pi_{t_{-}} X'_{t_{-}} \sigma_{t_{-}} \Delta X_t ~~ \text{and}~~\Delta V_t = h'_{t-} \varphi'_{t-} \Delta X_t.$$
Also,
\begin{align*}
&\pi_{u_{-}} X'_{u_{-}} \sigma_{u_{-}} \Delta X_u h'_{u-} \varphi'_{u-} \Delta X_u \\
& = \sum_{i,j} \pi_{u_{-}} (e'_iX_{u_{-}}) (e'_j X_u) (e'_i \sigma_{u_{-}} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i) \\
& = \sum_{i,j} \pi_{u_{-}} (e'_iX_{u_{-}}) (e'_j \Delta X_u) (e'_i \sigma_{u_{-}} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i).
\end{align*}
Therefore, noting $\sigma_u^{ii}=0, ~ i=1,2,\cdots,N$,
\begin{align*}
&\sum_{t < u \leq T} \Delta \pi_u \Delta V_u \\
& = \sum_{i,j} \sum_{t < u \leq T} \pi_{u_{-}} (e'_iX_{u_{-}}) (e'_j \Delta X_u) (e'_i \sigma_{u_{-}} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i) \\
& = \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u_{-}}) ( e'_j (A_uX_u du + dM_u))(e'_i \sigma_{u_{-}} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i) \\
& = \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u}) e'_j(A_uX_u)(e'_i \sigma_{u} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i) du\\
& + \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u}) (e'_jdM_u)(e'_i \sigma_{u_{-}} (e_j -e_i))h'_{u-} \varphi'_{u-}(e_j-e_i)\\
& = \int_t^T \sum_{i,j} \pi_{u} (e'_iX_{u}) A_u^{ji}\sigma_u^{ij} h'_{u} \varphi'_{u}(e_j-e_i) du\\
& + \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u})(e'_jdM_u) \sigma_{u-}^{ij}h'_{u-} \varphi'_{u-}(e_j-e_i).
\end{align*}
Hence we derive
\begin{align*}
\pi_t V_t &= \pi_T V_T + \int_t^T V_u \pi_u r_u du - \int_t^T V_{u_{-}}\pi_{u_{-}}X'_{u_{-}}\sigma_{u_{-}}dM_u \\
& - \int_t^T \pi_u V_u \ r_u du - \int_t^T \pi_u X'_u (-r_u + (A'_u - \Gamma'_u))\varphi_u h_u du \\
& - \int_t^T \pi_u (-dK_u) - \int_t^T \pi_{u_{-}} h'_{u_{-}}\varphi'_{u-} dM_u \\
& - \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u_{-}}) A_u^{ji}\sigma_u^{ij} h'_{u-} \varphi'_{u-}(e_j-e_i) du\\
& - \int_t^T \sum_{i,j} \pi_{u_{-}} (e'_iX_{u_{-}})(e'_jdM_u) \sigma_{u-}^{ij}h'_{u-} \varphi'_{u-}(e_j-e_i).
\end{align*}
Collecting together the $du$ terms, and the $dM_u$ terms,
we have:
\begin{align*}
&\pi_t V_t \\
&= \pi_T V_T + \int_t^T (-\pi_u X'_u (-r_u + (A'_u - \Gamma'_u))\varphi_u h_u - \pi_u\sum_{i,j} (X'_{u}e_i) A_u^{ji}\sigma_u^{ij} h'_u \varphi'_u (e_j-e_i)) du \\
& + \int_t^T \pi_u dK_u \\
& - \int_t^T ( \pi_{u_{-}} V_{u_{-}}X'_{u_{-}} \sigma_{u_{-}} + \sum_{i,j} \pi_{u_{-}}(e'_iX_{u-}) \sigma_{u-}^{ij}h'_{u-}\varphi'_{u-}(e_j-e_i) e'_j + \pi_{u-} h'_{u-} \varphi'_{u-}) dM_u.
\end{align*}
Now, let $\tilde{V}_t = \pi_t V_t$, $\tilde{Z}_t = \pi_t \varphi_t h_t$ and $\tilde{K}_t = \int_0^t \pi_u dK_u$. Also, let
\begin{equation*}
H(t, z) = -X'_t (-r_t + (A'_t - \Gamma'_t))z - \sum_{i,j} (X'_{t}e_i) A_t^{ji}\sigma_t^{ij} z' (e_j-e_i),\quad \text{and}
\end{equation*}
\begin{equation*}
I(t, z, v) = v X'_{t_{-}} \sigma_{t_{-}} + \sum_{i,j} (e'_iX_{t-}) \sigma_{t-}^{ij}z'(e_j-e_i) e'_j + z'.
\end{equation*}
Then, $(\tilde{V}_t, \tilde{Z}_t, \tilde{K}_t)$ solves the following equation with final condition $\pi_T G_T$:
\begin{equation}\label{RBSDEdiscount}
\begin{cases}
1) & \tilde{V}_t = \pi_TG_T + \int_t^T H(u,\tilde{Z}_u ) du + \tilde{K}_T - \tilde{K}_t - \int_t^T I(u_{-}, \tilde{Z}_{u_{-}}, \tilde{V}_{u_{-}}) dM_u; \\
2) & \tilde{V}_t \geq \pi_t G_t.
\end{cases}
\end{equation}
Such a solution is called a super-hedging strategy for the discounted American claim.
\begin{prop}
Consider $\mathcal{S}_t^T$, the set of all stopping times $\tau$ with $t \leq \tau \leq T$. Then the solution $\tilde{V}$ of \eqref{RBSDEdiscount} is the solution to the optimal stopping time problem:
\begin{equation*}
\tilde{V}_t = \operatorname*{ess\,sup}_{\tau \in \mathcal{S}_t^T} E \left[ \int_t^{\tau} H(u, \tilde{Z}_u) du + \pi_{\tau}G_{\tau}1_{\{\tau < T\}} +\pi_T G_T1_{\{\tau=T\}} | \mathcal{F}_t\right] .
\end{equation*}
\end{prop}
\begin{proof}
Let $\tau \in \mathcal{S}_t^T$. Integrating i) of \eqref{RBSDEdiscount} between $t$ and $\tau$ and taking the conditional expectation with respect to $\mathcal{F}_t$ (the stochastic integral has zero conditional expectation), we get:
\begin{align*}
\tilde{V}_t &= E \left[ \int_t^{\tau} H(u, \tilde{Z}_u) du + \tilde{V}_{\tau} + \tilde{K}_{\tau} - \tilde{K}_t| \mathcal{F}_t \right] .
\end{align*}
Since $\tilde{K}_{\tau} - \tilde{K}_t \geq 0$ and $$ \tilde{V}_{\tau} \geq \pi_{\tau}G_{\tau}1_{\{\tau < T\}} + \pi_T G_T1_{\{\tau=T\}},$$
\begin{equation*}
\tilde{V}_t \geq E \left[ \int_t^{\tau} H(u, \tilde{Z}_u) du + \pi_{\tau}G_{\tau}1_{\{\tau < T\}} + \pi_T G_T1_{\{\tau=T\}}| \mathcal{F}_t\right].
\end{equation*}
This is true for any $\tau \in \mathcal{S}_t^T$, in particular:
\[\tilde{V}_t \geq \operatorname*{ess\,sup}_{\tau \in \mathcal{S}_t^T} E \left[ \int_t^{\tau} H(u, \tilde{Z}_u) du + \pi_{\tau}G_{\tau}1_{\{\tau < T\}} +\pi_T G_T1_{\{\tau=T\}}| \mathcal{F}_t\right] .\]
The reverse of the above inequality is obtained by choosing an optimal time from $\mathcal{S}_t^T$ and using the condition $\int_0^T (V_t-G_t) dK_t =0$. In fact, let
\begin{equation*}
\tau_t = \inf \{t \leq u \leq T; V_u = G_u\},
\end{equation*}
and $\tau_t = T$ if $V_u > G_u$ for all $u \in [t,T]$. For $t\leq u< \tau_t$ we have $V_u > G_u$, therefore $dK_u = 0$ on $[t,\tau_t)$. Taking the integral from $t$ to $\tau_t$ and using the continuity of $K$, we have
\[\tilde{K}_{\tau_t}-\tilde{K}_t = \int_t^{\tau_t} \pi_u dK_u = 0.\]
Therefore:
\begin{align*}
\tilde{V}_t & = E\left[ \pi_{\tau_t} G_{\tau_t}+ \int_t^{\tau_t} H(u, \tilde{Z}_u) du + \tilde{K}_{\tau_t} - \tilde{K}_{t}| \mathcal{F}_t \right] \\
& = E\left[ \pi_{\tau_t}G_{\tau_t} + \int_t^{\tau_t} H(u, \tilde{Z}_u) du | \mathcal{F}_t \right] \\
& \leq \operatorname*{ess\,sup}_{\tau \in \mathcal{S}_t^T} E\left[ \int_t^{\tau} H(u, \tilde{Z}_u) du + \pi_{\tau}G_{\tau}1_{\{\tau < T\}} +\pi_T G_T1_{\{\tau=T\}}| \mathcal{F}_t \right].
\end{align*}
Combining the two inequalities yields the claimed identity. The price $\tilde{V}_t = \pi_t V_t$ is thus the super-replication price of the discounted payoff $\pi_tG_t$ of the American option.
\end{proof}
\section{Conclusion}
\indent American options have been discussed in a market model where uncertainty is described by a Markov chain. RBSDEs are introduced in this framework, and the existence and uniqueness of their solutions are established. A constrained super-hedging strategy for an American option is shown to exist as the unique solution of an RBSDE.
Let \(T=(S^{1})^{r}\) be a torus, and let \(X\) be a \(T\)-space satisfying some
fairly mild assumptions (see Section~\ref{sec:4:assumptions}).
Recall that \(H_{T}^{*}(X)=H_{T}^{*}(X;\mathbb Q)\), the equivariant cohomology of~\(X\) with rational coefficients,
can be defined as the cohomology of the Borel construction (or homotopy quotient)~\(X_{T}=(ET\times X)/T\),
and that it is an algebra over the polynomial ring~\(R=H^{*}(BT)\).
In~\cite[p.~23]{Borel:1960}, A.~Borel observed that
``even if one is interested mainly in a statement involving only cohomology,
one has to use in the proof groups which play the role of homology groups,
and therefore this presupposes some homology theory''.
In this spirit we defined in~\cite{AlldayFranzPuppe}
the equivariant homology~\(H^{T\!}_{*}(X)\)
of~\(X\), which is a module over~\(R\).
In contrast to~\(H_{T}^{*}(X)\), it is not the homology of any space. Nevertheless, it has many
desirable properties: it is related to~\(H_{T}^{*}(X)\) via universal coefficient spectral sequences,
and, in the case of a rational Poincaré duality space~\(X\), also through an equivariant Poincaré duality isomorphism
\begin{equation}
\label{eq:PD-iso-intro}
H_{T}^{*}(X) \stackrel{\cong}\longrightarrow H^{T\!}_{*}(X),
\quad
\alpha\mapsto \alpha\cap o_{T},
\end{equation}
which is the cap product with an equivariant orientation~\(o_{T}\inH^{T\!}_{*}(X)\).
Note that unlike the non-equivariant situation, the isomorphism~\eqref{eq:PD-iso-intro}
does not necessarily translate into the perfection of the equivariant Poincaré pairing
\begin{equation}
\label{eq:PD-pairing-intro}
H_{T}^{*}(X) \times H_{T}^{*}(X) \to R,
\quad
(\alpha,\beta) \mapsto \pair{\alpha\cup\beta,o_{T}}.
\end{equation}
In fact, the pairing~\eqref{eq:PD-pairing-intro} is perfect if and only
if \(H_{T}^{*}(X)\) is a reflexive \(R\)-module, see~\cite[Cor.~1.3]{AlldayFranzPuppe}.
Hence, in the equivariant setting Poincaré duality
cannot be phrased in terms of cohomology alone.
Another reason to consider equivariant homology is that sometimes it behaves
better than cohomology. For example, the sequence
\begin{equation}
\label{eq:exact-hHT-0-intro}
0 \to H^{T\!}_{*}(X^{T}) \to H^{T\!}_{*}(X) \to H^{T\!}_{*}(X,X^{T}) \to 0
\end{equation}
is always exact (see Proposition~\ref{thm:hHT-short-exact-intro} below),
which is rarely the case for the corresponding sequence
in equivariant cohomology.
The first theme of the present paper is to extend Poincaré duality and its generalization
Poincaré--Alexander--Lefschetz duality to the torus-equivariant setting.
Equivariant Poincaré--Alexander--Lefschetz duality
for compact Lie groups and certain generalized (co)homology theories
has been discussed by Wirthmüller~\cite{Wirthmuller:1974}
and Lewis--May~\cite[\S III.6]{LewisMaySteinberger:1986},~\cite[\S XVI.9]{May:1996}
in the framework of equivariant stable homotopy theory.
Here we are interested in an explicit algebraic description
in the context of the singular Cartan model, \emph{cf.}~Section~\ref{sec:4:singular-Cartan-model}.
We allow rational homology manifolds which may be non-compact or non-ori\-entable.
To this end we have to define equivariant cohomology with compact supports
and equivariant homology with closed supports, and, for non-orientable homology manifolds,
also equivariant (co)homology with twisted coefficients.
\begin{theorem}[Poincaré--Alexander--Lefschetz duality]
\label{thm:PAL-intro}
Let \(X\) be an orientable \(n\)-dimensional rational homology manifold
with a \(T\)-action, and let \((A,B)\) be a closed \(T\)-stable pair in~\(X\).
Then there is an isomorphism of \(R\)-modules
\begin{equation*}
H_{T}^{*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \cong H^{T,c}_{n-*}(A,B).
\end{equation*}
\end{theorem}
Here \(H^{T,c}_{*}(A,B)\) denotes the equivariant homology of the pair~\((A,B)\)
with compact supports.
Theorem~\ref{thm:PAL-intro} extends to an isomorphism of spectral sequences
induced by a \(T\)-stable filtration on~\(X\), and it implies
an equivariant Thom isomorphism.
We also prove analogous results for non-orientable manifolds
and twisted coefficients, which is essential for our applications.
Another important result in equivariant stable homotopy theory is the Adams isomorphism.
In Proposition~\ref{thm:locally-free-action} we prove a version of it in our context.
\smallskip
Our second theme is to extend the results of~\cite{AlldayFranzPuppe}
to the new (co)homology theories and to combine them with equivariant duality results.
Recall that the equivariant \(i\)-skeleton~\(X_{i}\subset X\) is the union of all \(T\)-orbits of dimension~\(\le i\);
this defines the orbit filtration of~\(X\).
A crucial observation, originally made by Atiyah~\cite{Atiyah:1974}
in the context of equivariant \(K\)-theory,
is that
the \(R\)-module \(H_{T}^{*}(X_{i},X_{i-1})\)
is zero or Cohen--Macaulay of dimension~\(r-i\).
The same holds for equivariant homology, and it implies
the following result.
\begin{proposition}
\label{thm:hHT-short-exact-intro}
For any~\(0\le i\le r\) there is an exact sequence
\begin{equation*}
0 \to H^{T\!}_{*}(X_{i}) \to H^{T\!}_{*}(X) \to H^{T\!}_{*}(X,X_{i}) \to 0.
\end{equation*}
\end{proposition}
The case~\(i=0\) was made explicit in~\eqref{eq:exact-hHT-0-intro} above.
Again, this extends to homology with compact supports and/or twisted coefficients, see Proposition~\ref{thm:hHT-short-exact}.
Using the naturality properties of equivariant Poincaré--Alexander--Lefschetz duality,
we can easily generalize a result of Duflot~\cite{Duflot:1983}
about smooth actions on differential manifolds, see Proposition~\ref{thm:duflot-general}:
\begin{corollary}
\label{thm:duflot-intro}
Let \(X\) be a rational homology manifold.
For any~\(0\le i\le r\) there is an exact sequence
\begin{equation*}
0 \to H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i}) \to H_{T}^{*}(X) \to H_{T}^{*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i}) \to 0.
\end{equation*}
\end{corollary}
\smallskip
We now turn to the relation between equivariant homology and the orbit filtration.
Recall that the \emph{Atiyah--Bredon complex}~\(AB^{*}(X)\) is defined by
\begin{equation}
AB^{i}(X)=H_{T}^{*+i}(X_{i},X_{i-1})
\end{equation}
for~\(0\le i\le r\) and zero otherwise. (We set \(X_{-1}=\emptyset\).)
The differential
\begin{equation}
d_{i}\colon H_{T}^{*}(X_{i},X_{i-1}) \to H_{T}^{*+1}(X_{i+1},X_{i})
\end{equation}
is the boundary map in the long exact sequence of the triple~\((X_{i+1},X_{i},X_{i-1})\).
In other words, \(AB^{*}(X)\) is the \(E_{1}\)~page of the spectral sequence
arising from the orbit filtration and converging to~\(H_{T}^{*}(X)\),
and \(H^{*}(AB^{*}(X))\) is its \(E_{2}\)~page.
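(A quick check that \(AB^{*}(X)\) is indeed a complex, independent of the spectral-sequence interpretation: \(d_{i}\) factors as
\begin{equation*}
H_{T}^{*}(X_{i},X_{i-1}) \to H_{T}^{*}(X_{i}) \xrightarrow{\;\delta\;} H_{T}^{*+1}(X_{i+1},X_{i}),
\end{equation*}
and the composition \(H_{T}^{*}(X_{i})\xrightarrow{\delta}H_{T}^{*+1}(X_{i+1},X_{i})\to H_{T}^{*+1}(X_{i+1})\)
occurring inside \(d_{i+1}\circ d_{i}\) vanishes by the exactness of the long exact sequence of the pair~\((X_{i+1},X_{i})\),
so that \(d_{i+1}\circ d_{i}=0\).)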
A principal result of~\cite{AlldayFranzPuppe} is a natural isomorphism
\begin{equation}
\label{eq:Ext-HAB-intro}
H^{i}(AB^{*}(X)) = \Ext_{R}^{i}(H^{T\!}_{*}(X),R)
\end{equation}
for all~\(i\ge 0\). This is once again a consequence of the Cohen--Macaulay property of~\(H^{T\!}_{*}(X_{i},X_{i-1})\).
In~\cite{AlldayFranzPuppe}
we used the isomorphism~\eqref{eq:Ext-HAB-intro} to study syzygies in equivariant cohomology
and to relate them to the Atiyah--Bredon complex.
Here we again indicate generalizations to (co)homology with the new pair of supports
and{\slash}or twisted coefficients.
They are used in~\cite{Franz:geocrit} to prove a ``geometric criterion'' for syzygies
in equivariant cohomology that only depends on the quotient~\(X/T\) as a stratified space.
\medskip
The paper is organized as follows.
In Section~\ref{sec:4:equiv-cohomology}
we first review equivariant cohomology with closed supports and equivariant homology with compact supports
and then define equivariant (co)homology with the other pair of supports.
We also consider homology manifolds and define variants of equivariant (co)homology
with twisted coefficients in this case.
Theorem~\ref{thm:PAL-intro} and its corollaries are proved in Section~\ref{sec:duality-results}.
Applications to the orbit structure are given in Section~\ref{sec:PAL-applications}.
There we also relate the cohomology of the Atiyah--Bredon complex
to the question of uniformity of an action.
Given the importance of~\eqref{eq:Ext-HAB-intro},
we include a direct proof of it in Section~\ref{sec:quick-proof}.
It uses only exact sequences as in Proposition~\ref{thm:hHT-short-exact-intro}
and avoids the intricate reasoning with spectral sequences done in~\cite{AlldayFranzPuppe}.
\begin{acknowledgements}
We thank an anonymous referee for numerous helpful comments and
in particular for suggesting a strengthening of Proposition~\ref{thm:locally-free-action}.
\end{acknowledgements}
\section{Equivariant homology and cohomology}
\label{sec:4:equiv-cohomology}
\subsection{Notation and standing assumptions}
\label{sec:4:assumptions}
We write ``\(\subset\)'' for inclusion of sets and ``\(\subsetneq\)'' for proper inclusion.
Throughout this paper, \(T=(S^{1})^{r}\) denotes a compact torus of rank~\(r\ge0\),
and \(\Bbbk\) a field. From Section~\ref{sec:PAL-applications} on we will assume
that the characteristic of~\(\Bbbk\) is zero. All (co)homology is taken with coefficients in~\(\Bbbk\)
unless specified otherwise.
\(C_{*}(-)\) and \(C^{*}(-)\) denote normalized singular chains and cochains
with coefficients in the field~\(\Bbbk \), and \(H_{*}(-)\)~and~\(H^{*}(-)\)
singular (co)ho\-mol\-ogy.
We adopt a cohomological grading, so that the homology
of a space lies in non-positive degrees; an element~\(c\in H_{i}(X)\)
has cohomological degree~\(-i\).
\(R=H^{*}(BT)\) is the symmetric algebra generated by~\(H^{2}(BT)\), and \(\mathfrak m\lhd R\) its maximal homogeneous ideal.
All \(R\)-modules are assumed to be graded.
We consider \(\Bbbk \) as an \(R\)-module (concentrated in degree~\(0\)) via the canonical augmentation.
For an \(R\)-module~\(M\) and an~\(l\in\mathbb Z\)
the notation~\(M[l]\) denotes a degree shift by~\(l\), so that the degree~\(i\)~piece of~\(M[l]\)
is the degree~\(i-l\)~piece of~\(M\). For the cohomology of some space,
we alternatively write \(H^{*}(X)[l]\) or \(H^{*-l}(X)\). Due to the cohomological grading,
we have in homology the identity~\(H_{*}(X)[l]=H_{*+l}(X)\).
We assume all spaces
to be Hausdorff, second-countable, locally compact, locally contractible
and of finite covering dimension,
hence also completely regular, separable and metrizable.
Important examples are topological (in particular, smooth) manifolds, orbifolds, complex algebraic varieties,
and countable, locally finite CW~complexes.
We also assume that only finitely many distinct isotropy groups occur in any \(T\)-space~\(X\).
\begin{remark}
\label{rem:quotient}
Under these assumptions on a \(T\)-space~\(X\),
the orbit space~\(X/T\) is again Hausdorff and locally compact \cite[Thm.~3.1]{Bredon:1972},
second-countable,
locally contractible \cite[Thm~3.8, Cor.~3.12]{Conner:1960}
and of finite covering dimension \cite[Thm.~VIII.3.16]{Borel:1960}.
It is easy to see that the same applies to the fixed point set \(X^{T}\)
with the exception of local contractability: see Remark~\ref{rem:4:loc-contractible} below.
\end{remark}
It follows from our assumptions that every subset~\(A\subset X\) is paracompact, hence singular cohomology
and Alexander--Spanier cohomology are naturally isomorphic for all
pairs~\((A,B)\)
such that \(A\) and~\(B\) are locally contractible.
We therefore put as another standing assumption
that all
subsets~\(A\subset X\) we consider
are locally contractible;
this holds automatically if \(A\) is open in~\(X\).
And we call \((A,B)\) a \(T\)-pair if \(A\) and \(B\) are \(T\)-stable.
In addition we will put a finiteness condition on the (co)homology
of the spaces and pairs we consider. This will be explained in detail
once we have defined equivariant (co)homology.
\subsection{The singular Cartan model}
\label{sec:4:singular-Cartan-model}
Let \(X\) be a \(T\)-space.
We recall from~\citeorbitsone{Sec.~\ref*{sec:equiv-cohomology}}
the definition of equivariant homology and cohomology via the ``singular Cartan model''.
As pointed out in~\cite{AlldayFranzPuppe},
it can be replaced by the usual Cartan model
for differentiable actions on manifolds and \(\Bbbk=\mathbb R\).
The \emph{singular Cartan model} of the \(T\)-pair~\((A,B)\) in~\(X\) is
\begin{align}
\label{eq:4:definition-CT}
C_{T}^{*}(A,B) &= C^{*}(A,B)\otimes R
\shortintertext{with \(R\)-linear differential}
\label{eq:4:definition-d-CT}
d(\gamma\otimes f) &=
d\gamma\otimes f + \sum_{i=1}^{r}a_{i}\cdot\gamma\otimes t_{i}f \\
\shortintertext{and \(R\)-bilinear product}
(\gamma\otimes f)\cup(\gamma'\otimes f') &= \gamma\cup\gamma'\otimes f f'.
\end{align}
Here \(t_{1}\),~\ldots,~\(t_{r}\) are a basis of~\(H^{2}(BT)\subset R\), and
\(a_{1}\),~\ldots,~\(a_{r}\) are representative loops of the dual basis of~\(H_{1}(T)\);
the product~\(a_{i}\cdot\gamma\) refers to the action of~\(C_{*}(T)\) on~\(C^{*}(X)\)
induced by the \(T\)-action on~\(X\).
The equivariant chain complex~\(C^{T\!}_{*}(A,B)\)
is the \(R\)-dual of~\eqref{eq:4:definition-CT},
\begin{equation}
\label{eq:4:definition-hCT}
C^{T\!}_{*}(A,B) = \Hom_{R}(C_{T}^{*}(A,B),R).
\end{equation}
Equivariant cohomology and homology are defined as
\begin{align}
H_{T}^{*}(A,B) &= H^{*}(C_{T}^{*}(A,B)), \\
H^{T\!}_{*}(A,B) &= H_{*}(C^{T\!}_{*}(A,B)).
\end{align}
This definition of~\(H_{T}^{*}(A,B)\) is naturally isomorphic,
as an \(R\)-algebra, to the usual one based on the Borel construction~\(X_{T}\).
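For orientation, we record two elementary examples (a quick sketch, both immediate from the definitions).
For a point, \(C_{T}^{*}(\mathrm{pt})=R\) with vanishing differential, so that
\begin{equation*}
H_{T}^{*}(\mathrm{pt}) = R,
\qquad
H^{T\!}_{*}(\mathrm{pt}) = \Hom_{R}(R,R) = R.
\end{equation*}
For~\(X=T\), acting on itself by translations, one has \(H_{T}^{*}(T)=\Bbbk\),
whereas \(H^{T\!}_{*}(T)=\Bbbk[-r]\); see the discussion following Proposition~\ref{thm:locally-free-action}.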
\subsection{Other supports}
Let \((A,B)\) be a closed \(T\)-pair in a \(T\)-space~\(X\).
We define the \emph{equivariant cohomology of~\((A,B)\) with compact supports}
by
\begin{align}
H_{T,c}^{*}(A,B) &= \mathop{\underrightarrow\lim} H_{T}^{*}(U,V) = H^{*}(C_{T,c}^{*}(A,B)),
\intertext{where}
\label{eq:definition-CTc}
C_{T,c}^{*}(A,B) &= \mathop{\underrightarrow\lim} C_{T}^{*}(U,V) = \bigl(\mathop{\underrightarrow\lim} C^{*}(U,V)\bigr)\otimes R,
\end{align}
and the direct limits are taken over all \(T\)-stable open neighbourhood pairs~\((U,V)\) of~\((A,B)\)
such that \(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} V\) is compact.
By tautness and excision, \(H_{T,c}^{*}(A,B)\) is easily seen to be naturally isomorphic to the Alexander--Spanier cohomology
of the closure of~\((A,B)\) in the one-point compactification of~\(X\) relative to the added point.
Hence it does not matter whether \((A,B)\) is considered as a closed \(T\)-pair in~\(X\) or in~\(A\).
The \emph{equivariant homology of~\((A,B)\) with closed supports}
is defined by taking the \(R\)-dual
of~\eqref{eq:definition-CTc},
\begin{align}
C^{T,c}_{*}(A,B) &= \Hom_{R}(C_{T,c}^{*}(A,B),R), \\
H^{T,c}_{*}(A,B) &= H_{*}(C^{T,c}_{*}(A,B)).
\end{align}
Clearly, we have \(H_{T,c}^{*}(A,B)=H_{T}^{*}(A,B)\)
and \(H^{T,c}_{*}(A,B)=H^{T\!}_{*}(A,B)\) if \(X\) is compact.
\subsection{Properties}
\label{sec:properties}
We list several important properties of equivariant (co)homology,
omitting proofs that were given in~\cite{AlldayFranzPuppe}.
In Section~\ref{sec:twisted} we will extend all results of this section
to homology manifolds and (co)homology with twisted coefficients,
see Remark~\ref{rem:twisted-properties}.
\begin{assumption}
\label{ass:4:finite}
For the rest of this paper we assume
that \(H^{*}(A,B)\) is a finite-dimensional \(\Bbbk\)-vector space
for any \(T\)-pair~\((A,B)\) for which we consider equivariant
cohomology with closed supports or equivariant homology with compact supports.
By Proposition~\ref{thm:4:serre-ss} below, this implies
that both \(H_{T}^{*}(A,B)\) and \(H^{T\!}_{*}(A,B)\) are finitely generated \(R\)-modules.
(Each of the latter conditions is actually equivalent to the former.)
\end{assumption}
\begin{proposition}[Serre spectral sequence]
\label{thm:4:serre-ss}
\label{thm:4:ss-HT-E2}
\label{thm:4:ss-hHT-E2}
Let \((A,B)\) be a \(T\)-pair in~\(X\). There are spectral sequences,
natural in~\((A,B)\), with
\begin{align*}
E_{1} = E_{2} &= H^{*}(A,B)\otimes R \;\Rightarrow\; H_{T}^{*}(A,B), \\
E_{1} = E_{2}&= H_{*}(A,B)\otimes R \;\Rightarrow\; H^{T\!}_{*}(A,B).
\end{align*}
\end{proposition}
\begin{proof}
These are eqs.~\eqref{eq:ss-HT-E2} and \eqref{eq:ss-hHT-E2}
in~\cite{AlldayFranzPuppe}.
\end{proof}
\begin{proposition}[Universal coefficient theorem \citeorbitsone{Prop.~\ref*{thm:uct}}]
\label{thm:4:uct}
Let \((A,B)\) be a \(T\)-pair in~\(X\).
There are spectral sequences,
natural in~\((A,B)\), with
\begin{align*}
E_{2}^{p} &= \Ext_{R}^{p}(H_{T}^{*}(A,B),R) \;\Rightarrow\; H^{T\!}_{*}(A,B), \\
E_{2}^{p} &= \Ext_{R}^{p}(H^{T\!}_{*}(A,B),R) \;\Rightarrow\; H_{T}^{*}(A,B).
\end{align*}
\end{proposition}
For a multiplicative subset~\(S\subset R\) and a \(T\)-space~\(X\), define the \(T\)-stable subset
\begin{equation}
X^{S} = \bigl\{\, x\in X \bigm| S\cap\ker(H^{*}(BT)\to H^{*}(BT_{x}))=\emptyset\,\bigr\} \subset X.
\end{equation}
It is closed in~\(X\), \emph{cf.}~\cite[p.~132]{AlldayPuppe:1993}.
For example, \(X^{S}=X^{T}\) if \(\Char\Bbbk=0\) and \(S\) contains all non-zero linear polynomials.
\begin{proposition}[Localization theorem]
\label{thm:localization-thm-homology}
Let \((A,B)\) be a \(T\)-pair in~\(X\), and let \(S\subset R\) be a multiplicative subset.
Then the inclusion~\(X^{S}\hookrightarrow X\) induces isomorphisms
of \(S^{-1}R\)-modules
\begin{align*}
S^{-1}H_{T}^{*}(A,B) &\to S^{-1}H_{T}^{*}(A^{S},B^{S}), \\
S^{-1}H^{T\!}_{*}(A^{S},B^{S}) &\to S^{-1}H^{T\!}_{*}(A,B).
\end{align*}
\end{proposition}
\begin{proof}
The localization theorem for equivariant cohomology with closed supports
is classical, \emph{cf.}~\cite[Ch.~3]{AlldayPuppe:1993}.
(Recall that only finitely many orbit types occur in~\(X\).)
By the universal coefficient theorem
there is a spectral sequence converging to~\(H^{T\!}_{*}(A,B)\)
with \(E_{2}\)~page~\(\Ext_{R}(H_{T}^{*}(A,B),R)\), and similarly
for the pair~\((A^{S},B^{S})\).
The inclusion~\(X^{S}\hookrightarrow X\) gives rise to a map of spectral sequences,
which on the \(E_{2}\)~pages is the canonical map
\begin{equation}
\label{eq:Ext-S}
\Ext_{R}(H_{T}^{*}(A^{S},B^{S}),R) \to \Ext_{R}(H_{T}^{*}(A,B),R).
\end{equation}
Since localization is an exact functor, the \(S\)-localization of~\eqref{eq:Ext-S} is the map
\begin{equation}
\Ext_{S^{-1}R}(S^{-1}H_{T}^{*}(A^{S},B^{S}),S^{-1}R) \to \Ext_{S^{-1}R}(S^{-1}H_{T}^{*}(A,B),S^{-1}R),
\end{equation}
which is an isomorphism by the cohomological localization theorem.
Hence, the localization of~\(H^{T\!}_{*}(A^{S},B^{S}) \to H^{T\!}_{*}(A,B)\)
is an isomorphism as well.
\end{proof}
Let \(K\subset T\) be a subtorus, say of rank~\(p\), with quotient~\(L=T/K\).
In this case we have canonical morphisms of algebras
\begin{equation}
\label{eq:def-RK-RL}
H^{*}(BL)=R_{L}=\Bbbk[t_{p+1},\dots,t_{r}]\to R\to H^{*}(BK)=R_{K}=\Bbbk[t_{1},\dots,t_{p}].
\end{equation}
Moreover, any choice of splitting~\(T\cong K\times L\) defines an isomorphism~\(R=R_{K}\otimes R_{L}\).
\begin{proposition}
\label{thm:action-trivial}
Let \(K\subset T\) be a subtorus with quotient~\(L=T/K\).
Let \((A,B)\) be a closed \(T\)-pair in~\(X\)
such that \(K\) acts trivially on~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\).
Then there are isomorphisms of \(R\)-modules
\begin{align*}
H_{T}^{*}(A,B) &= H_{L}^{*}(A,B)\otimes_{R_{L}} R, \\
H^{T\!}_{*}(A,B) &= H^{L}_{*}(A,B)\otimes_{R_{L}} R.
\end{align*}
The result holds for any \(T\)-pair~\((A,B)\) if \(K\) acts trivially on all of~\(A\).
\end{proposition}
In the proof below as well as in that of Proposition~\ref{thm:locally-free-action}
we will use the following fact, \emph{cf.}~\cite[Cor.~B.1.13]{AlldayPuppe:1993}:
Let \(\phi\colon M\to N\) be a quasi-isomorphism of dg~\(R\)-modules.
If \(M\)~and~\(N\) are free as \(R\)-modules,
then \(\phi\) is a homotopy equivalence over~\(R\).
\begin{proof}
Since \(B\) is closed in~\(A\), we have,
by tautness and excision, a quasi-iso\-mor\-phism of dg \(R\)-modules
\begin{equation}
\label{eq:CTAB-dirlim-quiso}
C_{T}^{*}(A,B) \to \mathop{\underrightarrow\lim} C_{T}^{*}(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,U\mathbin{\mkern-2mu\MFsm\mkern-2mu} B) = \Bigl(\mathop{\underrightarrow\lim} C^{*}(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,U\mathbin{\mkern-2mu\MFsm\mkern-2mu} B)\Bigr)\otimes R,
\end{equation}
where the direct limit is taken over all \(T\)-stable open sets~\(U\supset B\).
Hence we may work with this direct limit, which we denote by~\(M\).
Now choose a splitting~\(T=K\times L\).
By~\citeorbitsone{Prop.~\ref*{thm:hHT-independent-basis}}, we may assume
that the representatives~\(a_{i}\in C_{1}(T)\) appearing in the ``Cartan differential''~\eqref{eq:4:definition-d-CT}
are chosen such that \(a_{1}\),~\ldots,~\(a_{p}\in C_{1}(K)\) and \(a_{p+1}\),~\ldots,~\(a_{r}\in C_{1}(L)\).
Since we are using normalized singular (co)chains,
\(C_{*}(K)\) acts trivially on each~\(C^{*}(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,U\mathbin{\mkern-2mu\MFsm\mkern-2mu} B)\).
The differential on~\(M\) therefore takes the form
\begin{equation}
d(\gamma\otimes f) = d\gamma\otimes f +\sum_{i=p+1}^{r} a_{i}\cdot\gamma\otimes t_{i}f,
\end{equation}
which implies
\begin{equation}
H_{T}^{*}(A,B)=H_{L}^{*}(A,B)\otimes H^{*}(BK)
= H_{L}^{*}(A,B)\otimes_{R_{L}} R
\end{equation}
by the Künneth formula.
By the remark made above, the quasi-isomorphism~\eqref{eq:CTAB-dirlim-quiso}
is a homotopy equivalence, which is preserved by the functor~\(\Hom_{R}(-,R)\).
For equivariant homology we can therefore argue analogously.
The last claim follows by the five-lemma from the previous one, applied to~\(A\) and~\(B\) separately,
and the long exact sequence of the pair.
\end{proof}
At the other extreme, we have the following:
\begin{proposition}
\label{thm:locally-free-action}
Let \(K\subset T\) be a subtorus, say of rank~\(p\), with quotient~\(L=T/K\).
Let \((A,B)\) be a closed \(T\)-pair in~\(X\)
such that \(K\) acts freely on~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\) (or just locally freely in case~\(\Char\Bbbk=0\)).
Then \(H^{*}(A/K,B/K)\) is finite-dimensional, and there are isomorphisms of
\(R_{L}\)-modules
\begin{align*}
H_{T}^{*}(A,B) &= H_{L}^{*}(A/K,B/K), \\
H^{T\!}_{*}(A,B) &= H^{L}_{*-p}(A/K,B/K).
\end{align*}
The result holds for any \(T\)-pair~\((A,B)\) if \(K\) acts (locally) freely on all of~\(A\).
\end{proposition}
The cohomological part is well-known, \emph{cf.}~\cite[Prop.~3.10.9]{AlldayPuppe:1993}.
That a degree shift by~\(-p\) is necessary for the homological part can already
be seen by considering~\(K=T=X\):
In this case one has \(H^{T\!}_{*}(X)=\Bbbk[-r]=H_{*-r}(X/T)\), \emph{cf.}~\citeorbitsone{Ex.~\ref*{ex:homogeneous-space}}.
Geometrically, the homological isomorphism
can be understood as a transfer
for the quotient map~\(X\to X/K\).
Since in the singular setting it is delicate to define a transfer map or integration over the fibre
on the (co)chain level, we will follow an algebraic approach
and postpone the geometrical aspects to our discussion of Poincaré--Alexander--Lefschetz duality
(Remark~\ref{rem:locally-free}).
The homology isomorphism can also be viewed
as a version of the Adams isomorphism in equivariant stable homotopy theory
(see~\cite[\S II.7]{LewisMaySteinberger:1986} or~\cite[\S XVI.5]{May:1996})
in our algebraic context.
The proof of Proposition~\ref{thm:locally-free-action} requires some preparation.
Recall that \(\mathfrak m=(t_{1},\dots,t_{r})\) is the maximal graded ideal in~\(R\).
In the proof below we will use the local duality isomorphism
\begin{equation}
\label{eq:4:local-duality}
H_{\m}^{j}(M) = \Ext_{R}^{r-j}(M,R[2r])^{\vee},
\end{equation}
which is natural in the \(R\)-module~\(M\),
see for instance \cite[Thm.~A1.9]{Eisenbud:2005}
(where the generators of the polynomial ring are assigned the degree~\(1\), not~\(2\)).
The symbol~``\({}^{\vee}\)'' in~\eqref{eq:4:local-duality} denotes the dual of a graded \(\Bbbk\)-vector space.
We will also need the Čech complex
computing \(H_{\m}^{*}(M)\)
by means of
some generators
of~\(\mathfrak m\) as in~\cite[p.~189]{Eisenbud:2005}.
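To illustrate, in the simplest case \(r=1\) the Čech complex of an \(R=\Bbbk[t]\)-module~\(M\) is
\begin{equation*}
0 \longrightarrow M \longrightarrow M_{t} \longrightarrow 0,
\end{equation*}
so that \(H_{\m}^{0}(M)\) is the \(\mathfrak m\)-torsion submodule of~\(M\)
and \(H_{\m}^{1}(M)=M_{t}/M\).
For \(M=R\) this gives \(H_{\m}^{0}(R)=0\) and \(H_{\m}^{1}(R)=R_{t}/R\)
with \(\Bbbk\)-basis \(t^{-1}\),~\(t^{-2}\),~\ldots,
the graded dual of~\(R\) up to a shift, in accordance with local duality~\eqref{eq:4:local-duality}.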
More generally, we consider the Čech complex
for a dg~\(R\)-module~\(M\).
Thus we obtain a bicomplex~\(C_{\m}^{*,*}(M)\) with
first differential~\(d^{\mkern 1mu I}\) coming from~\(M\)
and second differential~\(d^{\mkern 1mu\II}\)
coming from the Čech complex for the canonical generators~\(t_{1}\),~\ldots,~\(t_{r}\).
An element in~\(C_{\m}^{i,j}(M)\) is a sum of elements of degree~\(j\)
in the \(i\)-fold localizations in this Čech complex.
While \(j\) is unbounded in both directions,
we have \(0\le i\le r\), so that
both filtrations of the bicomplex are regular \cite[p.~452]{Bredon:1997}.
Hence both associated spectral sequences converge to~\(H_{\m}^{*}(M)\),
the cohomology of~\(C_{\m}^{*,*}(M)\) with respect to the total differential.
In the first bicomplex spectral sequence we have
\begin{align}
{}^{I}\mkern-4mu E_{1} &= C_{\m}^{*,*}(H^{*}(M)), \\
\label{eq:ss2}
{}^{I}\mkern-4mu E_{2} &= H_{\m}^{*}(H^{*}(M))
\end{align}
since the cohomology of the localization of~\(M\) is the localization of the cohomology.
Taking the other bicomplex spectral sequence, we get
\begin{equation}
\label{eq:ss1}
{}^{\II}\mkern-4mu E_{1} = {}^{\II}\mkern-4mu H_{\m}^{*}(M)
\end{equation}
where \({}^{\II}\mkern-4mu H_{\m}^{*}(M)\) means the cohomology of~\(C_{\m}^{*,*}(M)\) with respect to the differential~\(d^{\mkern 1mu\II}\),
that is, the local cohomology of the \(R\)-module~\(M\) with trivial differential.
Suppose that \(M\) is finitely generated and free as an \(R\)-module.
By local duality one then has that the \(E_{1}\)~page
\begin{equation}
{}^{\II}\mkern-4mu E_{1}^{k} = {}^{\II}\mkern-4mu H_{\m}^{k}(M) =
\begin{cases}
\Hom_{R}(M,R[2r])^{\vee} & \text{if \(k=r\),} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
is concentrated in the column~\(k=r\), and therefore
\begin{equation}
H_{\m}^{*}(M) = H^{*}(\Hom_{R}(M,R[2r]))^{\vee}[r] = H^{*}(\Hom_{R}(M,R))^{\vee}[-r].
\end{equation}
If \(M\) is \(R\)-homotopy equivalent to some~\(M'\), then so are \(C_{\m}^{*,*}(M)\) and \(C_{\m}^{*,*}(M')\),
hence \(H_{\m}^{*}(M)\cong H_{\m}^{*}(M')\) as \(R\)-modules. In particular, if \(M=C_{T}^{*}(X)\), then it is
\(R\)-homotopy equivalent to a dg~\(R\)-module~\(M'\) which is finitely generated and free as an \(R\)-module.
We therefore conclude that
\begin{align}
H_{\m}^{*}(C_{T}^{*}(X)) &= H^{*}(\Hom_{R}(C_{T}^{*}(X),R))^{\vee}[-r] \\
&= H^{*}(C^{T\!}_{*}(X))^{\vee}[-r] = H^{T\!}_{*}(X)^{\vee}[-r].
\end{align}
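As a consistency check, take \(X=\mathrm{pt}\): then \(C_{T}^{*}(X)=R\) and \(H^{T\!}_{*}(\mathrm{pt})=R\),
and the identity above reduces to the classical computation \(H_{\m}^{*}(R)=R^{\vee}[-r]\)
of the local cohomology of the polynomial ring, concentrated in Čech degree~\(r\).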
Let \(\nn=(t_{p+1},\dots,t_{r})\) be the maximal graded ideal of~\(R_{L}\).
Using the canonical generators,
we can similarly define \(C_{\nn\!}^{*,*}(-)\) and \(H_{\nn\!}^{*}(-)\)
for dg~\(R_{L}\)-modules, hence a~fortiori for dg~\(R\)-modules.
Since these generators are among the chosen generators of~\(\mathfrak m\),
we have a canonical map of bicomplexes
\begin{equation}
C_{\m}^{*,*}(M) \to C_{\nn\!}^{*,*}(M)
\end{equation}
for any dg~\(R\)-module~\(M\), inducing a map of \(R\)-modules~\(H_{\m}^{*}(M)\to H_{\nn\!}^{*}(M)\).
\begin{lemma}
\label{thm:Hn-Hm-iso}
Let \((A,B)\) be a closed \(T\)-pair in~\(X\).
Assume that \(K\) acts freely on~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\), and that all~\(x\in A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\) have the same isotropy group, say \(K'\).
Then the map~\(H_{\m}^{*}(H_{T}^{*}(A,B))\to H_{\nn\!}^{*}(H_{T}^{*}(A,B))\) is an isomorphism.
If \(\Char\Bbbk=0\), then it is enough that \(K\) acts locally freely and that the isotropy groups in~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\)
have the same identity component~\(K'\).
\end{lemma}
\begin{proof}
Since \(K\) acts (locally) freely, the composition \(K'\to T\to L\) is injective (or has finite kernel).
This implies that the composition~\(H^{*}(BL)\to H^{*}(BT)\to H^{*}(BK')\) is surjective.
Hence there are \(t_{1}'\),~\ldots,~\(t_{p}'\in\m_{L}\)
such that \(t_{i}\) and \(t_{i}'\) map to the same element in~\(H^{*}(BK')\)
for~\(1\le i\le p\), and \(u_{i}=t_{i}-t_{i}'\) maps to~\(0\). By the localization theorem (Proposition~\ref{thm:localization-thm-homology}),
this implies that the localization of~\(H_{T}^{*}(A,B)\) at~\(u_{i}\) vanishes.
We observe that \(u_{1}\),~\ldots,~\(u_{p}\),~\(t_{p+1}\),~\ldots,~\(t_{r}\) also generate \(\mathfrak m\).
Since local cohomology can be computed from any set of generators, \emph{cf.}~\cite[Thm.~A1.3]{Eisenbud:2005},
we can assume that one has chosen these generators
instead of the canonical generators~\(t_{1}\),~\ldots,~\(t_{r}\).
Then the terms in the Čech complex involving at least one of the~\(u_{i}\)'s
drop out, and we are left with the Čech complex computing~\(H_{\nn\!}^{*}(H_{T}^{*}(A,B))\).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{thm:locally-free-action}]
We choose a splitting~\(T=K\times L\) with compatibly chosen representatives \(a_{i}\in C_{1}(T)\)
as in the proof of Proposition~\ref{thm:action-trivial}.
As mentioned already, the isomorphism
\begin{equation}
\label{eq:iso-HT-HL-quotient-K}
H_{T}^{*}(A,B) = H_{L}^{*}(A/K,B/K)
\end{equation}
is classical
(and requires that \(B\) is closed in~\(A\)).
It is induced by the quasi-isomorphism of dg~\(R_{L}\)-modules
\begin{align}
\label{eq:quasi-iso-XK}
C_{L}^{*}(A/K,B/K) = C^{*}(A/K,B/K)\otimes R_{L} &\to C_{T}^{*}(A,B) = C^{*}(A,B)\otimes R, \\
\notag
\gamma\otimes f &\mapsto \pi^{*}\gamma\otimes f,
\end{align}
where \(\pi\colon X\to X/K\) is the projection.
It follows from the localization theorem that the localization of~\(H_{K}^{*}(A,B)\)
at each generator~\(t_{i}\) of~\(R_{K}\) vanishes. Since \(H_{K}^{*}(A,B)\)
is finitely generated over~\(R_{K}\) by Assumption~\ref{ass:4:finite},
this implies that \(H_{K}^{*}(A,B)\) is killed by some power of each~\(t_{i}\)
and therefore that it is finite-dimensional as a \(\Bbbk\)-vector space. By taking \(T=K\)
in~\eqref{eq:iso-HT-HL-quotient-K}, we see that \(H^{*}(A/K,B/K)\)
is also finite-dimensional.
For the homological statement
we start by proving that the canonical map
\begin{equation}
C_{\m}^{*,*}(C_{T}^{*}(A,B)) \to C_{\nn\!}^{*,*}(C_{T}^{*}(A,B))
\end{equation}
is a quasi-isomorphism.
We proceed by induction on the number~\(m\) of (connected) orbit types in~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\).
For~\(m=0\) there is nothing to show as \(A=B\) in this case.
Otherwise fix an orbit type of maximal dimension in~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\)
and let \(A'\subset A\) be the union of~\(B\) and all other orbit types;
\(A'\) is \(T\)-stable and closed in~\(A\).
The short exact sequence
\begin{equation}
0 \to C_{T}^{*}(A,A') \to C_{T}^{*}(A,B) \to C_{T}^{*}(A',B) \to 0
\end{equation}
gives rise to the commutative diagram
\begin{equation}
\begin{tikzcd}[column sep=small]
0 \arrow{r} & C_{\m}^{*,*}(C_{T}^{*}(A,A')) \arrow{d} \arrow{r} & C_{\m}^{*,*}(C_{T}^{*}(A,B)) \arrow{d} \arrow{r} & C_{\m}^{*,*}(C_{T}^{*}(A',B)) \arrow{d} \arrow{r} & 0 \\
0 \arrow{r} & C_{\nn\!}^{*,*}(C_{T}^{*}(A,A')) \arrow{r} & C_{\nn\!}^{*,*}(C_{T}^{*}(A,B)) \arrow{r} & C_{\nn\!}^{*,*}(C_{T}^{*}(A',B)) \arrow{r} & 0 \mathrlap{,}
\end{tikzcd}
\end{equation}
whose horizontal sequences are again short exact.
The right vertical arrow is a quasi-isomorphism by induction.
To see that the left one is so as well, we consider the induced map
between the \(E_{2}\)~pages of the first bicomplex spectral sequences~\eqref{eq:ss2}.
In our case this is the map
\begin{equation}
H_{\m}^{*}(H_{T}^{*}(A,A')) \to H_{\nn\!}^{*}(H_{T}^{*}(A,A')),
\end{equation}
and it is an isomorphism by Lemma~\ref{thm:Hn-Hm-iso} and our choice of~\(A'\).
The map induced in cohomology by the left arrow above
therefore is also an isomorphism.
Hence the middle arrow is a quasi-isomorphism by the five-lemma, which proves the claim.
The quasi-isomorphism~\eqref{eq:quasi-iso-XK}
is in fact a homotopy equivalence over~\(R_{L}\)
as both sides are free as \(R_{L}\)-modules.
We therefore get isomorphisms of \(R_{L}\)-modules
\begin{align}
H^{T\!}_{*}(A,B)^{\vee}[-r] &= H_{\m}^{*}(C_{T}^{*}(A,B)) = H_{\nn\!}^{*}(C_{T}^{*}(A,B)) \\
&= H_{\nn\!}^{*}(C_{L}^{*}(A/K,B/K))
= H^{L}_{*}(A/K,B/K)^{\vee}[-(r-p)],
\end{align}
which translates into the claimed isomorphism
\begin{equation}
H^{T\!}_{*}(A,B) \to H^{L}_{*}(A/K,B/K)[-p] = H^{L}_{*-p}(A/K,B/K).
\end{equation}
The last claim follows again from the absolute case and the five-lemma.
\end{proof}
All these results hold as well
for cohomology with compact supports and homology with closed supports
and closed \(T\)-pairs~\((A,B)\),
assuming that \(H_{c}^{*}(A,B)\) is a finite-dimensional \(\Bbbk\)-vector space, \emph{cf.}~Assumption~\ref{ass:4:finite}.
The proofs are identical; the localization theorem for cohomology
with compact supports follows from the version for closed supports
since direct limits preserve isomorphisms.
\begin{proposition}
\label{thm:relative-cohomology-complement}
For any closed \(T\)-pair~\((A,B)\) in~\(X\) there are isomorphisms of \(R\)-modules
\begin{align*}
H_{T,c}^{*}(A,B) &= H_{T,c}^{*}(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B), \\
H^{T,c}_{*}(A,B) &= H^{T,c}_{*}(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B).
\end{align*}
\end{proposition}
\begin{proof}
The first identity follows from excision and the fact that a direct limit
is an exact functor. The second identity then is a consequence
of the universal coefficient theorem.
\end{proof}
\subsection{Homology manifolds}
Let \(X\) be a \emph{\(\Bbbk\)-homology manifold}, say of dimension~\(n\).
By this we mean a connected space~\(X\) such that for any~\(x\in X\) one has
\begin{equation}
\label{eq:def-homology-mf}
H_{i}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{x\}) \cong
\begin{cases}
\Bbbk & \text{if \(i=n\),} \\
0 & \text{if \(i\ne n\).}
\end{cases}
\end{equation}
If in addition \(H^{c}_{n}(X) \cong \Bbbk\), then
\(X\) is called \emph{orientable}.
Homology manifolds are an appropriate setting for Poincaré duality,
see Lemma~\ref{thm:PD-nonequiv} below.
\begin{assumption}
\label{assumption:pm1}
We assume that any homology manifold~\(X\) we consider
is orientable or
admits an orientable two-fold covering~\(\pi\colon \tilde X\to X\).
\end{assumption}
For non-orientable~\(X\), such a covering will be called
an \emph{orientation cover}. Note that \(\tilde X\) is necessarily connected.
For orientable~\(X\) we define the trivial two-fold covering
to be the orientation cover.
We will use orientation covers to define (co)homology with twisted coefficients
in Section~\ref{sec:twisted}.
\begin{remark}
Any \(\mathbb Z\)-homology manifold admits an orientation cover, but
it seems unclear whether this holds for arbitrary
\(\Bbbk\)-homology manifolds, see the discussion in~\cite[p.~331]{Bredon:1997}.
On the other hand, if an orientation cover exists, then it is unique.
For orientable~\(X\), this is true by definition.
For non-orientable~\(X\) it can be seen as follows:
Let \(\gamma\) be a loop at~\(x\in X\).
By transporting local orientations along~\(\gamma\),
we get an automorphism of~\(H_{n}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{x\})\), \emph{cf.}~\cite[p.~39]{Bredon:1960},
which is necessarily multiplication by some non-zero scalar.
This induces a morphism~\(\phi\colon\pi_{1}(X)\to\Bbbk^{\times}\).
The connected orientable covers of~\(X\) are of the form~\(\tilde X/G\)
where \(\tilde X\) is the universal cover
and \(G\) a subgroup of the kernel of~\(\phi\).
In particular,
there is at most one orientation cover.
\end{remark}
The following observation seems to be well-known, but we could not find a suitable reference.
\begin{lemma}
Assume \(\Char\Bbbk=0\). Any connected, locally orientable orbifold
is a \(\Bbbk\)-homology manifold satisfying Assumption~\ref{assumption:pm1}.
\end{lemma}
See~\cite[Sec.~1.1]{AdemLeidaRuan:2007} or~\cite{Satake:1956}
for the definition of an orbifold.
By ``locally orientable'' we mean that
locally the orbifold~\(X\), say of dimension~\(n\), is the quotient of an open ball in~\(\mathbb R^{n}\)
by a finite subgroup of~\(SO(n)\).
\begin{proof}
Condition~\eqref{eq:def-homology-mf} holds
because one locally divides by a finite subgroup of~\(SO(n)\)
and \(\Char\Bbbk=0\).
The existence of an orientation cover can be shown in the same way as for manifolds.
Recall that in the smooth case one proceeds as follows, \emph{cf.}~\cite[Ch.~15--17]{Lee:2012}:
If \(X\) admits an oriented atlas, that is, if the charts of~\(X\) can be oriented in a way consistent with coordinate changes,
then one can integrate differential forms with compact supports,
and the integration map provides an isomorphism \(H^{c}_{n}(X)\cong\Bbbk\).
Otherwise \(H^{c}_{n}(X)=0\), and one can construct a connected double cover with oriented atlas
by doubling all charts and gluing them according
to whether coordinate changes preserve or reverse chart orientations.
Hence \(X\) is orientable in our sense if and only if it
admits an oriented atlas.
For an orbifold~\(X\) one can also define differential forms with compact supports,
and if \(X\) is locally orientable and has an atlas of compatibly oriented
charts, then one can integrate these forms, \emph{cf.}~\cite[\S 8]{Satake:1956}. If such an atlas does not
exist, then one can again pass to an oriented two-fold cover.
Now the proofs for manifolds go through without change.
\end{proof}
\begin{lemma}
Let \(X\) be a \(\Bbbk\)-homology manifold with orientation cover~\(\pi\colon\tilde X\to X\).
Any \(T\)-action on~\(X\) lifts to a \(T\)-action on~\(\tilde X\).
\end{lemma}
See~\cite[Cor.~I.9.4]{Bredon:1972} for an analogous result
in the context of topological manifolds.
\begin{proof}
The case of orientable~\(X\) is trivial. If \(X\) is non-orientable,
then by~\cite[Sec.~I.9]{Bredon:1972} the \(T\)-action on~\(X\) lifts
to a \(\tilde T\)-action on~\(\tilde X\),
where \(\tilde T\) is a two-fold covering of~\(T\)
and \(\ker(\tilde T \to T)\cong\mathbb Z_{2}\) acts by deck transformations.
If the non-trivial deck transformation~\(\tau\)
were orientation-preserving, then \(X\) would have to be orientable
because \(H^{c}_{n}(X)=H^{c}_{n}(\tilde X)^{\tau}\cong\Bbbk\),
where \(n=\dim X\).
So \(\tau\) does not preserve orientations, which implies
that \(\tilde T\) cannot be connected. Hence its identity component maps isomorphically onto~\(T\),
and the action lifts.
\end{proof}
\subsection{Twisted coefficients}
\label{sec:twisted}
The aim of this section is to introduce equivariant (co)homology with twisted coefficients~\(\skew{-2}\tilde\Bbbk\).
To distinguish it from the (co)homology we have considered so far,
the latter will be called (co)homology with constant coefficients~\(\Bbbk\) from now on.
Twisted coefficients are only interesting if the characteristic of the ground field~\(\Bbbk\) differs from~\(2\), which we assume
in this section.
For~\(\Char\Bbbk=2\) (co)homology with twisted coefficients is defined to be the same as (co)homology with constant coefficients.
We focus on cohomology with closed supports and homology with compact supports.
All results are equally valid for the other pair of supports;
we will indicate when proofs for that case need additional arguments.
\smallskip
Let \(X\) be a \(\Bbbk\)-homology manifold (which, by our definition, is connected)
with orientation cover~\(\pi\colon\tilde X\to X\)
and non-trivial deck transformation~\(\tau\).
For a pair~\((A,B)\) in~\(X\),
we write \((\tilde A,\tilde B)=(\pi^{-1}(A),\pi^{-1}(B))\).
Moreover, we denote the involution of~\(C^{*}(\tilde A,\tilde B)\)
induced
by~\(\tau\)
by the same letter.
Since \(2 \in \Bbbk \) is invertible, we get a decomposition
\begin{equation}
\label{eq:splitting-cochains-1}
C^{*}(\tilde A,\tilde B) = C^{*}(\tilde A,\tilde B)_{+}\oplus C^{*}(\tilde A,\tilde B)_{-}
\end{equation}
into the eigenspaces of~\(\tau\) for the eigenvalues~\(\pm 1\).
Note that \(\pi^{*}\) is an isomorphism of~\(C^{*}(A,B)\) onto~\(C^{*}(\tilde A,\tilde B)_{+}\).
We define \(C^{*}(A,B;\skew{-2}\tilde\Bbbk)\), the cochains on~\((A,B)\) with twisted coefficients,
to be the eigenspace for the eigenvalue~\(-1\) of~\(\tau\).
Hence the splitting~\eqref{eq:splitting-cochains-1} becomes
\begin{equation}
\label{eq:splitting-cochains}
C^{*}(\tilde A,\tilde B) = C^{*}(A,B)\oplus C^{*}(A,B;\skew{-2}\tilde\Bbbk)
\end{equation}
and induces an analogous decomposition in cohomology,
\begin{equation}
\label{eq:splitting-cohomology}
H^{*}(\tilde A,\tilde B) = H^{*}(A,B)\oplus H^{*}(A,B;\skew{-2}\tilde\Bbbk).
\end{equation}
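To illustrate the splitting, let \(n\) be even, and consider \(X=\mathbb{RP}^{n}\)
with orientation cover~\(\tilde X=S^{n}\) and \(\tau\) the antipodal map.
Since \(\tau\) acts on~\(H^{n}(S^{n})\) by its degree~\((-1)^{n+1}=-1\),
the decomposition~\eqref{eq:splitting-cohomology} yields
\(H^{*}(\mathbb{RP}^{n};\skew{-2}\tilde\Bbbk)\cong\Bbbk\), concentrated in degree~\(n\),
while \(H^{*}(\mathbb{RP}^{n})\cong\Bbbk\) is concentrated in degree~\(0\).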
We now assume that \(X\) is equipped with a \(T\)-action and that the pair~\((A,B)\) is \(T\)-stable. Since
the decomposition~\eqref{eq:splitting-cochains} is \(C_{*}(T)\)-stable,
we can
define \emph{equivariant (co)homology with twisted coefficients}
in a way analogous to Section~\ref{sec:4:singular-Cartan-model}:
\begin{align}
C_{T}^{*}(A,B;\skew{-2}\tilde\Bbbk) &= C_{T}^{*}(\tilde A,\tilde B)_{-} = C^{*}(A,B;\skew{-2}\tilde\Bbbk)\otimes R \\
\intertext{with the same differential as in~\eqref{eq:4:definition-d-CT},}
\label{eq:def-HT-kktilde}
H_{T}^{*}(A,B;\skew{-2}\tilde\Bbbk) &= H^{*}(C_{T}^{*}(A,B;\skew{-2}\tilde\Bbbk)), \\
C^{T\!}_{*}(A,B;\skew{-2}\tilde\Bbbk) &= \Hom_{R}(C_{T}^{*}(A,B;\skew{-2}\tilde\Bbbk),R), \\
\label{eq:def-hHT-kktilde}
H^{T\!}_{*}(A,B;\skew{-2}\tilde\Bbbk) &= H_{*}(C^{T\!}_{*}(A,B;\skew{-2}\tilde\Bbbk)).
\end{align}
Note that
one has decompositions
\begin{align}
\label{eq:splitting-HT}
H_{T}^{*}(\tilde A,\tilde B) &= H_{T}^{*}(A,B)\oplus H_{T}^{*}(A,B;\skew{-2}\tilde\Bbbk), \\
\label{eq:splitting-hHT}
H^{T\!}_{*}(\tilde A,\tilde B) &= H^{T\!}_{*}(A,B)\oplus H^{T\!}_{*}(A,B;\skew{-2}\tilde\Bbbk).
\end{align}
(For \(H_{T,c}^{*}(-)\) and \(H^{T,c}_{*}(-)\)
they follow from the fact that the sets~\(\tilde V=\pi^{-1}(V)\),
where \(V\subset X\) has compact complement, are cofinal among all
subsets of~\(\tilde X\) with compact complement.)
Of course, one already has decompositions on the (co)chain level.
Assumption~\ref{ass:4:finite} is extended as follows:
\begin{assumption}
\label{ass:4:finite-homologymf}
For any \(T\)-pair~\((A,B)\) in a \(\Bbbk\)-homology manifold~\(X\)
and any (co)homology theory we are going to consider,
we assume that the non-equivariant cohomology of the cover~\((\tilde A,\tilde B)\) is finite-dimensional over~\(\Bbbk\).
In light of~\eqref{eq:splitting-cohomology}, this is equivalent to both the cohomology
with constant coefficients and that with twisted coefficients being finite-dimensional.
By Proposition~\ref{thm:4:serre-ss},
this in turn implies that the equivariant (co)homology of~\((A,B)\) with constant or twisted coefficients
is finitely generated over~\(R\).
\end{assumption}
Our definition of~\(H^{*}(A,B;\skew{-2}\tilde\Bbbk)\) does not require \(A\) or \(B\) to be \(\Bbbk\)-homology manifolds themselves.
But if \(A\) is connected and open in~\(X\), then it is a \(\Bbbk\)-homology manifold as well, and the restriction of~\(\pi\) to~\(A\) is
the orientation cover of~\(A\). Hence the definition of twisted coefficients
is independent of the ambient space in this case.
\begin{remark}
\label{rem:twisted-properties}
All results from Section~\ref{sec:properties}
(Serre spectral sequences, universal coefficient theorems and localization theorems)
carry over to twisted coefficients.
To see this, one can either redo the proofs with twisted (co)homology,
or one can reduce the new results to the untwisted case
by using the splittings~\eqref{eq:splitting-HT} and~\eqref{eq:splitting-hHT}.
\end{remark}
\begin{remark}
An alternative way to define cohomology with twisted coefficients
is to use local coefficient systems. This could be done as well in the equivariant setting,
and one could even dispense with Assumption~\ref{assumption:pm1}.
The drawback of this approach would be that one cannot reduce statements
to the case of constant coefficients anymore.
In particular, one would need to prove a generalization
of Proposition~\ref{thm:locally-free-action}
(essentially, of the Vietoris--Begle mapping theorem)
to local coefficients, which is required to prove Proposition~\ref{thm:stratum-cm-c}.
\end{remark}
\begin{remark}
\label{rem:4:loc-contractible}
We are mainly interested in applying our results to the fixed point sets~\(X^{K}\)
of subtori~\(K\subset T\), and
because we want to use the Localization Theorem for singular cohomology
(Proposition~\ref{thm:localization-thm-homology}),
we put local contractibility
into the standing assumptions in Section~\ref{sec:4:assumptions}.
Now it is a small step from~\eqref{eq:definition-CTc}
to using Alexander--Spanier cohomology
for all closed invariant pairs~\((A,B)\),
\emph{cf.}~\citeorbitsone{Rem.~\ref*{rem:loc-contractible}}.
Thus it is not, in fact, necessary to assume
closed subsets to be locally contractible
since the Localization Theorem for Alexander--Spanier cohomology
does not need this assumption. We would, however,
continue to assume that the ambient space~\(X\)
satisfies the standing assumptions; and we do not know
of any torus action on such a space where the fixed point sets
are not locally contractible -- but neither do we know a proof
that they always are.
\end{remark}
\section{Equivariant duality results}
\label{sec:duality-results}
\subsection{Poincaré duality}
Let \(\Bbbk\) be a field.
We start with the statement of non-equivariant Poincaré duality
for orientable homology manifolds in our setting because it is not easy
to locate it in the literature in the desired generality.
\begin{lemma}
\label{thm:PD-nonequiv}
Let \(X\) be an orientable \(\Bbbk\)-homology manifold of dimension~\(n\).
For any non-zero~\(o\inH^{c}_{n}(X)\),
the cap product map
\begin{equation*}
H_{c}^{*}(X) \to H_{n-*}(X),
\quad
\alpha\mapsto \alpha\cap o
\end{equation*}
is an isomorphism.
\end{lemma}
Such an~\(o\) is called an orientation of~\(X\);
it generalizes the notion of a fundamental class of a manifold.
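For example, for \(X=\mathbb R^{n}\) one has \(H_{c}^{*}(\mathbb R^{n})\cong\Bbbk\), concentrated in degree~\(n\),
and \(H_{*}(\mathbb R^{n})\cong\Bbbk\), concentrated in degree~\(0\);
capping with a generator~\(o\in H^{c}_{n}(\mathbb R^{n})\cong\Bbbk\) matches up the two.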
\begin{proof}
Recall that \(X\) is assumed to be a locally compact and locally contractible second-countable Hausdorff space.
Sheaf (co)homology and singular (co)homology (with closed or compact supports)
are therefore naturally isomorphic on~\(X\),
\emph{cf.}~\cite[Thm.~III.1.1, Cor.~V.12.17, Cor.~V.12.21]{Bredon:1997}.
As mentioned in the proof of~\cite[Cor.~V.16.9]{Bredon:1997},
the stalks of the orientation sheaf on~\(X\) are given by~\eqref{eq:def-homology-mf}.
Hence \(X\) is an \(n\)-dimensional homology manifold over~\(\Bbbk\)
in the sense of~\cite[Def.~V.9.1]{Bredon:1997}.
Moreover, by~\cite[Thm.~V.16.16\,(f)]{Bredon:1997}
our definition of orientability coincides with the one in~\cite[Def.~V.9.1]{Bredon:1997}.
By~\cite[Thm.~V.9.2, Cor.~V.10.2]{Bredon:1997},
the sheaf-theoretically defined cap product with~\(o\)
is an isomorphism.
This map coincides with the
cap product in the singular theory given above,
\emph{cf.}~\cite[Ex.~V.22]{Bredon:1997}.
\end{proof}
Now let \(X\) be a \(T\)-space and
a not necessarily orientable \(\Bbbk\)-homology manifold of dimension~\(n\) with orientation cover~\(\tilde X\).
The cup product in~\(\tilde X\) is \(\tau\)-equiv\-ari\-ant,
so that we obtain a pairing
\begin{equation}
\label{eq:cup-product-twisted-c}
C_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk) \otimes C_{T}^{*}(X)\to C_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk),
\end{equation}
hence a cap product
\begin{equation}
\label{eq:cap-product-twisted-c}
H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk) \otimes H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk) \to H^{T\!}_{*}(X),
\quad
\alpha\otimes b\mapsto \alpha\cap b.
\end{equation}
Extending the above definition,
an \emph{orientation} of~\(X\)
is a non-zero element~\(o\in H^{c}_{n}(X;\skew{-2}\tilde\Bbbk)\subset H^{c}_{n}(\tilde X)\).
An \emph{equivariant orientation} is an element~\(o_{T}\in H^{T,c}_{n}(X;\skew{-2}\tilde\Bbbk)\)
that restricts to an orientation under the restriction map~\(H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk)\to H^{c}_{*}(X;\skew{-2}\tilde\Bbbk)\).
\begin{proposition}
\label{thm:PD-noncompact}
Let \(X\) be an \(n\)-dimensional \(\Bbbk \)-homology manifold.
Any orientation~\(o\) of~\(X\) lifts uniquely to an equivariant orientation~\(o_{T}\).
Moreover, taking the cap product with~\(o_{T}\) gives an isomorphism of \(R\)-modules
\begin{equation*}
H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk) \stackrel{\cap o_{T}}\longrightarrow H^{T\!}_{n-*}(X)
\end{equation*}
and, dually, an isomorphism~
\begin{equation*}
H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk)\longrightarrow H_{T}^{n-*}(X).
\end{equation*}
\end{proposition}
\begin{proof}
The canonical projection~\(H^{c}_{*}(X;\skew{-2}\tilde\Bbbk)\otimes R\to H^{c}_{*}(X;\skew{-2}\tilde\Bbbk)\)
is the edge homomorphism of the \(E_{2}\)~page of the Serre spectral sequence
for~\(H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk)\)
(Proposition~\ref{thm:4:serre-ss} and Remark~\ref{rem:twisted-properties}).
Since \(H^{c}_{*}(X)\otimes R\) lives
in homological degrees at most~\(n\), there are no higher differentials,
and the map~\(H^{T,c}_{n}(X;\skew{-2}\tilde\Bbbk)\to H^{c}_{n}(X;\skew{-2}\tilde\Bbbk)\)
is an isomorphism. Hence any orientation lifts uniquely
to an equivariant orientation.
To prove the first isomorphism, let us assume for the moment that \(X\) is orientable.
Applying the Serre spectral sequence to the map
\begin{equation}
\label{eq:eq-duality-proof}
H_{T,c}^{*}(X) \to H^{T\!}_{*}(X),
\quad
\alpha \mapsto \alpha\cap o_{T},
\end{equation}
we find on the \(E_{2}\)~level the \(R\)-linear extension
\begin{equation}
H_{c}^{*}(X)\otimes R \to H_{*}(X)\otimes R,
\quad
\alpha\otimes f \mapsto \alpha\cap o\otimes f
\end{equation}
of the non-equivariant Poincaré duality isomorphism
from Lemma~\ref{thm:PD-nonequiv},
which is therefore an isomorphism, too.
The non-orientable case reduces to the orientable one:
Since \(H^{c}_{n}(\tilde X)=H^{c}_{n}(\tilde X)_{-}=H^{c}_{n}(X;\skew{-2}\tilde\Bbbk)\),
capping with~\(o_{T}=\tilde o_{T}\)
restricts to the claimed isomorphism.
The second isomorphism is a consequence of the first and the universal coefficient theorem
(Proposition~\ref{thm:4:uct} and Remark~\ref{rem:twisted-properties}).
\end{proof}
\begin{remark}
If \(X\) is orientable, then the two eigenspaces of~\(\tau\) in the decomposition
\begin{equation}
H_{T}^{*}(\tilde X) = H_{T}^{*}(X) \oplus H_{T}^{*}(X;\skew{-2}\tilde\Bbbk)
\end{equation}
are isomorphic as \(R\)-modules and even as modules over~\(H_{T}^{*}(X)\).
Hence (co)ho\-mol\-ogy with twisted coefficients and with constant coefficients
are isomorphic in this case, and these isomorphisms are compatible
with the two Poincaré duality isomorphisms
\(H_{T,c}^{*}(X)\to H^{T\!}_{*}(X)\) and \(H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk)\to H^{T\!}_{*}(X)\).
Of course, in the case~\(\Char\Bbbk=2\) there is no difference either.
\end{remark}
\subsection{Poincaré--Alexander--Lefschetz duality}
\label{sec:pal-duality}
Classically, Poincaré--Alexander--Lefschetz duality (also called ``Poincaré--Lefschetz duality'')
refers to an isomorphism of vector spaces
\begin{equation}
\label{eq:PAL-classical}
H_{c}^{*}(A,B) \cong H_{n-*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A)
\end{equation}
for any closed pair~\((A,B)\) in an oriented \(n\)-dimensional manifold~\(X\),
\emph{cf.}~\cite[\S VIII.7]{Dold:1980}
or~\cite[\S VI.8]{Bredon:1993},
for instance.
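For example, for a point \(A=\{x\}\subset X\) and \(B=\emptyset\),
the isomorphism~\eqref{eq:PAL-classical} in degree~\(0\) reduces to
\begin{equation*}
\Bbbk \cong H_{c}^{0}(\{x\}) \cong H_{n}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{x\}),
\end{equation*}
which is nothing but the defining condition~\eqref{eq:def-homology-mf} of a homology manifold.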
In this section we generalize this to equivariant (co)ho\-mol\-ogy, and
we also derive a spectral sequence version of it.
Our approach is similar to the one in~\cite{Bredon:1993}.
We continue to assume that \(X\) is an \(n\)-dimensional
\(\Bbbk \)-homology manifold with a \(T\)-action.
\goodbreak
Our first result
includes Theorem~\ref{thm:PAL-intro} from the introduction.
\begin{theorem}[Poincaré--Alexander--Lefschetz duality]
\label{thm:PAL-A-B}
Let \((A,B)\) be a closed \(T\)-pair in~\(X\).
Then there is a
commutative diagram
\begin{equation*}
\begin{tikzcd}[font=\small,column sep=small]
\rar & H_{T,c}^{n-*}(A,B;\skew{-2}\tilde\Bbbk) \dar \rar & H_{T,c}^{n-*}(A;\skew{-2}\tilde\Bbbk) \dar \rar & H_{T,c}^{n-*}(B;\skew{-2}\tilde\Bbbk) \dar \rar{d} & H_{T,c}^{n+1-*}(A,B;\skew{-2}\tilde\Bbbk) \dar \rar & {} \\
\rar & H^{T\!}_{*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \rar & H^{T\!}_{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \rar & H^{T\!}_{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B) \rar{d} & H^{T\!}_{*-1}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \rar & {}
\end{tikzcd}
\end{equation*}
all of whose vertical arrows are isomorphisms.
An analogous diagram exists with the roles of homology and cohomology interchanged
and all arrows reversed.
\end{theorem}
Since we also want to prove an extension to spectral sequences,
we place ourselves in a slightly more general situation.
Consider an increasing filtration
\begin{equation}
\emptyset=X_{-1}\subset X_{0}\subset\dots\subset X_{m}=X
\end{equation}
of~\(X\) by
closed \(T\)-stable subsets.
(See Remark~\ref{rem:4:loc-contractible} on how to do without the standing assumption
of local contractibility.)
We set \(\widehat X_{i}=X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i}\), so that the decreasing filtration
of~\(X\) by the open complements of the~\(X_{i}\) can be written as
\begin{equation}
X=\widehat X_{-1}\supset \dots \supset \widehat X_{m-1} \supset \widehat X_{m}=\emptyset.
\end{equation}
\def\CU#1{C^{*}(#1\,|\,\mathcal U;\skew{-2}\tilde\Bbbk)}
\def\CTU#1{C_{T}^{*}(#1\,|\,\mathcal U;\skew{-2}\tilde\Bbbk)}
Let \(\pi\colon\tilde X\to X\) be the orientation cover, and let
\(U=(U_{-1},U_{0},\dots,U_{m})\) be an increasing sequence of open subsets of~\(X\)
such that \(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} U_{-1}\) is compact and \(X_{i}\subset U_{i}\) for all~\(i\).
Any such sequence determines an open cover
\begin{equation}
\mathcal U=\bigl\{\pi^{-1}(U_{0}),\pi^{-1}(U_{1}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{0}),\dots,\pi^{-1}(U_{m}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{m-1})\bigr\}
\end{equation}
of~\(\tilde X\). We write \(\CU{X}\) for the complex of \(\mathcal U\)-small cochains
and similarly \(\CTU{X}=\CU{X}\otimes R\) for the corresponding singular Cartan model.
We start by establishing a variant of the cup product~\eqref{eq:cup-product-twisted-c}.
\begin{lemma}
\label{thm:relative-cup-Ui}
For any~\(-1\le i\le j\le m\)
there is a well-defined relative cup~product
\begin{equation*}
C_{T}^{*}(X,U_{j};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{i}) \stackrel{\cup}\longrightarrow \CTU{X}.
\end{equation*}
It is compatible with restrictions in the sense that the diagram
\begin{equation*}
\begin{tikzcd}
C_{T}^{*}(X,U_{j};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{j}) \arrow{r}{\cup} & \CTU{X} \\
C_{T}^{*}(X,U_{j};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{i}) \arrow{r}{\cup} \arrow{u} \arrow{d} & \CTU{X} \arrow{u}[right]{=} \arrow{d}[right]{=} \\
C_{T}^{*}(X,U_{i};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{i}) \arrow{r}{\cup} & \CTU{X}
\end{tikzcd}
\end{equation*}
commutes.
\end{lemma}
\begin{proof}
The product of~\(\alpha\otimes f\in C_{T}^{*}(X,U_{j};\skew{-2}\tilde\Bbbk)\) and \(\beta\otimes g\in C_{T}^{*}(\widehat X_{i})\)
is defined by
\begin{equation}
(\alpha\otimes f)\cup(\beta\otimes g) = \alpha\cup\pi^{*}(\hat\beta)\otimes fg,
\end{equation}
where \(\hat\beta\in C^{*}(X)\) is a preimage of~\(\beta\).
To show that this is well-defined,
consider a \(\mathcal U\)-small singular simplex~\(\sigma\) in~\(\tilde X\). If \(\sigma\) lies
in~\(\pi^{-1}(U_{j})\), then so does any face~\(\sigma'\) of it. Hence \(\alpha(\sigma')=0\)
and therefore \((\alpha\cup\pi^{*}(\hat\beta))(\sigma)=0\).
If \(\sigma\) does not lie in~\(\pi^{-1}(U_{j})\), then it lies in~\(\pi^{-1}(\widehat X_{i})\supset\pi^{-1}(\widehat X_{j})\) since it
is \(\mathcal U\)-small, and \((\alpha\cup\pi^{*}(\hat\beta))(\sigma)\) is again independent of
the choice of~\(\hat\beta\).
The commutativity of the diagram is clear by construction.
\end{proof}
For \(i\le j\), write
\begin{equation}
{\bar C}_{T,c}^{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk) = \mathop{\underrightarrow\lim} C_{T}^{*}(U_{j},U_{i};\skew{-2}\tilde\Bbbk)
\end{equation}
where the direct limit is taken over all sequences~\(U\) as above,
and let \({\bar C}^{T,c}_{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk)\) be the \(R\)-dual complex.
Note that for~\(i\le j\le k\) we have short exact sequences
\begin{equation}
\label{eq:XXj-XXi-XjXi}
0 \to {\bar C}_{T,c}^{*}(X_{k},X_{j};\skew{-2}\tilde\Bbbk) \to {\bar C}_{T,c}^{*}(X_{k},X_{i};\skew{-2}\tilde\Bbbk) \to {\bar C}_{T,c}^{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk) \to 0,
\end{equation}
and the canonical maps
\({\bar C}_{T,c}^{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk)\to C_{T,c}^{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk)\)
and
\(C^{T,c}_{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk)\to{\bar C}^{T,c}_{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk)\)
are quasi-isomorphisms by tautness.
By passing to the direct limit in Lemma~\ref{thm:relative-cup-Ui}, we get
the family of relative cup products
\begin{equation}
\label{eq:relative-cup-Xi}
{\bar C}_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{i}) \stackrel{\cup}\longrightarrow {\bar C}_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk).
\end{equation}
Fix a representative~\(c_{T}\in{\bar C}^{T,c}_{n}(X;\skew{-2}\tilde\Bbbk)\)
of the equivariant orientation~\(o_{T}\in H^{T,c}_{n}(X;\skew{-2}\tilde\Bbbk)\).
Composition of~\eqref{eq:relative-cup-Xi} with~\(c_{T}\) yields
a pairing
\(
{\bar C}_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk)\otimes C_{T}^{*}(\widehat X_{i}) \to R
\),
which we interpret as a map
\begin{equation}
\label{eq:map-hCTXi-hCThatXi}
f_{i}\colon{\bar C}_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk)\to C^{T\!}_{*}(\widehat X_{i}).
\end{equation}
\begin{lemma}
\label{thm:hCTXi-hCThatXi-iso}
The map~\eqref{eq:map-hCTXi-hCThatXi} is a quasi-isomorphism.
Moreover, for~\(i\le j\) it leads to a commutative diagram
\begin{equation*}
\begin{tikzcd}
0 \rar & {\bar C}_{T,c}^{*}(X,X_{j};\skew{-2}\tilde\Bbbk) \dar{f_{j}} \rar & {\bar C}_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk) \dar{f_{i}} \rar & {\bar C}_{T,c}^{*}(X_{j},X_{i};\skew{-2}\tilde\Bbbk) \dar{f_{ji}} \rar & 0 \\
0 \rar & C^{T\!}_{*}(\widehat X_{j}) \rar & C^{T\!}_{*}(\widehat X_{i}) \rar & C^{T\!}_{*}(\widehat X_{i},\widehat X_{j}) \rar & 0 \mathrlap{,}
\end{tikzcd}
\end{equation*}
whose rows are exact and whose induced map~\(f_{ji}\) is a quasi-isomorphism as well.
\end{lemma}
\begin{proof}
The exactness of the top row in the diagram was already observed in~\eqref{eq:XXj-XXi-XjXi}.
The compatibility of relative cup products with restrictions stated in Lemma~\ref{thm:relative-cup-Ui}
implies that the left square in the diagram commutes, which induces the right vertical arrow.
Note that Proposition~\ref{thm:relative-cohomology-complement}
remains valid for twisted coefficients, and that the diagram
\begin{equation}
\begin{tikzcd}
H_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk) \otimes H^{T,c}_{*}(X,X_{i};\skew{-2}\tilde\Bbbk) \arrow{r} \arrow{d}[left]{\cong} & R \arrow{d}{=} \\
H_{T,c}^{*}(\widehat X_{i};\skew{-2}\tilde\Bbbk) \otimes H^{T,c}_{*}(\widehat X_{i};\skew{-2}\tilde\Bbbk) \arrow{r} & R
\end{tikzcd}
\end{equation}
is commutative.
Moreover, the restriction of the orientation~\(o_{T}\) to any component of~\(\widehat X_{i}\)
is again an orientation.
Hence the map~\eqref{eq:map-hCTXi-hCThatXi} corresponds
to the map
\(
H_{T,c}^{*}(\widehat X_{i};\skew{-2}\tilde\Bbbk) \to H^{T\!}_{*}(\widehat X_{i})
\),
which is an isomorphism by Proposition~\ref{thm:PD-noncompact}.
Coming back to the commutative ladder,
two out of the three maps between the corresponding long exact sequences in (co)homology
are isomorphisms, hence so is the third.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:PAL-A-B}]
Consider the filtration \(\emptyset\subset B\subset A\subset X\) of~\(X\)
and the associated diagram
\begin{equation*}
\begin{tikzcd}
0 \rar & {\bar C}_{T,c}^{*}(A,B;\skew{-2}\tilde\Bbbk) \dar{f_{AB}} \rar & {\bar C}_{T,c}^{*}(A;\skew{-2}\tilde\Bbbk) \dar{f_{A}} \rar & {\bar C}_{T,c}^{*}(B;\skew{-2}\tilde\Bbbk) \dar{f_{B}} \rar & 0 \\
0 \rar & C^{T\!}_{*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \rar & C^{T\!}_{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \rar & C^{T\!}_{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B) \rar & 0 \mathrlap{,}
\end{tikzcd}
\end{equation*}
whose top row is again of the form~\eqref{eq:XXj-XXi-XjXi}.
The maps~\(f_{A}\) and~\(f_{B}\) are special cases of the map~\(f_{ji}\) from Lemma~\ref{thm:hCTXi-hCThatXi-iso}.
It follows from their definition that the right square commutes, which induces the map~\(f_{AB}\).
By passing to (co)homology we get the commutative ladder stated in Theorem~\ref{thm:PAL-A-B}.
Since \(H^{*}(f_{A})\) and \(H^{*}(f_{B})\) are isomorphisms by Lemma~\ref{thm:hCTXi-hCThatXi-iso},
so is \(H^{*}(f_{AB})\). This proves the first part of the theorem.
The analogous result with the roles of (co)homology reversed is obtained
by applying the functor~\(\Hom_{R}(-,R)\) to the diagram above.
Because the short sequences in the diagram split over~\(R\),
their duals remain exact.
Moreover, the natural inclusion of~\(C_{T}^{*}(A,B)\) into its double dual is a chain homotopy equivalence.
This follows from the fact that for the chain-equivalent minimal Hirsch--Brown model,
which is free and finitely generated over~\(R\), the corresponding map is even an isomorphism.
\end{proof}
A spectral sequence version of equivariant Poincaré--Alexander--Lefschetz duality
is as follows:
\begin{proposition}
\label{thm:PAL-duality}
Let \(o_{T}\in H^{T,c}_{n}(X;\skew{-2}\tilde\Bbbk)\) be an equivariant orientation of~\(X\).
Taking the cap product with~\(o_{T}\)
induces an isomorphism (of degree~\(-n\)) from the \(E_{1}\)~page on between the spectral sequences
\begin{align*}
E_{1}^{p} &= H_{T,c}^{*}(X_{p},X_{p-1};\skew{-2}\tilde\Bbbk) \;\Rightarrow\; H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk), \\
E_{1}^{p} &= H^{T\!}_{*}(\widehat X_{p-1},\widehat X_{p}) \;\Rightarrow\; H^{T\!}_{*}(X).
\end{align*}
Similarly, the spectral sequences
\begin{align*}
E_{1}^{p} &= H^{T,c}_{*}(X_{p},X_{p-1};\skew{-2}\tilde\Bbbk) \;\Rightarrow\; H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk), \\
E_{1}^{p} &= H_{T}^{*}(\widehat X_{p-1},\widehat X_{p}) \;\Rightarrow\; H_{T}^{*}(X)
\end{align*}
are isomorphic from the \(E_{1}\)~page on.
\end{proposition}
\begin{proof}
We filter \({\bar C}_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk)\) by~\(\mathcal F_{i} = {\bar C}_{T,c}^{*}(X,X_{i-1};\skew{-2}\tilde\Bbbk)\) for~\(0\le i\le m\)
and similarly \(C^{T\!}_{*}(X)\) by~\(\widehat{\mathcal F}_{i} = C^{T\!}_{*}(\widehat X_{i-1})\).
We know from Lemma~\ref{thm:hCTXi-hCThatXi-iso}
that the diagram
\begin{equation}
\begin{tikzcd}
\mathcal F_{j+1} = {\bar C}_{T,c}^{*}(X,X_{j};\skew{-2}\tilde\Bbbk) \arrow{r} \arrow{d} & C^{T\!}_{*}(\widehat X_{j}) = \widehat{\mathcal F}_{j+1} \arrow{d} \\
\mathcal F_{i+1} = {\bar C}_{T,c}^{*}(X,X_{i};\skew{-2}\tilde\Bbbk) \arrow{r} & C^{T\!}_{*}(\widehat X_{i}) = \widehat{\mathcal F}_{i+1}
\end{tikzcd}
\end{equation}
commutes for~\(i\le j\),
so that we obtain a map of spectral sequences with
\begin{align}
E_{0}^{i}(\mathcal F) = {\bar C}_{T,c}^{*}(X_{i},X_{i-1};\skew{-2}\tilde\Bbbk)
&\to
E_{0}^{i}(\widehat{\mathcal F}) = C^{T\!}_{*}(\widehat X_{i-1},\widehat X_{i}), \\
\label{eq:PAL-E1}
E_{1}^{i}(\mathcal F) = H_{T,c}^{*}(X_{i},X_{i-1};\skew{-2}\tilde\Bbbk)
&\to
E_{1}^{i}(\widehat{\mathcal F}) = H^{T\!}_{*}(\widehat X_{i-1},\widehat X_{i}).
\end{align}
It follows as in the proof of Theorem~\ref{thm:PAL-A-B}
that the map~\eqref{eq:PAL-E1} is an isomorphism.
The second part
follows analogously
by dualizing \eqref{eq:map-hCTXi-hCThatXi} and the filtrations
\(\mathcal F\)~and~\(\widehat{\mathcal F}\).
\end{proof}
Equipped with equivariant Poincaré--Alexander--Lefschetz duality,
we can easily deduce the following result, which is asserted
in~\cite[p.~849]{Bredon:1974} without proof.
\begin{corollary}
\label{thm:quotient-homology-mf}
If \(X\) is orientable and \(T\) acts locally freely,
then \(X/T\) is an orientable \(\Bbbk\)-homology manifold of dimension~\(n-r\).
\end{corollary}
\begin{proof}
As discussed in Remark~\ref{rem:quotient},
\(X/T\) satisfies our assumption on spaces, and it is connected since \(X\) is.
To verify condition~\eqref{eq:def-homology-mf},
take an~\(x\in X\) with image~\(\bar x\in \bar X=X/T\).
By Proposition~\ref{thm:locally-free-action}
and Poincaré--Alexander--Lefschetz duality
for the \(T\)-pair~\((Tx,\emptyset)\) in~\(X\), we have
\begin{align}
H_{i}(\bar X,\bar X\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{\bar x\})
&=H^{T\!}_{i+r}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} Tx)
\cong H_{T,c}^{n-r-i}(Tx) \\
\notag
&= H_{c}^{n-r-i}(\{\bar x\})
= \begin{cases}
\Bbbk & \text{if \(i=n-r\),} \\
0 & \text{otherwise.}
\end{cases}
\end{align}
Again by Proposition~\ref{thm:locally-free-action},
the equivariant orientation~\(o_{T}\in H^{T,c}_{n}(X)\) descends to
a non-zero element in~\(H^{c}_{n-r}(X/T)\).
Hence \(X/T\) is orientable.
\end{proof}
\begin{example}
A simple example shows why orientability is needed in Corollary~\ref{thm:quotient-homology-mf}
above. Let \(X\) be the open Möbius band with its standard locally free
action of~\(T=S^{1}\). Then \(X/T\) is a half-open interval, and so it is
not a (homology) manifold, but rather a manifold with boundary. The quotient~\(\tilde X/T\)
of the orientation cover looks like the letter~``V'' with its vertex
corresponding to the end point of the interval, which in turn
corresponds to the middle circle, the only non-free orbit.
\end{example}
\begin{remark}
\label{rem:locally-free}
Let \((A,B)\) be a closed \(T\)-pair in~\(X\).
In Proposition~\ref{thm:locally-free-action} we established
an isomorphism of \(H^{*}(BL)\)-modules
\begin{equation}
\label{eq:locally-free-action-pal}
H^{T\!}_{*}(A,B) = H^{L}_{*-p}(A/K,B/K)
\end{equation}
whenever a subtorus~\(K\subset T\) of rank~\(p\) and with quotient~\(L=T/K\)
acts freely on~\(A\mathbin{\mkern-2mu\MFsm\mkern-2mu} B\);
a locally free action was sufficient in case~\(\Char\Bbbk=0\).
In the context of orientable homology manifolds, we can now understand this isomorphism
in terms of Poincaré--Alexander--Lefschetz duality:
Assume that \(K\) acts freely (or just locally freely if \(\Char\Bbbk=0\)) on the orientable homology manifold~\(X\),
so that \(X/K\) is again an orientable homology manifold by Corollary~\ref{thm:quotient-homology-mf}.
Let \(n=\dim X=\dim X/K+p\).
Using the cohomological part of Proposition~\ref{thm:locally-free-action}
and Poincaré--Alexander--Lefschetz duality, we get
\begin{multline}
H^{T\!}_{*}(A,B) = H_{T,c}^{n-*}(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A) \\
= H_{L,c}^{n-*}((X\mathbin{\mkern-2mu\MFsm\mkern-2mu} B)/K,(X\mathbin{\mkern-2mu\MFsm\mkern-2mu} A)/K)
= H^{L}_{*-p}(A/K,B/K).
\end{multline}
Hence the isomorphism~\eqref{eq:locally-free-action-pal} can be interpreted
as a push-forward map or integration over the fibre in this setting.
\end{remark}
\subsection{Thom isomorphism}
As in the non-equivariant case, the Thom isomorphism
is a consequence of Poincaré and Poincaré--Alexander--Lefschetz duality,
\emph{cf.}~\cite[\S VIII.7, \S VIII.11]{Dold:1980}.
In fact, one can use our version of equivariant duality
to define also Gysin homomorphisms (push forwards), indices, Euler classes
etc.\ in the equivariant setting and to prove their main properties
(\emph{cf.}~\cite[Sec.~5.3]{AlldayPuppe:1993}) for cohomology with different supports.
The use of the Cartan model even provides a more functorial approach
than the minimal Hirsch--Brown model used in~\cite{AlldayPuppe:1993}.
Here we only develop the theory as far as needed for our applications in Section~\ref{applications-homology-mf}.
We continue to assume that \(X\) is an \(n\)-dimensional
\(\Bbbk \)-homology manifold with a \(T\)-action.
\begin{proposition}
\label{thm:thom-iso}
$ $
\begin{enumerate}
\item Let \(Y\subset X\)
be a closed \(T\)-stable \(\Bbbk\)-homology manifold of dimension~\(m\).
Suppose that the orientation cover of~\(X\) restricts to the orientation cover of~\(Y\).
Then there is an isomorphism of \(R\)-modules
\begin{equation*}
H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} Y) \cong H_{T}^{*}(Y)
\end{equation*}
of degree~\(m-n\).
\item Assume \(\Char\Bbbk=0\), and let \(K\subset T\) be a subtorus. Then there is an isomorphism of \(R\)-modules
\begin{equation*}
H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X^{K}) \cong H_{T}^{*}(X^{K}).
\end{equation*}
This isomorphism has degree~\(m-n\) if all components of~\(X^{K}\)
are of dimension~\(m\); in general it only preserves degrees mod~\(2\).
\end{enumerate}
\end{proposition}
\begin{proof}
We start with the first case.
By Poincaré--Alexander--Lefschetz duality for the pair~\((X,Y)\) and
Poincaré duality for~\(Y\) we have isomorphisms of \(R\)-modules
\begin{equation}
H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} Y) \cong H^{T,c}_{*}(Y;\skew{-2}\tilde\Bbbk) \cong H_{T}^{*}(Y),
\end{equation}
whose composition has degree~\(m-n\).
Note that for the first isomorphism \(H^{T,c}_{*}(Y;\skew{-2}\tilde\Bbbk)\)
is defined via the restriction of the orientation cover of~\(X\),
and via the orientation cover for~\(Y\) in the second isomorphism.
By assumption, these two covers coincide.
We now consider the fixed point set~\(X^{K}\).
It has finitely many components, say \(Y_{1}\),~\ldots,~\(Y_{k}\),
which are \(\Bbbk\)-homology manifolds whose dimensions are congruent to~\(n\) mod~\(2\)
by a result of Conner and Floyd~\cite[Thm.~V.3.2]{Borel:1960}.
By excision we have
\begin{equation}
H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X^{K}) = \bigoplus_{i} H_{T}^{*}(X,X\mathbin{\mkern-2mu\MFsm\mkern-2mu} Y_{i}).
\end{equation}
The claim follows once we know
that the restriction of an orientation cover for~\(X\)
to each~\(Y_{i}\) is an orientation cover for that component.
This is the content of the following lemma.
\end{proof}
\begin{lemma}
\label{thm:orientation-cover-XT}
Assume \(\Char\Bbbk=0\).
Then the restriction of an orientation cover for~\(X\)
to any component~\(Y\) of~\(X^{T}\) is an orientation cover for~\(Y\).
\end{lemma}
Note that each component~\(Y\) is orientable if and only if its orientation cover is trivial.
According to the theorem of Conner and Floyd mentioned previously, each component~\(Y\) of~\(X^{T}\) is orientable if \(X\) is.
Lemma~\ref{thm:orientation-cover-XT} can therefore be seen as a generalization of this part of their result.
Also note that for a smooth \(T\)-manifold~\(X\) Lemma~\ref{thm:orientation-cover-XT}
is a consequence of the fact that the normal bundle of each component~\(Y\)
of~\(X^{T}\) is orientable, \emph{cf.}~\cite[Cor.~2]{Duflot:1983}:
By excision one can restrict from~\(X\) to a \(T\)-stable tubular neighbourhood of~\(Y\),
and, like the normal bundle, this neighbourhood is orientable if and only if \(Y\) is.
\begin{proof}
Let \(\tilde X\to X\) be an orientation cover for~\(X\) and \(\tilde Z\to Z\)
its restriction to~\(Z=X^{T}\). Note that \(\tilde Z=(\tilde X)^{T}\).
For each component~\(Y\) of~\(Z\), say of dimension~\(m\), let \(\tilde Y\to Y\) be the further restriction.
We will show \(H^{c}_{m}(\tilde Y)_{-}\ne0\), which proves that \(\tilde Y\to Y\)
is an orientation cover of~\(Y\): If \(Y\) is orientable, this condition ensures that \(\tilde Y\)
is disconnected, and if \(Y\) is non-orientable, it shows that \(\tilde Y\) is orientable.
Since the cap product~\eqref{eq:cap-product-twisted-c} is natural with respect to proper maps of spaces,
we get a commutative diagram
\begin{equation}
\begin{tikzcd}
H_{T,c}^{*}(\tilde X)_{-} \arrow{r}{\cap\,\iota_{*}(b)} \arrow{d}[left]{\iota^{*}} & H^{T\!}_{*}(\tilde X)_{+} \\
H_{T,c}^{*}(\tilde Z)_{-} \arrow{r}{\cap\,b} & H^{T\!}_{*}(\tilde Z)_{+}\mathrlap{,} \arrow{u}[right]{\iota_{*}}
\end{tikzcd}
\end{equation}
where \(\iota\colon\tilde Z\hookrightarrow\tilde X\) and \(b\in H^{T,c}_{*}(\tilde Z)_{-}\).
Note that \(\iota_{*}\colon H^{c}_{*}(\tilde Z)\toH^{c}_{*}(\tilde X)\)
commutes with the involution~\(\tau\) and therefore preserves the \(\pm1\)~eigenspaces.
Let \(S\subset R\) be the multiplicative subset of homogeneous polynomials of positive degree.
We localize the diagram at~\(S\) and choose \(b\) to be a preimage of~\(o_{T}\in S^{-1}H^{T\!}_{*}(\tilde X)_{-}\),
which is possible by the localization theorem in equivariant homology (Proposition~\ref{thm:localization-thm-homology},
here for homology with closed supports).
By the same result and equivariant Poincaré duality, this
turns the top and vertical arrows into isomorphisms, hence also the bottom arrow.
Now \(b\in S^{-1}H^{T,c}_{*}(\tilde Z)_{-}\) is a sum of elements, one for each component of~\(Z=X^{T}\).
The summand~\(b^{Y}\) corresponding to the component~\(Y\)
can be written in the form
\begin{equation}
b^{Y} = b^{Y}_{m} + \dots + b^{Y}_{0}
\in S^{-1}H^{T,c}_{*}(\tilde Y)_{-} = H^{c}_{*}(\tilde Y)_{-}\otimes S^{-1}R
\end{equation}
for some~\(b^{Y}_{i}\in H^{c}_{i}(\tilde Y)_{-}\otimes S^{-1}R\).
A cap product \(\alpha\cap c\) with \(\alpha\in H^{m}(\tilde Y)\)
and \(c\in H^{c}_{i}(\tilde Y)\) vanishes unless \(i = m\).
Because capping with~\(b^{Y}\) is an isomorphism,
we conclude that \(b^{Y}_{m}\ne0\), hence~\(H^{c}_{m}(\tilde Y)_{-}\ne0\).
\end{proof}
\section{Applications to the orbit structure}
\label{sec:PAL-applications}
We assume throughout the rest of this paper that
\(X\) is a \(T\)-space and that
the characteristic of the field~\(\Bbbk\) is \(0\).
Recall that the orbit filtration~\((X_{i})\) has been defined in the introduction.
\subsection{\texorpdfstring{\boldmath{General \(T\)-spaces}}{General T-spaces}}
In Sections \ref*{sec:main-result},~\ref*{sec:applications}.1 and~\ref*{sec:partial-exactness}
of~\cite{AlldayFranzPuppe} we established results about the
equivariant cohomology with closed supports and
equivariant homology with compact supports
of the orbit filtration of a \(T\)-space~\(X\).
All these results have analogues for the other pair of supports,
\emph{i.\,e.}, for cohomology with compact supports and homology with closed supports.
Moreover, for a \(\Bbbk\)-homology manifold~\(X\), one has another set of
analogous results for (co)homology with twisted coefficients.
The proofs for the new cases are usually identical to the ones given in~\cite{AlldayFranzPuppe}.
In the case of twisted coefficients, one may alternatively derive them
from the decompositions~\eqref{eq:splitting-HT} and~\eqref{eq:splitting-hHT}
and the untwisted result for an orientation cover;
see Proposition~\ref{thm:stratum-cm-c} below for an example.
We therefore content ourselves by stating the most important results in a more general setting.
All results in this section
are equally valid for the other pair of supports.
We simplify notation in the following way:
For a \(T\)-pair~\((A,B)\) in a homology manifold~\(X\) we write \(H_{T}^{*}(A,B;\ell)\) to denote
either cohomology with constant coefficients (\(\ell=\Bbbk\)) or
with twisted coefficients (\(\ell=\skew{-2}\tilde\Bbbk\)).
The same applies to homology and (co)chain complexes.
If \(X\) is not a homology manifold, then \(\ell\) always means constant coefficients.
\begin{proposition}
\label{thm:stratum-cm-c}
The \(R\)-modules \(H_{T}^{*}(X_{i},X_{i-1};\ell)\)~and~\(H^{T\!}_{*}(X_{i},X_{i-1};\ell)\) are zero or
Cohen--Macaulay of dimension~\(r-i\) for~\(0\le i\le r\).
\end{proposition}
\begin{proof}
The version for constant coefficients and the usual pair of supports
is proved in~\citeorbitsone{Prop.~\ref*{thm:stratum-cm}},
following the ideas of~\cite[Sec.~7]{Atiyah:1974}.
The proof for the other pair of supports is identical.
The case of twisted coefficients follows
from the untwisted version for an orientation cover
and the observation that a non-zero direct summand
of a Cohen--Macaulay module is again Cohen--Macaulay of the same dimension.
\end{proof}
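For \(i=0\) and constant coefficients the proposition is immediate:
\(X_{0}=X^{T}\), so that \(H_{T}^{*}(X_{0};\Bbbk)=H^{*}(X^{T})\otimes R\)
is free over~\(R\) by Proposition~\ref{thm:action-trivial},
hence zero or Cohen--Macaulay of dimension~\(r\).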
\begin{corollary}
\label{thm:hHT-orbit-degeneration-c}
The spectral sequence associated with the orbit filtration of \(C^{T\!}_{*}(X;\ell)\)
and converging to~\(H^{T\!}_{*}(X;\ell)\)
degenerates at~\(E_{1}^{p}=H^{T\!}_{*}(X_{p},X_{p-1};\ell)\).
\end{corollary}
\begin{proof}
See \citeorbitsone{Cor.~\ref*{thm:hHT-orbit-degeneration}}.
\end{proof}
The following two results are immediate consequences of Corollary~\ref{thm:hHT-orbit-degeneration-c},
\emph{cf.}~\citeorbitsone{Cor.~\ref*{thm:Ext-hHT-zero}}.
For the convenience of the reader, we provide proofs that are based only on
the crucial Cohen--Macaulay property identified in Proposition~\ref{thm:stratum-cm-c}.
\begin{proposition}
\label{thm:hHT-short-exact}
For any~\(-1\le i< j\le r\) there is a short exact sequence
\begin{equation*}
0 \longrightarrow H^{T\!}_{*}(X_{j},X_{i};\ell)
\longrightarrow H^{T\!}_{*}(X,X_{i};\ell)
\longrightarrow H^{T\!}_{*}(X,X_{j};\ell)
\longrightarrow 0.
\end{equation*}
\end{proposition}
\begin{proposition}
\label{thm:Ext-i-j-0}
\(\Ext_{R}^{p}(H^{T\!}_{*}(X_{j},X_{i};\ell),R) = 0\) for~\(p>j\) and~\(p\le i\).
In other words,
\(\dim_{R}H^{T\!}_{*}(X_{j},X_{i};\ell)\le r-i-1\)
and \(\depth_{R}H^{T\!}_{*}(X_{j},X_{i};\ell)\le r-j\).
\end{proposition}
\begin{proof}[Proof of Propositions~\ref{thm:hHT-short-exact} and~\ref{thm:Ext-i-j-0}]
We prove both statements simultaneously by descending induction on~\(i\).
For \(i=r\) there is nothing to show.
Now assume both claims are true for a given~\(i\) and all~\(j\ge i\).
By Proposition~\ref{thm:stratum-cm-c}, \(H^{T\!}_{*}(X_{i},X_{i-1};\ell)\) is zero or Cohen--Macaulay of dimension~\(r-i\).
Because \(H^{T\!}_{*}(X,X_{i};\ell)\) is of dimension~\(\le r-i-1\) by induction,
the connecting homomorphism
\begin{equation}
H^{T\!}_{*}(X,X_{i};\ell) \to H^{T\!}_{*-1}(X_{i},X_{i-1};\ell)
\end{equation}
is zero, \emph{cf.}~\citeorbitsone{Lemma~\ref*{thm:CM-map-0}}, so that we get the short
exact sequence
\begin{equation}
\label{eq:hHT-short-exact-1}
0 \longrightarrow H^{T\!}_{*}(X_{i},X_{i-1};\ell)
\longrightarrow H^{T\!}_{*}(X,X_{i-1};\ell)
\longrightarrow H^{T\!}_{*}(X,X_{i};\ell)
\longrightarrow 0.
\end{equation}
By induction, the map~\(H^{T\!}_{*}(X,X_{i};\ell)\to H^{T\!}_{*}(X,X_{j};\ell)\)
is surjective, hence so is the composition
\begin{equation}
H^{T\!}_{*}(X,X_{i-1};\ell) \to H^{T\!}_{*}(X,X_{i};\ell)\to H^{T\!}_{*}(X,X_{j};\ell),
\end{equation}
which proves the first claim.
Taking \(X=X_{j}\) in~\eqref{eq:hHT-short-exact-1}, we obtain
\begin{equation}
0 \longrightarrow H^{T\!}_{*}(X_{i},X_{i-1};\ell)
\longrightarrow H^{T\!}_{*}(X_{j},X_{i-1};\ell)
\longrightarrow H^{T\!}_{*}(X_{j},X_{i};\ell)
\longrightarrow 0.
\end{equation}
The second claim now follows by induction and the way \(\Ext\)~modules
(or dimension and depth{\slash}projective dimension) behave with respect to
short exact sequences.
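Explicitly, applying~\(\Ext_{R}^{*}(-,R)\) to a short exact sequence
\(0\to A\to B\to C\to 0\) of \(R\)-modules yields the long exact sequence
\begin{equation}
\cdots \longrightarrow \Ext_{R}^{p}(C,R) \longrightarrow \Ext_{R}^{p}(B,R)
\longrightarrow \Ext_{R}^{p}(A,R) \longrightarrow \Ext_{R}^{p+1}(C,R) \longrightarrow \cdots,
\end{equation}
so that \(\Ext_{R}^{p}(B,R)\) vanishes whenever \(\Ext_{R}^{p}(A,R)\)~and~\(\Ext_{R}^{p}(C,R)\) do.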
\end{proof}
The spectral sequence for equivariant cohomology induced by the orbit filtration
does \emph{not} degenerate at the \(E_{1}\)~page in general.
Since this page of the spectral sequence
is of independent interest, we give it a name.
The \emph{non-augmented Atiyah--Bredon complex}~\(AB^{*}(X;\ell)\)
with coefficients in~\(\ell\) is the complex of \(R\)-modules
defined by
\begin{equation}
AB^{i}(X;\ell)=H_{T}^{*+i}(X_{i}, X_{i-1};\ell)
\end{equation}
for~\(0\le i\le r\)
and zero otherwise.
The differential
\begin{equation}
d_{i}\colon H_{T}^{*}(X_{i}, X_{i-1};\ell)\to H_{T}^{*+1}(X_{i+1}, X_{i};\ell)
\end{equation}
is the connecting morphism in the long exact sequence of the triple~\((X_{i+1},X_{i},X_{i-1})\).
Note that \(AB^{*}(X;\ell)\) is the \(E_{1}\)~page of the spectral sequence
arising from the orbit filtration of~\(X\) and converging to~\(H_{T}^{*}(X;\ell)\),
and its cohomology~\(H^{*}(AB^{*}(X;\ell))\) is the \(E_{2}\)~page.
The \emph{augmented Atiyah--Bredon complex}
is obtained by augmenting~\(AB^{*}(X;\ell)\)
by~\(AB^{-1}(X;\ell)=H_{T}^{*}(X;\ell)\) and the restriction to the fixed point set,
\begin{multline}
\label{eq:4:atiyah-bredon}
0
\longrightarrow H_{T}^{*}(X;\ell)
\longrightarrow H_{T}^{*}(X_0;\ell)
\stackrel{d_{0}}\longrightarrow H_{T}^{*+1}(X_1, X_0;\ell)
\stackrel{d_{1}}\longrightarrow \cdots \\ \cdots
\stackrel{d_{r-1}}\longrightarrow H_{T}^{*+r}(X_r, X_{r-1};\ell)
\longrightarrow 0.
\end{multline}
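As a simple illustration (a standard example, included here only for orientation),
let \(T=S^{1}\) act on~\(X=S^{2}\) by rotation, so that \(r=1\),
\(X_{0}=X^{T}\) consists of the two poles and \(X_{1}=X\).
Writing \(R=\Bbbk[t]\), one has \(H_{T}^{*}(X_{0})=R\oplus R\),
and the restriction~\(H_{T}^{*}(X)\to H_{T}^{*}(X_{0})\) is injective
with image~\(\{(f,g):f\equiv g \bmod t\}\).
The augmented Atiyah--Bredon complex~\eqref{eq:4:atiyah-bredon}
therefore reduces to the exact sequence
\begin{equation}
0 \longrightarrow H_{T}^{*}(X)
\longrightarrow R\oplus R
\longrightarrow \Bbbk
\longrightarrow 0,
\end{equation}
the third map being \((f,g)\mapsto f(0)-g(0)\),
in accordance with the freeness of the \(R\)-module~\(H_{T}^{*}(X)\).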
\begin{remark}
This sequence first appeared explicitly in the paper~\cite{Bredon:1974} of Bredon,
but it goes back to work of Atiyah~\cite[Sec.~7]{Atiyah:1974}.
In the context of equivariant \(K\)-theory,
Atiyah showed that the freeness of~\(K_{T}^{*}(X)\) implies that the sequence
\begin{equation}
0 \to K_{T}^{*}(X,X_{i-1}) \to K_{T}^{*}(X_{i},X_{i-1}) \to K_{T}^{*}(X,X_{i}) \to 0
\end{equation}
is exact for all~\(i\) \cite[eq.~(7.3)]{Atiyah:1974}.
This in turn is equivalent to the exactness of the \(K\)-theoretic analogue of~\eqref{eq:4:atiyah-bredon},
\emph{cf.}~\cite[Lemma~4.1]{FranzPuppe:2007}.
Atiyah actually considered representations only, but his arguments work for any \(T\)-space.
\end{remark}
It turns out that the cohomology of the non-augmented Atiyah--Bredon complex
is completely determined by~\(H^{T\!}_{*}(X;\ell)\).
\goodbreak
\begin{theorem}
\label{thm:exthab-ss-c}
For any \(T\)-space~\(X\)
the following two spectral sequences converging to~\(H_{T}^{*}(X;\ell)\) are naturally isomorphic from the \(E_{2}\)~page on:
\begin{enumerate}
\item The one induced by the orbit filtration with~\(E_{1}^{p}=H_{T}^{*}(X_{p},X_{p-1};\ell)\),
\item The universal coefficient spectral sequence with~\(E_{2}^{p}=\Ext_{R}^{p}(H^{T\!}_{*}(X;\ell),R)\).
\end{enumerate}
\end{theorem}
\begin{proof}
See \citeorbitsone{Thm.~\ref*{thm:exthab-ss}}.
The version for twisted coefficients may again be derived from the untwisted
result for an orientation cover.
\end{proof}
\begin{corollary}
\label{thm:4:ext-hab}
For any~\(i\ge0\) there is an isomorphism of \(R\)-modules
\begin{equation*}
H^{i}(AB^{*}(X;\ell)) = \Ext_{R}^{i}(H^{T\!}_{*}(X;\ell),R).
\end{equation*}
\end{corollary}
In Section~\ref{sec:quick-proof} we will give a direct proof of this important result
that is not based on Theorem~\ref{thm:exthab-ss-c}.
\begin{theorem}
\label{thm:4:conditions-partial-exactness}
The following conditions are equivalent for any~\(0\le j\le r\):
\begin{enumerate}
\item \label{4:q1} The Atiyah--Bredon sequence~\eqref{eq:4:atiyah-bredon}
is exact at all positions~\(-1\le i\le j-2\).
\item \label{4:q4} The restriction map~\(H_{T}^{*}(X;\ell)\to H_{K}^{*}(X;\ell)\)
is surjective for all subtori~\(K\) of~\(T\) of rank~\(r-j\).
\item \label{4:q3} \(H_{T}^{*}(X;\ell)\) is free over all subrings~%
\(H^{*}(BL)\subset H^{*}(BT)=R\), where \(L\) is a quotient of~\(T\) of rank~\(j\).
\item \label{4:q2} \(H_{T}^{*}(X;\ell)\) is a \(j\)-th syzygy.
\end{enumerate}
\end{theorem}
Several equivalent definitions of syzygies are collected in~\citeorbitsone{Sec.~\ref*{sec:Torsion-freeness}}.
\begin{proof}
The proof of~\citeorbitsone{Thm.~\ref*{thm:conditions-partial-exactness}} carries over.
Only the argument for the equivalence~\(\hbox{\eqref{4:q4}}\Leftrightarrow\hbox{\eqref{4:q3}}\)
has to be slightly modified in the case of twisted coefficients:
The involution on an orientation cover~\(\tilde X\) induces one on the Borel construction~\(\tilde X_{T}\),
and \(H_{T}^{*}(X;\skew{-2}\tilde\Bbbk)=H^{*}(\tilde X_{T})_{-}\) in the notation of Section~\ref{sec:twisted},
and analogously for~\(K\).
(Note that the decomposition of the cohomology into the \(\pm1\)~eigenspaces of the involution
exists even for spaces that do not satisfy our standing assumptions.)
Now one considers the map
\begin{equation}
H_{T}^{*}(X;\skew{-2}\tilde\Bbbk)=H_{T/K}^{*}(\tilde X_{K})_{-} \to H^{*}(\tilde X_{K})_{-}=H_{K}^{*}(X;\skew{-2}\tilde\Bbbk)
\end{equation}
and applies the Leray--Hirsch argument
as used in~\cite{AlldayFranzPuppe}
to the \(-1\)~eigenspaces.
\end{proof}
\subsection{Homology manifolds}
\label{applications-homology-mf}
In this section we assume that \(X\) is a \(\Bbbk\)-homology manifold.
Theorem~\ref{thm:exthab-ss-c}
and \citeorbitsone{Thm.~\ref*{thm:exthab-ss}}
may be combined
with Poincaré duality and Poincaré--Alexander--Lefschetz duality
in various ways. The following result is an example of this.
Recall that \(\widehat X_{i}=X\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i}\).
\begin{corollary}
The following spectral sequences
are isomorphic from the \(E_{2}\)~page on:
\begin{align*}
E_{1}^{p} &= H^{T\!}_{*}(\widehat X_{p-1},\widehat X_{p}) \;\Rightarrow\; H^{T\!}_{*}(X), \\
E_{2}^{p} &= \Ext_{R}^{p}(H_{T}^{*}(X),R) \;\Rightarrow\; H^{T\!}_{*}(X).
\end{align*}
\end{corollary}
\begin{proof}
Let \(n=\dim X\). By Theorem~\ref{thm:PAL-duality},
the first spectral sequence is isomorphic,
from the \(E_{1}\)~page on, to the spectral sequence
\begin{equation}
E_{1}^{p}=H_{T,c}^{*}(X_{p},X_{p-1};\skew{-2}\tilde\Bbbk)[-n] \;\Rightarrow\; H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk)[-n].
\end{equation}
By Poincaré duality,
the second spectral sequence is isomorphic to
\begin{equation}
E_{2}^{p}=\Ext_{R}^{p}(H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk),R)[-n] \;\Rightarrow\; H_{T,c}^{*}(X;\skew{-2}\tilde\Bbbk)[-n].
\end{equation}
Hence the claim follows from Theorem~\ref{thm:exthab-ss-c}.
\end{proof}
\begin{proposition}
\label{thm:thom-iso-Xi-Xi1}
For any~\(0\le i\le r\) there is an isomorphism of \(R\)-modules
\begin{equation*}
H_{T}^{*}(\widehat X_{i-1},\widehat X_{i})\cong H_{T}^{*}(X_{i}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i-1}),
\end{equation*}
preserving degrees modulo~\(2\).
\end{proposition}
\begin{proof}
Since only finitely many
isotropy groups occur in~\(X\),
there is a subtorus~\(K\subset T\)
such that \(\widehat X_{i-1}^{K}=X_{i}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i-1}\).
Hence our claim reduces to the Thom isomorphism from Proposition~\ref{thm:thom-iso}.
\end{proof}
The following result generalizes a theorem of Duflot~\cite[Thm.~1]{Duflot:1983}
concerning smooth actions on differentiable manifolds.
More than the extension to continuous actions on homology manifolds, our main insight is
that Duflot's result follows
by equivariant Poincaré--Alexander--Lefschetz duality
from
Proposition~\ref{thm:hHT-short-exact}
which is valid for \emph{all} \(T\)-spaces.
\begin{proposition}
\label{thm:duflot-general}
For any~\(0\le i\le r\)
there are short exact sequences
\begin{equation*}
0 \to H_{T}^{*}(X,\widehat X_{i}) \to H_{T}^{*}(X) \to H_{T}^{*}(\widehat X_{i}) \to 0
\end{equation*}
and
\begin{equation*}
0 \to H_{T}^{*}(X,\widehat X_{i-1}) \to H_{T}^{*}(X,\widehat X_{i}) \to H_{T}^{*}(X_{i}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i-1}) \to 0
\end{equation*}
where the right map in the lower sequence preserves degrees only mod~\(2\).
\end{proposition}
Duflot also considers actions of \(p\)-tori~\((\mathbb Z_{p})^{r}\) with~\(p>2\).
The results of~\cite{AlldayFranzPuppe} and this paper can as well be extended to \(p\)-tori;
we will elaborate on this elsewhere because some proofs require modification.
\begin{proof}
By Proposition~\ref{thm:hHT-short-exact} we have a short exact sequence
\begin{equation}
\label{eq:Xi-X-short-exact}
0 \longrightarrow H^{T\!}_{*}(X_{i};\skew{-2}\tilde\Bbbk)
\longrightarrow H^{T\!}_{*}(X;\skew{-2}\tilde\Bbbk)
\longrightarrow H^{T\!}_{*}(X,X_{i};\skew{-2}\tilde\Bbbk)
\longrightarrow 0.
\end{equation}
The first short exact sequence we are claiming follows from this
by Poincaré--Alexander--Lefschetz duality (Theorem~\ref{thm:PAL-A-B}).
Replacing \(X_{i}\) by~\(X_{i-1}\) and \(X_{j}\) by~\(X_{i}\)
in Proposition~\ref{thm:hHT-short-exact}
leads similarly to the short exact sequence
\begin{equation}
0 \to H_{T}^{*}(X,\widehat X_{i-1}) \to H_{T}^{*}(X,\widehat X_{i}) \to H_{T}^{*}(\widehat X_{i-1},\widehat X_{i}) \to 0.
\end{equation}
Combining this with Proposition~\ref{thm:thom-iso-Xi-Xi1}
confirms our second claim.
\end{proof}
Not surprisingly, we also get the following spectral sequence version:
\begin{proposition}
The spectral sequence associated
to the filtration~\((\widehat X_{i})\) and converging to~\(H_{T}^{*}(X)\)
degenerates at the \(E_{1}\)~page.
\end{proposition}
\begin{proof}
By Theorem~\ref{thm:PAL-duality}, this spectral sequence is isomorphic,
from the \(E_{1}\)~page on, to the spectral sequence converging to~\(H^{T,c}_{*}(X;\skew{-2}\tilde\Bbbk)\)
with \(E_{1}^{p}=H^{T,c}_{*}(X_{p},X_{p-1};\skew{-2}\tilde\Bbbk)\). The latter degenerates
by Corollary~\ref{thm:hHT-orbit-degeneration-c}.
\end{proof}
\subsection{Uniform actions}
Let \(X\) be a \(T\)-space.
For dimensional reasons, it follows from Proposition~\ref{thm:stratum-cm-c}
that the differential
\begin{equation}
d_{i}\colon H_{T}^{*}(X_{i}, X_{i-1})\to H_{T}^{*+1}(X_{i+1}, X_{i})
\end{equation}
cannot be injective unless \(H_{T}^{*}(X_{i}, X_{i-1})=0\).
This has implications for the uniformity of actions,
which we discuss now.
Recall from~\cite[Def.~3.6.17]{AlldayPuppe:1993}
that the \(T\)-action on~\(X\) is said to be \emph{uniform}
if for any subtorus~\(K\subset T\) and any component~\(F\) of~\(X^{K}\)
one has \(F^{T}\ne\emptyset\).
(This implies \(X^{T}\ne\emptyset\) if \(X\ne\emptyset\).)
We call \(F\) a \emph{minimal stratum} of~\(X\) corresponding to the subtorus~\(K\subset T\)
if \(F\) is a component of~\(X^{K}\)
and if \(F^{L}=\emptyset\) for any subtorus~\(L\) properly containing \(K\).
Note that the action is uniform if and only if all minimal strata are components of~\(X^{T}\).
This observation makes it easy to construct non-uniform actions, even in the context
of compact orientable manifolds with fixed points, see \cite[Ex.~1.7.4]{Allday:2005}.
It has been noted by a number of authors
that the \(T\)-action is uniform if \(H_{T}^{*}(X)\) is a free \(R\)-module.
A large part of~\cite{AlldayFranzPuppe}, however, is concerned
with the case where \(H_{T}^{*}(X)\) is a torsion-free \(R\)-module (that is, a first syzygy),
but not necessarily free. So we note the following, which is also an immediate
consequence of the characterization of uniform actions given in~\cite[Thm.~3.6.18]{AlldayPuppe:1993}.
\begin{proposition}
If \(H_{T}^{*}(X)\) is \(R\)-torsion-free, then the action is uniform.
\end{proposition}
\begin{proof}
Assume that there is a minimal stratum~\(F\), corresponding to a subtorus \(K\subsetneq T\).
Then \(H_{T}^{*}(F)\) is a direct summand of~\(H_{T}^{*}(X^{K})\). Set \(S=H^{*}(BK)\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{0\}\)
and \(\tilde S=R\mathbin{\mkern-2mu\MFsm\mkern-2mu}\{0\}\).
By the localization theorem, we have
\begin{equation}
S^{-1}H_{T}^{*}(X) \cong S^{-1}H_{T}^{*}(X^{K}) = S^{-1}H_{T}^{*}(F)\oplus S^{-1}H_{T}^{*}(X^{K}\mathbin{\mkern-2mu\MFsm\mkern-2mu} F),
\end{equation}
so we can choose a~\(c\in H_{T}^{*}(X)\) such that its image in~\(S^{-1}H_{T}^{*}(X^{K})\)
is non-zero and lies in~\(S^{-1}H_{T}^{*}(F)\). Because \(F^{T}\) is empty, \(H_{T}^{*}(F)\)
is \(R\)-torsion. This implies that the image of~\(c\) in~\(\tilde S^{-1}H_{T}^{*}(X^{T})\) is zero,
hence also the one in~\(H_{T}^{*}(X^{T})\). But this is a contradiction
because the torsion-freeness of~\(H_{T}^{*}(X)\) is equivalent
to the injectivity of the map~\(H_{T}^{*}(X)\to H_{T}^{*}(X^{T})\).
\end{proof}
\begin{proposition}
\label{thm:minimal-stratum-HAB}
Let \(F\) be a minimal stratum of~\(X\) corresponding to a subtorus \(K\subset T\) of rank~\(r-i\).
Then \(H^{i}(AB^{*}(X))\ne0\).
\end{proposition}
\begin{proof}
The minimal stratum~\(F\) is a component of both~\(X_{i}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i-1}\)
and \(X_{i}\mathbin{\mkern-2mu\MFsm\mkern-2mu} X_{i-2}\). So the summand \(H_{T}^{*}(F)\)
maps isomorphically under the restriction~\(H_{T}^{*}(X_{i},X_{i-1})\toH_{T}^{*}(X_{i},X_{i-2})\),
and \(H_{T}^{*}(F)\cap\im d_{i-1}=0\).
On the other hand, \(\dim_{R}H_{T}^{*}(F)=r-i\).
Because \(H_{T}^{*}(X_{i+1}, X_{i})\) is of dimension~\(r-i-1\), the restriction of the differential
to~\(H_{T}^{*}(F)\) cannot be injective.
\end{proof}
Proposition~\ref{thm:minimal-stratum-HAB} also follows
from~\cite[Thm.~3.6.14]{AlldayPuppe:1993}.
For another result relating the uniformity and torsion-freeness,
see~\cite[Thm.~3.8.7\,(4)]{AlldayPuppe:1993}.
In the notation of that theorem, a minimal stratum~\(F=c\)
corresponding to a subtorus~\(K\subset T\) gives one of the pairs~\((K_{i},c_{i})\),
\(1\le i\le\gamma\).
\section{The cohomology of the Atiyah--Bredon complex}
\label{sec:quick-proof}
In this section we shall give a direct proof of Corollary~\ref{thm:4:ext-hab}.
Instead of reasoning with spectral sequences, we will rely
on Propositions~\ref{thm:hHT-short-exact} and~\ref{thm:Ext-i-j-0}.
Our proof is valid for any pair of supports and, in case of a \(\Bbbk\)-homology manifold,
also for twisted coefficients. For ease of notation, we write it down only
for constant coefficients and the usual pair of supports.
Recall that we are still assuming the characteristic of~\(\Bbbk\) to be \(0\).
For convenience, we define \(X_{r+1}=X\) in addition to~\(X_{-1}=\emptyset\).
Let~\(0\le i\le r\).
The following commutative diagram with exact rows will play a central role:
\begin{equation}
\begin{tikzcd}[column sep=small]
\label{eq:hHT-Xi-Xi1-diagram}
0 \rar & H^{T\!}_{*}(X_{i}) \dar \rar & H^{T\!}_{*}(X_{i+1}) \dar \rar & H^{T\!}_{*}(X_{i+1},X_{i}) \dar \rar & 0 \\
0 \rar & H^{T\!}_{*}(X_{i},X_{i-1}) \rar & H^{T\!}_{*}(X_{i+1},X_{i-1}) \rar & H^{T\!}_{*}(X_{i+1},X_{i}) \rar & 0.
\end{tikzcd}
\end{equation}
The exactness of the top row is Proposition~\ref{thm:hHT-short-exact}
for the triple~\((X_{i+1},X_{i},X_{-1})\),
and that of the bottom follows by looking at~\((X_{i+1},X_{i},X_{i-1})\).
For brevity, we denote \(H^{T\!}_{*}(X_{j},X_{i})\) by~\(M_{j,i}\) and,
for any \(R\)-module~\(M\), we abbreviate \(\Ext_{R}^{p}(M,R)[p]\) by~\(\mathcal{E}^{p}(M)\).
From the bottom row of~\eqref{eq:hHT-Xi-Xi1-diagram} and the long exact sequence for~\(\Ext\)
we have
a connecting homomorphism
\begin{equation}
\mathcal{E}^{i}(M_{i,i-1})
\stackrel{\delta_{i}}\longrightarrow \mathcal{E}^{i+1}(M_{i+1,i}).
\end{equation}
\begin{lemma}
There is an isomorphism of \(R\)-modules, natural in~\(X\),
\begin{equation*}
H^{i}(AB^{*}(X)) \cong \ker\delta_{i} \bigm/ \im\delta_{i-1}.
\end{equation*}
\end{lemma}
\begin{proof}
By Proposition~\ref{thm:Ext-i-j-0}, the universal coefficient spectral sequence
\begin{equation*}
E_{2}^{p} = \Ext_{R}^{p}(H^{T\!}_{*}(X_{i+1},X_{i-1}),R) \;\Rightarrow\; H_{T}^{*}(X_{i+1},X_{i-1})
\end{equation*}
collapses (since \(E_{2}^{p}=0\) unless \(p=i\) or~\(i+1\)),
and there is a short exact sequence
\begin{equation}
\label{eq:Ei-short-exact}
0 \longrightarrow \mathcal{E}^{i+1}(M_{i+1,i-1})
\longrightarrow H_{T}^{*}(X_{i+1},X_{i-1})
\longrightarrow \mathcal{E}^{i}(M_{i+1,i-1})
\longrightarrow 0
\end{equation}
coming from the filtration of the spectral sequence.
Consider the following (possibly non-commuting) diagram:
\begin{equation*}
\begin{tikzcd}[font=\small,column sep=small]
& H_{T}^{*}(X_{i+1},X_{i-1}) \arrow{r} \arrow{d} & H_{T}^{*}(X_{i},X_{i-1}) \arrow{r}{d_{i}} \arrow{d}{\phi_{i}} & H_{T}^{*}(X_{i+1},X_{i}) \arrow{r} & H_{T}^{*}(X_{i+1},X_{i-1}) \\
0 \arrow{r} & \mathcal{E}^{i}(M_{i+1,i-1}) \arrow{r} & \mathcal{E}^{i}(M_{i,i-1}) \arrow{r}{\delta_{i}} & \mathcal{E}^{i+1}(M_{i+1,i}) \arrow{r} \arrow{u}[right]{\psi_{i+1}} & \mathcal{E}^{i+1}(M_{i+1,i-1}) \arrow{r} \arrow{u} & 0
\end{tikzcd}
\end{equation*}
The rows are part of long exact sequences; the bottom one
is based on the bottom row of~\eqref{eq:hHT-Xi-Xi1-diagram} and uses again Proposition~\ref{thm:Ext-i-j-0}.
The vertical maps come from~\eqref{eq:Ei-short-exact},
and \(\phi_{i}\)~and~\(\psi_{i+1}\) are isomorphisms, once again by Proposition~\ref{thm:Ext-i-j-0}.
The left square and the right square commute by naturality.
Hence \(\phi_{i}\) maps \(\ker d_{i}\) isomorphically onto~\(\ker\delta_{i}\),
and \(\psi_{i+1}\) maps \(\im\delta_{i}\) isomorphically onto~\(\im d_{i}\).
The maps~\(\phi_{i}\) and \(\psi_{i}\) are induced by the filtration of~\(H_{T}^{*}(X_{i},X_{i-1})\)
coming from the universal coefficient spectral sequence. But since \(\Ext_{R}^{j}(H^{T\!}_{*}(X_{i},X_{i-1}),R)=0\)
for \(j\ne i\), the filtration of~\(H_{T}^{*}(X_{i},X_{i-1})\) has only one non-trivial step, \emph{i.\,e.},
it looks like
\begin{equation}
0 = \cdots = 0 = \mathcal F^{i+1} \subset \mathcal F^{i} = H_{T}^{*}(X_{i},X_{i-1}) = \mathcal F^{i-1} = \cdots = \mathcal F^{0}.
\end{equation}
By the properties of spectral sequences the composition~\(\psi_{i}\phi_{i}\)
is the inclusion~\(\mathcal F^{i}\hookrightarrow\mathcal F^{i-1}\), which in our case is the identity.
So \(\phi_{i}=\psi_{i}^{-1}\) for any~\(i\).
As a consequence, \(\phi_{i}\) induces an isomorphism
\begin{equation*}
\ker d_{i} \bigm/ \im d_{i-1} \to \ker\delta_{i} \bigm/ \im\delta_{i-1}.
\qedhere
\end{equation*}
\end{proof}
\begin{proof}[Proof of Corollary~\ref{thm:4:ext-hab}]
Applying \(\Ext_{R}(-,R)\) to the diagram~\eqref{eq:hHT-Xi-Xi1-diagram}
leads to the commutative diagram
\begin{equation*}
\begin{tikzcd}
\mathcal{E}^{i}(H^{T\!}_{*}(X_{i+1})) \arrow{r} & \mathcal{E}^{i}(H^{T\!}_{*}(X_{i})) \arrow{r} & \mathcal{E}^{i+1}(M_{i+1,i}) \\
& \mathcal{E}^{i}(M_{i,i-1}) \arrow{r}{\delta_{i}} \arrow{u} & \mathcal{E}^{i+1}(M_{i+1,i})\mathrlap{.} \arrow{u}[right]{=}
\end{tikzcd}
\end{equation*}
Together with the analogous square for~\(i-1\) instead of~\(i\)
we can form the commutative diagram
\begin{equation*}
\begin{tikzcd}
0 \\
\mathcal{E}^{i-1}(H^{T\!}_{*}(X_{i-1})) \arrow{r}{=} \arrow{u} & \mathcal{E}^{i-1}(H^{T\!}_{*}(X_{i-1})) \arrow{d} \\
\mathcal{E}^{i-1}(M_{i-1,i-2}) \arrow{r}{\delta_{i-1}} \arrow{u} & \mathcal{E}^{i}(M_{i,i-1}) \arrow{r}{\delta_{i}} \arrow{d}{p_{i}} & \mathcal{E}^{i+1}(M_{i+1,i}) \\
& \mathcal{E}^{i}(H^{T\!}_{*}(X_{i})) \arrow{r}{=} \arrow{d} & \mathcal{E}^{i}(H^{T\!}_{*}(X_{i})) \arrow{u} \\
& 0 & \mathcal{E}^{i}(H^{T\!}_{*}(X_{i+1})) \arrow{u} \\
& & 0\mathrlap{,} \arrow{u}
\end{tikzcd}
\end{equation*}
where all columns come from the long exact sequence for~\(\Ext\),
applied to some row of~\eqref{eq:hHT-Xi-Xi1-diagram}.
We have used Proposition~\ref{thm:Ext-i-j-0} to obtain the zero entries.
As a consequence,
\begin{align}
\ker\delta_{i} &= p_{i}^{-1}(\mathcal{E}^{i}(H^{T\!}_{*}(X_{i+1}))) \\
\intertext{and}
\im\delta_{i-1} &= \ker p_{i}.
\end{align}
Hence
\begin{equation}
H^{i}(AB^{*}(X)) = \ker\delta_{i} \bigm/ \im\delta_{i-1}
\cong \mathcal{E}^{i}(H^{T\!}_{*}(X_{i+1}))
\cong \mathcal{E}^{i}(H^{T\!}_{*}(X)).
\end{equation}
The last isomorphism follows from Proposition~\ref{thm:Ext-i-j-0} and the exact sequence
\begin{multline}
0 = \mathcal{E}^{i}(H^{T\!}_{*}(X,X_{i+1})) \to \mathcal{E}^{i}(H^{T\!}_{*}(X)) \\
\to \mathcal{E}^{i}(H^{T\!}_{*}(X_{i+1})) \to \mathcal{E}^{i+1}(H^{T\!}_{*}(X,X_{i+1})) = 0.
\end{multline}
This completes the proof.
\end{proof}
\section{Introduction}\label{intro}
Understanding various scaling laws governing a phase transition has been one of the
primary research topics over the last fifty years, be it from an equilibrium perspective or at the
nonequilibrium front \cite{landau1958,Stanleybook,Onukibook,Puribook}.
Also for polymers, the equilibrium aspects of phase transitions have been studied extensively \cite{deGennesbook,Doibook,Cloizeauxbook,rubinstein2003polymer}.
Polymers in general represent a large class of macromolecules, be they chemically synthesized or naturally occurring.
A range of fundamentally important biomolecules, e.g., proteins and DNA, fall under the broad canopy of polymers.
Most of these polymeric systems exhibit some form of conformational phase transitions depending on certain external
conditions, viz., the collapse transition in homopolymers. Upon changing the solvent condition
from good (where monomer-solvent interaction is stronger) to poor (where monomer-monomer interaction is stronger),
a homopolymer undergoes a collapse transition from its extended coil state to a
compact globule \cite{stockmayer1960,nishio1979}. This transition belongs to a class of phase transitions that can be understood
by investigating various associated scaling laws \cite{deGennesbook,Doibook,Cloizeauxbook,rubinstein2003polymer}.
From a general point of view, the understanding of the collapse transition in homopolymers can be extended to investigate other
conformational transitions experienced by different types of macromolecules, e.g., in a protein the collapse of the backbone
may occur simultaneously or precede its folding to a native state \cite{camacho1993,Pollack2001,Sadqi2003,haran2012,reddy2017}.
\par
Due to certain technical difficulties such as preparing a super-dilute solution or finding a long
enough polymer with negligible polydispersity, the experimental realization of the collapse
transition was rare in the past \cite{nishio1979,chu1995}. Since the introduction of technical
equipment like small angle x-ray scattering, single molecule fluorescence, dynamic
light scattering, dielectric spectroscopy, etc., monitoring the behavior of a single macromolecule
has become feasible \cite{schuler2002,Xu2006,tress2013}. On the other hand, theoretically the scaling laws
related to the static and the equilibrium dynamic aspects of the transition have long been well
understood \cite{deGennesbook,Doibook,Cloizeauxbook,rubinstein2003polymer}.
\par
In contrast to the equilibrium literature, however, in the nonequilibrium aspects, i.e., for the kinetics
of the collapse transition, there is no unanimous theoretical understanding even though quite a few analytical
and computational studies have been conducted \cite{deGennes1985,byrne1995,timoshenko1995,kuznetsov1995,kuznetsov1996,kuznetsov1996eDNA,dawson1997,pitard1998,klushin1998,Halperin2000,kikuchi2002,Abrams2002,Montesi2004,yeomans2005,pham2008,guo2011}.
The aforesaid experimental developments to track single polymers and the lack of
understanding of the nonequilibrium dynamics of polymers motivated us to perform a series of works on the kinetics of polymer collapse
\cite{MajumderEPL,Majumder2016PRE,majumder2016proceeding,majumder2017SM,christiansen2017JCP,majumder2018proceeding}. There our novel
approach of understanding the collapse by using its analogy with usual coarsening phenomena of particle and spin systems provided intriguing
new insights, as will be discussed subsequently.
\par
Most of the studies on collapse kinetics in the past dealt with the understanding of the relaxation time, i.e., the time a system requires to attain its new equilibrium state once its current state is perturbed by a sudden change of the environmental conditions, e.g., the temperature.
In the context of polymer collapse, the relaxation time is referred to as the collapse time $\tau_c$, which measures the time a polymer
that is initially in an extended state needs to reach its collapsed globular phase. Obviously, $\tau_c$ depends on the degree of polymerization or chain length $N$
(the number of repeating units or monomers in the chain) of the polymer, which can be understood via the scaling relation
\begin{equation}\label{tau_scaling}
\tau_c \sim N^z,
\end{equation}
where $z$ is the corresponding dynamic exponent. The above relation is reminiscent of the scaling one observes for dynamic
critical phenomena \cite{HH-review}. The other important aspect of the
kinetics is the growth of clusters of monomers that are formed during the collapse \cite{byrne1995,Abrams2002}. The
cluster growth has recently been understood by us using the phenomenological similarities of collapse with
coarsening phenomena in general \cite{MajumderEPL,majumder2017SM,christiansen2017JCP}. Moreover, along the same line one can also find evidence
of aging and related scaling laws \cite{Majumder2016PRE,majumder2016proceeding,majumder2017SM,christiansen2017JCP} which were mostly ignored in the past.
\begin{table*}[t!]
\caption{Summary of the simulation results for the scaling of the collapse time $\tau_c$ with the length of the polymer $N$
as described in Eq.\ \eqref{tau_scaling}.}
\label{tab_for_tauc}
\centering
\begin{tabular}{r c c c c c}
\hline
Authors~~~~~~~~~~~ & Model & Method &Explicit Solvent & Hydrodynamics & $z$ \\
\hline
Byrne \textit{et al.} (1995) \cite{byrne1995} & Off-lattice & Langevin & No & No & $3/2$\\
Kuznetsov \textit{et al.} (1995) \cite{kuznetsov1995} & Lattice & MC simulations & No & No & $2$\\
Kuznetsov \textit{et al.} (1996) \cite{kuznetsov1996} & GSC equations & Numerically & No & No & $2$\\
Kuznetsov \textit{et al.} (1996) \cite{kuznetsov1996} & GSC equations & Numerically & No & Yes & $3/2$\\
Kikuchi \textit{et al.} (2005) \cite{yeomans2005} & Off-lattice & MD simulations & Yes & No & $1.89(9)$\\
Kikuchi \textit{et al.} (2005) \cite{yeomans2005} & Off-lattice & MD simulations & Yes & Yes & $1.40(8)$\\
Pham \textit{et al.} (2008) \cite{pham2008} & Off-lattice & BD simulations & No & No & $1.35(1)$\\
Pham \textit{et al.} (2008) \cite{pham2008} & Off-lattice & BD simulations & No & Yes & $1.01(1)$\\
Guo \textit{et al.} (2011) \cite{guo2011} & Off-lattice &DPD simulations & Yes & Yes & $0.98(9)$\\
Majumder \textit{et al.} (2017) \cite{majumder2017SM} & Off-lattice & MC simulations & No & No & $1.79(6)$\\
Christiansen \textit{et al.} (2017) \cite{christiansen2017JCP}& Lattice & MC simulations & No & No & $1.61(5)$\\
\hline
\end{tabular}
\end{table*}
\par
In this Colloquium, we intend to give a brief review of the results available on collapse
kinetics based on the above mentioned three topics: relaxation, coarsening, and aging. It is organized in the following way. We
will begin with an overview of the phenomenological theories of collapse dynamics followed by an overview of the previous simulation results in Section\ \ref{Overview_prev}. Afterwards, in Section\ \ref{recent_MC}, we will
discuss our recent developments concerning the understanding of relaxation time, cluster growth and aging for the kinetics of the collapse transition in a homopolymer. Then we will present in Section\ \ref{2D} some preliminary results on the special case of polymer collapse kinetics in space dimension $d=2$. In Section\ \ref{conclusion}, finally, we wrap up with a discussion and an outlook to future research in this direction.
\section{Overview of previous studies on collapse dynamics}\label{Overview_prev}
The first work on the collapse dynamics dates back to 1985 when
de Gennes proposed the phenomenological \textit{sausage} model \cite{deGennes1985}. It states that
the collapse of a homopolymer proceeds via the formation of a sausage-like intermediate
structure which eventually minimizes its surface energy through hydrodynamic dissipation
and finally forms a compact globule having a spherical shape. Guided by this picture, in the
next decade there was a series of numerical works by Dawson and co-workers
considering both lattice and off-lattice models \cite{byrne1995,timoshenko1995,kuznetsov1995,kuznetsov1996,kuznetsov1996eDNA,dawson1997}. However, the sequence of
events obtained in their simulations differs substantially from the sausage model. Later
in 2000, Halperin and Goldbart (HG) came up with their \textit{pearl-necklace} picture of the collapse \cite{Halperin2000},
consistent not only with the observations of Dawson and co-workers but also with all the
later simulation results. According to HG the collapse of a polymer upon quenching from an extended coil
state into the globular phase occurs in three different stages:
(i) initial stage of formation of many small nascent clusters of monomers out of the
density fluctuations along the chain,
(ii) growth and coarsening of the clusters by withdrawing monomers
from the bridges connecting the clusters until they coalesce with each other to form bigger clusters, eventually yielding
a single cluster,
and (iii) the final stage of rearrangements of the monomers within the single cluster to form a compact globule.
Even before the pearl-necklace picture of collapse by HG, Klushin \cite{klushin1998} independently proposed a phenomenology for the
same picture based on similar coarsening of local clusters. It differs from the HG one as it does not consider
the initial stage of formation of the local ordering or small nascent clusters. However, almost all the simulation
results so far have shown evidence for the initial stage of nascent cluster formation.
\par
In addition to the above description, HG also provided time scales for each of these stages which scale with the number of monomers as
$N^0$, $N^{1/5}$ and $N^{6/5}$, respectively. Quite obviously this scaling of the collapse time
is dependent on the underlying dynamics of the system, i.e., on the consideration of hydrodynamic effects.
Klushin derived that the collapse time $\tau_c$ scales as $\tau_c \sim N^{1.6}$ in absence
of hydrodynamics whereas the collapse is much faster in presence of hydrodynamics with the scaling $\tau_c \sim N^{0.93}$ \cite{klushin1998}.
Similar conclusions were drawn in other theoretical and simulation studies as well. In the following subsection \ref{sub_collapse_time} we discuss some
of these numerical results on the scaling of the collapse time.
\subsection{Earlier results on scaling of collapse time}\label{sub_collapse_time}
As mentioned, the dynamic exponent $z$ in Eq.\ \eqref{tau_scaling} depends on the intrinsic
dynamics of the system. It is thus important to notice the method and even the type
of model one uses for the computer simulations. The available
results can be divided into three categories: (i) Monte Carlo (MC) and Langevin simulations
with implicit solvent effect, (ii) molecular dynamics (MD) simulations with implicit solvent effect, and
(iii) MD simulations with explicit solvent effect. Results from MC and Langevin simulations do not incorporate
hydrodynamics and hence only mimic diffusive dynamics. On the other hand, MD simulations with implicit solvent, depending on the
nature of the thermostat used for controlling the temperature, can be with or without hydrodynamic effects.
At this point we caution the reader that there is a subtle difference between solvent effects and hydrodynamic effects. Thus
doing MD simulations with explicit solvent does not necessarily mean that the hydrodynamic modes are actively taken into account.
Rather this depends on how one treats the momenta of the solvent particles in the simulation, e.g., it depends on the choice of thermostat
used \cite{Frenkel_book}. This gets not only reflected in the nonequilibrium relaxation times like the collapse time but also in the
equilibrium autocorrelation time. The few existing studies on polymer collapse using MD simulations that account for solvent
effects by considering explicit solvent beads, thus, can also be classified on the basis of consideration of
hydrodynamic effects. Since there is no available appropriate theory for the
nonequilibrium relaxation time, the trend is to compare the scaling of the collapse time with the available theories of equilibrium polymer dynamics. In absence of hydrodynamic effects the dynamics is compared with Rouse scaling, which
states that in equilibrium the diffusion coefficient $D$ scales with the chain length $N$ as $D \sim N^{-1}$, which implies that the relaxation time scales as $\tau \sim N^2$ \cite{Rouse1953}. On the other hand, in presence of hydrodynamics when
the polymer moves as a whole due to the flow field, the corresponding scaling laws are $D \sim N^{-0.6}$ and $\tau \sim N$, known
as the Zimm scaling \cite{Zimm1956}. Both Rouse and Zimm scalings have been verified in a number of computational studies as well as in experiments.
However, we stress that the nonequilibrium relaxation time, e.g., the collapse time $\tau_c$, does not necessarily follow
the same scaling as the equilibrium autocorrelation time \cite{janke2012monte,janke2018monte}.
\par
In Table\ \ref{tab_for_tauc} we have summarized some of the relevant results on the scaling of the collapse time that one can find in the literature.
In the early days the simulations were done mostly by using methods that do not incorporate hydrodynamics, e.g., numerical solution of the Gaussian-self
consistent (GSC) equations, MC simulations and Langevin simulations.
They considered models which could be either on-lattice (interacting self-avoiding walks) or off-lattice (with Lennard-Jones
kind of interaction). The GSC approach and MC simulations (in a lattice model) provided $z$ that is in agreement
with the Rouse scaling in equilibrium \cite{kuznetsov1995,kuznetsov1996}. Langevin simulations of an off-lattice model
yielded $z\approx 3/2$ \cite{byrne1995} which was the value later obtained in a theory by Abrams \textit{et al.} \cite{Abrams2002}.
Kikuchi \textit{et al.} \cite{kikuchi2002} went a step further by doing MD simulations of an off-lattice model with
explicit solvent which also allows one to tune the hydrodynamic interactions. In absence of hydrodynamics they obtained
values of $z \approx 1.9$ close to the Rouse value of $2$ \cite{yeomans2005}. On the other hand, in presence of hydrodynamic interaction the dynamics is much
faster with $z\approx 1.4$ \cite{yeomans2005}. This is more or less in agreement with GSC results obtained considering hydrodynamic interaction
\cite{kuznetsov1996}. Later more simulations on polymer collapse with explicit solvent were performed. In this regard, relatively recent
Brownian dynamics (BD) simulations with explicit solvent (hydrodynamic interaction preserved) by Pham \textit{et al.} also provided even
faster dynamics with $z\approx 1$ \cite{pham2008}. There exist even newer results from dissipative-particle dynamics (DPD) simulation
that also reports $z \approx 1$ \cite{guo2011}. These results can be compared with the Zimm scaling applicable to equilibrium dynamics
in presence of hydrodynamics. The bottom line from this literature survey is that no consensus has been achieved for the value of $z$. In our recent results on collapse dynamics from
MC simulations a consistent value of $z$ was obtained between an off-lattice model and a lattice model with $z\approx1.7$ \cite{majumder2017SM,christiansen2017JCP}.
\subsection{Earlier results on cluster growth}
As discussed above, most of the previous studies on the kinetics of the collapse transition focused on understanding the scaling of the collapse time. However, going by the phenomenological picture
described by HG, as also observed in most of the available simulation results, the second stage of the collapse, i.e., the coalescence of the ``pearl-like'' clusters to form bigger clusters and thereby eventually
a single globule, bears resemblance to usual coarsening of particle or spin systems. The nonequilibrium phenomenon of coarsening in particle or spin systems is
well understood \cite{Puribook,bray2002} with current focus shifting towards more challenging scenarios like fluid mixtures \cite{shimizu2015,basu2017}.
Fundamentally, too, it is still developing as for example in computationally expensive long-range systems \cite{Henrik2019,Janke2019CS,corberi2019}.
\par
In usual coarsening phenomena, e.g., in ordering of ferromagnets after quenching from the high-temperature disordered
phase to a temperature below the critical point, the nonequilibrium pathway is described by a growing length scale, i.e., the average linear
size of the domains $\ell(t)$, as \cite{Puribook,bray2002}
\begin{equation}\label{length-scaling}
\ell(t) \sim t^{\alpha}.
\end{equation}
The value of the growth exponent $\alpha$ depends on the system concerned as well as on whether the order parameter is conserved during the process.
For example, in solid binary mixtures where the dynamics is conserved, $\alpha=1/3$ which is the Lifshitz-Slyozov (LS) growth
exponent \cite{LS-growth}, whereas for a ferromagnetic ordering where the order parameter
is not conserved, $\alpha=1/2$ which is referred to as the Lifshitz-Cahn-Allen (LCA) growth \cite{LCA-growth}. On the other hand, in fluids where
in simulations one must incorporate hydrodynamics, one observes three different
regimes; the early-time diffusive growth where $\alpha=1/3$ as in solids; the intermediate viscous hydrodynamic growth with $\alpha=1$ \cite{Siggia1979}; and
at a very late stage the inertial growth with $\alpha=2/3$ \cite{Furukawa1988}.
\par
In the context of polymer collapse, the concerned growing length scale
could be the linear size (or radius) of the clusters. However, in all the previous works it was chosen to be the average mass
$C_s(t)$, or average number of monomers present in a cluster. In spatial dimension $d$, it is related to the linear size of the cluster
as $C_s(t) \sim \ell(t)^d$. Thus in analogy with the power-law scaling\ \eqref{length-scaling} of the length scale during coarsening, the corresponding scaling
of the cluster growth can then be written as
\begin{eqnarray}\label{Cs_powerlaw}
C_s(t) \sim t^{\alpha_c},
\end{eqnarray}
where $\alpha_c=d\alpha$ is the corresponding growth exponent. Like the dynamic exponent $z$, the growth exponent $\alpha_c$ is also dependent
on the intrinsic dynamics of the system. Previous studies based on MC simulations of a lattice polymer model
reported $\alpha_{c}=1/2$ \cite{kuznetsov1995} and Langevin simulations of an off-lattice model
reported $\alpha_{c}=2/3$ \cite{byrne1995}, both being much smaller than $\alpha_c=1$ as observed for coarsening
with only diffusive dynamics. BD simulations with explicit solvent also provided
$\alpha_c \approx 2/3$ in absence of hydrodynamics. Like in coarsening of fluids, the dynamics of cluster growth during collapse, too, gets faster
when hydrodynamic effects are present. For instance, BD and DPD simulations with incorporation of hydrodynamic effects yield $\alpha_c \approx 1$ \cite{guo2011,pham2008}.
Surprisingly, our recent result on an off-lattice model via MC simulations also showed $\alpha_c \approx 1$ \cite{majumder2017SM}. This will be discussed in Section\ \ref{coarsening}.
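As a practical aside, the exponent in Eq.\ \eqref{Cs_powerlaw} is conveniently read off
from the local (effective) exponent of the measured growth data. A minimal Python sketch,
assuming arrays \texttt{t} and \texttt{cs} of recorded times and cluster sizes, could be:
\begin{verbatim}
import numpy as np

def effective_exponent(t, cs):
    # alpha_c(t) = d ln C_s / d ln t; a plateau of this quantity
    # at late times estimates the exponent in C_s(t) ~ t**alpha_c
    return np.gradient(np.log(cs), np.log(t))
\end{verbatim}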
\subsection{Earlier results on aging during collapse}
Apart from the scaling of the growth of the average domain size during a coarsening process there is another
important aspect, namely, aging \cite{henkelbook,Zannetti_book}. The fact that a younger system relaxes faster than an older one forms the
foundation of aging in general. This is also an essential concept from the point of view of glassy dynamics \cite{Bouchaud_book,Castillo2002}. Generally, aging is probed
by the autocorrelation function of a local observable $O_i$ given as
\begin{eqnarray}\label{auto_cor}
C(t,t_w)=\langle O_i(t)O_i(t_w) \rangle - \langle O_i(t) \rangle \langle O_i(t_w) \rangle,
\end{eqnarray}
with $t$ and $t_w < t$ being the observation and the waiting times, respectively.
The $\langle \dots \rangle$ denotes averaging over several randomly chosen realizations of the initial configuration and
independent time evolutions. The observable $O_i$ is generally chosen in such a way that it clearly reflects the changes happening during the concerned nonequilibrium process, e.g.,
the time- and space-dependent order parameter during ferromagnetic ordering.
\par
There are three necessary conditions for aging: (i) absence of time-translation invariance in $C(t,t_w)$, (ii) slow
relaxation, i.e., the relaxation times obtained from the decay of $C(t,t_w)$ should increase as a function of $t_w$, and
(iii) the observation of dynamical scaling of the form
\begin{eqnarray}\label{autocorr_scaling}
C(t,t_w) \sim x_c^{-\lambda},
\end{eqnarray}
where $x_c$ is the appropriate scaling variable and $\lambda$ is the corresponding aging or autocorrelation exponent.
For coarsening, the scaling variable is usually taken as $x_c=t/t_w$, the ratio of the times $t$ and $t_w$, or $x_c=\ell/\ell_w$, the ratio of the corresponding growing length scales at those times.
Fisher and Huse (FH) in their study of ordering spin glasses proposed a bound on $\lambda$ which only depends on the dimension $d$ as \cite{Fisher1988}
\begin{eqnarray}\label{FH-bound}
\frac{d}{2} \le \lambda \le d.
\end{eqnarray}
Later this bound was found to be obeyed in the ferromagnetic ordering as well \cite{Liu1991,Lorenz2007,Midya2014}.
An even stricter and more general bound was later proposed by Yeung \textit{et al.} \cite{yeung1996} that also includes the case of conserved
order-parameter dynamics.
\par
In the context of polymer collapse, although analogous to coarsening phenomena in general, this particular aspect of aging
has received only scant attention \cite{pitard2001,Stanley2002}. There, as in other soft-matter systems \cite{Cloitre2000,bursac2005,wang2006}, the results indicated the presence of subaging, i.e., evidence for scaling similar to Eq.\ \eqref{autocorr_scaling}
but as a function of $x_c=t /t_{w}^{\mu}$ with $\mu <1$. Afterwards, there were no attempts to quantify this scaling with respect to the
ratio of the growing length scale. In our approach, both with off-lattice and lattice models
we showed that simple aging scaling as in Eq.\ \eqref{autocorr_scaling} with respect to the ratio of the cluster sizes can be observed
\cite{Majumder2016PRE,majumder2016proceeding,majumder2017SM,christiansen2017JCP}. Thus, to quantify the aging scaling, one chooses
$x_c=C_s(t)/C_s(t_w)$ and casts Eq.\ \eqref{autocorr_scaling} in the form
\begin{eqnarray}\label{power-law_Cst}
C(t,t_w) \sim \left[\frac{C_s(t)}{C_s(t_w)}\right]^{-\lambda_c},
\end{eqnarray}
where $\lambda_c$ is the associated autocorrelation exponent which is related to the traditional exponent $\lambda$ via the relation $\lambda_c=\lambda/d$.
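The two-time autocorrelation function \eqref{auto_cor} is straightforward to evaluate
from stored trajectories. The following minimal sketch assumes an array \texttt{O} of shape
(runs, times, $N$) that holds a suitable local observable $O_i$ (for collapse, e.g., an
indicator of local monomer contacts; the concrete choice is left open here) for every
independent run:
\begin{verbatim}
import numpy as np

def two_time_autocorr(O, it, itw):
    # C(t,t_w) for time indices it > itw, averaged over
    # independent runs and over the monomer index i
    Ot, Otw = O[:, it, :], O[:, itw, :]
    return (Ot * Otw).mean() - Ot.mean() * Otw.mean()
\end{verbatim}
Plotting $C(t,t_w)$ against $C_s(t)/C_s(t_w)$ on a double-logarithmic scale,
the exponent $\lambda_c$ of Eq.\ \eqref{power-law_Cst} then follows from the asymptotic slope.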
\begin{figure*}[htb!]
\centering
\resizebox{0.73\textwidth}{!}{\includegraphics{snapshots3d.pdf}}
\caption{Time-evolution snapshots during collapse of a homopolymer showing pearl-necklace formation,
following a quench from an extended coil phase to a temperature, $T_q=1$ for OLM and $T_q=2.5$ for LM, in the globular phase. The chain lengths $N$ used are $724$ and $4096$ for OLM and LM, respectively.
Taken from Ref.\ \cite{majumder2018proceeding}.}
\label{figsnap}
\end{figure*}
\section{Recent Monte Carlo results in $d=3$}\label{recent_MC}
In this section we will review the very recent developments by us concerning the kinetics of homopolymer collapse from all above mentioned three perspectives. We will compare the results from an off-lattice model (OLM) and a lattice model (LM), focusing in this section on $d=3$ dimensions.
New results for the special case of $d=2$ will be presented in the next section to check the validity of the observations in general. Before moving on to a discussion of our findings, we first briefly describe the different models and methodologies used in our studies.
\subsection{Models and methods}\label{model}
For OLM, we consider a flexible bead-spring model where the connectivity between two successive
monomers or beads is maintained via the standard
finitely extensible non-linear elastic (FENE) potential
\begin{eqnarray}\label{FENE}
E_{\rm{FENE}}(r_{ii+1})=-\frac{K}{2}R^2\ln\left[1-\left(\frac{r_{ii+1}-r_0}{R}\right)^2\right].
\end{eqnarray}
We chose the force constant of the spring $K=40$, the mean bond length $r_0=0.7$ and the maximum allowed deviation from the mean position $R=0.3$ \cite{milchev2001}.
Monomers were considered to be spherical beads with diameter $\sigma =r_0/2^{1/6}$. The nonbonded interaction between the monomers
is given by
\begin{eqnarray}\label{potential_OLM}
E_{\rm {nb}}(r_{ij})=E_{\rm {LJ}}\left({\rm{min}}[r_{ij},r_c]\right)-E_{\rm {LJ}}(r_c),
\end{eqnarray}
where
\begin{eqnarray}\label{std_LJ}
E_{\rm {LJ}}(r)=4\epsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left (\frac{ \sigma}{r} \right )^{6} \right]
\end{eqnarray}
is the standard Lennard-Jones (LJ) potential. Here $\epsilon(=1)$ is the interaction strength and $r_c=2.5\sigma$ the cut-off radius.
\par
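For illustration, the OLM energies defined in Eqs.\ \eqref{FENE}--\eqref{std_LJ} can be coded
in a few lines. The following Python sketch (in reduced units, with the parameter values
quoted above) is meant only as an illustration of the model definition, not as the production
code used in our studies:
\begin{verbatim}
import numpy as np

K, r0, R = 40.0, 0.7, 0.3        # FENE parameters
sigma = r0 / 2.0**(1.0/6.0)      # bead diameter
eps, rc = 1.0, 2.5 * sigma       # LJ strength and cutoff

def e_fene(r):
    # bonded energy between successive monomers (FENE potential)
    return -0.5 * K * R**2 * np.log(1.0 - ((r - r0) / R)**2)

def e_lj(r):
    # standard Lennard-Jones potential
    return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

def e_nb(r):
    # nonbonded pair energy: LJ truncated at rc and
    # shifted such that e_nb(r >= rc) = 0
    return e_lj(np.minimum(r, rc)) - e_lj(rc)
\end{verbatim}
\par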
For LM, we consider a variant of the interactive self-avoiding walk on a simple-cubic lattice, where each lattice site can be
occupied by a single monomer. The Hamiltonian is given by
\begin{equation}\label{hamiltonian}
H=-\frac{1}{2} \sum_{i \ne j, j \pm 1} w(r_{ij}),~~ \textrm{where}~~
w(r_{ij})=\begin{cases} J & r_{ij} = 1 \\ 0 & \text{else}\end{cases}.
\end{equation}
Here $r_{ij}$ is the distance between two nonbonded monomers $i$ and $j$,
$w(r_{ij})$ is an interaction parameter that considers only nearest neighbors, and $J(=1)$ is the interaction strength.
We allowed a fluctuation in the bond length by considering diagonal bonds, i.e., the possible bond
lengths are $1$, $\sqrt{2}$ and $\sqrt{3}$. The model has been independently studied for equilibrium properties \cite{shaffer1994,dotera1996}.
It has certain similarities with the bond-fluctuation model \cite{carmesin1988}. For a comparison between them, please see Ref.\ \cite{subramanian2008}.
\par
The dynamics in the models can be introduced via Markov chain MC simulations \cite{janke2012monte,Landaubook}, however, with the restriction of allowing only local moves. For OLM the
local moves correspond to shifting of a randomly selected monomer to a new position randomly chosen within [$-\sigma/10 :\sigma/10$] of its current position.
For LM, too, the move set consists of just shifting a randomly chosen mon\-omer to another lattice site such that the bond connectivity constraint is maintained.
These moves are then accepted or rejected following the Metropolis algorithm with Boltzmann criterion \cite{janke2012monte,Landaubook}.
The time scale of the simulations is one MC sweep (MCS) which consists of $N$ (where $N$ is the number of monomers in the chain) such attempted moves.
\par
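A minimal, deliberately unoptimized Python sketch of one such sweep for the OLM reads as
follows; the helper \texttt{local\_energy}, assumed to return all bonded and nonbonded energy
terms involving a given monomer, is hypothetical and stands in for the actual bookkeeping
of the simulation:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def mc_sweep(pos, local_energy, T, delta):
    # one MC sweep = N attempted local displacement moves;
    # for the OLM delta = sigma/10, and k_B = 1
    N = len(pos)
    for _ in range(N):
        i = rng.integers(N)
        old = pos[i].copy()
        e_old = local_energy(pos, i)   # hypothetical helper
        pos[i] = old + rng.uniform(-delta, delta, size=3)
        dE = local_energy(pos, i) - e_old
        # Metropolis: accept with probability min(1, exp(-dE/T))
        if dE > 0.0 and rng.random() >= np.exp(-dE / T):
            pos[i] = old               # move rejected
    return pos
\end{verbatim}
\par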
The collapse transition temperature is $T_{\theta}(N \rightarrow \infty) \approx 2.65 \epsilon/k_B$
and $\approx 4.0J/k_B$ for OLM and LM, respectively \cite{majumder2017SM,christiansen2017JCP}.
In all the subsequent discussion, the unit of temperature will always be $\epsilon/k_B$ or $J/k_B$ with
the Boltzmann constant $k_B$ being set to unity. Following the standard protocol of nonequilibrium studies
we first prepared initial conformations of the polymers at high temperatures $T_h \approx 1.5T_{\theta}$
that mimic an extended coil phase. Then these high-temperature conformations were quenched to a temperature $T_q < T_{\theta}$. Since LM is computationally less
expensive than OLM, the chain length of polymer used for LM is longer than what is used for OLM. Note that except for the evolution snapshots, for both models, all the results
presented were obtained after averaging over more than $300$ independent runs. For each such run, the starting conformation is an extended coil; these conformations were obtained
independently of each other by generating self-avoiding walks using different random seeds and then equilibrating them at high temperature.
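A schematic way of producing such independent starting conformations is to grow
self-avoiding walks by blind sampling with restarts, as in the following sketch;
this naive method suffers from severe attrition for long chains and merely
illustrates the idea:
\begin{verbatim}
import numpy as np

STEPS = np.array([(1,0,0), (-1,0,0), (0,1,0),
                  (0,-1,0), (0,0,1), (0,0,-1)])

def random_saw(N, seed):
    # grow an N-monomer self-avoiding walk on the simple-cubic
    # lattice; restart whenever the walk intersects itself
    rng = np.random.default_rng(seed)
    while True:
        walk = [(0, 0, 0)]
        occupied = {walk[0]}
        while len(walk) < N:
            nxt = tuple(np.array(walk[-1]) + STEPS[rng.integers(6)])
            if nxt in occupied:
                break                  # self-intersection: restart
            walk.append(nxt)
            occupied.add(nxt)
        if len(walk) == N:
            return np.array(walk)
\end{verbatim}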
\subsection{Phenomenological picture of the collapse}
As mentioned before, even though the sausage picture of de Gennes \cite{deGennes1985} was the first to describe the
phenomenology of the collapse dynamics, all simulation studies provided evidence in support of the
pearl-necklace picture of HG \cite{Halperin2000}. In our simulations, too, both with OLM and LM, we observed intermediates that support the
pearl-necklace phenomenology. Typical snapshots which we obtained from our simulations are shown
in Fig.\ \ref{figsnap}. The typical sequence of events happening during the collapse are captured by these snapshots.
At initial time the polymer is in an extended state with fluctuation of the local monomer density along the chain.
Soon there appear a number of local clusters of monomers which then start to grow by withdrawing monomers
from the rest of the chain. This gives rise to the formation of the so-called pearl-necklace.
Once the tension in the chain is at maximum, two successive clusters along the chain coalesce with each other to grow
in size. This process goes on until a single cluster or globule is formed. The final stage
of the collapse is the rearrangement of the monomers within the single cluster to form
a compact globule. This last stage, however, is difficult to disentangle from the previous stages.
\begin{figure*}[t!]
\centering
\resizebox{0.72\textwidth}{!}{\includegraphics{schematic_fig.pdf}}
\caption{The upper panel shows evolution snapshots for the droplet formation in a particle system using the Ising lattice gas in two spatial dimensions. The lower panel
shows the evolution of a homopolymer obtained from simulation of the OLM. The figure illustrates the
similarities between the collapse kinetics with the usual coarsening of a particle system.}
\label{figschematic}
\end{figure*}
\par
The first stages of formation and growth of clusters during the collapse of a polymer as demonstrated in Fig.\ \ref{figsnap} are clearly
reminiscent of usual coarsening phenomena in particle or spin systems. As already mentioned, traditionally for studying coarsening
one starts with an initial state where the distribution of particles or spins is
homogeneous, e.g., homogeneous fluid or paramagnet above the critical temperature.
Similarly, to study the collapse kinetics one starts with a polymer in an extended coil phase, which is analogous to the
homogeneous phase in particle or spin systems. Usual coarsening sets
in when the initial homogeneous configuration is suddenly brought down to a temperature below the critical temperature where the
equilibrium state is an ordered state, e.g., condensed droplet in fluid background or ferromagnet. Similarly, for a polymer, the collapse
occurs when the temperature is suddenly brought down below the corresponding collapse transition temperature.
There the equilibrium collapsed phase is analogous to the droplet phase in fluids.
\par
Now coarsening refers to the process via which the initial homogeneous system evolves while approaching the ordered phase.
This happens via the formation and subsequent growth of domains of like particles or spins. This is illustrated in the upper panel of
Fig.\ \ref{figschematic} where we show the time evolution of the droplet formation in a fluid starting from a homogeneous phase via MC
simulations of the Ising lattice gas. At early times many small domains or droplets are formed which then coarsen to form bigger droplets, eventually
giving rise to a single domain or droplet. A similar sequence of events is also observed during collapse of a polymer as shown once again
in the lower panel of Fig.\ \ref{figschematic} which explains the phenomenological analogy of collapse with usual coarsening phenomena.
Coarsening from a theoretical point of view is understood as a scaling phenomenon which means that certain morphology-characterizing functions
of the system at different times can be scaled onto each other using corresponding scaling functions \cite{Puribook,bray2002}.
This scaling in turn also implies that there must be scaling of the time-dependent length scale, too, which in most of the cases shows
a power-law scaling like in Eq.\ \eqref{length-scaling}. Based on this understanding in general and the above mentioned analogy
we will discuss in the remaining part of this section how to investigate the presence of nonequilibrium scaling laws in
the dynamics of collapse of a homopolymer.
\begin{figure}[t!]
\centering
\resizebox{0.95\columnwidth}{!}{\includegraphics{rg_vs_t_comp.pdf}}
\caption{(a) Time dependence of the squared radius of gyration, $R_g^2(t)$, for both OLM ($T_q=1.0$) and LM ($T_q=2.5$). The solid lines are fits to a
stretched exponential form described by Eq.\ \eqref{rg_decay}. (b) Scaling of the collapse time, $\tau_{50}$, with respect to $N$.
The solid lines are fits to the form \eqref{tau_N}. The dashed line is a fit of the OLM data for $N\ge128$
to the form \eqref{tau_N} by fixing $z=1$. Adapted from Ref.\ \cite{majumder2018proceeding}.}
\label{figsrgcomp}
\end{figure}
\subsection{Relaxation behavior of the collapse}\label{relaxation}
In all earlier studies, the straightforward way to quantify the kinetics was to monitor the
time evolution of the overall size of the polymer, i.e., the squared radius of gyration given as
\begin{eqnarray}\label{r_gyr}
R_{g}^2=\frac{1}{N}\sum\limits_{i=1}^{N}(\boldsymbol{r}_i-\boldsymbol{r}_{\rm{cm}})^2
\end{eqnarray}
where $\boldsymbol{r}_{\rm{cm}}$ is the center of mass of the polymer. In the coiled state (above $T_{\theta}$), $R_{g}^2 \sim N^{2\nu_{F}}$ with $\nu_{F}=3/5$, in the Flory mean-field
approximation, whereas in the globular state (below $T_{\theta}$), $R_{g}^{2} \sim N^{2/d}$ \cite{Florybook}. The time-dependent decay of $R_{g}^2$ following the quench is shown in Fig.\ \ref{figsrgcomp}(a) for both OLM and LM.
Although a power-law decay of $R_{g}^2$ was suggested in some of the earlier studies, in most cases, and certainly in the present ones, such a form does not describe the data. Rather, the decay can be well
described by the form
\begin{eqnarray}\label{rg_decay}
R_{g}^2(t)=b_0+b_1\exp\left[-\left(\frac{t}{\tau_f} \right)^{\beta}\right],
\end{eqnarray}
where $b_0$ corresponds to the saturated value of $R_{g}^2(t)$ in the collapsed state,
$b_1$ is associated with the value at $t=0$, and $\beta$ and $\tau_f$ are fitting parameters.
For details about fitting the data with the form \eqref{rg_decay}, see Refs.\ \cite{majumder2017SM} and \cite{christiansen2017JCP} for OLM and LM, respectively.
An illustration of how appropriately this form works is shown in Fig.\ \ref{figsrgcomp}(a). There the respective solid lines are fits to the form \eqref{rg_decay}.
While the above form does not provide any detail about the specificity of the collapse process,
it gives a measure of the collapse time $\tau_c$ via $\tau_f$.
However, to avoid an unreliable extraction of the collapse time from such a fit, one can alternatively use the more direct estimate
$\tau_{50}$, which corresponds to the time when $R_g^2(t)$ has
decayed by half of its total drop, i.e., by $\left[R_g^2(0)-R_g^2(\infty)\right]/2$. Data for both models as shown in Fig.\ \ref{figsrgcomp}(b) reflect a power-law scaling, to be quantified with the form
\begin{eqnarray}\label{tau_N}
\tau_c=B N^z+\tau_0,
\end{eqnarray}
where $B$ is a nontrivial constant that depends on the quench temperature $T_q$, $z$ is the
corresponding dynamic critical exponent, and the offset $\tau_0$ comes from finite-size corrections.
For LM a fitting (shown by the corresponding solid line) with the form \eqref{tau_N} provides $z=1.61(5)$
and is almost insensitive to the chosen range. However, for
OLM the fitting is sensitive to the chosen range. While using the whole range of data provides $z=1.79(6)$ (shown by the
corresponding solid line), fitting only the data for $N \ge128$ yields $z =1.20(9)$. In this regard,
a linear fit [fixing $z=1$ in \eqref{tau_N}], shown by the dashed line, also works quite well. For a comparison of the values of $z$ obtained
by us with the ones obtained by others, see Table\ \ref{tab_for_tauc}.
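A minimal sketch of this direct route, i.e., reading off $\tau_{50}$ from a monotonically decaying $R_g^2(t)$ series and then fitting the form \eqref{tau_N}, could read as follows; the chain lengths and $\tau_{50}$ values below are hypothetical placeholders:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def tau50(t, rg2):
    # Time at which a monotonically decaying R_g^2(t) has completed
    # half of its total decay
    half = rg2[0] - 0.5 * (rg2[0] - rg2[-1])
    return np.interp(half, rg2[::-1], t[::-1])

def collapse_time(N, B, z, tau0):
    # Scaling form of Eq. (tau_N)
    return B * N ** z + tau0

# Hypothetical tau_50 estimates for a set of chain lengths
N_vals = np.array([64.0, 128.0, 256.0, 512.0, 724.0])
tau_vals = 2.0 * N_vals ** 1.6 + 50.0

popt, _ = curve_fit(collapse_time, N_vals, tau_vals, p0=[1.0, 1.5, 0.0])
B, z, tau0 = popt
\end{verbatim}
The sensitivity to the fit range mentioned above can be probed by simply restricting \texttt{N\_vals} and \texttt{tau\_vals} to the larger chain lengths.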
\subsection{Coarsening during collapse}\label{coarsening}
Having established the phenomenological analogy between the collapse of a polymer and the usual coarsening of particle and spin systems, in this subsection we present the scaling of the cluster growth
during the collapse in the light of the well-established protocols for coarsening in particle or spin systems.
\subsubsection{Scaling of morphology-characterizing functions}
Coarsening in general is a scaling phenomenon, where certain structural quantities that quantify the morphology of the system, e.g.,
two-point equal-time correlation functions and structure factors show scaling behavior \cite{Puribook,bray2002}.
This means that the structure factors at two different times can be collapsed onto the same master curve by using the relevant length scales, i.e., cluster size or
domain size at those times. This fact is used to extract the relevant time-dependent length scale that governs the kinetics of coarsening.
For example, one uses the first moment of the structure factor at a particular time to obtain a measure of the length scale or the average domain size during
coarsening. However, to understand the kinetics of cluster growth during the collapse of a polymer, traditionally the average number of monomers present
in a cluster is used as the relevant length scale $C_s(t)$. For studying the OLM we used this definition to calculate $C_s(t)$; details can be found in
Ref.\ \cite{majumder2017SM} and will also be discussed later for the $d=2$ case. The validity of this definition as the relevant length scale can be verified by
looking at the expected scaling of the cluster-size distribution $P(C_d,t)$, i.e., the probability to find
a cluster of size $C_d$ among all the clusters at time $t$. Using this distribution we calculate the average cluster size
as $C_s(t)= \langle C_d \rangle $. The corresponding scaling behavior is given as
\begin{eqnarray}\label{cdist_scaling}
P(C_d,t) \equiv C_s(t)^{-1} \tilde{P} [C_d/C_s(t)],
\end{eqnarray}
where $\tilde{P}$ is the scaling or master function. This means that when $C_s(t)P(C_d,t)$ at different times are plotted against $C_d/C_s(t)$ they should fall
on top of each other. This verification is presented in Fig.\ \ref{cdist} where in the main frame we show plots of the (unscaled) distributions $P(C_d,t)$ at different times,
and in the inset the corresponding scaling plot using the form \eqref{cdist_scaling}. Notably, here the tail of the distribution shows an exponential decay
as observed in coarsening of particle \cite{majumder2011EPL} and spin systems \cite{Majumder2011,majumder2018_potts}.
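The scaling test of Eq.\ \eqref{cdist_scaling} can be made explicit with a few lines of Python; the distributions used here are hypothetical exponential stand-ins for the measured $P(C_d,t)$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical exponential-tailed distributions P(C_d, t) at three times
distributions = {}
for t, Cs_true in [(1e4, 20.0), (5e4, 60.0), (1e5, 120.0)]:
    Cd = np.linspace(1.0, 10.0 * Cs_true, 200)
    distributions[t] = (Cd, np.exp(-Cd / Cs_true) / Cs_true)

for t, (Cd, P) in distributions.items():
    Cs = np.trapz(Cd * P, Cd) / np.trapz(P, Cd)    # C_s(t) = <C_d>
    plt.plot(Cd / Cs, Cs * P, label="t = %g" % t)  # Eq. (cdist_scaling)

plt.yscale("log")          # an exponential tail appears as a straight line
plt.xlabel("C_d / C_s(t)")
plt.ylabel("C_s(t) P(C_d, t)")
plt.legend()
plt.show()
\end{verbatim}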
\begin{figure}[t!]
\centering
\resizebox{0.92\columnwidth}{!}{\includegraphics{clsize_dist.pdf}}
\caption{ Normalized distribution of the cluster sizes at three different times during the
coarsening stage of the collapse at $T_q=1$ for a polymer with $N=724$ modeled by OLM. The inset demonstrates
the scaling behavior of the collapse phenomenon via
the semi-log plot of the corresponding scaling of the distribution functions. The solid line shows
consistency of the data with an exponential tail. Taken from Ref.\ \cite{majumder2017SM}.}
\label{cdist}
\end{figure}
\par
On the other hand, for a lattice model, one can use the advantage of having the monomers placed on lattice points.
There a two-point equal-time correlation function can be defined as
\begin{eqnarray}\label{cor_lattice}
C(r,t)=\langle \rho(0,t)\rho(r,t) \rangle
\end{eqnarray}
with
\begin{equation}\label{rho_lattice}
\rho_i(r,t)=\frac{1}{m_r}\sum_{j, r_{ij}=r} \theta(\boldsymbol{r}_j,t),
\end{equation}
where the characteristic function $\theta$ is unity if there is a monomer at position
$\boldsymbol{r}_j$ and zero otherwise, and $m_r$ denotes the number of possible lattice points at distance $r$ from an arbitrary point of the lattice.
Plots of such correlation functions at different times during the collapse of a polymer using LM are shown in the main frame of Fig.\ \ref{corr_LM}.
Slower decay of $C(r,t)$ as time increases suggests the presence of a growing length scale. Thus following the trend in usual coarsening studies one can extract
an average length scale $\ell(t)$ that characterizes the clustering during the collapse, via the criterion
\begin{equation}\label{lenght_from_corr}
C\left(r=\ell(t),t \right)=h,
\end{equation}
where $h$ denotes an arbitrary but reasonably chosen value from the decay of $C(r,t)$. Calculating $\ell(t)$ in this manner automatically
suggests looking for dynamical scaling of the form
\begin{equation}
C(r,t) \equiv \tilde{C}\left(r/\ell(t)\right),
\label{EqEquiv}
\end{equation}
where $\tilde{C}$ is the scaling function.
Such a scaling behavior is nicely demonstrated in the inset of Fig.\ \ref{corr_LM}, where we show the corresponding data presented in the
main frame as function of $r/\ell(t)$. Note that here $\ell(t)$ gives the linear size of the ordering clusters. Thus in order to compare $\ell(t)$ of LM
with the cluster size $C_s(t)$ obtained for OLM one must use the relation $\ell(t)^d\equiv C_s(t)$. For a check of the validity of
this relation, see Ref.\ \cite{christiansen2017JCP}.
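For a lattice configuration, the computation of \eqref{cor_lattice} and the extraction of $\ell(t)$ via \eqref{lenght_from_corr} can be sketched as below. This is a spherically averaged, FFT-based estimate assuming periodic boundaries; the random occupation lattice at the end is only a placeholder for an actual LM configuration:
\begin{verbatim}
import numpy as np

def corr_function(occ):
    # occ: 0/1 occupation lattice [theta of Eq. (rho_lattice)]
    F = np.fft.fftn(occ)
    auto = np.fft.ifftn(F * np.conj(F)).real / occ.size
    idx = np.indices(occ.shape)
    dist = np.sqrt(sum(np.minimum(i, s - i) ** 2
                       for i, s in zip(idx, occ.shape)))
    rbins = np.arange(int(dist.max()) + 1)
    C = np.array([auto[(dist >= r) & (dist < r + 1)].mean()
                  for r in rbins])
    return rbins, C / C[0]            # normalized such that C(0) = 1

def length_scale(r, C, h=0.1):
    # l(t) from the criterion C(r = l, t) = h; assumes monotonic decay
    return np.interp(h, C[::-1], r[::-1])

occ = (np.random.default_rng(1).random((64, 64, 64)) < 0.05).astype(float)
r, C = corr_function(occ)
ell = length_scale(r, C)
\end{verbatim}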
\begin{figure}[t!]
\centering
\resizebox{0.95\columnwidth}{!}{\includegraphics{correlation_LM.pdf}}
\caption{ Morphology characterizing two-point equal-time correlation function $C(r,t)$ at different times, showing the presence of a growing length scale
during collapse of a polymer obtained via simulation of LM with $T_q=2.5$ and $N=8192$. The inset shows the presence of scaling in the process via the plot of the same data as a function of $r/\ell(t)$
where $\ell(t)$ is the characteristic length scale calculated using \eqref{lenght_from_corr} with $h=0.1$. Adapted from Ref.\ \cite{christiansen2017JCP}.}
\label{corr_LM}
\end{figure}
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{cl-size.pdf}}
\caption{ (a) Plots of the average cluster size $C_s(t)/N$, as function of time for the systems
presented in Fig.\ \ref{figsnap}. To make both the data visible on the same plot, we divide the time axis by
a factor $m$ to obtain $t_p=t/m$, where $m=1\times10^{6}$ and $3.5\times 10^{6}$ for OLM and LM, respectively.
The solid lines are fits to the form \eqref{cl_growth} with $\alpha_c=0.98$ for OLM and $\alpha_c=0.62$ for LM.
The plots in (b) and (c) demonstrate the scaling exercise for OLM with $\alpha_c=1.0$ and LM with
$\alpha_c=0.62$, respectively, showing that data for $C_s(t)$ at different quench temperatures $T_q$
can be collapsed onto a master curve using a nonuniversal metric factor in the scaling variable.
The solid lines represent the corresponding $Y(y_p) \sim y_p^{-\alpha_c}$ behavior. Taken from Ref.\ \cite{majumder2018proceeding}.}
\label{cluster_size}
\end{figure}
\subsubsection{Cluster growth}
Once it is established that the coarsening stage of polymer collapse is indeed a scaling phenomenon, the next step is to check
the associated growth laws. In Fig.\ \ref{cluster_size}(a), we show the time dependence of $C_s(t)$ for OLM and LM.
To make the data from both models visible on the same scale, the $y$-axis is scaled by the corresponding chain length $N$ of the polymer.
Note that saturation of the data for LM at a value less than unity is due to the fact that there we have calculated the average cluster size $C_s(t)$
from the decay of the correlation function $C(r,t)$ as described in the previous subsection. This gives a measure proportional to the average
number of monomers present in the clusters, and thus the data saturate at a value less than unity.
\par
In coarsening kinetics
of binary mixtures such a time dependence of the relevant length scale can be described correctly when one
considers an offset in the scaling ansatz \cite{Majumder2011,Majumder2010,das2012,majumder2013}. This was later shown to
be appropriate also for the cluster growth during the collapse of
a polymer \cite{MajumderEPL,majumder2017SM}. Following this, one writes down the scaling ansatz as
\begin{eqnarray}\label{cl_growth}
C_s(t)=C_0+At^{\alpha_{c}},
\end{eqnarray}
where $C_0$ corresponds to the cluster size after crossing over from the initial cluster formation stage, and
$A$ is a temperature-dependent amplitude. The solid lines in Fig.\ \ref{cluster_size}(a) are fits to the form \eqref{cl_growth}
yielding $\alpha_c =0.98(4)$ and $0.62(5)$ for OLM and LM, respectively.
\par
One can verify the robustness of the growth by studying the dependence of cluster growth on the quench temperature $T_q$.
For this one uses data at different $T_q$ and can perform a scaling analysis based on
nonequilibrium finite-size scaling (FSS) arguments \cite{majumder2017SM}. The nonequilibrium FSS analysis was constructed based
on FSS analyses in the context of equilibrium critical phenomena \cite{Fisherbook,Privmanbook}. An account of the FSS formulation in the present context
can be found in Ref.\ \cite{majumder2017SM}. In brief, one introduces in the growth ansatz \eqref{cl_growth} a
scaling function $Y(y_p)$ as
\begin{eqnarray}\label{FS_ansatz}
C_s(t)-C_0=(C_{\max}-C_0)Y(y_p),
\end{eqnarray}
which implies
\begin{eqnarray}\label{FS_func_cl}
Y(y_p)=\frac{(C_s(t)-C_0)}{(C_{\max}-C_0)},
\end{eqnarray}
where $C_{\max} \sim N$ is the maximum cluster size a finite system can attain. In order to account for the
temperature-dependent amplitude $A(T_q)$, one uses the scaling variable
\begin{eqnarray}\label{FS_variable_T}
y_p= f_s\frac{(N-C_0)^{1/\alpha_{c}}}{(t-t_0)},
\end{eqnarray}
where
\begin{eqnarray}\label{FS_variable_T2}
f_s=\left[\frac{A(T_{q,0})}{A(T_q)}\right]^{1/\alpha_c}.
\end{eqnarray}
The metric factor $f_s$ is introduced for adjusting the nonuniversal amplitudes $A(T_q)$ at different $T_q$. Here, in addition to $C_0$ one also
uses the crossover time $t_0$ from the initial cluster formation stage.
A discussion of the estimation of $C_0$ and $t_0$ can be found in Refs.\ \cite{majumder2017SM,christiansen2017JCP}. While performing the exercise
we tune the parameters $\alpha_c$ and $f_s$ to obtain a data collapse along with the $Y(y_p) \sim y_p^{-\alpha_c}$ behavior
in the finite-size unaffected region. In Figs.\ \ref{cluster_size}(b) and (c), we demonstrate such scaling exercises with $\alpha_c=1.0$ and $0.62$ for OLM and LM, respectively. For $f_s$, we use the reference temperature $T_{q,0}=1.0$ and $2.0$ for OLM and LM, respectively. The collapse of data for different $T_q$ and consistency with the corresponding $y_p^{-\alpha_c}$ behavior in both plots
suggest that the growth is indeed quite robust and can be described by a single finite-size scaling function with nonuniversal metric factor $f_s$
in the scaling variable. However, $\alpha_c$ for OLM is larger than for LM, a fact in concurrence with the values
of $z$ estimated previously, thus to some extent providing support for the heuristic relation $z \sim 1/\alpha_c$. The use of a nonuniversal metric factor in order
to obtain a universal FSS function was first introduced in the context of equilibrium critical phenomena using different lattice types \cite{Privman1984,hu1995}.
After adapting this concept to nonequilibrium FSS of polymer kinetics in Refs.\ \cite{majumder2017SM,christiansen2017JCP} as explained above, it was recently also transferred
to spin systems where its usefulness has been demonstrated in a coarsening study of the Potts model with conserved dynamics \cite{majumder2018_potts}.
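Operationally, the scaling exercise amounts to replotting each growth curve in the variables of Eqs.\ \eqref{FS_func_cl}--\eqref{FS_variable_T2} and tuning $\alpha_c$ and $f_s$. A minimal sketch is given below; the data layout (one dictionary per quench temperature) is an assumption for illustration:
\begin{verbatim}
import numpy as np

def fss_collapse(runs, alpha_c, A_ref):
    # Each run is a dict with the (assumed) keys t, Cs, C0, t0, Cmax,
    # N and A; A_ref = A(T_{q,0}) is the reference-temperature amplitude.
    curves = []
    for run in runs:
        Y = (run["Cs"] - run["C0"]) / (run["Cmax"] - run["C0"])
        fs = (A_ref / run["A"]) ** (1.0 / alpha_c)   # metric factor
        yp = (fs * (run["N"] - run["C0"]) ** (1.0 / alpha_c)
              / (run["t"] - run["t0"]))
        curves.append((yp, Y))
    # With the correct alpha_c the curves collapse and follow
    # Y(y_p) ~ y_p**(-alpha_c) in the finite-size unaffected region.
    return curves
\end{verbatim}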
\subsection{Aging and related scaling}\label{aging}
Apart from the scaling of the growing length scale or the cluster size that deals only with
equal-time quantities, coarsening processes are associated with the aging phenomenon as well. Thus, along the same lines, in order to
check aging during collapse of a polymer one can calculate the two-time correlation function or the autocorrelation function described in
Eq.\ \eqref{auto_cor}. However, unlike for spin systems here the choice of the observable $O_i$ is not trivial.
Nevertheless, for OLM we identified the observable $O_i$ as a variable
based on the cluster identification method. We assign $O_i = \pm1$ depending on whether
the monomer is inside ($+1$) or outside ($-1$) a cluster. It is apparent that our cluster
identification method is based on the local density around a monomer along the chain.
Thus $C(t,t_w)$ calculated using this framework gives an analogue of the usual
density-density autocorrelation functions in particle systems. On the other hand for LM,
we assign $O_i=\pm 1$ by checking the radius $r$ at which the local density, given by $\rho_i(r,t)$
[see Eqs.\ \eqref{cor_lattice} and \eqref{rho_lattice}], first falls below a threshold of $0.1$.
If this radius is smaller than $\sqrt{3}$ we assign $O_i=1$, marking a high local density; otherwise we
choose $O_i=-1$ to mark a low local density. For details see Refs.\ \cite{majumder2017SM} and \cite{christiansen2017JCP} for OLM and LM, respectively.
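To make the construction concrete, a minimal sketch of the labeling and of the two-time correlation is given below. The neighbour-counting criterion shown here is the one quoted later for the $d=2$ analysis (the cutoff $r_c$ and the threshold $n_{\min}$ are model-dependent choices), and the precise normalization of the correlator follows Eq.\ \eqref{auto_cor} of the main text:
\begin{verbatim}
import numpy as np

def labels_olm(positions, rc=2.5, n_min=12):
    # O_i = +1 if monomer i lies inside a cluster (at least n_min
    # monomers within rc, including itself), and O_i = -1 otherwise
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :],
                       axis=-1)
    n = (d < rc).sum(axis=1)
    return np.where(n >= n_min, 1.0, -1.0)

def autocorrelation(O_t, O_tw):
    # Two-time correlation of the labels at times t and t_w
    return np.mean(O_t * O_tw) - np.mean(O_t) * np.mean(O_tw)
\end{verbatim}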
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{corr_lw_OLM.pdf}}
\vskip 0.1cm
\resizebox{0.9\columnwidth}{!}{\includegraphics{corr_lw_LM.pdf}}
\caption{ Demonstration of aging phenomenon during collapse of a polymer for (a) OLM and (b) LM. The main frame shows the
plot of the autocorrelation functions calculated using \eqref{auto_cor} at different waiting times $t_w$, as mentioned there.
The insets show the corresponding scaling plots with respect to the scaling variable $x_c=C_s(t)/C_s(t_w)$, in accordance with \eqref{power-law_Cst}.
The solid lines depict the consistency of the data with a power law having an exponent $\lambda_c=1.25$.}
\label{corr_lw}
\end{figure}
\par
In the main frames of Figs.\ \ref{corr_lw}(a) and (b) we show plots of the autocorrelation function $C(t,t_w)$ against the translated
time $t-t_w$ for (a) OLM and (b) LM. Data from both the cases clearly show breaking of time-translation invariance, one of the necessary
conditions for aging. It is also evident that as $t_w$ increases, the curves decay more slowly, an indication of slow relaxation behavior fulfilling the
second necessary condition for aging. For the check of the final condition for aging, i.e., dynamical scaling, in principle one could study
the scaling with respect to the scaled time $t/t_w$. Although such an exercise provides a reasonable collapse of data
for OLM, data for LM do not show scaling with respect to $t/t_w$. In this regard, one could look for special aging
behavior that can be achieved by considering \cite{henkelbook}
\begin{equation}\label{superaging}
C(t,t_w) \equiv G\left(\frac{h(t)}{h(t_w)}\right),
\end{equation}
with the scaling variable
\begin{equation}
h(t)=\exp\left(\frac{t^{1-\mu}-1}{1-\mu}\right).
\end{equation}
Here, $G$ is the scaling function and $\mu$ is a nontrivial exponent.
Special aging with $0 <\mu < 1$ is referred to as subaging and has been observed mostly in soft-matter systems \cite{Cloitre2000,bursac2005,wang2006}, in spin glasses \cite{hilhorst1981,herisson2004,Parker2006}, and recently
in long-range interacting systems \cite{christiansen2019non}. The $\mu>1$ case is referred to as superaging and was claimed to be observed in site-diluted Ising ferromagnets.
However, Kurchan's lemma \cite{Kurchan2002} rules out superaging, so such observations are only apparent \cite{paul2007}. This was further consolidated via numerical
evidence in Ref.\ \cite{Park2010}. There it has been
argued that the true scaling is observed in terms of the ratio of growing length scales at the corresponding times, i.e., $\ell(t)/\ell(t_w)$.
In the case of polymer collapse with LM, too, one apparently observes special scaling of the form \eqref{superaging} with $\mu < 1$, i.e., subaging in this case. However,
following the argument of Park and Pleimling \cite{Park2010}, one also obtains here simple scaling behavior with respect to the scaling variable $x_c=C_s(t)/C_s(t_w)$, thus ruling out the presence
of subaging. Such scaling plots of the autocorrelation data both for OLM and LM are shown in the insets of Fig.\ \ref{corr_lw}. In both
cases the data seem to follow the power-law scaling with a decay exponent $\lambda_c \approx 1.25$.
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{rg_vs_Cs_OLM.pdf}}
\vskip 0.1cm
\resizebox{0.9\columnwidth}{!}{\includegraphics{rg_vs_Cs_LM.pdf}}
\caption{ Geometrical distance between monomers $i$ and $j$ which are at a distance $|i-j|$
along the contour of the chain for a polymer using (a) OLM and (b) LM, at different times mentioned. The respective chain lengths are $N=724$ and $2048$ and the
quench temperatures are $T_q=1.0$ and $2.5$. The solid line shows the expected behavior for an extended coil and the dashed line
shows the behavior in the collapsed phase. The plot in (a) is taken from Ref.\ \cite{majumder2017SM}.}
\label{rg_vs_Cs}
\end{figure}
\par
Since the calculation of $C(t,t_w)$ is based on the cluster identification criterion,
i.e., on the local monomer densities around each monomer along the polymer chain, it provides
an analogue of the usual density-density autocorrelation function used in glassy systems.
Keeping in mind the corresponding argument for the bounds on the respective aging exponent
for spin-glass and ferromagnetic ordering, one can thus assume \cite{Majumder2016PRE} $C(t,t_w) \sim \langle \rho(t) \rho(t_w) \rangle$
where $\rho$ is the average local density of monomers. Now let us consider a set of $C_s$ monomers
at $t~(\gg t_w)$ and assume that at $t_w$ the polymer is more or less in an extended coil
state where the squared radius of gyration scales as $R_{g}^2 \sim N^{2\nu_F}$. Using $C_s \equiv N$
in this case one can write
\begin{eqnarray}\label{den_tw}
\rho(t_w) \sim C_s/{R_g}^{d} \sim C_s^{-(\nu_F d-1)}.
\end{eqnarray}
The above fact can be verified from Figs.\ \ref{rg_vs_Cs}(a) and (b) for OLM and LM, respectively, where we plot the average geometrical (Euclidean)
distance $R_e$ ($\sim R_g$) between the monomers $i$ and $j$ placed at a distance $|i-j|$ along the contour
of the chain at different times during the collapse.
For both cases, the data at early times show that the behavior is consistent with an extended coil governed by the Flory exponent $\nu_F=3/5$. This consolidates the foundation of the relation \eqref{den_tw}
provided $t_w$ is at early times.
\begin{figure}[t!]
\centering
\resizebox{0.95\columnwidth}{!}{\includegraphics{corr_diff_T_OLM.pdf}}
\vskip 0.1cm
\resizebox{0.95\columnwidth}{!}{\includegraphics{cor_lw_diffT_LM.pdf}}
\vskip 0.1cm
\resizebox{0.95\columnwidth}{!}{\includegraphics{corr_lw_comp.pdf}}
\caption{ Plots demonstrating that aging scaling of the autocorrelation function $C(t,t_w)$ at different $T_q$ for (a) OLM and (b) LM can be described
by a single master curve when plotted as a function of $x_c=C_s(t)/C_s(t_w)$. The solid lines there again correspond to \eqref{power-law_Cst} with $\lambda_c=1.25$.
For OLM, the used data are at $t_w=5\times 10^3$, $10^4$ and $3\times 10^4$, respectively, for $T_q=0.6$, $1.0$ and $1.5$.
For LM, data for all temperatures are at $t_w \approx 10^3$. Note that here we have simply multiplied the $y$-axis by a factor $f$ to make the
data fall onto the same master curve. (c) Illustration of the universal nature of aging scaling in the two models. Here the used data
are at $t_w=10^4$ and $10^3$ for OLM and LM, respectively. Adapted from Refs.\ \cite{majumder2017SM,christiansen2017JCP,majumder2018proceeding}.}
\label{corr_lw_comp}
\end{figure}
\par
Now at the observation time $t$ there are two possibilities.
Firstly, if $t$ is late enough, then we expect that all the monomers will be inside a
cluster which gives $R_g\sim C_s^{1/d}$ so that $\rho(t)=1$. Thus considering the maximum
overlap between $\rho(t)$ and $\rho(t_w)$ we get
\begin{eqnarray}\label{lower_bound}
C(t,t_w) \sim C_s^{-(\nu_F d-1)}.
\end{eqnarray}
This gives the lower bound. Secondly, if we assume that the polymer is still in an extended coil
state even at time $t$, then $\rho(t)=\rho(t_w)$ holds and we obtain
\begin{eqnarray}\label{upper_bound}
C(t,t_w) \sim C_s^{-2(\nu_F d-1)},
\end{eqnarray}
providing the upper bound for the aging exponent $\lambda_c$. Thus by combining
\eqref{lower_bound} and \eqref{upper_bound} we arrive at the bounds \cite{Majumder2016PRE}
\begin{eqnarray}\label{poly-bound}
(\nu_F d-1) \le \lambda_c \le 2(\nu_F d-1).
\end{eqnarray}
Putting $\nu_F=3/5$ in \eqref{poly-bound} one would get $4/5 \le \lambda_c \le 8/5 $.
Further, inserting the more precise numerical estimate in $d=3$ as \cite{clisby2010,Clisby2016}
$\nu_F=0.587\,597$, we get
\begin{eqnarray}\label{The-bound}
0.762\,791\le \lambda_c \le 1.525\,582.
\end{eqnarray}
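The arithmetic behind these numbers is elementary and can be checked directly:
\begin{verbatim}
nu_F = 0.587597        # precise Flory exponent in d = 3
lower = 3 * nu_F - 1   # (nu_F d - 1)
upper = 2 * lower      # 2 (nu_F d - 1)
print(lower, upper)    # -> 0.762791 1.525582
\end{verbatim}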
The validity of this bound can also be readily verified from the consistency
of our data in the insets of Fig.\ \ref{corr_lw} with the solid lines having a power-law decay with
exponent $1.25$. We make the choice of $t_w$ in all the plots so that the assumption
that at time $t_w$ the polymer is in an extended coil state is valid. This choice can also be
appreciated from the plots in Figs.\ \ref{rg_vs_Cs}(a) and (b) for OLM and LM, respectively. There it is evident that the extended coil behavior ($R_e \sim |i-j|^{3/5}$) at early times is gradually changing to the
behavior expected for the collapsed phase ($R_e \sim |i-j|^{1/d}$ with $d=3$) at late times.
The slight deviation of the data for larger $t_w$ in the inset of Fig.\ \ref{corr_lw} is indeed due to the
fact that at those times the formation of stable clusters has already begun to alter the extended
coil behavior of the chain. The value of $\lambda_c$ can also be confirmed via finite-size scaling as
presented in Refs.\ \cite{Majumder2016PRE,christiansen2017JCP}.
\par
To confirm the robustness of the above bound and the value of $\lambda_c$, we plot $C(t,t_w)$ from different temperatures $T_q$ in Fig.\ \ref{corr_lw_comp}(a) for OLM and Fig.\ \ref{corr_lw_comp}(b) for LM. Mere plotting of those data yields curves that are parallel to each other
due to different amplitudes. However, if one uses a multiplier $f$ on the $y$-axis to adjust those
different amplitudes for different $T_q$ one obtains curves that fall on top of each other as shown.
The values of $f$ used for different $T_q$ are mentioned in the tables within the plots. Note that this non-trivial factor $f$ is similar to the
nonuniversal metric factor $f_s$ used for the cluster growth in the previous subsection. The solid lines in both the cases show the consistency
of the data with the scaling form \eqref{power-law_Cst} with $\lambda_c=1.25$. To further check the universality of the
exponent $\lambda_c$ we now compare the results from aging scaling obtained for the polymer collapse using the two polymer models.
For that we plot in Fig.\ \ref{corr_lw_comp}(c) the data for different $T_q$ coming from both models on the same graph. Here again, we have used
the multiplier $f$ for the data collapse. Collapse of data irrespective of the model and the temperatures $T_q$ onto a master-curve behavior
and their consistency with the power-law scaling \eqref{power-law_Cst} having $\lambda_c=1.25$ (shown by the solid line), speak for
the universal nature of aging scaling during collapse of a polymer.
\begin{figure*}[t!]
\centering
\resizebox{0.8\textwidth}{!}{\includegraphics{snap2d.pdf}}
\caption{ Plot showing the time evolution of a polymer in $d=2$ using OLM after being quenched from a high-temperature extended coil phase to
a temperature $T_q=1.0$ where the equilibrium phase is globular. The times are mentioned in the figure and the chain length used is $N=512$.}
\label{snap2d}
\end{figure*}
\section{Results for the case of OLM in $d=2$ }\label{2D}
In this section we present some preliminary results for the kinetics of polymer collapse in $d=2$ dimensions using only OLM as defined
by Eqs.\ \eqref{FENE}, \eqref{potential_OLM}, and \eqref{std_LJ}. Experiments on polymer dynamics are often set up by using an
attractive surface which effectively confines the polymer to move
in two-dimensional space. Thus understanding the scenario in pure $d=2$ dimensions provides some insight into
such quasi-two-dimensional geometry \cite{deGennesbook,vanderzandebook}. From a technical point of view, simple Metropolis simulations of a polymer in $d=2$ are
much more time consuming than in $d=3$. This is due to the absence of one degree of freedom, which makes the collapse of the polymer
difficult via local moves and thereby increases the intrinsic time scale of collapse. In fact, even in equilibrium there are very few
studies \cite{wittkop1996,polson2000,Grassberger2002,Zhou2006}, and in particular we did not find any study that gives an estimate of the
collapse transition temperature. Since for the study of the
kinetics the actual value of the transition temperature is not crucial we performed a few equilibrium simulations in $d=2$ covering a wide range of temperatures and
found that at $T_q=1.0$ the polymer is in the collapsed phase for a chain length of $N=512$, while it remains in an extended coil state at $T_h=10.0$.
So for this work we have used a polymer of length $N=512$ and prepared an initial configuration at $T_h=10.0$ before quenching it to a temperature
$T_q=1.0$. All other specifications of the simulation method remain the same as discussed for OLM in Section\ \ref{model}, apart from
confining the displacement moves to only $d=2$ dimensions.
\par
In Fig.\ \ref{snap2d} we show the time evolution during the collapse of the $d=2$ polymer at $T_q=1.0$. The sequence of events
portrayed by the snapshots shows formation of local ordering as observed for $d=3$, although the formation of a ``pearl-necklace'' is
not so evident. By comparing with the snapshots presented for $d=3$ in Figs.\ \ref{figsnap} and \ref{figschematic}, it is apparent
that the initial process of local cluster formation
is much slower in $d=2$. However, once the local clusters are formed (as shown in the snapshot at $t=10^6$ MCS) the time evolution shows coarsening of these
clusters to finally form a single cluster or globule. Thus the overall phenomenology seems to be in line with the $d=3$ case.
\begin{figure}[b!]
\centering
\resizebox{0.95\columnwidth}{!}{\includegraphics{rg_2d_512.pdf}}
\caption{ Time dependence of the average squared radius of gyration $R_g^2$ during collapse of a polymer in $d=2$. The system size and the quench temperature
are the same as in Fig.\ \ref{snap2d}. The continuous line is a fit to the data using Eq.\ \eqref{rg_decay}.}
\label{rg2d}
\end{figure}
\par
Following what has been done for the $d=3$ case, at first we look at the time dependence of the overall size of the polymer
by monitoring the squared radius of gyration $R_g^2$. In Fig.\ \ref{rg2d} we show the corresponding plot of $R_g^2$ (calculated as an
average over $300$ different initial realizations). Like in the $d=3$
case, the decay of $R_g^2$ can be described quite well via the empirical relation mentioned in Eq.\ \eqref{rg_decay}. The best fit
obtained is plotted as a continuous line in the plot. The obtained value of the non-trivial parameter $\beta$ in this fitting is $\approx 0.89$, which is compatible with the $d=3$ case \cite{majumder2017SM}. Still, the dependence of $\beta$ on the chain length $N$ would be worth investigating and will
be presented elsewhere. Along the same line an understanding of the scaling of the collapse time with the chain length will be interesting to
compare with the $d=3$ case. As this Colloquium is focused more on the cluster coarsening and aging during the collapse, here we refrain from
presenting results concerning the scaling of the collapse time.
\subsection{Cluster coarsening }
As can be seen from the snapshots in Fig.\ \ref{snap2d}, during the course of the collapse, like in $d=3$, also in $d=2$ one notices the formation of
local clusters which coalesce with each other to form bigger clusters and eventually a single cluster or globule. We measure the
average cluster size in the following way. First we calculate the total
number of monomers in the nearest vicinity of the $i$-th monomer as
\begin{eqnarray}\label{cl_identify}
n_i=\sum\limits_{j=1}^{N}\Theta (r_{c}-r_{ij}),
\end{eqnarray}
where $r_c$ is the same cutoff distance used in the potential \eqref{potential_OLM} for the simulations and $\Theta$ is the
Heaviside step function. For $n_i \ge n_{\rm{min}}$,
there is a cluster around the $i$-th monomer and all those $n_i$ monomers belong to that
cluster. The total number of clusters calculated this way may include some overcounting,
which we remove via the corresponding Venn diagram, and thus the actual discrete
clusters $k=1,\dots, n_c(t)$ are identified and the number of monomers $m_{k}$ within each cluster
is determined. Finally the average cluster size is calculated as
\begin{eqnarray}
C_s(t) = \frac{1}{n_c(t)} \sum \limits_{k=1}^{n_c(t)} m_k,
\end{eqnarray}
where $n_c(t)$ is the total number of discrete clusters at time $t$. Note that in this calculation
we do not vary the cut-off radius $r_c$ and fix it to the same value ($r_c=2.5\sigma$) as we have used for our simulations. Hence, the obtained value of $C_s(t)$ depends only on one nontrivial choice, which is $n_{\rm{min}}$. Figure\ \ref{noc2d}(a) shows
how the identification of clusters depends on different choices of $n_{\min}$ during collapse
of a polymer having length $N=512$. There we have plotted the
average number of clusters as a function of time for different $n_{\min}$. One can notice that for choices of
$n_{\min} \ge 10$ the late-time behaviors are more or less indistinguishable. However, the initial
structure formation stage is well covered by the choice $n_{\min}=12$. Thus we consider $n_{\min}=12$
as the optimal value to identify and calculate the average cluster size.
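A compact implementation of this procedure, as we understand it from the description above, could look as follows; the set-merging step is one straightforward way to realize the removal of overcounting and is meant as a sketch rather than the production code used for the analysis:
\begin{verbatim}
import numpy as np

def average_cluster_size(pos, rc=2.5, n_min=12):
    # C_s(t) from the monomer positions pos (an N x 2 array),
    # following Eq. (cl_identify)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    neigh = d < rc                  # Theta(rc - r_ij), includes j = i
    clusters = []
    for i in range(len(pos)):
        if neigh[i].sum() >= n_min:             # n_i >= n_min
            members = set(np.flatnonzero(neigh[i]))
            overlapping = [c for c in clusters if c & members]
            for c in overlapping:               # merge overlapping sets
                members |= c
                clusters.remove(c)
            clusters.append(members)
    sizes = [len(c) for c in clusters]
    return np.mean(sizes) if sizes else 0.0
\end{verbatim}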
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{no_cluster2d.pdf}}
\vskip 0.1 cm
\resizebox{0.9\columnwidth}{!}{\includegraphics{cluster_size2d.pdf}}
\caption{ (a) Plot of the average number of clusters of monomers $n_c$ as a function of time during collapse of a polymer with chain length $N=512$
modeled via OLM in $d=2$ at $T_q=1.0$. Results for different choices of $n_{\min}$ are shown demonstrating the late-time consistency of the
data with each other. (b) Illustration of the scaling of the cluster growth during collapse via a plot of the average cluster size $C_s$ as
a function of time. Here we have used $n_{\min}=12$. The dashed and the solid lines correspond to different power-law behaviors observed
at early and late times, respectively.}
\label{noc2d}
\end{figure}
\par
In Fig.\ \ref{noc2d}(b) we show the time dependence of the average cluster size. One can clearly see the
presence of two distinct phases. The early-time phase corresponds to the stage of stable cluster
formation ($\le 10^6 $ MCS) and the later phase is the coarsening phase. The early-time data are consistent
with a behavior $C_s(t) \sim t^{1/4}$ which is slower than the corresponding behavior in $d=3$ (see Fig.\ 8(b) in Ref.\ \cite{majumder2017SM}).
The late-time behavior is consistent with $C_s(t) \sim t$, as found for a $d=3$ polymer using OLM.
However, we caution the reader that one must be careful before interpreting the linear behavior. In this regard,
we believe that a proper finite-size scaling analysis as done for the $d=3$ case is required to confirm it, for which
one needs data from different system sizes. This analysis is in progress and will be presented elsewhere.
\subsection{Aging in $d=2$}
We now move on to present some preliminary results on the aging dynamics during polymer collapse in $d=2$ using the OLM.
Like in the $d=3$ case, here too we probe aging via calculation of the two-time autocorrelation function described in \eqref{auto_cor}, using
the same criterion for $O_i$ as used in $d=3$ for the OLM. To check the presence of aging we first confirm
the absence of time-translation invariance. This is demonstrated in Fig.\ \ref{corrtmtw} for the same system as presented for the
cluster growth in Fig.\ \ref{noc2d}. The plot shows the autocorrelation function $C(t,t_w)$ as a function of the translated time
$t-t_w$ for four different values of $t_w$ as mentioned in the figure. The absence of time-translation invariance is evident from the non-collapsing
behavior of the data. Along with that, one can also notice that the larger $t_w$, the more slowly the autocorrelation decays, which confirms
the second criterion of aging, i.e., slow dynamics. The last criterion for aging is the presence of dynamical scaling.
In the present case of polymer collapse in $d=2$, unlike in the $d=3$ case with OLM, we do not observe any data collapse with
respect to the scaling variable $x_c=t/t_w$. This, on the other hand, is similar to the results obtained for the LM in $d=3$.
However, to limit ourselves here, rather than performing an analysis based on subaging scaling, we immediately
look for scaling with respect to $x_c=C_s(t)/C_s(t_w)$ and indeed find a reasonable collapse of the data,
implying the presence of simple aging behavior. This is demonstrated in Fig.\ \ref{corrlbylw} where we plot $C(t,t_w)$
as a function of $x_c=C_s(t)/C_s(t_w)$ for four different choices of $t_w$.
\begin{figure}[t!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{corr_tmtw.pdf}}
\caption{Demonstration of the breakdown of time-translation invariance by plotting the autocorrelation function
$C(t,t_w)$ as a function of the translated time $t-t_w$, during collapse of a polymer in $d=2$ modeled
by the OLM. The chain length and $T_q$ are the same as in Fig.\ \ref{noc2d}. The chosen values of the waiting times $t_w$
are mentioned within the graph.}
\label{corrtmtw}
\end{figure}
\begin{figure}[b!]
\centering
\resizebox{0.9\columnwidth}{!}{\includegraphics{corr_lbylw.pdf}}
\caption{Illustration of the presence of dynamical scaling of the autocorrelation function shown in Fig.\ \ref{corrtmtw}, plotted here on a double-log scale
as a function of the scaling variable $x_c=C_s(t)/C_s(t_w)$. The solid line shows the consistency of the data with a power-law decay having
an exponent $\lambda_c=1.0$.}
\label{corrlbylw}
\end{figure}
\par
The other important aspect of aging is to quantify the autocorrelation exponent $\lambda_c$ for which an idea can be obtained from the double-log plot in Fig.\ \ref{corrlbylw}. There
for intermediate values of $x_c$, the collapsed data show almost a linear behavior implying a power-law scaling. The solid line corresponds to the power-law decay in Eq.\ \eqref{power-law_Cst}
with an exponent $\lambda_c=1$ that is consistent with the data. For a better quantification of
$\lambda_c$ one would need to do a finite-size scaling analysis by using data from a few larger chain lengths. From the general bound given
in Eq.\ \eqref{poly-bound}, one can read off the corresponding bound in $d=2$,
\begin{eqnarray}\label{2d-bound}
0.5 \le \lambda_c \le 1.0,
\end{eqnarray}
where we have used the fact that in $d=2$, the Flory exponent is exactly $\nu_F=0.75$ \cite{Florybook,vanderzandebook}. The consistency of our data in Fig.\ \ref{corrlbylw} with the autocorrelation exponent
$\lambda_c=1$ implies that in $d=2$ the bound is marginally obeyed. However, to have an appropriate verification of the bound one needs to have a more reliable estimate of $\lambda_c$ as already mentioned.
\section{Conclusion and outlook}\label{conclusion}
We have presented an overview of results existing in the literature regarding the
collapse dynamics of a homopolymer. Although research in this direction started long ago with the
proposition of the sausage model of collapse by de Gennes, after a series of works by Dawson and co-workers \cite{byrne1995,timoshenko1995,kuznetsov1995,kuznetsov1996,kuznetsov1996eDNA,dawson1997}
and a few others \cite{pitard1998,klushin1998,Halperin2000,kikuchi2002,Abrams2002,Montesi2004,yeomans2005}, it eventually faded away. In particular, in experiments it was difficult to monitor a single
polymer to verify the phenomenological theories developed around collapse dynamics. Recently, motivated
by the successful experimental development for monitoring single polymers and polymers in very dilute solutions,
we have provided some new insights into the collapse dynamics of polymers via computer simulations. In this regard, we borrowed
tools and understanding from the general nonequilibrium process of coarsening in particle and spin systems.
This allowed us to explore different nonequilibrium scaling laws that could be associated with
kinetics of the collapse transition of polymers.
\par
When speaking of scaling laws concerning collapse dynamics of a polymer the first thing one looks
for is the scaling of the overall collapse time $\tau_c$ with the chain length $N$ (which was also the main focus of the studies in the past). From a survey of the available results in this direction it is clear that for
power-law scaling of the form $\tau_c \sim N^z$, the value of the dynamical exponent $z$ obtained
depends on the intrinsic dynamics used in the simulations. In particular, one has to be careful about the
presence of hydrodynamics while quoting the value of $z$. However, in our work with
an off-lattice model via Monte Carlo dynamics for large $N$, we obtained a value of $z$ that is close
to the one obtained from molecular dynamics simulations with preservation of hydrodynamic effects. This
raises the question of to what extent hydrodynamic interactions are important during collapse. A proper answer to this could
be obtained via systematic studies of polymer models with explicit solvent \cite{pham2008,chang2001solvent,polson2002}. For the latter there also exist a few studies, however,
with no consensus about the value of $z$. In the context of
doing simulations with explicit solvent it would also be interesting to see the effect of
the viscosity of the solvent particles on the dynamics. Building such a framework is
possible with an approach based on dissipative particle dynamics \cite{hoogerbrugge1992,espanol1995,groot1997,espanol2017}. Recently, we have taken up
this task by using an alternative approach to dissipative particle dynamics \cite{lowe1999,koopman2006}.
In this context, we have successfully constructed the setup and verified that it reproduces the correct dynamics
in equilibrium, taking the hydrodynamic interactions into account appropriately \cite{majumder2019dissipative}.
To add more to this understanding, we have recently also considered the task of doing all-atom
molecular dynamics simulations with explicit solvent \cite{majumder2019macro}. There the focus is on understanding the collapse of a polypeptide in water
with the aim of gaining new insights into the overall folding process of a protein, which
contains these polypeptides as its backbone.
\par
Coming back to the scaling laws during collapse, our approach of understanding the collapse
in analogy with usual coarsening phenomena allows us to explore the cluster kinetics
appropriately. Our findings from studies using both off-lattice and lattice models
show that the average cluster size $C_s(t)$ during the collapse grows in a power-law fashion
as $C_s(t) \sim t^{\alpha_c}$. However, the growth exponent $\alpha_c$ is not universal
with $\alpha_c \approx 1$ for the off-lattice model and $\alpha_c \approx 0.62$ for the lattice model.
For quantification of this growth exponent one must be careful about the initial cluster formation stage
which introduces a large offset when fitting the data to a simple power law. In this regard, we have
introduced a nonequilibrium finite-size scaling analysis which helps to estimate
the value of $\alpha_c$ unambiguously.
\par
Along with the growth kinetics where one deals with single-time quantities, it is also important
to have an understanding of multiple-time quantities which provide information
about the aging during such nonequilibrium processes. In analogy with the two-time density or order-parameter
autocorrelation function used in usual coarsening of particle or spin systems, we have shown how one can
construct autocorrelation functions to study aging during collapse of a polymer. Depending on the
nature of the model (whether off-lattice or lattice) the chosen observable to calculate the
autocorrelation may vary; however, qualitatively they should give the same information. Our results
indeed support our choice of the respective observables and provide evidence of aging and corresponding dynamical scaling
of the form $C(t,t_w) \sim \left[C_s(t)/C_s(t_w)\right ]^{-\lambda_c}$. Unlike the growth exponent, the
dynamic aging exponent was found to be $\lambda_c=1.25$ irrespective of the nature of the model, implying that
the aging behavior is rather universal. In this regard, it is worth mentioning that even choosing two different
bond criteria for the lattice model (one with the diagonal bonds and the other without it \cite{christiansen2017JCP})
yielded different cluster growth exponents; however, the aging
exponent $\lambda_c$ still remains universal with a value of $1.25$. To check the robustness of this universality,
a study of other polymer models both off-lattice and lattice, along with different methods of
simulations as mentioned previously is required.
\par
In addition to the review of the existing results we have also presented preliminary results
in the context of polymer collapse in $d=2$ dimensions. To understand a two-dimensional system is not only of fundamental
interest \cite{Jia_PRL}, but could be of relevance in the context of polymers confined to an attractive surface. Indeed
there are experiments of synthetic polymers on two-dimensional gold or silver surfaces \cite{forster2014structure,forster2014}. Our results
on the kinetics of polymer collapse in $d=2$ show that the phenomenology
associated with this process can still be described by the ``pearl-necklace'' picture of Halperin and Goldbart, albeit
the identification of the small pearl-like clusters which coarsen to form the final globule
is not as distinct as in the $d=3$ case. Via an extension of the $d=3$ methodologies to $d=2$, we observe that the
cluster formation stage in $d=2$ is rather slow. However, the late-time coarsening of the clusters
follows the same power-law scaling $C_s(t) \sim t^{\alpha_c}$ with $\alpha_c \approx 1$.
We have also presented results for the aging dynamics. There the autocorrelation function
shows the same kind of power-law scaling as in $d=3$ with a corresponding exponent $\lambda_c\approx 1$.
A more detailed study not only with the off-lattice model but also with the lattice
model is in progress.
\par
Finally, we feel that this novel approach of understanding the collapse dynamics of polymers from
the perspective of usual coarsening studies of particle and spin systems shall serve as a general platform which could
be used to analyze the nonequilibrium evolution of macromolecules across any conformational transition.
Of course, due to their distinct features, for each class of this transition the associated techniques
shall be modified accordingly. One has to choose the appropriate properties of the system
and identify the quantities that best describe the corresponding transition out of equilibrium.
For example, one can look at the helix-coil transition of macromolecules as well \cite{Arashiro2006,Arashiro2007}. There certainly the
average cluster size would not work as a suitable quantity to monitor the kinetics. Rather one may define some
local helical order parameter and look at the corresponding time dependence.
\begin{acknowledgments}
This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
under project Nos. JA 483/33-1 and 189\,853\,844 -- SFB/TRR 102 (project B04), and the
Deutsch-Franz\"osische Hoch\-schule (DFH-UFA) through the Doctoral College ``$\mathbb{L}^4$''
under Grant No.\ CDFA-02-07. We further acknowledge support by the Leipzig Graduate School of Natural Sciences ``BuildMoNa''.
\end{acknowledgments}
\section*{Author contribution statement}
S.M. planned the structure of the manuscript with inputs from the co-authors. All the authors
contributed equally in writing and developing the text.
\section{Introduction}
In recent years, consumer electronics products have been changing with each passing day and, as a result of this rapid and steady increase in available technology and customers' expectations, smartphones have become essentially necessary for everyone~\cite{decker2010estimating}. Because of their portability, people are becoming accustomed to using mobile phones instead of professional cameras to take pictures in various environments. One of the special photo scenarios that was once reserved for professional cameras, but is increasingly important to non-professional photographers, occurs when smartphone users take pictures in extremely dark environments, usually under 3 lux~\cite{hasinoff2016burst}. For example, when taking a photo under the light of the full moon, the illumination is usually about 0.6 lux. Another example, which is even more common among amateur photographers, comes when users take a photo in a dark indoor environment without lights (in which the light level is roughly 0.1 lux). We are interested in such extremely dark scenes because, in these dim environments, taking pictures with a portable mobile phone can help us ``see'' things that are difficult to see with the naked eye.
Compared with single-lens reflex (SLR) cameras, using a mobile phone to get good pictures in this kind of dark environment is extremely difficult. This is due to the fact that smartphones, by their very design, preclude the possibility of large-aperture lens designs, which makes it impossible to collect enough light when taking pictures. Due to the small amount of light entering the aperture, the captured images are usually extremely dark with a great deal of noise; furthermore, the color in the photos cannot reflect the real-world color of the scene~\cite{remez2017deep}.
Theoretical deduction has proven that increasing the photon count can effectively improve the image signal-to-noise ratio (SNR)~\cite{el2005cmos}. There are many ways to increase the photon count, one of which is to increase the exposure time~\cite{xiao2009mobile}. However, this usually requires mounting the camera on a tripod. During the long exposure time, any movement of the camera and any moving objects in the scene will cause objects to become blurry in the picture. An additional problem of increasing the exposure time occurs when one takes photos of high-dynamic-range scenes; the darkest areas of the image will likely still have a lot of noise, while the brightest areas will tend to be saturated~\cite{seetzen2004high}. Another way used in the industry to increase the photon count is the multi-frame fusion method, which combines many short-exposure frames; this is equivalent to increasing the exposure time and thus improves the SNR~\cite{hasinoff2016burst}.
Traditional denoising methods based on single-frame images have matured and their performance is slowly approaching saturation~\cite{ghimpecteanu2016decomposition}\cite{irum2015review}\cite{jain2016survey}. Furthermore, in extremely dark scenes, all current traditional denoising methods fail to be effective~\cite{plotz2017benchmarking}\cite{chen2018learning}. Chen et al. have proven that a fully-convolutional network which operates directly on single-frame raw data can replace the traditional image processing pipeline and more effectively improve the image quality~\cite{chen2018learning}. However, after further investigation, it turns out that in many dark cases, processing a single frame with their network may miss a lot of details; additionally, sometimes the recovered color does not reflect the real color. This is because, in dark environments, the SNR of the captured image is quite low: a great deal of useful information is concealed by strong noise and cannot be fully recovered from a single image.
Inspired by this work and traditional multi-frame denoising methods, we propose a Recurrent Fully Convolutional Network (RFCN) to process burst photos taken under extremely low-light conditions and to obtain denoised images with improved brightness. The major contributions of the work can be summarized as follows:
1. We propose an innovative framework which directly maps multi-frame raw images to denoised and color-enhanced sRGB images, all processed end-to-end by our network. By using raw data, we avoid the information loss which occurs in the traditional image processing pipeline. We establish that using raw burst images achieves better results than state-of-the-art methods in dark environments.
2. We show, moreover, that our framework has high portability and cross-platform potential, i.e., a model trained on one mobile phone's data can be directly applied to different cameras' raw bursts without fine-tuning and obtains a similar level of enhancement.
3. Our framework is relatively flexible since we can produce either a single best image or a multi-frame denoised image sequence. This opens up the possibility of expanding our framework to cover video denoising as well.
The paper is organized as follows: The first part introduces the problems that occur when photos are taken in dark environments. The second part gives a general overview of image denoising methods and some related work. The third part, ``Methods and Materials,'' describes the overall framework, the network architecture and training details, and the data used to train and test the network. Finally, the results, discussion, and conclusion are detailed in Sections 4 and 5.
\begin{figure*}[t]
\centering
\includegraphics[width=16cm]{1.png}
\caption{A conceptual illustration of the system framework. }
\label{fig1}
\end{figure*}
\section{Related Work}
Image denoising is a long-established low-level computer vision task. Many traditional methods have been proposed to solve the denoising problem, such as nonlocal filtering, Block-Matching and 3D Filtering (BM3D), Weighted Nuclear Norm Minimization (WNNM), and so on~\cite{buades2005non}\cite{dabov2007video}\cite{gu2014weighted}. Meanwhile, computer vision has made rapid progress with the development of deep neural networks~\cite{remez2017deep}. One of the first attempts to use deep neural networks for image denoising was DnCNN~\cite{zhang2017beyond}. DnCNN realizes end-to-end denoising with neural networks and adopts a residual learning strategy which enables blind image denoising, largely surpassing traditional methods. Residual Encoder-Decoder networks (RED) propose a residual network with skip connections~\cite{chen2017low}. RED uses a symmetric convolution-deconvolution network structure. Convolution operations extract features, while deconvolution upsamples the extracted features; thus, the technique completes the whole procedure from image to feature, and then from feature to image. Similar to RED, U-Net has also been used for image denoising and has achieved good results~\cite{ronneberger2015u}. U-Net also has skip connections, and its large receptive field is capable of effectively reducing the number of layers. We also use a U-Net-like Fully Convolutional Network (FCN) architecture in this work.
Some multi-frame denoising methods have also been proposed and can usually achieve better results than single-image denoising~\cite{hasinoff2016burst}. The fusion of multi-frame images can effectively improve the SNR, hence enhancing the image quality. Among the traditional multi-frame denoising methods, V-BM4D finds the motion trajectory of the target block between frames, regards the series of blocks along the motion trajectory as the target volume, and then finds similar volumes to form a four-dimensional set. Finally, it performs filtering in this four-dimensional set, which achieves effective video denoising~\cite{maggioni2011video}. There have been several attempts to deal with multi-frame image denoising through deep neural networks. Godard et al. use a Recurrent Neural Network (RNN)~\cite{godard2018deep}. The use of an RNN can efficiently aggregate the information of preceding and succeeding frames, as well as increase the effective depth of the network to enlarge the receptive field~\cite{mikolov2010recurrent}. It is worth noting that~\cite{godard2018deep} does not use skip connections. We also use an RNN, but with skip connections, to combine multi-frame image information for denoising and enhancement.
In the case of extremely dark environments, Chen et al. proposed a new pipeline to replace the traditional one, which includes several procedures such as white balance, demosaicing, denoising, sharpening, color space conversion, gamma correction, and more. These processes are specifically tuned in the camera module to suit the hardware. However, because of these non-linear processes on the raw data, some information is lost, and starting from raw data can help to improve the image quality. Previous research has proven that using raw data instead of sRGB can effectively enhance the quality of denoising. Raw images are generally 10-bit, 12-bit or 14-bit, and often contain more bits of information than 8-bit sRGB images~\cite{schwartz2019deepisp}. Especially in the case of extremely dark environments, raw images can be used to obtain more low-brightness information. Therefore, we use an end-to-end system starting from raw data and directly generating a denoised and enhanced sRGB image as output. We hand over all the processes that were originally handled by the ISP to the neural network. On the one hand, the neural network can fully use the information of the raw images; on the other hand, this simplifies the pipeline.
\begin{figure*}[t]
\centering
\includegraphics[width=14cm]{2.png}
\caption{This image illustrates the network architecture, in which Frame t is taken as an example. The above part is a single-frame network from $S_1$ to $S_L$, in which L is the number of the layers. It is a U-Net structure with skip connections. The multi-frame recurrent network from $M_1$ to $M_L$, shown below, is also a U-Net structure with skip connections. There is a recurrent connection in each unit of the multi-frame network. The multi-frame network takes each scale feature of the single-frame network as an input at each recurrent connection, and the output obtained by the unit in the previous frame is also concatenated to this unit. In contrast to common RNNs, convolution is used in place of Gated Recurrent Units (GRU) and Long Short Term Memory (LSTM). Finally, $I_{s}^{t}$ represents output from single-frame network for Frame t with $I_{m}^{t}$ representing output from multi-frame network for Frame t. They will be used for calculating the loss. $I_{m}^{t}$ is also the denoised output image of Frame t.}
\label{fig2}
\end{figure*}
\section{Methods and Materials}
In the following sections, we describe how to implement the proposed network for multi-frame denoising.
\subsection{Framework}
Fig. 1 represents the framework of our system. After obtaining the raw burst data, we organize the multi-frame raw images into appropriate structures as input for the neural network. For a Bayer-pattern raw image, a common practice is to pack it into 4 color channels, with the resolution of each channel reduced by half. Then, we subtract the black level, which is the value produced by the dark current released by the photodiode in the absence of light. In extremely dark environments, most of the raw image values are distributed close to the black level; different cameras produce different black level values. Subtracting the black level from the raw image helps to apply the trained model directly to raw images from different cameras, and this linear processing does not affect valid information. Following this, we scale the data according to the desired amplification ratio, which is a factor that can be tuned~\cite{chen2018learning}. The appropriate brightness amplification is difficult for a convolutional neural network to learn within the denoising task, especially across different raw images; instead of letting the network acquire this coefficient, it is more appropriate to set a separate amplification ratio outside of the network. By tuning the amplification ratio, we can satisfy the different needs of a variety of scenarios. When training the model, we multiply the input by the amplification ratio to obtain a brightness that matches the ground-truth. The packed and amplified data is then fed into the RFCN, whose architecture is composed of a U-Net combined with an RNN. Finally, the network generates a multi-frame denoised image sequence.
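A minimal sketch of this pre-processing step is given below; the RGGB channel layout is an assumption and must be adjusted per sensor:
\begin{verbatim}
import numpy as np

def pack_raw(bayer, black_level, ratio):
    # Pack a Bayer raw frame into 4 half-resolution color channels,
    # subtract the black level and apply the amplification ratio
    im = np.maximum(bayer.astype(np.float32) - black_level, 0.0)
    H, W = im.shape
    packed = np.stack([im[0:H:2, 0:W:2],    # R
                       im[0:H:2, 1:W:2],    # G1
                       im[1:H:2, 0:W:2],    # G2
                       im[1:H:2, 1:W:2]],   # B
                      axis=-1)
    return packed * ratio
\end{verbatim}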
\subsection{Network Architecture}
Fig. 2 shows the network architecture. We propose an RNN-based method to fully process multi-frame images: the multi-scale features are fused in a recurrent manner to obtain context information and perform sequential processing. As a network for processing sequential data, an RNN is relatively flexible and easy to extend. All parameters of our network are shared, with each frame using the same parameters. Parameter sharing reduces the number of parameters, which shortens training time and reduces the possibility of overfitting. For a CNN, parameter sharing is cross-regional, while for an RNN it is cross-sequence, so parameters are shared across deep computational graphs: the convolution kernel at any position in any frame is the same. The entire network can therefore be extended to sequences of any length.
Similar to the technique used in~\cite{godard2018deep}, we use a dual-network architecture, divided into a single-frame network and a multi-frame RNN network. The single-frame network is a U-Net, a fully convolutional neural network suitable for inputs of any size. The U-Net's features at different scales are fed separately to the corresponding-scale recurrent connections in the multi-frame network; these preliminarily processed features are more efficient inputs for the multi-frame network. The single-frame network first processes each frame separately and then passes its multi-scale features to the multi-frame recurrent network. Since the basic structure is a U-Net, both the single-frame and multi-frame networks proceed from downsampling at the front to upsampling at the back, and the two networks sample down and up consistently, ensuring that the feature scales match. Compared to the structure used in~\cite{godard2018deep}, the U-Net allows us to extract information more effectively.
The entire network takes F frames of raw images as input and produces F frames of sRGB images as output. Each later output frame indirectly utilizes all of the information from the preceding frames, so more information is aggregated in later frames; the unrolled network is equivalent to a very deep network. In general, the later the frame, the more denoising has been performed, and therefore the higher the image quality obtained.
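As a rough illustration of one recurrent unit $M_l$, the sketch below is written in PyTorch-style pseudocode purely for readability (our implementation uses TensorFlow, and the channel widths are placeholders): the unit concatenates the single-frame feature with its own hidden state from the previous frame and applies plain convolutions in place of GRU/LSTM gates.
\begin{verbatim}
# Sketch of one recurrent unit M_l (PyTorch-style pseudocode;
# channel counts are illustrative assumptions).
import torch
import torch.nn as nn

class RecurrentConvUnit(nn.Module):
    def __init__(self, feat_ch=32, hidden_ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(feat_ch + hidden_ch, hidden_ch, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(hidden_ch, hidden_ch, 3, padding=1),
            nn.LeakyReLU(0.2))

    def forward(self, feat_t, hidden_prev):
        # Fuse the single-frame feature S_l^t with this unit's own
        # output from frame t-1, then convolve (no GRU/LSTM gates).
        return self.body(torch.cat([feat_t, hidden_prev], dim=1))
\end{verbatim}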
\begin{figure*}[h]
\centering
\includegraphics[width=16cm]{result0.jpg}
\caption{Four images: (a) was produced using the traditional image processing pipeline; (b) is the ground-truth, taken using long exposure; (c) was produced using a single-frame enhanced network, as per~\cite{chen2018learning}; (d) was produced using our multi-frame network. The enlarged portion of (c), compared with (a), shows that (c) is less noisy; however, the wall color is uneven and does not correspond to the ground-truth, and the details of the magazine covers are lacking. In contrast, the enlarged portion of (d) shows that the wall color is much closer to the ground-truth than in (c), and more details have been recovered. The PSNR and SSIM of (c) are 21.82 and 0.889 respectively, while the PSNR and SSIM of (d) are 24.93 and 0.903 respectively.}
\label{fig3}
\end{figure*}
\subsection{Data}
There are very few datasets for extremely dark environments. The most relevant one is the See-in-the-Dark (SID) dataset~\cite{chen2018learning}. This dataset contains different sets of raw short-exposure burst images, each with a corresponding long-exposure reference image of the same scene, which serves as ground-truth. All images were captured with real cameras in very dark environments: outdoor photos were taken at night under moonlight or streetlights, with illumination ranging from 0.2 lux to 5 lux, while the indoor photos' illumination ranges from 0.03 lux to 0.3 lux. The images were thus taken in extremely dark environments, yet ones an ordinary camera user might well encounter. Each group contains up to 3 types of short-exposure bursts with different exposure times: 0.033s, 0.04s, and 0.1s. The corresponding ground-truth is a long-exposure image with an exposure time of 10s or 30s; although some noise remains, the quality can be considered high enough. All photos were taken on a tripod and controlled remotely, so no alignment is required. We chose the Sony camera Bayer-pattern raw images with a resolution of $4240\times2832$ as our main training dataset. In addition, we collected data from extremely dark situations taken by other mobile phones as a generalization test of the trained models.
\subsection{Training Details}
We used the TensorFlow framework. The single-frame denoising network regresses a denoised image $I_{s}^{t}=f_{s}(N^{t}, \theta_{s})$ from the noisy raw input $N^{t}$, given the model parameters $\theta_{s}$, while the multi-frame denoising network regresses $I_{m}^{t}=f_{m}^{t}([N^{t}], \theta_{m})$ from the sequence of noisy frames up to frame $t$, denoted $[N^{t}]$, given the model parameters $\theta_{m}$. We train the network by minimizing the L1 distance between the predicted outputs and the ground-truth target images as follows~\cite{godard2018deep}:
\begin{equation*}
E=\sum_{t=1}^{F} \left| I^{t}-f_{s}(N^{t}, \theta_{s})\right| + \left| I^{t}-f_{m}^{t}([N^{t}], \theta_{m})\right|
\end{equation*}
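In code, the objective amounts to the following (a PyTorch-style Python sketch; the ground-truth $I^t$ is the same long-exposure image for every $t$, and \texttt{total\_loss} is an illustrative name):
\begin{verbatim}
# Sketch of the loss: per-frame L1 terms for both networks,
# measured against the shared long-exposure ground-truth `gt`.
def total_loss(single_outs, multi_outs, gt):
    return sum((o_s - gt).abs().mean() + (o_m - gt).abs().mean()
               for o_s, o_m in zip(single_outs, multi_outs))
\end{verbatim}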
The patch size is $512\times512$. We used a relatively large patch size because, with a U-Net, the image quality at the patch edges is not as good as in the middle after downsampling and upsampling; a large patch size ensures satisfactory training. We also performed data augmentation on the dataset. We used the Adam optimizer with a learning rate of $0.5\times10^{-4}$, decayed by half every 1000 epochs. 137 sequences (10 burst images each, $4240\times2832$) were used for training, while 41 were reserved for testing.
\begin{figure*}
\centering
\includegraphics[width=14cm]{result1.jpg}
\caption{In Column (a), the images were obtained via a traditional image processing pipeline. In Column (b), the images are the ground-truth, as obtained by long exposure. In Column (c), the images represent the baseline for fair comparison, computed by inputting each burst into the network of~\cite{chen2018learning} for denoising and then averaging the outputs. In Column (d), the images represent our results. As can be seen, in Column (d) the colors and details are more accurate and correspond better to the ground-truth shown in Column (b). This is best seen on a screen, where the images can be magnified.}
\label{fig4}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8cm]{result2.jpg}
\caption{This figure shows the results of a 10-frame output model, arranged from the first frame (top) to the last (bottom). PSNR and SSIM generally increase in value over these ten progressive frames; this can be confirmed visually as well, with the text on the frame image generally becoming clearer in each successive frame.}
\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm]{result31.jpg}
\caption{This example shows how the trained model applies to a different camera's raw bursts (BlackBerry KEY2, 10 bursts, exposure time: 0.1s). In (a), we converted the original raw burst data directly to an RGB image using the traditional pipeline. In (b), because of the excessive darkness, we brightened the image in Photoshop so that the content is visible; the enlarged detail clearly shows a lot of noise. In (c), we applied the trained model to the bursts. The images show that the trained model can be applied to a different camera's images and obtains good denoising and enhancement.}
\label{fig6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7.5cm]{result32.jpg}
\caption{A second example showing how the trained model applies to a different camera's raw bursts (iPhone 7 Plus, 10 bursts, exposure time: 0.033s). In (a), we converted one of the original raw bursts directly to an RGB image using the traditional pipeline. In (b), we applied the trained model to the bursts. The images show that the trained model can be applied to a different camera's images and obtains good denoising and enhancement.}
\label{fig7}
\end{figure}
\section{Results and Discussion}
To begin with, we use one example from the SID dataset to compare three images to the ground-truth: first, the image produced by the traditional image processing pipeline; second, the image produced by the single-frame enhanced network of~\cite{chen2018learning}; and third, our result. We can see from Fig. 3 that, compared with the image generated by the traditional pipeline, the single-frame enhanced network already greatly improves the image, with much less noise relative to the ground-truth (the long-exposure image). However, the result still lacks many details, and sometimes the colors in the image do not reflect the real colors. This is because the SNR of the captured image is quite low in a dark environment: strong noise conceals a substantial amount of useful information, which cannot be recovered completely from a single image. In contrast, our multi-frame network permits more effective recovery of the details and corrects the colors to make them more consistent with the ground-truth.
For a fair comparison, each frame of the 10 bursts of the same scene in the SID dataset was input to the network of~\cite{chen2018learning} for denoising and the outputs were then averaged; this serves as the baseline. We compared our 10-frame denoised results with this baseline and found that our network obtains more details, less noise, and better enhancement. Some of the resulting images can be seen in Fig. 4. In the first example, the writing on the sticky note cannot be clearly identified in the average-processed image, while the images obtained by our network are significantly closer to the ground-truth; the color of the apple in our image is also closer to the ground-truth. Similarly, in the second row, the images obtained by our network clearly show the details of the leaves, while the leaves in the average-processed image are blurred. In the third row, the image obtained by our network reveals the outline of the house and the texture of the ground, which cannot be seen in the average-processed image. In the fourth row, the image obtained by our network shows the texture of the tree's trunk and branches, which cannot be seen clearly in the average-processed image. Finally, in the fifth row, the text in the image obtained by our network is clearer than in the average-processed image. We calculated the corresponding Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM): on average, the PSNR and SSIM of our results are 30.75 and 0.822 respectively, versus 30.06 and 0.808 for the baseline, so our results exceed the baseline by 0.69 dB in PSNR and by 0.014 in SSIM.
To explore how our network processes image sequences, we trained a 10-frame model and output the first to last frames on the test set. Figure 5 shows the results of the 10 output frames. PSNR and SSIM generally show an increasing trend, and the text in each frame can be visually confirmed to become less blurry and clearer in each successive frame. This is because our recurrent architecture processes the image sequence frame-by-frame, with each successive frame aggregating and utilizing all of the information from the preceding frames. In general, the more preceding information is aggregated and utilized, the better the later output.
We also collected some raw images taken with other mobile phones, likewise captured in very dark environments. Fig.~6 and Fig.~7 show how the trained model applies to these different cameras' raw bursts. We used our trained model to process these images and obtained equally good results without fine-tuning. In principle, because the sensors differ, these images could be used as datasets to retrain and retest the model for the best results. However, we directly applied the model previously trained on SID to the other cameras' data, which led to good results comparable to the previous experiments. This shows that our network generalizes between different models of cameras: a model trained on images captured by one camera can be effectively used on images obtained by other cameras, and thus has good portability. It is only necessary to repack the Bayer pattern according to that of the corresponding camera, putting the channels in the same order, and to subtract the corresponding camera's black level during normalization; this completes the migration.
\section{Conclusions}
The proposed framework, based on an RFCN (i.e., a U-Net combined with an RNN), is designed to process raw burst photos taken under extremely low-light conditions and to obtain denoised images with improved brightness, all through end-to-end processing in our network. We have illustrated that using raw burst images obtains better results than state-of-the-art methods in dark environments. Additionally, our model maps raw burst images directly to sRGB outputs, either producing a single best image or generating a multi-frame denoised image sequence. As a consequence, our framework has a relatively high level of flexibility and opens up the possibility of extension to video as well as image denoising. Finally, we have demonstrated that our framework is highly portable, with a great deal of cross-platform potential: a model trained on one camera's data can be directly applied to another camera's raw bursts without the necessity of fine-tuning, and a similar level of enhancement can be expected.
In the future, by optimizing the network architecture and training procedure, we expect to continue to yield further improvements in image quality.
\label{intro}
Let $G$ be a simple graph with vertex set $\{1,2,\ldots,n\}$. The \emph{adjacency matrix} of $G$ is the $n\times n$ symmetric matrix $A=(a_{i,j})$, where $a_{i,j}=1$ if $i$ and $j$ are adjacent; $a_{i,j}=0$ otherwise. We often identify a graph $G$ with its adjacency matrix $A$. For example, the \emph{spectrum} of $G$, denoted by $\sigma(G)$, refers to the \emph{spectrum} of $A$, i.e., the roots (including multiplicities) of the characteristic polynomial
$\chi(A;x)=\det(x I-A)$ of $A$. Two graphs with the same spectrum are called \emph{cospectral}. Isomorphic graphs are clearly cospectral (as their adjacency matrices are similar via a permutation matrix), but the converse is not true in general. A graph $G$ is \emph{determined by its spectrum} (DS for short) if any graph cospectral with $G$ is isomorphic to $G$. A fundamental and challenging problem in spectral graph theory is to determine whether or not a given graph is DS. For basic results on spectral characterizations (determination)
of graphs, we refer the readers to the survey papers \cite{ervdamLAA2003,ervdamDM2009}.
The \emph{generalized spectrum} of a graph $G$ is the ordered pair $(\sigma(G),\sigma(\overline{G}))$, where $\overline{G}$ is the complement of $G$. Naturally, two graphs are \emph{generalized cospectral} if they have the same generalized spectrum; a graph $G$ is said to be \emph{determined by its generalized spectrum}
(DGS for short) if any graph generalized cospectral with $G$ is isomorphic to $G$. For a graph $G$, the \emph{walk matrix} of $G$ is
\begin{equation}
W=W(G):=[e,Ae,\ldots,A^{n-1}e],
\end{equation}
where $e$ is the all-ones vector. A graph $G$ is \emph{controllable} if $W(G)$ is nonsingular. We shall restrict ourselves to controllable graphs; the family of controllable graphs of order $n$ is denoted by $\mathcal{G}_n$.
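For concreteness, the walk matrix and the controllability test can be computed in exact arithmetic as follows (a sketch using SymPy; \texttt{walk\_matrix} and \texttt{is\_controllable} are our own illustrative helper names):
\begin{verbatim}
# Sketch: build W(G) = [e, Ae, ..., A^{n-1}e] and test det W != 0,
# in exact integer arithmetic (SymPy).
import sympy as sp

def walk_matrix(A):
    A = sp.Matrix(A)
    v = sp.ones(A.rows, 1)          # the all-ones vector e
    cols = []
    for _ in range(A.rows):
        cols.append(v)
        v = A * v                   # next power A^k e
    return sp.Matrix.hstack(*cols)

def is_controllable(A):
    return walk_matrix(A).det() != 0
\end{verbatim}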
The following simple arithmetic criterion for a controllable graph being DGS was proved in \cite{wang2013ElJC,wang2017JCTB}.
\begin{theorem}[\cite{wang2013ElJC,wang2017JCTB}]\label{sqf}
Let $G\in \mathcal{G}_n$. If $2^{-\lfloor\frac{n}{2}\rfloor}\det W$ is odd and square-free, then $G$ is DGS.
\end{theorem}
Recently, Theorem \ref{sqf} has been extended or partially extended in various ways. For example, Qiu et al.~\cite{qiu2019DM} proved a similar result for the signless Laplacian spectrum. Li and Sun~\cite{li2021DM} considered the problem for $A_\alpha$-spectrum and unified Theorem \ref{sqf} and the result of Qiu et al. \cite{qiu2019DM}. We refer to \cite{qiu2019EJC, qiu2021LAA,wang2020EJC,wang2021EUJC} for more results on the generalizations of Theorem \ref{sqf}.
The main aim of this paper is to improve upon Theorem \ref{sqf}, that is, to give a weaker condition to guarantee a graph to be DGS. In general, if $\det W$ contains a multiple odd prime factor then $G$ may not be DGS. To obtain a more effective sufficient condition, we use the notions of Smith normal forms and invariant factors of integral matrices. We briefly recall these notions with an additional assumption that the involved integral matrices are square and invertible.
Two $n\times n$ integral matrices $M_1$ and $M_2$ are \emph{integrally equivalent} if $M_2$ can be
obtained from $M_1$ by a sequence of the following operations: row permutation, row negation, addition of an integer multiple of one row to another, and the corresponding column operations. Any integral invertible matrix $M$ is integrally equivalent to a diagonal matrix $\textup{diag~}[d_1,d_2,\ldots,d_n]$, known as the \emph{Smith normal form} of $M$, in which $d_1,d_2,\ldots,d_n$ are positive integers with $d_i\mid d_{i+1}$ for $i = 1,2,\ldots,n-1$. The diagonal elements $d_1,d_2,\ldots,d_n$ are the \emph{invariant factors} of $M$. We note that for an integral square matrix $M$, the determinant can be easily recovered, up to a sign, from the Smith normal form: indeed, $\det M=\pm d_1d_2\cdots d_n$. But it is generally impossible to determine the Smith normal form of $M$ from its determinant.
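Computationally, the invariant factors can be read off the Smith normal form; a minimal sketch (assuming a recent SymPy that provides \texttt{smith\_normal\_form}) reads:
\begin{verbatim}
# Sketch: invariant factors d_1 | d_2 | ... | d_n of an
# invertible integral matrix via its Smith normal form over ZZ.
from sympy import ZZ
from sympy.matrices.normalforms import smith_normal_form

def invariant_factors(W):
    S = smith_normal_form(W, domain=ZZ)
    return [abs(S[i, i]) for i in range(S.rows)]
\end{verbatim}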
The following proposition obtained in \cite{wang2017JCTB} is an exception, which gives an equivalent description of the condition in Theorem \ref{sqf}.
\begin{proposition}[\cite{wang2017JCTB}]\label{maxrank}
If $\det W=\pm 2^{\lfloor\frac{n}{2}\rfloor}b$ for some odd and square-free integer $b$, then the Smith normal form of $W$ is
$$\textup{diag~}[\underbrace{1,1,\ldots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,2,\ldots,2,2b}_{\lfloor\frac{n}{2}\rfloor}].$$
\end{proposition}
Now we introduce a polynomial for a graph $G$ associated with a prime $p$, which plays a key role in this paper. We use $\mathbb{F}_p$ to denote the finite field of order $p$, and use $J$ to denote the all-ones matrix (of order $n$).
\begin{definition}\label{Phip}\normalfont
Let $p$ be an odd prime and $G$ be a graph with adjacency matrix $A$. We define
\begin{equation}\Phi_p(G;x)=\gcd(\chi(A;x),\chi(A+J;x))\in \mathbb{F}_p[x],
\end{equation}
where the greatest common divisor (gcd) is taken over $\mathbb{F}_p$.
\end{definition}
\begin{remark}\label{invgc}\normalfont
Write $f(t,x)=\chi(A+tJ;x)$, $t\in \mathbb{Z}$. Note that $f(t,x)$ is linear in $t$. It is not difficult to see that $\Phi_p(G;x)$ is invariant under generalized cospectrality. That is, if $G$ and $H$ are generalized cospectral, then $\Phi_p(G;x)=\Phi_p(H;x)$.
\end{remark}
Let $p$ be an odd prime and $f\in \mathbb{F}_p[x]$ be a monic polynomial over the field $\mathbb{F}_p$. Now let $f = \prod_{1\le i\le r}f_i^{e_i}$
be the irreducible factorization of $f$, with distinct monic irreducible polynomials $f_1,f_2,\ldots, f_r$ and positive integers $e_1,e_2,\ldots, e_r$. The \emph{square-free part} of $f$, denoted by $\textup{sfp} (f)$, is $\prod_{1\le i\le r}f_i$; see \cite[p.~394]{gathen}.
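Both $\Phi_p(G;x)$ and its square-free part are straightforward to compute; the sketch below uses SymPy polynomials over $\mathbb{F}_p$ (via the \texttt{modulus} keyword), with \texttt{phi\_p} an illustrative helper name:
\begin{verbatim}
# Sketch: Phi_p(G;x) = gcd(chi(A;x), chi(A+J;x)) over F_p,
# and its square-free part (SymPy).
import sympy as sp

def phi_p(A, p):
    A = sp.Matrix(A)
    x = sp.Symbol('x')
    J = sp.ones(A.rows, A.rows)     # the all-ones matrix
    f = sp.Poly(A.charpoly(x).as_expr(), x, modulus=p)
    g = sp.Poly((A + J).charpoly(x).as_expr(), x, modulus=p)
    return sp.gcd(f, g)             # gcd taken over F_p

# square-free part:  phi_p(A, p).sqf_part()
\end{verbatim}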
For an integral matrix $M$ and a prime $p$, we use $\textup{rank}_p M$ and $\textup{nullity}_p M$ to denote the rank and the nullity of $M$ over $\mathbb{F}_p$, respectively. We shall prove that for any graph $G$ and prime $p$,
\begin{equation}\label{basicupperbound}
\deg \textup{sfp}(\Phi_p(G;x))\le \textup{nullity}_p W(G).
\end{equation}
The main result of this paper is the following theorem.
\begin{theorem}\label{main}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W=W(G)$. Suppose that $d_n$ is square-free. If for each odd prime factor $p$ of $d_n$,
\begin{equation}\label{keyequ}
\deg\textup{sfp}(\Phi_p(G;x))= \textup{nullity}_p W,
\end{equation} then $G$ is DGS.
\end{theorem}
We shall show that \eqref{keyequ} always holds for the case that $\textup{nullity}_p W=1$; see Corollary \ref{np1} in Section \ref{pt4}. Using Proposition \ref{maxrank}, we easily see that any graph satisfying the condition of Theorem \ref{sqf} necessarily satisfies the condition of Theorem \ref{main}. The converse is not true of course; as seen from later examples. This means that Theorem \ref{main} does improve upon Theorem \ref{sqf}. Furthermore, the proof of Theorem \ref{main} gives an alternative proof of Theorem \ref{sqf}.
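Putting the pieces together, the hypothesis of Theorem \ref{main} can be verified mechanically. The following sketch reuses the helpers from the earlier sketches (all names are illustrative, and factoring $d_n$ may of course be expensive for large graphs); it exploits the fact that $\textup{nullity}_p W$ equals the number of invariant factors divisible by $p$, which follows directly from the Smith normal form.
\begin{verbatim}
# Sketch: check the hypothesis of the main theorem (reuses
# walk_matrix, invariant_factors and phi_p from earlier sketches).
import sympy as sp

def certified_DGS(A):
    d = invariant_factors(walk_matrix(A))
    dn = d[-1]
    if dn == 0:
        return False                  # not controllable
    fac = sp.factorint(dn)            # may be slow for huge d_n
    if any(e > 1 for e in fac.values()):
        return False                  # d_n not square-free: test silent
    for p in fac:
        if p == 2:
            continue
        nullity_p = sum(1 for di in d if di % p == 0)
        if phi_p(A, p).sqf_part().degree() != nullity_p:
            return False              # condition (5) fails
    return True                       # G is DGS by the main theorem
\end{verbatim}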
The main strategy in proving Theorem \ref{main} uses some ideas from \cite{qiuarXiv}. In \cite{qiuarXiv}, Qiu et al. strengthen Theorem \ref{sqf} in a different way. The argument developed in \cite{qiuarXiv} gives a new proof of Theorem \ref{sqf}. Nevertheless, their argument essentially depends on the assumption that $ \textup{nullity}_p W=1$. To overcome this restriction, we generalize a familiar property for the characteristic polynomial of a symmetric matrix over $\mathbb{R}$ to the case of $\mathbb{F}_p$ or its extension. This is the main aim of Section 2. The proof of Theorem \ref{main} is given in Section 3. Some examples and discussions are given in the last section.
\section{Orthogonality over an extension field of $\mathbb{F}_p$}
Throughout this section, we assume that $p$ is a fixed odd prime. Let $\overline{\mathbb{F}}_p$ be the algebraic closure of the finite field $\mathbb{F}_p$. Let $\overline{\mathbb{F}}_p^n$ denote the linear space consisting of all $n$-dimensional column vectors over $\overline{\mathbb{F}}_p$. Two vectors $u,v\in \overline{\mathbb{F}}_p^n$ are called \emph{orthogonal} if $u^\textup{T} v=0$. The notation for this is $u \perp v$. Naturally, two subspaces $U$ and $V$ are called \emph{orthogonal} and denoted by $U\perp V$, if $\xi\perp \eta$ for any $\xi\in U$ and $\eta \in V$.
\begin{definition}[\cite{babai1992}]\normalfont
For a subspace $V$ of $\overline{\mathbb{F}}_p^n$, the \emph{orthogonal space} of $V$ is
\begin{equation}
V^\perp=\{u\in \overline{\mathbb{F}}_p^n\colon\, v^\textup{T} u=0\text{~for every~$v\in V$} \}.
\end{equation}
\end{definition}
Of course, $V^\perp$ is a subspace of $\overline{\mathbb{F}}_p^n$ of dimension $n-\dim V$. A major difficulty here is that $V^\perp\cap V$ may contain a nonzero vector, and hence $\overline{\mathbb{F}}_p^n=V\oplus V^\perp$ does \emph{not} hold in general. This explains why we do not call $V^\perp$ the \emph{orthogonal complement} of $V$, a name usually used in the Euclidean space $\mathbb{R}^n$. A subspace $V\subset \overline{\mathbb{F}}_p^n$ is \emph{isotropic} if $V\cap V^\perp$ contains a nonzero vector; otherwise it is \emph{anisotropic} \cite{babai1992}. Note that $(\overline{\mathbb{F}}_p^n)^\perp$ contains only the zero vector and hence $\overline{\mathbb{F}}_p^n$ is anisotropic by definition.
\begin{lemma} [{{\cite[p.270]{roman}}}]\label{equforani}Let $U$ and $V$ be two subspaces of $\overline{\mathbb{F}}_p^n$ with $U\subset V$. Then
\begin{equation}\label{inequdim}
\dim (U^\perp \cap V)\ge \dim V -\dim U.
\end{equation}
Moreover, the equality in \eqref{inequdim} holds if $V$ is anisotropic.
\end{lemma}
\begin{proof}
Note that $\dim U^\perp=n-\dim U$. We have
\begin{equation}\label{upv}
\dim (U^\perp \cap V)=(n-\dim U) +\dim V-\dim (U^\perp +V).
\end{equation}
Thus, \eqref{inequdim} holds as $\dim (U^\perp +V)\le n$.
Now suppose that $V$ is anisotropic. By definition, we have $V^\perp\cap V=\{0\}$ and hence $\dim (V^\perp +V)=\dim V^\perp+\dim V=n$. Noting that $V^\perp+V\subset U^\perp +V\subset \overline{\mathbb{F}}_p^n$ as $U\subset V$, we must have $\dim (U^\perp +V)=n$. By \eqref{upv}, the equality in \eqref{inequdim} holds.
\end{proof}
Let $A$ be an $n\times n$ matrix over $\overline{\mathbb{F}}_p$. We usually identify $A$ as a linear transformation (also denoted by $A$) on $\overline{\mathbb{F}}_p^n$ defined by $A\colon\,x\mapsto Ax$. A subspace $U\subset \overline{\mathbb{F}}_p^n$ is \emph{$A$-invariant} if $AU\subset U$, that is, if $Ax\in U$ for any $x\in U$. For an $A$-invariant subspace $U$, we use $\restr{A}{U}$ to denote the linear transformation $A$ restricted to $U$.
\begin{lemma}\label{facchi}
Let $A$ be a symmetric matrix over $\overline{\mathbb{F}}_p$ and let $U$ be an $A$-invariant subspace of $\overline{\mathbb{F}}_p^n$. Then $U^\perp$ is $A$-invariant and
\begin{equation}\label{ff}
\chi(A;x)=\chi(\restr{A}{U};x)\chi(\restr{A}{U^\perp};x).
\end{equation}
\end{lemma}
\begin{proof}
The first assertion is simple as one can check that the usual argument for the same assertion in the field $\mathbb{R}$ is also valid for $\overline{\mathbb{F}}_p$. Nevertheless, we need some extra work to establish (\ref{ff}) as the equality $\overline{\mathbb{F}}_p^n=U\oplus U^\perp$ may fail.
Let $\chi(A;x)=(x-\lambda_1)^{v_1}\cdots(x-\lambda_k)^{v_k}$, where $\lambda_1,\ldots,\lambda_k$ are distinct roots of $\chi(A;x)$. Let $V_i=\mathcal{N}(A-\lambda_i I)^{v_i}$ be the nullspace of $(A-\lambda_i I)^{v_i}$. Then by the primary decomposition theorem (see e.g.~\cite{hoffman1971}), we have
\noindent(\rmnum{1}) each $V_i$ is $A$-invariant;
\noindent(\rmnum{2}) $\dim V_i=v_i$ and $\chi(\restr{A}{V_i};x)=(x-\lambda_i)^{v_i}$;
\noindent(\rmnum{3}) $\overline{\mathbb{F}}_p^n=V_1\oplus\cdots\oplus V_k$;
\noindent(\rmnum{4}) there are polynomials $h_1,\ldots,h_k$ such that each $h_i(A)$ is
the identity on $V_i$ and is zero on all the other $V_j$'s.
Noting that $U$ is $A$-invariant, we have
\begin{equation}\label{Uop}
U=(U\cap V_1)\oplus\cdots\oplus (U\cap V_k),
\end{equation}
see \cite[p.~264]{hoffman1971}. Similarly, as $U^\perp$ is also $A$-invariant, we have
\begin{equation}\label{Uperp}
U^\perp =(U^\perp \cap V_1)\oplus\cdots\oplus (U^\perp \cap V_k).
\end{equation}
\noindent\emph{Claim} 1: $V_i\perp V_j$ for all distinct $i$ and $j$.
Let $\xi$ and $\eta$ be any vectors in $V_i$ and $V_j$ respectively. As $h_i(A)$ is the identity on $V_i$ and is zero on $V_j$ , we have $h_i(A)\xi=\xi$ and $h_i(A)\eta=0$. Noting that $A^\textup{T} =A$, we have
\begin{equation}
\xi^\textup{T} \eta=(h_i(A)\xi)^\textup{T} \eta=\xi^\textup{T} (h_i(A))^\textup{T} \eta=\xi^\textup{T} (h_i(A) \eta)=0.
\end{equation}
This proves Claim 1.
\noindent\emph{Claim} 2: Each $V_i$ is anisotropic.
Let $V_i'=\oplus_{j\neq i}V_j$. By (\rmnum{3}), we see that $\dim V_i'=n-\dim V_i$. On the other hand, by Claim 1, we know that $V_i\perp V_j$ for $j\neq i$ and hence $V_i\perp V_i'$, i.e., $V_i'\subset V_i^\perp$. Noting that $\dim V_i^\perp=n-\dim V_i$, the two spaces $V_i'$ and $V_i^\perp$ must coincide. Therefore, $V_i\cap V_i^\perp=V_i\cap V_i'=\{0\}$ and Claim 2 follows.
\noindent\emph{Claim} 3: $U^\perp \cap V_i=(U\cap V_i)^\perp \cap V_i$ for each $i$.
Let $U_i=U\cap V_i$ for $i\in \{1,\ldots,k\}$. As $U_i\subset U$, we have $U_i^\perp\supset U^\perp$ and hence $U_i^\perp \cap V_i \supset U^\perp\cap V_i$. It remains to show that $U_i^\perp \cap V_i \subset U^\perp\cap V_i$. Pick any $\xi\in U_i^\perp \cap V_i$. As $\xi\in V_i$, Claim 1 implies that $\xi\perp V_j$ and hence $\xi\perp U_j$ for any $j\neq i$. This, together with the fact that $\xi\in U_i^\perp$, implies that $\xi\perp U_j$ for all $j\in\{1,\ldots,k\}$. Noting that $U=U_1\oplus\cdots\oplus U_k$ by \eqref{Uop}, we have $\xi\perp U$, i.e., $\xi\in U^\perp$. Thus, $\xi\in U^\perp \cap V_i$ and hence $U_i^\perp \cap V_i \subset U^\perp\cap V_i$ by the arbitrariness of $\xi$. This proves Claim 3.
By Claim 3, we can rewrite \eqref{Uperp} as
\begin{equation}\label{Uperpo}
U^\perp =(U_1^\perp \cap V_1)\oplus\cdots\oplus(U_k^\perp\cap V_k).
\end{equation}
Let $u_i=\dim U_i$, and $w_i=\dim (U_i^\perp \cap V_i)$ for $i\in\{1,\ldots,k\}$. Note that $U_i\subset V_i$, $\dim V_i=v_i$, and $V_i$ is anisotropic by Claim 2. It follows from Lemma \ref{equforani} that $\dim (U_i^\perp\cap V_i)=\dim V_i-\dim U_i$, i.e.,
\begin{equation}\label{wvu}
w_i=v_i-u_i.
\end{equation} Note that $U_i$ is $A$-invariant and $U_i\subset V_i$. We see that $\chi(\restr{A}{U_i};x)$ is a factor of $\chi(\restr{A}{V_i};x)$ and hence $\chi(\restr{A}{U_i};x)=(x-\lambda_i)^{u_i}$. Consequently, we have $\chi(\restr{A}{U};x)=(x-\lambda_1)^{u_1}\cdots(x-\lambda_k)^{u_k}$. Similarly, by \eqref{Uperpo}, we have
$\chi(\restr{A}{U^\perp};x)=(x-\lambda_1)^{w_1}\cdots(x-\lambda_k)^{w_k}$. Thus, \eqref{ff} holds by \eqref{wvu}. This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{main}}\label{pt4}
An orthogonal matrix $Q$ is called \emph{regular} if $Qe=e$ (or equivalently, $Q^\textup{T} e=e$). An old result of Johnson and Newman \cite{johnson1980JCTB} states that two graphs $G$ and $H$ are generalized cospectral if and only if there exists a regular orthogonal matrix $Q$ such that $Q^\textup{T} A(G)Q=A(H)$. For controllable graphs, the corresponding matrix $Q$ is unique and rational.
\begin{lemma}[\cite{johnson1980JCTB,wang2006EuJC}]\label{gcQ}
Let $G\in \mathcal{G}_n$ and $H$ be a graph generalized cospectral with $G$. Then there exists a unique regular rational orthogonal matrix $Q$ such that $Q^\textup{T} A(G) Q=A(H)$. Moreover, the unique $Q$ satisfies $Q^\textup{T} =W(H)W^{-1}(G)$ and hence is rational.
\end{lemma}
For a controllable graph $G$, define $\mathcal{Q}(G)$ to be the set of all regular rational orthogonal matrices $Q$ such that $Q^\textup{T} A(G)Q$ is an adjacency matrix. For a rational matrix $Q$, the \emph{level} of $Q$, denoted by $\ell(Q)$, or simply $\ell$, is the smallest positive integer $k$ such that $kQ$ is an integral matrix. Note that a regular rational orthogonal matrix with level one is a permutation matrix.
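In computations, the level of a rational matrix is simply the least common multiple of the denominators of its entries; a one-function sketch (SymPy rationals assumed, \texttt{level} an illustrative name):
\begin{verbatim}
# Sketch: level of a rational matrix = lcm of entry denominators.
import sympy as sp
from functools import reduce

def level(Q):
    dens = [sp.Rational(q).q for q in sp.Matrix(Q)]  # denominators
    return reduce(sp.ilcm, dens, sp.Integer(1))
\end{verbatim}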
The following two important results are direct consequences of Lemma \ref{gcQ}.
\begin{lemma}[\cite{wang2006EuJC}]\label{pdn}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W$. Then $\ell(Q)\mid d_n$ for any $Q\in \mathcal{Q}(G)$.
\end{lemma}
\begin{lemma}[\cite{wang2006EuJC}]\label{onelevel}
Let $G\in \mathcal{G}_n$. Then $G$ is DGS if and only if $\ell(Q)=1$ for each $Q\in \mathcal{Q}(G)$.
\end{lemma}
\begin{lemma}[\cite{wang2006EuJC}]\label{tdw}
For any graph $G$ of order $n$, we have $2^{\lfloor\frac{n}{2}\rfloor}\mid\det W$.
\end{lemma}
For nonzero integers $d$, $m$ and positive integer $k$, we use $d^k\mid\mid m$ to indicate that $d^k$ precisely divides $m$, i.e., $d^k\mid m$ but $d^{k+1}\nmid m$. The following result was obtained in \cite{wang2017JCTB} using an involved argument; we refer to \cite{qiuarXiv} for a simpler proof.
\begin{lemma}[\cite{wang2017JCTB,qiuarXiv}]\label{oddl}
Let $G\in \mathcal{G}_n$. If $2^{\lfloor\frac{n}{2}\rfloor}\mid\mid \det W$ then any $Q\in\mathcal{Q}(G)$ has odd level.
\end{lemma}
\begin{lemma}[\cite{wang2021LAA}]\label{tmf}
For any graph $G$ of order $n$, at most $\lfloor\frac{n}{2}\rfloor$ invariant factors of $W$ are congruent to $2$ modulo $4$.
\end{lemma}
\begin{corollary}\label{oddlevel}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W$. If $d_n\equiv 2\pmod{4}$ then any $Q\in \mathcal{Q}(G)$ has odd level.
\end{corollary}
\begin{proof}
Since $d_n\equiv 2\pmod{4}$ and $d_1\mid d_2\mid \cdots\mid d_n$, each invariant factor is either odd or congruent to $2$ modulo $4$. It follows from Lemma \ref{tmf} that $2^{\lfloor\frac{n}{2}\rfloor+1}\nmid\det W$. By Lemma \ref{tdw}, we see that $2^{\lfloor\frac{n}{2}\rfloor}\mid\mid\det W$. The assertion follows by Lemma \ref{oddl}.
\end{proof}
The remaining part of this section is devoted to showing that, for any $Q\in \mathcal{Q}(G)$ with $G$ satisfying the condition of Theorem \ref{main}, the level $\ell(Q)$ has no odd prime factor. We begin with a fundamental property of the columns of $W$.
\begin{lemma}[\cite{liesen2012,qiu2019DM}]\label{firstr}
Let $r=\textup{rank}_p W$. Then the first $r$ columns of $W$ are linearly independent over $\overline{\mathbb{F}}_p$ and hence constitute a basis of the column space of $W$.
\end{lemma}
\begin{definition}\normalfont
Let $p$ be an odd prime. The \emph{$p$-main polynomial} of a graph $G$, denoted by $m_p(G;x)$, is the monic polynomial $f\in \mathbb{F}_p[x]$ of
smallest degree such that $f(A)e = 0$.
\end{definition}
We recall that the ordinary \emph{main polynomial} $m(G;x)$ (over $\mathbb{Q}$) can be defined in the same manner; see \cite{teranishi2002LMA,rowlinson2007AADM}. It is known that the ordinary main polynomial is invariant under generalized cospectrality. Unfortunately, the $p$-main polynomial does not have such a nice property in general. In other words, two generalized cospectral graphs $G$ and $H$ may have different $p$-main polynomials for some odd prime $p$; see Remark \ref{dispmain} in Section \ref{dissec}. However, a key intermediate result of this paper shows that such an inconsistency can never happen under the restriction that one graph, say $G$, satisfies the assumption of Theorem \ref{main}. The overall idea is simple. We shall show that under the condition of Theorem \ref{main}, there is a direct connection between the $p$-main polynomial $m_p(G;x)$ and the polynomial $\Phi_p(G;x)$ which is invariant under generalized cospectrality (see Eq. \eqref{formp} in Lemma \ref{basicinq}).
To simplify the notations in the following lemmas, we fix a graph $G$ and use $A$ and $W$ to denote the adjacency matrix and walk matrix of $G$, respectively.
\begin{definition}
$A_t=A+tJ$ and $W_t=[e,A_t e,\ldots,A_t^{n-1}e]$ for $t\in \overline{\mathbb{F}}_p$.
\end{definition}
\begin{lemma}\label{invt}
$\mathcal{N}(W_t^\textup{T})$ does not depend on $t\in \overline{\mathbb{F}}_p$.
\end{lemma}
\begin{proof}
Note that $J\xi=(ee^\textup{T})\xi=(e^\textup{T} \xi)e\in \textup{Span~}\{e\}$ for any $\xi\in \overline{\mathbb{F}}_p^n$. Thus, for any $t\in \overline{\mathbb{F}}_p$ and positive integer $k$, there exist $c_0,\ldots,c_{k-1}\in \overline{\mathbb{F}}_p$ such that
\begin{equation}
(A+tJ)^k e=A^k e+\sum_{i=0}^{k-1}c_iA^i e.
\end{equation}
It follows that there exists an $n\times n$ upper triangular matrix $U$ with 1 on the diagonal such that
\begin{equation}
[e,(A+tJ)e,\ldots,(A+tJ)^{n-1}e]=[e,Ae,\ldots,A^{n-1}e]U,
\end{equation}
i.e., $W_t=W U$. Thus, $W_t^\textup{T}=U^\textup{T} W^\textup{T}$ and hence $\mathcal{N}(W_t^\textup{T})=\mathcal{N}(W^\textup{T})$ as $U^\textup{T}$ is invertible.
\end{proof}
\begin{lemma}\label{invs}
$\mathcal{N}(W^\textup{T})$ is an $(A+tJ)$-invariant subspace for any $t\in\overline{\mathbb{F}}_p$.
\end{lemma}
\begin{proof}
Let $\chi(A;x)=c_0+c_1x+\cdots+c_{n-1} x^{n-1}+x^{n}$ and $C$ be the companion matrix, that is,
\begin{equation}
C=\begin{pmatrix}
0&0&\cdots&0&-c_0\\
1&0&\cdots&0&-c_1\\
0&1&\cdots&0&-c_2\\
\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&\cdots&1&-c_{n-1}
\end{pmatrix}.
\end{equation}
It follows from the Cayley-Hamilton Theorem that $A^ne=-c_0e-c_1Ae-\cdots-c_{n-1}A^{n-1}e$ and hence $AW=WC$, or equivalently, $W^\textup{T} A=C^\textup{T} W^\textup{T}$ as $A$ is symmetric. Let $\xi$ be any vector in $\mathcal{N}(W^\textup{T})$. Then we have $W^\textup{T} (A\xi)=C^\textup{T} W^\textup{T} \xi=0$ and hence $A\xi\in \mathcal{N}(W^\textup{T})$. Moreover, as $e^\textup{T}$ is the first row of $W^\textup{T}$, we see that $e^\textup{T} \xi=0$ and hence $J \xi=0$. Thus,
$(A+tJ)\xi=A\xi\in \mathcal{N}(W^\textup{T})$. This indicates that $\mathcal{N}(W^\textup{T})$ is $(A+tJ)$-invariant, as desired.
\end{proof}
\begin{lemma}\label{mainchi} $
m_p(G;x)=\chi(\restr{A}{\mathcal{N}^\perp(W^\textup{T})};x).
$
\end{lemma}
\begin{proof}
Let $r=\textup{rank}_p W$ and $f=\chi(\restr{A}{\mathcal{N}^\perp(W^\textup{T})};x)$. Then $\deg f=\dim \mathcal{N}^\perp(W^\textup{T})=r$. By Lemma \ref{firstr}, we see that $A^ke\in \textup{Span~}\{e,Ae,\ldots,A^{k-1}e\}$ if and only if $k\ge r$. This implies that $\deg m_p(G;x)=r$. Thus, it suffices to show $f(A)e=0$. Indeed, by the Cayley-Hamilton Theorem, $\restr{f(A)}{\mathcal{N}^\perp(W^\textup{T})}$ is zero. As $e\perp \xi$ for any $\xi\in \mathcal{N}(W^\textup{T})$, we see that $e\in \mathcal{N}^\perp(W^\textup{T})$. Therefore, $f(A)e=0$ and we are done.
\end{proof}
\begin{lemma}\label{sameroots}
$\chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)$ divides $\Phi_p(G;x)$, and $\textup{sfp} (\Phi_p(G;x))$ divides $\chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)$.
\end{lemma}
\begin{proof}
By Lemma \ref{invs}, the space $\mathcal{N}(W^\textup{T})$ is $(A+tJ)$-invariant for any $t\in \overline{\mathbb{F}}_p$. Let $f_t\in \overline{\mathbb{F}}_p[x]$ denote $\chi(\restr{(A+tJ)}{\mathcal{N}(W^\textup{T})};x)$. Since $\restr{J}{\mathcal{N}(W^\textup{T})}$ is zero, we find that $f_t$ does not depend on $t$. Clearly $f_t\mid \chi(A+tJ;x)$. Since $f_0=f_1$, we have $f_0\mid \gcd(\chi(A;x),\chi(A+J;x))$, which is exactly the first assertion.
To prove the second assertion, it suffices to show that every root of $\Phi_p(G;x)$ is a root of $f_0$ (or $f_1$). Let $\lambda\in \overline{\mathbb{F}}_p$ be any root of $\Phi_p(G;x)$, that is, $\lambda$ is a common eigenvalue of $A$ and $A+J$. Then there exist two nonzero vectors $\xi$ and $\eta$ such that $A\xi=\lambda \xi$ and $(A+J)\eta=\lambda \eta$. We claim that either $e^\textup{T} \xi=0$ or $e^\textup{T} \eta=0$. Actually, we have
\begin{equation}
\xi^\textup{T} (\lambda I-A)\eta=\xi^\textup{T} J\eta=\xi^\textup{T} ee^\textup{T}\eta=(e^\textup{T}\xi)(e^\textup{T} \eta).
\end{equation}
Taking transpose and noting that $A$ is symmetric, we have $\xi^\textup{T} (\lambda I-A)\eta=\eta^\textup{T} (\lambda I-A)\xi=0$. Thus $(e^\textup{T}\xi)(e^\textup{T} \eta)=0$ and the claim follows. Suppose that $e^\textup{T}\xi=0$. Then $e^\textup{T} A^k\xi=e^\textup{T} \lambda^k \xi=0$ for any positive $k$ and hence $W^\textup{T} \xi=0$, i.e., $\xi\in \mathcal{N}(W^\textup{T})$. Since $\xi$ is an eigenvector of $\restr{A}{\mathcal{N}(W^\textup{T})}$, the corresponding eigenvalue $\lambda$ must be a root of $f_0$. Now suppose that $e^\textup{T}\eta=0$. Similarly we have $\eta\in \mathcal{N}(W_1^\textup{T})$. But $\mathcal{N}(W_1^\textup{T})=\mathcal{N}(W^\textup{T})$ by Lemma \ref{invt}. Thus, $\eta\in \mathcal{N}(W^\textup{T})$ and we see that $\lambda$ must be a root of $f_1$. Recall that $f_0=f_1$. We find that $\lambda$ is always a root of $f_0$. This completes the proof.
\end{proof}
\begin{lemma}\label{basicinq}
$\deg\textup{sfp}(\Phi_p(G;x))\le \textup{nullity}_p W\le \deg\Phi_p(G;x)$. Moreover, if the first equality holds then
\begin{equation}\label{formp}
m_p(G;x)=\frac{\chi(A;x)}{\textup{sfp}(\Phi_p(G;x))}.
\end{equation}
\end{lemma}
\begin{proof}
Note that $\deg\chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)=\dim \mathcal{N}(W^\textup{T})=\textup{nullity}_p W$. The first assertion clearly follows from Lemma \ref{sameroots}. Note that $\deg m_p(G;x)=\textup{rank}_p W=n-\textup{nullity}_p W$. It follows from Lemmas \ref{mainchi}, \ref{facchi} and \ref{sameroots} that
\begin{eqnarray}\label{twoinq}
n-\textup{nullity}_p W &=&\deg m_p(G;x)\nonumber\\
&=&\deg \chi(\restr{A}{\mathcal{N}^\perp(W^\textup{T})};x)\nonumber\\ \nonumber
&= &\deg\frac{\chi(A;x)}{\chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)}\nonumber\\
&\le&\deg\frac{\chi(A;x)}{\textup{sfp}(\Phi_p(G;x))}\\
&=&n-\deg\textup{sfp}(\Phi_p(G;x)).\nonumber
\end{eqnarray}
Suppose that $\deg\textup{sfp}(\Phi_p(G;x))= \textup{nullity}_p W$. Then the inequality in \eqref{twoinq} must become an equality. Clearly, this happens precisely when $ \chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)=\textup{sfp}(\Phi_p(G;x))$. Thus, \eqref{formp} holds and the proof is complete.
\end{proof}
\begin{corollary}\label{np1}
If $\textup{nullity}_p W=1$ then $\deg\textup{sfp}(\Phi_p(G;x))=1$.
\end{corollary}
\begin{proof}
As $\textup{nullity}_p W=1$, Lemma \ref{basicinq} implies that $\deg\textup{sfp}(\Phi_p(G;x))\le 1$ and $\deg\Phi_p(G;x)\ge 1$. Now clearly, $\Phi_p(G;x)$ has the form $(x-\lambda)^k$ for some $\lambda\in \overline{\mathbb{F}}_p$ (indeed $\lambda\in \mathbb{F}_p$) and positive integer $k$. Thus, $\textup{sfp}(\Phi_p(G;x))=x-\lambda$ and the corollary follows.
\end{proof}
\begin{corollary}\label{samepmain}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W(G)$. Suppose that $d_n$ is square-free and $p$ is an odd prime factor of $d_n$. If $
\deg\textup{sfp}(\Phi_p(G;x))= \textup{nullity}_p W(G)$,
then $\textup{nullity}_p W(G)=\textup{nullity}_p W(H)$ and $m_p(G;x)=m_p(H;x)$ for any graph $H$ generalized cospectral with $G$.
\end{corollary}
\begin{proof}
Write $k=\textup{nullity}_p W(G)$. Then exactly the last $k$ invariant factors $d_{n-k+1},\ldots,d_n$ of $W(G)$ are multiples of $p$. Since $p\mid\mid d_n$ and $d_{n-k+1}\mid d_{n-k+2}\mid\cdots\mid d_n$, all these invariant factors must have $p$ as a \emph{simple} factor. Thus $p^k\mid\mid\det W(G)$ and hence $p^k\mid\mid\det W(H)$ as $\det W(G)=\pm \det W(H)$. Consequently, we have $\textup{nullity}_p W(H)\le k$. On the other hand, noting that $\Phi_p(G;x)=\Phi_p(H;x)$, Lemma \ref{basicinq} together with the condition of this proposition implies
$$\textup{nullity}_p W(H)\ge \deg\textup{sfp}(\Phi_p(H;x))=\deg\textup{sfp}(\Phi_p(G;x))= \textup{nullity}_p W(G)=k.$$
Therefore, we have $\textup{nullity}_p W(H)=k$. Now, using the second part of Lemma \ref{basicinq} for both $G$ and $H$, we find that $m_p(G;x)=m_p(H;x)$.
\end{proof}
The following corollary is not needed for the proof of Theorem \ref{main} but will be used to give a better understanding of the counterexample given in the next section.
\begin{corollary}\label{samepmain2}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W(G)$. Suppose that $d_n$ is square-free and $p$ is an odd prime factor of $d_n$. If $\textup{nullity}_p W(G)=2$ then, for any graph $H$ generalized cospectral with $G$, one of the following two statements holds.\\
(\rmnum{1}) $\textup{nullity}_p W(H)=2$ and $m_p(G;x)=m_p(H;x)$;\\
(\rmnum{2}) $\textup{nullity}_p W(H)=1$ and $m_p(G;x)\neq m_p(H;x)$.
\end{corollary}
\begin{proof}
By Lemma \ref{basicinq}, we have $\deg\textup{sfp}(\Phi_p(G;x))\le 2\le \deg\Phi_p(G;x)$. Thus, we have $\deg\textup{sfp}(\Phi_p(G;x))=2$ or $1$. If $\deg\textup{sfp}(\Phi_p(G;x))=2$, then (\rmnum{1}) holds by Corollary \ref{samepmain}. Now assume that $\deg\textup{sfp}(\Phi_p(G;x))=1$. Then, using a similar argument as in the proof of Corollary \ref{samepmain}, we have $p^2\mid\mid \det W(H)$ and hence $\textup{nullity}_p W(H)=1$ or $2$. If $\textup{nullity}_p W(H)=1$ then the two polynomials $m_p(G;x)$ and $m_p(H;x)$ have different degrees and of course $m_p(G;x)\neq m_p(H;x)$. It remains to consider the case that $\textup{nullity}_p W(H)=2$.
Since $\deg\textup{sfp}(\Phi_p(G;x))=1$ and $\deg \Phi_p(G;x)\ge 2$, we have $\Phi_p(G;x)=(x-\lambda)^k$ for some $\lambda\in\mathbb{F}_p$ and integer $k\ge 2$. By Lemma \ref{sameroots}, we see that $\chi(\restr{A}{\mathcal{N}(W^\textup{T}(G))};x)$ is a factor of $\Phi_p(G;x)$. As $\deg \chi(\restr{A}{\mathcal{N}(W^\textup{T}(G))};x)=\textup{nullity}_p W(G)=2$, we must have $\chi(\restr{A}{\mathcal{N}(W^\textup{T}(G))};x)=(x-\lambda)^2$. Thus, by Lemmas \ref{mainchi} and \ref{facchi}, we have
$m_p(G;x) =\frac{\chi(A(G);x)}{(x-\lambda)^2}$. Since $\textup{nullity}_p W(H)=2$, the same argument also works for $H$. Noting that $\chi(A(H);x)=\chi(A(G);x)$ and $\Phi_p(H;x)=\Phi_p(G;x)$, we see that $m_p(G;x)=m_p(H;x)$. This completes the proof.
\end{proof}
\begin{proposition}\label{almostmain}
Let $Q\in\mathcal{Q}(G)$ with level $\ell$. If $p\mid\mid d_n$ and $\deg\textup{sfp}(\Phi_p(G;x))=\textup{nullity}_p W$ then $p\nmid\ell$.
\end{proposition}
\begin{proof}
Let $A=A(G)$ and $A'=Q^\textup{T} A Q$. Let $f(x)\in \mathbb{Z}[x]$ be a monic polynomial such that $f(x)\equiv m_p(G;x)\pmod p$. By Corollary \ref{samepmain}, we have $f(A)e\equiv f(A')e\equiv 0\pmod{p}$. Write $k=\textup{nullity}_p W$. Note that $\deg f(x)=n-k$. Define
\begin{equation} \overline{W}=\left[e,Ae,\ldots,A^{n-k-1}e,\frac{1}{p}f(A)e,\frac{1}{p}Af(A)e,\ldots,\frac{1}{p}A^{k-1}f(A)e\right]
\end{equation}
and
\begin{equation} \overline{W'}=\left[e,A'e,\ldots,A'^{n-k-1}e,\frac{1}{p}f(A')e,\frac{1}{p}A'f(A')e,\ldots,\frac{1}{p}A'^{k-1}f(A')e\right].
\end{equation}
Then both $\overline{W}$ and $\overline{W'}$ are integral matrices and we still have $Q^\textup{T} \overline{W} =\overline{W'}$. This indicates that $\ell(Q^\textup{T})\mid \det \overline{W}$, or equivalently, $\ell\mid \det \overline{W}$. On the other hand, as $p^{k}\mid\mid \det W$ and $\det \overline{W}=p^{-k}\det W$, we see that $p\nmid \det \overline{W}$. Thus, $p\nmid \ell$, as desired.
\end{proof}
Now, we are in a position to present the proof of Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]The case that $n=1$ is trivial and hence we assume $n\ge 2$. Let $Q$ be any matrix in $\mathcal{Q}(G)$ and $\ell$ be its level. Noting that $n\ge 2$, Lemma \ref{tdw} implies that $\det W$ and hence $d_n$ is even. Since $d_n$ is square-free, we see that $d_n\equiv 2\pmod{4}$. It follows from Corollary \ref{oddlevel} that $\ell$ is odd. In order to show $\ell=1$, we need to show that $\ell$ has no odd prime factor. Suppose to the contrary that there is an odd prime $p$ such that $p\mid \ell$. By Lemma \ref{pdn}, we know that $\ell\mid d_n$ and hence $p\mid d_n$. Moreover, as $d_n$ is square-free, we must have $p\mid\mid d_n$. Now, by Proposition \ref{almostmain}, we have $p\nmid \ell$. This is a contradiction. Therefore, $\ell=1$ and $G$ is DGS by Lemma \ref{onelevel}. This completes the proof.
\end{proof}
\section{Discussions}\label{dissec}
We first give an example to illustrate that Theorem \ref{main} does improve upon Theorem \ref{sqf}. We use Mathematica for the computation.
\begin{example}\normalfont
Let $n=16$ and $G$ be the graph with adjacency matrix
$$ A=\scriptsize{\left(
\begin{array}{cccccccccccccccc}
0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\
1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
\end{array}
\right)}.
$$
The Smith normal form of $W(G)$ is
$$\textup{diag~}[\underbrace{1,1,1,1,1,1,1,1}_{8},\underbrace{2,2,2,2,2,2,2\times3,2b}_{8}],$$
where $b=3\times 23\times 29\times 1225550789\times6442787651$, which is square-free. From the Smith normal form, we see that Theorem \ref{sqf} is not applicable here, so we turn to Theorem \ref{main}. Consider $p=3$. Then $\Phi_p(G;x)=x^4+2 x^3+2 x^2+x+1$, which has the standard factorization $\Phi_p(G;x)=\left(x^2+x+2\right)^2$ over $\mathbb{F}_p$. Thus,
$\textup{sfp}(\Phi_p(G;x))=x^2+x+2$. As $\textup{nullity}_p W= 2$, we see that \eqref{keyequ} holds in this case. Moreover, by Corollary \ref{np1}, \eqref{keyequ} automatically holds for all other odd prime factors of $b$ (or $2b$). Thus $G$ is DGS by Theorem \ref{main}.
\end{example}
Our next example indicates that if \eqref{keyequ} is not satisfied, then $G$ may not be DGS.
\begin{example}\label{ex2}\normalfont
Let $n=9$ and $G$ be the graph with adjacency matrix
$$ A=\scriptsize{\left(
\begin{array}{ccccccccc}
0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 \\
1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\
1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 \\
\end{array}
\right)}.
$$
The Smith normal form of $W$ is
$$\textup{diag~}[1,1,1,1,1,2,2,2\times 3\times 5,2\times 3\times 5].$$
Now we see $\textup{nullity}_3 W=\textup{nullity}_5 W=2$.
Direct computation (using Mathematica) indicates that
$\textup{sfp} (\Phi_3(G;x))=x+2$ (over $\mathbb{F}_3$) and $\textup{sfp} (\Phi_5(G;x))=x^2+x+1$ (over $\mathbb{F}_5$). Thus, \eqref{keyequ} holds for $p=5$ but not for $p=3$. This means that, for this graph, Proposition \ref{almostmain} is applicable only for $p=5$; therefore, we cannot eliminate the possibility that there exists some $Q\in \mathcal{Q}(G)$ with level $3$. Indeed, such a $Q$ does exist for this particular example. Let
$$Q=\scriptsize{\frac{1}{3}\left(
\begin{array}{ccccccccc}
1 & -1 & 0 & 2 & 1 & 0 & -1 & 1 & 0 \\
-1 & 1 & 0 & 1 & 2 & 0 & 1 & -1 & 0 \\
1 & -1 & 0 & -1 & 1 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 3 \\
1 & 2 & 0 & -1 & 1 & 0 & -1 & 1 & 0 \\
-1 & 1 & 0 & 1 & -1 & 0 & 1 & 2 & 0 \\
0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 1 & 0 & 1 & -1 & 0 & 1 & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 \\
\end{array}
\right).}$$
Then $Q^\textup{T} AQ$ is an adjacency matrix of a graph. This indicates that $G$ is not DGS by Lemma \ref{onelevel}.
\begin{remark}\label{dispmain}\normalfont
Let $H$ be the graph with adjacency matrix $Q^\textup{T} AQ$, where $A$ and $Q$ are matrices as described in Example \ref{ex2}.
We claim that $m_p(G;x)\neq m_p(H;x)$ for $p=3$. Otherwise, noting that $\deg m_3(G;x)=2$ and using the same procedure as in the proof of Proposition \ref{almostmain}, we would get that $3\nmid\ell(Q)$, which is a contradiction. Actually, $m_3(G;x)=x^7+2 x^6+2 x^5+x^4+2 x^3+2 x^2+x$ and $m_3(H;x)=x^8+x^7+2 x^5+x^4+2 x^2+2 x$.
\end{remark}
\begin{remark}\normalfont
Let $G$ and $H$ be a pair of generalized cospectral graphs whose walk matrices have the same Smith normal form as follows:
$$\textup{diag~}[\underbrace{1,\ldots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,\ldots,2,2b_1,2b_2}_{\lfloor\frac{n}{2}\rfloor}],$$
where $b_2$ (and hence $b_1$) is odd and square-free. We claim that $G$ and $H$ must be isomorphic. Let $Q$ be the regular rational orthogonal matrix such that $Q^\textup{T} A(G)Q=A(H)$. We need to eliminate the possibility that $p\mid \ell(Q)$ for any odd prime factor $p$ of $b_1$. Note that for such a prime $p$, Corollary \ref{samepmain2} clearly implies that $m_p(G;x)=m_p(H;x)$. Consequently, using the same argument as in the proof of Proposition \ref{almostmain}, we can show that $\ell(Q)\mid p^{-2}\det W(G)$. This means $p\nmid \ell(Q)$, as desired.
\end{remark}
We end the discussion of Example \ref{ex2} by suggesting the following natural and interesting problem.
\begin{problem}
Let $G$ and $H$ be a pair of generalized cospectral graphs whose walk matrices have the same Smith normal form as follows:
$$\textup{diag~}[\underbrace{1,\ldots,1}_{\lceil\frac{n}{2}\rceil},\underbrace{2,\ldots,2,2b_1,2b_2,\ldots,2b_k}_{\lfloor\frac{n}{2}\rfloor}],$$
where $b_k$ (and hence each $b_i$) is odd and square-free. Suppose that $k\ge 3$. Can we still guarantee that $G$ and $H$ are isomorphic?
\end{problem}
\end{example}
To see the extent to which Theorem \ref{main} improves upon Theorem \ref{sqf}, we performed a series of numerical experiments. The graphs are randomly generated using the random graph model $G(n,p)$ with $p=1/2$. For each $n\in\{10,15,\ldots,50\}$ we generated 1,000 graphs randomly and
counted the number of graphs satisfying the condition of Theorem \ref{sqf} and of Theorem \ref{main}, respectively. To see how often \eqref{keyequ} is met under the assumption that $d_n$ is square-free, we also record the number of graphs satisfying this assumption. Table 1 records one such experiment. For example, for $n=10$, among the 1,000 graphs generated in one experiment, $261$ graphs have a square-free last invariant factor $d_n$. Of these $261$ graphs, $226$ satisfy the condition of Theorem \ref{sqf} while $253$ satisfy the condition of Theorem \ref{main}. The remaining $8$ graphs do not satisfy \eqref{keyequ}, and hence we do not know whether they are DGS or not.
\begin{table}[htbp]
\footnotesize
\centering
\caption{\label{computer} Comparison between Theorem \ref{sqf} and Theorem \ref{main}}
\begin{threeparttable}
\begin{tabular}{ccccc}
\toprule
$n$ &\# graphs & \#DGS & \#DGS &\#Unknown\\
(graph order) & (with $d_n$ square-free\tnote{*} ) &(by Theorem \ref{sqf}) & (by Theorem \ref{main}) &(by Theorem \ref{main})\\
\midrule
10 & 261 &226&253&8\\
15 & 283 &217&265&18\\
20 & 268 &228&262&6\\
25 & 254 &221&245&9\\
30 & 257 &213&243&14\\
35 & 252 &204&245&7\\
40 & 280 &238&270&10\\
45 & 250 &204&237&13\\
50 & 275 &224&259&16\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[*]The numbers $d_n$ are usually huge integers and hence complete factorizations are unavailable in a reasonable time. We use the fast command FactorInteger[$d_n$,\textbf{Automatic}] in Mathematica to factor $d_n$. Note that this command extracts only factors that are easy to find.
\end{tablenotes}
\end{threeparttable}
\end{table}
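For reference, one run of the experiment above can be reproduced along the following lines (a sketch assuming \texttt{networkx} for sampling $G(n,1/2)$ and reusing the illustrative helpers \texttt{walk\_matrix}, \texttt{invariant\_factors}, and \texttt{certified\_DGS} from the earlier sketches; unlike the table, it factors $d_n$ completely, which can be slow for large $n$):
\begin{verbatim}
# Sketch of one experiment run (networkx assumed; unlike the
# table's partial factorization, d_n is factored completely here).
import networkx as nx
import sympy as sp

def run_experiment(n, trials=1000):
    counts = {"square_free": 0, "thm_main": 0}
    for seed in range(trials):
        G = nx.gnp_random_graph(n, 0.5, seed=seed)
        A = sp.Matrix(nx.to_numpy_array(G, dtype=int).tolist())
        d = invariant_factors(walk_matrix(A))
        dn = d[-1]
        if dn == 0 or any(e > 1 for e in sp.factorint(dn).values()):
            continue                      # skip: d_n not square-free
        counts["square_free"] += 1
        if certified_DGS(A):
            counts["thm_main"] += 1
    return counts
\end{verbatim}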
At the end of this paper, we would like to suggest a possible improvement on Theorem \ref{main}. We begin with a definition.
\begin{definition}\normalfont
Let $f\in \mathbb{F}_p[x]$ be a monic polynomial with irreducible factorization $f =\prod_{1\le i\le r}f_i^{e_i}$.
We define the \emph{square-root} of $f$, denoted by $\textup{sqrt} (f)$, to be $\prod_{1\le i\le r}f_i^{\lceil\frac{e_i}{2}\rceil}$.
\end{definition}
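Given the irreducible factorization over $\mathbb{F}_p$, the square-root is immediate to compute; a sketch (SymPy, with \texttt{poly\_sqrt} an illustrative name):
\begin{verbatim}
# Sketch: sqrt(f) = prod of f_i^{ceil(e_i/2)} over F_p (SymPy).
import sympy as sp

def poly_sqrt(f, p, x=sp.Symbol('x')):
    _, factors = sp.factor_list(f, modulus=p)
    g = sp.Poly(1, x, modulus=p)
    for fi, ei in factors:
        g *= sp.Poly(fi, x, modulus=p) ** (-(-ei // 2))  # ceil(ei/2)
    return g
\end{verbatim}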
We remind the reader that $(\textup{sqrt} (f))^2\neq f$ unless all $e_i$'s are even. Note that $\textup{sqrt} (f)$ is always a multiple of $\textup{sfp} (f)$, and they are equal precisely when all $e_i$ are either one or two. Thus, for any graph $G$ and prime $p$, we always have
\begin{equation}
\deg \textup{sfp} (\Phi_p(G;x))\le \deg \textup{sqrt}(\Phi_p(G;x)).
\end{equation}
While Lemma \ref{sameroots} tells us $\textup{sfp}(\Phi_p(G;x))$ divides $\chi(\restr{A}{\mathcal{N}(W^\textup{T})};x)$, it seems that the corresponding result also holds if we replace $\textup{sfp}(\Phi_p(G;x))$ by $\textup{sqrt} (\Phi_p(G;x))$. If we can show this improvement, then we can strengthen Inequality \eqref{basicupperbound} as
\begin{equation}
\deg \textup{sqrt}(\Phi_p(G;x))\le \textup{nullity}_p W(G),
\end{equation}
and moreover we can improve upon Theorem \ref{main} simply by replacing $\textup{sfp}(\Phi_p(G;x))$ with $\textup{sqrt}(\Phi_p(G;x))$. We write such a possible improvement on Theorem \ref{main} as the following conjecture.
\begin{conjecture}
\label{mainconj}
Let $G\in \mathcal{G}_n$ and $d_n$ be the last invariant factor of $W=W(G)$. Suppose that $d_n$ is square-free. If for each odd prime factor $p$ of $d_n$,
\begin{equation}\label{keyequconj}
\deg\textup{sqrt}(\Phi_p(G;x))=\textup{nullity}_p W,
\end{equation} then $G$ is DGS.
\end{conjecture}
\section*{Acknowledgments}
This work is supported by the National Natural Science Foundation of China (Grant Nos. 12001006, 11971376 and 11971406) and the Scientific Research Foundation of Anhui Polytechnic University (Grant No.\,2019YQQ024).
The most fundamental problem in solid-state physics is to understand
why elements (and most compounds) crystallize in ordered periodic
structures, for this forms the basis of
all of solid-state physics. While it is well known that
the driving principle behind this ordering is a lowering of the
ground-state energy of the material, and there has been significant progress
with {\it ab initio} methods to predict the ground-state properties of these
ordered phases in real materials, there still are no exactly solvable
models for crystal formation that describe the statistical-mechanical
mechanism behind the ordering of the electrons and ions on a periodic
lattice. Furthermore, it is not understood what the physical mechanisms are
that are necessary for creating a crystallized state. This crystallization
problem is ubiquitous; it also describes the statistical mechanics behind
binary-alloy formation or phase separation since the two problems can be
mapped onto each other (as described below), and it
may also describe the physics behind charge-stripe formation in the cuprates.
It may sound surprising that no solvable statistical-mechanical model for
crystallization exists, since a statistical-mechanical model for
magnetic order has been known ever since Onsager solved
the two-dimensional Ising model\cite{onsager}. Onsager's solution
produced a paradigm for understanding phase
transitions in many different physical systems and provided a textbook
example of much of the theory behind modern critical phenomena. In fact,
Lee and Yang\cite{lee_yang} modified the Ising model to consider the
magnetic order in an external magnetic field, and mapped the problem onto a
lattice gas, where the up spins denoted sites occupied by ions, and the down
spins denoted empty sites. Onsager's method of solution does not extend to
the case of a finite magnetic field, so no exact results are known for
the lattice gas, except in the case where the number of ions equals
one half the number of lattice sites, which corresponds to the zero-field
case. These models of crystallization neglect the electronic degrees of
freedom of the valence electrons, and hence are not directly applicable
to real materials such as metals and alloys.
It turns out
that the Ising model, and many other models for magnetism, simplify when
they are examined in high dimensions. In fact, the Ising model is solved by
a static mean field theory in four and higher dimensions.
A similar situation is expected for electronic problems, except they remain
nontrivial even in the infinite-dimensional limit\cite{metzner_vollhardt,GKKR}.
Metzner and Vollhardt showed that the electronic problem
requires a dynamical mean-field theory for its solution
in infinite dimensions. Furthermore,
a wide range of evidence indicates that this dynamical mean-field theory
provides a quantitative approximation to the solutions of correlated electron
problems in three dimensions (at least if one is not too close to a critical
point). In fact, it is precisely the nonuniversal properties (such as a
transition temperature) that the dynamical mean-field theory
determines accurately, and its solution provides a wealth of
information on the qualitative behavior of the model studied.
We employ the dynamical mean-field theory here to produce an
exact solution of the crystallization problem which includes the electronic
degrees of freedom.
The simplest model that can describe crystallization and include electronic
degrees of freedom is the spinless
Falicov-Kimball model\cite{falicov_kimball} which consists of two kinds
of particles: localized ions and itinerant (spinless) electrons.
The localized ions ($w_i=0$ or 1)
occupy sites on a lattice in real space with an energy $E$, and the electrons
can hop (with a
hopping integral $-t^*/[2\sqrt{d}]$) between
neighboring lattice sites. In addition, there is a screened Coulomb
interaction $U$ between electrons and ions that occupy the same lattice site.
Since the electrons do not interact with each other, the ``spin'' degree of
freedom is unimportant, and is neglected. The Hamiltonian is
\begin{equation}
H=-\frac{t^*}{2\sqrt{d}}\sum_{<i,j>}c_i^{\dagger}c_j+E\sum_iw_i
+U\sum_ic_i^{\dagger}c_iw_i,
\end{equation}
with $c_i^{\dagger}$ $(c_i)$ the creation (annihilation) operator for
electrons at site $i$, and $w_i$ denoting the ion occupancy at site $i$.
We use $t^*=1$ as the energy scale.
The Falicov-Kimball model can be viewed as a simplified approximation to
a real material in a variety of ways. If the material has a single valence
electron, and only one electronic band lies near the Fermi level, then
the crystallization problem would correspond to the case where
the electron and ion concentrations ($\rho_e$ and $\rho_i$) are the same
(which is called the neutral case), since one electron is donated
by each ion.
If, instead, there are many bands near the Fermi level, then one can map
the combined bands into a single ``effective'' band which will have
an electron filling determined by the average filling of the electrons in
the most important band. In this case, each ion may donate only a fraction of
an electron to the crystal, because the rest of the electron goes into other
hybridized bands that lie close to the Fermi level. Hence, one may find
it useful to also consider nonneutral cases for the crystallization
problem, where the electron and ion concentrations are not equal.
This model can also be mapped
onto the binary alloy problem, where a site occupied by an ion is mapped to
a site occupied by an $A$ ion and a site unoccupied by an ion is mapped
to a site occupied by a $B$ ion, and the screened Coulomb interaction is
mapped to the difference in the site energies for electrons on an $A$
ion versus on a $B$ ion.
As it stands, the Falicov-Kimball model doesn't appear to be a many-body
problem at all, since the ions are localized and do not move, which implies that
the quantum-mechanical problem for the electrons
can be solved by diagonalizing a single-particle
problem of an electron moving in the potential determined by the
given configuration of the ions $\{w_i\}$. The many-body problem aspects
enter by taking an annealed average over all possible ion configurations with
the chosen ion concentration. This produces long-range
interactions between the ions, that can cause them to order or phase
separate at low temperatures.
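To make this picture concrete, the following Python sketch (our own illustration on a small one-dimensional ring, whereas the analysis below is carried out in infinite dimensions; all parameter values are arbitrary) diagonalizes the single-particle problem for every frozen ion configuration and then weighs the configurations by their electronic free energies, which is exactly the annealed average described above:
\begin{verbatim}
import numpy as np
from itertools import combinations

# single-particle diagonalization for frozen ion configurations {w_i}
# on an L-site ring, followed by the annealed average; parameters are
# illustrative, with mu = U/2 the particle-hole symmetric point
L, t, U, beta = 8, 1.0, 4.0, 2.0
mu = U / 2.0

def electron_free_energy(w):
    h = np.diag(U * w)                       # ion potential on each site
    for i in range(L):                       # nearest-neighbor hopping
        h[i, (i + 1) % L] = h[(i + 1) % L, i] = -t
    eps = np.linalg.eigvalsh(h)
    return -np.sum(np.log1p(np.exp(-beta * (eps - mu)))) / beta

configs = list(combinations(range(L), L // 2))   # rho_i = 1/2
F = np.array([electron_free_energy(
        np.isin(np.arange(L), c).astype(float)) for c in configs])
weight = np.exp(-beta * (F - F.min()))
weight /= weight.sum()
print('weight of the alternating configuration:',
      weight[configs.index(tuple(range(0, L, 2)))])
print('largest weight over all configurations:', weight.max())
\end{verbatim}
Comparing the printed weights shows how the electronic free energy discriminates among ion configurations even though no direct ion-ion interaction was put in.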
Much is already known about the physics of the Falicov-Kimball model (as
reviewed by Gruber and Macris\cite{GM}). In
the neutral case where each particle concentration equals 1/2,
Lieb and Kennedy \cite{lieb_kennedy} and Brandt and Schmidt
\cite{brandt_schmidt} proved that the system always orders
in an alternating ``chessboard'' phase at a finite transition temperature
in all dimensions greater than 1. This ordered phase can
be interpreted as the transition from a high-temperature homogeneous
(liquid/gas) phase to a low-temperature ordered (solid) phase.
The appearance of a low-temperature ordered phase follows as a consequence of
the Pauli exclusion principle, since Lieb and Kennedy also showed that if
the itinerant particles were Bosons instead of electrons, they would clump
together and not form a periodically ordered ground state.
The Falicov-Kimball model is expected to be in the same universality class as
the Ising model, but, because of the electronic degrees of freedom, one
needs to solve the full statistical model to determine the ``effective
magnetic exchange parameters'' between different lattice sites.
The parameters can be extracted in a systematic expansion if the electronic
kinetic energy (the hopping term) is taken as a
perturbation,\cite{BKU,DFF,GMMU} but such an
analysis is only valid in the strong-coupling regime, and rapidly becomes
problematic. {\it It is precisely this complication that has frustrated
attempts at finding an exact solution to the crystallization problem
when electronic degrees of freedom are introduced.}
The one-dimensional limit of the Falicov-Kimball model has also been
extensively studied. Here there are no finite-temperature phase
transitions, but the system can have phase transitions in the ground state.
The first attempt at studying the one-dimensional Falicov-Kimball model
proceeded along the lines of {\it ab initio} band-structure calculations
for real materials---a small number of candidate ion configurations
were chosen for the ground state, and a restricted phase diagram was determined
for all structures within the subset \cite{freericks_falicov}.
The numerical solutions produced two conjectures: the first was a result for
the case where $\rho_e\ne 1-\rho_i$, which stated that if
the screened Coulomb interaction $U$ was large enough, then the
system would segregate into an empty lattice (with no ions and all the
electrons), and
a full lattice (with all the ions and no
electrons). The second was a generalization of the Peierls instability,
which says that in the small $U$ limit the system will order in such a fashion
that the ions produce a band structure that has a maximal gap at the Fermi
level. This first conjecture (the segregation principle)
was later proven to be true by Lemberger
\cite{lemberger} while the second conjecture was shown to be false if
the electron concentration was sufficiently far from half-filling. In
that case, the system would phase separate between the empty lattice,
and an optimally chosen ion structure that had the Fermi level lying
in the gap \cite{freericks_gruber_macris}.
The other limit that has been extensively studied is the large-dimensional
limit where Brandt and Mielsch \cite{brandt_mielsch} provided the
solution of the transition temperature as a function of $U$ for the
half-filled symmetric case. Their solution involves solving a coupled
set of transcendental equations which display first and second-order
phase transitions.
Freericks \cite{freericks} later showed that the model (on a hypercubic lattice)
also displayed incommensurate order and segregation.
There are two kinds of lattices that are usually investigated in the large
coordination-number limit: the hypercubic lattice, which is the generalization
of the cubic lattice to large dimensions; and the Bethe lattice, which is
a thermodynamic limit of the Cayley tree when the number of
nearest neighbors becomes large. The noninteracting band structure for the
hypercubic lattice produces a density of states
that is a Gaussian [$\rho_H(\epsilon)=\exp(-\epsilon^2)/\sqrt{\pi}$],
while on the Bethe lattice the density of states is Wigner's semicircle
$[\rho_B(\epsilon)=\sqrt{4-\epsilon^2}/(2\pi)]$. The hypercubic density of
states has an infinite bandwidth, but most of the weight lies within a range of
$\pm2$ about the origin. The Bethe lattice density of states has the same
behavior as a three-dimensional system at the band edge (square-root behavior)
but has no van Hove singularities in the interior of the band.
Because both densities of
states are nontrivial, the many-body problem maintains much of its
rich behavior that arises from the competition between kinetic-energy
effects and interaction-energy effects. In particular, the Falicov-Kimball
model continues to have phase transitions
in the large coordination number limit, but the transitions have
mean-field theory exponents.
In this contribution, we examine
what happens in the case when the Coulomb interaction
becomes infinite $U\rightarrow\infty$ (the attractive case is
equivalent to this case through a particle-hole transformation of the electrons,
which carries $\rho_e\rightarrow 1-\rho_e$). In this case, the electrons
avoid the sites of the lattice occupied by the ions, so the electron
concentration varies from zero up to $1-\rho_i$. We investigate the
non-unit-density cases,
where the electron concentration is restricted to $0\le \rho_e <1-\rho_i$.
In Section II the formalism and results for calculations on the Bethe lattice
are presented. In Section III, results for the hypercubic lattice are given
and in Section IV we present our conclusions.
\section{Formalism and Results for the Bethe Lattice}
In the thermodynamic limit, the local lattice Green's function is defined to be
\begin{equation}
G_n=G(i\omega_n)=-\int_0^{\beta}d\tau e^{i\omega_n\tau}\frac{Tr<e^{-\beta
(H-\mu N)}
T_{\tau} c(\tau)c^{\dagger}(0)>}{Tr<e^{-\beta (H-\mu N)}>},
\label{eq: greendef}
\end{equation}
where $i\omega_n=i\pi T(2n+1)$ is the Fermionic Matsubara frequency,
$\beta=1/T$ is the inverse temperature, $\mu$ is the electron chemical
potential, and $T_{\tau}$ denotes $\tau$-ordering. The angle brackets in
Eq.~(\ref{eq: greendef}) denote the sum over ionic configurations.
The local Green's function is determined by mapping onto an atomic problem
in a time-dependent field, with the following action
\begin{equation}
S_{at}=\int_0^{\beta}d\tau\int_0^{\beta}d\tau^{\prime}c^{\dagger}(\tau)
G_0^{-1}(\tau-\tau^{\prime})c(\tau^{\prime})+U\int_0^{\beta}d\tau
c^{\dagger}(\tau)c(\tau)w+Ew,
\label{eq: action}
\end{equation}
where $w=0, 1$ is the ion number for the atomic site
and $G_0^{-1}$ is the mean-field or effective-medium
Green's function, which is determined self-consistently (as described below).
The atomic Green's function, with the action in Eq.~(\ref{eq: action}),
is computed to be
\begin{equation}
G_n=\frac{1-\rho_i}{G_0^{-1}(i\omega_n)}+\frac{\rho_i}{G_0^{-1}(i\omega_n)-U},
\label{eq: greenatomic}
\end{equation}
with $\rho_i$ the average ion density $<w>$. On the other hand, the local
lattice Green's function satisfies
\begin{equation}
G_n=\int_{-\infty}^{\infty}d\epsilon \frac{\rho(\epsilon)}{i\omega_n+\mu-
\Sigma_n-\epsilon},
\label{eq: greenlocal}
\end{equation}
where $\rho(\epsilon)$ is the noninteracting density of states for the
infinite lattice and $\Sigma_n$ is the self-energy. The self-consistency
relation is that the self-energy $\Sigma_n$ in Eq.~(\ref{eq: greenlocal})
must coincide with the self-energy of the atomic problem, i.~e.
\begin{equation}
\Sigma(i\omega_n)=G_0^{-1}(i\omega_n)-G_n^{-1}.
\label{eq: sigmadyson}
\end{equation}
Equations (\ref{eq: greenatomic}), (\ref{eq: greenlocal}), and
(\ref{eq: sigmadyson}) constitute the mean-field theory for homogeneous
phases. In the limit $d\rightarrow\infty$ Eq.~(\ref{eq: sigmadyson})
is an exact equation for the lattice problem. We note that for periodic
phases, if they exist, one needs to replace the atomic problem by a more
complicated many-site problem\cite{bethe_periodic}.
These equations are complicated to solve analytically but a simplification
occurs for $0\le\rho_e\le1-\rho_i$ in the limit $U\rightarrow\infty$.
Indeed, when $U$ is large the spectrum of the Hamiltonian consists of two
bands separated by a gap of order $U$ for electron fillings that satisfy
$0\le\rho_e\le1-\rho_i$. In this case the chemical potential lies within the
lower band, so that $\mu$ is $O(1)$. We note that $G_0$ is a function of
$\mu$, and therefore for any finite $\mu$, $1/[G_0(i\omega_n)U]\rightarrow 0$
as $U\rightarrow\infty$. Then Eq.~(\ref{eq: greenatomic}) becomes
\begin{equation}
G_n=(1-\rho_i)G_0(i\omega_n),
\label{eq: greeninfinity}
\end{equation}
and substituting
Eq.~(\ref{eq: greeninfinity}) into Eq.~(\ref{eq: sigmadyson}) and solving for
the self energy, then yields
\begin{equation}
\Sigma_n=-\frac{\rho_i}{G_n},
\label{eq: sigmainfinity}
\end{equation}
for the relation between the local self energy and the Green's function.
Hence, in the limits $U\rightarrow\infty$ and $d\rightarrow\infty$ the equations
for the homogeneous phase reduce to Eqs.~(\ref{eq: greenlocal}) and
(\ref{eq: sigmainfinity}).
In the case of the
Bethe lattice, $\rho_B(\epsilon)=\sqrt{4-\epsilon^2}/(2\pi)$, for
$-2<\epsilon <2$, so that
the integral in Eq.~(\ref{eq: greenlocal}) can be performed analytically
\begin{equation}
G_n=\frac{i\omega_n+\mu-\Sigma_n}{2}-\frac{1}{2}
\sqrt{(i\omega_n+\mu-\Sigma_n)^2-4}.
\label{eq: greenbethe}
\end{equation}
Substituting the result from Eq.~(\ref{eq: sigmainfinity}) into
Eq.~(\ref{eq: greenbethe}) and solving for $G_n$ yields the exact result for the
interacting Green's function in the strongly correlated limit
\begin{equation}
G_n=\frac{i\omega_n+\mu}{2}-\frac{1}{2}\sqrt{(i\omega_n+\mu)^2-4(1-\rho_i)},
\label{eq: greeninteracting}
\end{equation}
where the phase of the square root is chosen so that the Green's function
has the correct sign for its imaginary part.
This form is identical to that of a noninteracting Green's function
[Eq.~(\ref{eq: greenbethe}) with $\Sigma_n=0$], with a bandwidth narrowed
from 4 to $4\sqrt{1-\rho_i}$, and containing a spectral weight of $1-\rho_i$
(since the remaining spectral weight is shifted to infinite energies).
This is easiest seen from the interacting density of states, which
satisfies\cite{vandongen}
\begin{equation}
\rho_B^{int}(\epsilon)=\frac{1}{2\pi}\sqrt{4(1-\rho_i)-\epsilon^2}.
\label{eq: dosbethe}
\end{equation}
Note that in the infinite-interaction-strength limit, we have an analytic
form for the Green's functions, and do not need to iteratively solve
transcendental equations as is normally done in the finite-$U$
case.\cite{brandt_mielsch}
Furthermore, even though the Green's function has the same form as a
noninteracting Green's function, the self-energy is nontrivial and does
not correspond to a Fermi liquid!
This form for the Green's function fits a rather simple physical picture.
The electron avoids sites occupied by an ion when $U\rightarrow\infty$,
so the number of available sites is reduced by the fraction $1-\rho_i$.
This means, on average, the number of nearest neighbors is reduced by
the same factor, which reduces the bandwidth by $\sqrt{1-\rho_i}$. The
total spectral weight is also reduced from 1 to $1-\rho_i$, because the
upper band (with $\rho_i$ states) is located at infinite energy. What is
surprising is that this ``hand-waving'' argument is exact for the Bethe
lattice (we will see below it is a good approximation for the hypercubic
lattice, but is not exact).
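This closed form is easy to confirm numerically. The sketch below (Python/NumPy; the filling, chemical potential, and temperature are illustrative) iterates Eqs.~(\ref{eq: greenlocal}) and (\ref{eq: sigmainfinity}) on the Matsubara axis, starting from $\Sigma_n=0$, and compares the converged solution with Eq.~(\ref{eq: greeninteracting}):
\begin{verbatim}
import numpy as np

# fixed-point iteration of Eqs. (greenlocal) and (sigmainfinity) on the
# Bethe lattice, checked against the closed form (greeninteracting)
rho_i, mu, T, n_max = 0.65, 0.3, 0.05, 512
z = 1j * np.pi * T * (2 * np.arange(n_max) + 1) + mu   # i w_n + mu, w_n > 0

def branch_sqrt(w, zeta):
    # square root with the branch fixed by the sign of Im(zeta)
    s = np.sqrt(w)
    return np.where(s.imag * zeta.imag < 0.0, -s, s)

Sigma = np.zeros_like(z)
for _ in range(500):
    zeta = z - Sigma
    G = (zeta - branch_sqrt(zeta**2 - 4.0, zeta)) / 2.0   # Eq. (greenbethe)
    Sigma = 0.5 * Sigma - 0.5 * rho_i / G                 # mixed update

G_exact = (z - branch_sqrt(z**2 - 4.0 * (1.0 - rho_i), z)) / 2.0
print('max |G - G_exact| =', np.abs(G - G_exact).max())
\end{verbatim}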
The interacting density of states is temperature-independent\cite{temp_ind}
in the local approximation, which means that we can examine the ground state
at $T=0$ to see if the system phase separates, or if the homogeneous
phase is lowest in energy. The ground state energy for an ion concentration
$\rho_i$ and an electron concentration $\rho_e$ is
\begin{equation}
E(\rho_e,\rho_i)=\int_{-\infty}^{\mu}d\epsilon\rho_B^{int}(\epsilon)\epsilon ,
\label{eq: gs_energy_def}
\end{equation}
with $\mu$ the chemical potential defined by
\begin{equation}
\rho_e=\int_{-\infty}^{\mu}d\epsilon\rho_B^{int}(\epsilon) ,
\label{eq: gs_mu_def}
\end{equation}
and $\rho_B^{int}$ the interacting density of states. Substituting in the
exact result from Eq.~(\ref{eq: dosbethe}) yields
\begin{equation}
E(\rho_e,\rho_i)=-\frac{4}{3\pi}(1-\rho_i)^{3/2}\left [ 1-
\frac{\mu^2}{4(1-\rho_i)}\right ]^{3/2},
\label{eq: gs_energy}
\end{equation}
and
\begin{equation}
\rho_e=\frac{1-\rho_i}{\pi}\left [ \cos^{-1}
\left ( \frac{-\mu}{2\sqrt{1-\rho_i}}\right )
+\frac{\mu}{2\sqrt{1-\rho_i}}\sqrt{1-\frac{\mu^2}{4(1-\rho_i)}}\right ].
\label{eq: gs_mu}
\end{equation}
Using Eqs.~(\ref{eq: gs_energy}) and (\ref{eq: gs_mu}), we will show that the
mixture of the state with no ions and an electron filling $\rho_e/(1-\rho_i)$
with the state with all ions and no electrons has a lower energy than the
homogeneous state, i.~e.
\begin{equation}
E(\rho_e,\rho_i)>(1-\rho_i)E(\frac{\rho_e}{1-\rho_i},0)+\rho_iE(0,1).
\label{eq: gs_ineq}
\end{equation}
Moreover, from Eq.~(\ref{eq: gs_ineq}) we will deduce that the mixture
corresponding to the right hand side of Eq.~(\ref{eq: gs_ineq}) has lower
energy than any other mixture between homogeneous states.
In other words,
\begin{equation}
\alpha E(\rho_e^{\prime},\rho_i^{\prime})+(1-\alpha)E(\rho_e^{\prime\prime},
\rho_i^{\prime\prime})>(1-\rho_i)E(\frac{\rho_e}{1-\rho_i},0)+\rho_iE(0,1),
\label{eq: gs_ineq2}
\end{equation}
where $0\le\alpha\le 1$ and $\rho_e=\alpha\rho_e^{\prime}+(1-\alpha)
\rho_e^{\prime\prime}$, $\rho_i=\alpha\rho_i^{\prime}+(1-\alpha)
\rho_i^{\prime\prime}$, $0<\rho_e^{\prime}<1-\rho_i^{\prime}$, and
$0<\rho_e^{\prime\prime}<1-\rho_i^{\prime\prime}$. To obtain
Eq.~(\ref{eq: gs_ineq}),
we first notice that $E(0,1)=0$ and that the chemical potential
$\bar\mu$ corresponding to an electron filling of $\rho_e/(1-\rho_i)$
and an ion filling of zero
is $\bar\mu=\mu/\sqrt{1-\rho_i}$, as can be seen from
Eq.~(\ref{eq: gs_mu}). Therefore, Eq.~(\ref{eq: gs_energy}) yields
\begin{equation}
(1-\rho_i)E(\frac{\rho_e}{1-\rho_i},0)+\rho_iE(0,1)=
-\frac{4}{3\pi}(1-\rho_i)\left [ 1-
\frac{\mu^2}{4(1-\rho_i)}\right ]^{3/2}
=\frac{1}{\sqrt{1-\rho_i}}E(\rho_e,\rho_i)<E(\rho_e,\rho_i),
\label{eq: gs_maxwell}
\end{equation}
which proves Eq.~(\ref{eq: gs_ineq}).
The proof of Eq.~(\ref{eq: gs_ineq2}) relies on an application of
Eq.~(\ref{eq: gs_ineq})
\begin{eqnarray}
&\alpha& E(\rho_e^{\prime},\rho_i^{\prime})+(1-\alpha)E(\rho_e^{\prime\prime},
\rho_i^{\prime\prime})>\cr
&\alpha& \left [ (1-\rho_i^{\prime})E(\frac{\rho_e^{\prime}}{1-\rho_i^{\prime}},
0)+\rho_i^{\prime}E(0,1)\right ]
+(1-\alpha)
\left [ (1-\rho_i^{\prime\prime})E(\frac{\rho_e^{\prime\prime}}{1-
\rho_i^{\prime\prime}}, 0)+\rho_i^{\prime\prime}E(0,1)\right ].
\label{eq: gs_ineq3}
\end{eqnarray}
The right hand side of Eq.~(\ref{eq: gs_ineq3}) is equal to
\begin{equation}
(1-\rho_i)\left [ \frac{\alpha (1-\rho_i^{\prime})}{1-\rho_i}
E(\frac{\rho_e^{\prime}}{1-\rho_i^{\prime}},0)+
\frac{(1-\alpha)(1-\rho_i^{\prime\prime})}{1-\rho_i}
E(\frac{\rho_e^{\prime\prime}}{1-\rho_i^{\prime\prime}}, 0)\right ]
+\rho_iE(0,1).
\label{eq: gs_ineq4}
\end{equation}
On the other hand, $E(\rho_e,0)$ is a convex function of $\rho_e$, so
the term inside the brackets in Eq.~(\ref{eq: gs_ineq4}) is greater than
$E(\rho_e/[1-\rho_i],0)$, which yields Eq.~(\ref{eq: gs_ineq2}). We remark
that the convexity of $E(\rho_e,0)$ is obvious from the fact that the free
electron system cannot phase separate. Formally, it can be seen as follows:
differentiating Eqs.~(\ref{eq: gs_energy}) and (\ref{eq: gs_mu}) with respect
to $\rho_e$ gives $E^{\prime}(\rho_e,0)=\mu\rho_B(\mu)\partial\mu/\partial
\rho_e$ and $1=\rho_B(\mu)\partial\mu/\partial\rho_e$. Thus
$E^{\prime}(\rho_e,0)=\mu$ and $E^{\prime\prime}(\rho_e,0)=\partial\mu/
\partial\rho_e=1/\rho_B(\mu)>0$.
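These inequalities can also be checked directly. The following Python sketch (illustrative fillings) inverts Eq.~(\ref{eq: gs_mu}) for the chemical potential, evaluates Eq.~(\ref{eq: gs_energy}), and compares the homogeneous ground-state energy with the segregated mixture on the right hand side of Eq.~(\ref{eq: gs_ineq}):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu_of(rho_e, rho_i):
    b = 2.0 * np.sqrt(1.0 - rho_i)              # half-bandwidth
    f = lambda m: (1.0 - rho_i) / np.pi * (np.arccos(-m / b)
        + (m / b) * np.sqrt(1.0 - (m / b)**2)) - rho_e
    return brentq(f, -b + 1e-12, b - 1e-12)     # invert Eq. (gs_mu)

def E(rho_e, rho_i):                            # Eq. (gs_energy)
    if rho_e <= 0.0:
        return 0.0
    m = mu_of(rho_e, rho_i)
    return -4.0 / (3.0 * np.pi) * (1.0 - rho_i)**1.5 \
           * (1.0 - m**2 / (4.0 * (1.0 - rho_i)))**1.5

rho_e, rho_i = 0.15, 0.55                       # illustrative fillings
lhs = E(rho_e, rho_i)
rhs = (1.0 - rho_i) * E(rho_e / (1.0 - rho_i), 0.0) + rho_i * E(0.0, 1.0)
print(lhs, '>', rhs, ':', lhs > rhs)            # homogeneous vs. mixture
\end{verbatim}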
Our interest now is to determine the finite-temperature
phase diagram of the infinite-$U$
Falicov-Kimball model since we know the system always phase separates
at low temperature (although we have not yet ruled out the possibility
of charge-density-wave phases being lower in energy than the phase-separated
ground state). The first step is to evaluate the conduction
electron charge-density-wave susceptibility. It is often stated that the
Bethe lattice can only support antiferromagnetic or uniform order---no
incommensurate or other ``periodic'' phases can exist. But this statement
has never been proven, and recent work has shown it to be
false\cite{bethe_periodic} by a counterexample of a period-three phase
stabilized on the infinite-dimensional Bethe lattice at zero temperature.
The momentum dependence
enters the dressed susceptibility only through the momentum dependence of
the bare susceptibility because the vertex function is local in the
infinite-dimensional limit. This allows us to simply take
the $U\rightarrow\infty$ limit
of the Brandt-Mielsch result\cite{brandt_mielsch}, which gives
\begin{equation}
1=\rho_i\sum_{n=-\infty}^{\infty}\frac{G_n^2+\chi_n^0(X)}{G_n^2+\rho_i
\chi_n^0(X)},
\label{eq: chitc}
\end{equation}
with $X$ being the parameter that determines the modulation
of the charge-density-wave over the Bethe lattice
and with $\chi_n^0(X)$ the corresponding bare susceptibility.
We do not provide the general formula for all possible charge-density waves
here. Rather, we present the
three simplifying cases for the susceptibility on the Bethe lattice:
(i) the local susceptibility, where $\chi_n^0(local)=-G_n^2$; (ii)
the ($X=-1$) ``antiferromagnetic'' susceptibility, where
\begin{equation}
\chi_n^0(-1)=-\frac{G_n}{i\omega_n+\mu-\Sigma_n};
\label{eq: chi0bethe-1}
\end{equation}
and (iii) the ($X=1$) uniform susceptibility, where
\begin{equation}
\chi_n^0(1)=\frac{\partial G_n}{\partial\mu}=-\frac{G_n}{\sqrt{
(i\omega_n+\mu-\Sigma_n)^2-4}}.
\label{eq: chi0bethe1}
\end{equation}
The local susceptibility never has a transition, because the numerator
of Eq.~(\ref{eq: chitc}) vanishes. The condition for an ``antiferromagnetic''
charge density wave becomes
\begin{equation}
1=\rho_i\sum_{n=-\infty}^{\infty}\frac{G_n}{i\omega_n+\mu},
\label{eq: tcaf}
\end{equation}
after substituting in the infinite-$U$ form for the self-energy, and using the
quadratic equation $G_n^2-(i\omega_n+\mu)G_n+1-\rho_i=0$ that the interacting
Green's function satisfies. Now, substituting the integral form for $G_n$
\begin{equation}
G_n=(1-\rho_i)\int_{-\infty}^{\infty}d\epsilon\frac{\rho_B(\epsilon)}
{i\omega_n+\mu-\sqrt{1-\rho_i}\epsilon},
\label{eq: greenintegral}
\end{equation}
into Eq.~(\ref{eq: tcaf}) and performing the sum over Matsubara frequencies
yields the final integral form for $T_c$
\begin{equation}
1=-\frac{\rho_i\sqrt{1-\rho_i}}{2T}\int_{-2\sqrt{1-\rho_i}}^{2\sqrt{1-\rho_i}}
\frac{dz}{z}\frac{\rho_B\left ( \frac{z}{\sqrt{1-\rho_i}}\right )
\tanh\frac{\beta z}{2}}
{\cosh^2\frac{\beta\mu}{2}(1-\tanh\frac{\beta\mu}{2}\tanh\frac{\beta z}{2})},
\label{eq: tcafint}
\end{equation}
(see Appendix A).
But this integrand is positive for all $z$, so the right hand side is always
less than zero, and there is no ``antiferromagnetic'' $T_c$. The staggered
charge-density-wave order has been found near half-filling
$\rho_i=\rho_e=1/2$ when the lowest-order exchange for finite-$U$ is
included,\cite{letfulov} but can only occur at $T=0$ and $\rho_i=\rho_e=1/2$
when $U=\infty$.
The uniform susceptibility case is analyzed as follows: First the uniform
susceptibility from Eq.~(\ref{eq: chi0bethe1}) is substituted into
Eq.~(\ref{eq: chitc}), and the square root is eliminated by using the
exact form for the interacting Green's function in
Eq.~(\ref{eq: greenbethe}). Next, the self energy is replaced by its exact form
from Eq.~(\ref{eq: sigmainfinity}), and the quadratic equation for $G_n$ is used
to simplify the $T_c$ equation to
\begin{equation}
1=\rho_i\sum_{n=-\infty}^{\infty}\left [ 1+\frac{1-\rho_i}{(i\omega_n+\mu)
G_n-2(1-\rho_i)}\right ].
\label{eq: tc2}
\end{equation}
Now the interacting form for $G_n$ from Eq.~(\ref{eq: greeninteracting}) is
substituted into Eq.~(\ref{eq: tc2}) and the results simplified to yield
\begin{equation}
1=\rho_i\sum_{n=-\infty}^{\infty}\frac{(i\omega_n+\mu)G_n-2(1-\rho_i)}
{(i\omega_n+\mu)^2-4(1-\rho_i)}.
\label{eq: tc3}
\end{equation}
The final step is to substitute in the integral form for $G_n$ from
Eq.~(\ref{eq: greenintegral}) and perform the summation over Matsubara
frequencies (see Appendix A). After making a trigonometric substitution,
the transcendental equation for $T_c$ becomes
\begin{equation}
1=\frac{\rho_i\sqrt{1-\rho_i}}{2\pi T}\int_0^{\pi}d\theta\cos\theta\tanh
\frac{\beta}{2}(2\sqrt{1-\rho_i}\cos\theta-\mu).
\label{eq: tctrans}
\end{equation}
We do not discuss any of the other periodic cases here, because the numerics
involved are cumbersome. But we expect the Bethe lattice to have similar
behavior as the hypercubic lattice, where the transition always went into
the uniform charge-density wave, signifying a phase separation transition.
Details of the other periodic phases will be reported in a future
publication.
The results for the transition temperature for the uniform charge-density wave
are presented in
Figure~\ref{fig: bethe_tc}(a). We choose nine different ion concentrations
ranging from 0.1 to 0.9 in steps of 0.1. The electron density then
varies from 0 to $1-\rho_i$ for each case. As can be seen in the figure, the
maximal transition temperature is about $0.12t^*$ and it occurs at
half-filling of the lower band $\rho_e=(1-\rho_i)/2$ with $\rho_i\approx 0.65$
(coincidentally, this maximal transition temperature is nearly identical to the
maximal $T_c$ to charge-density-wave order at $\rho_e=\rho_i=1/2$ when
evaluated as a function of the interaction strength $U$).
Since $T_c\ll 1$, we expand Eq.~(\ref{eq: tctrans}) for small $T$ by replacing
the $\tanh x$ by ${\rm sgn} x$ to find
\begin{equation}
T_c\approx\frac{\rho_i\sqrt{1-\rho_i}}{\pi}\sqrt{1-\frac{\mu^2}{4(1-\rho_i)}}.
\label{eq: tcapprox}
\end{equation}
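Both Eq.~(\ref{eq: tctrans}) and this estimate are easily evaluated. The sketch below (Python/SciPy) brackets and bisects the transcendental equation at relative half filling, where $\mu=0$, and compares the result with Eq.~(\ref{eq: tcapprox}); for $\rho_i=0.65$ it reproduces the maximal $T_c\approx 0.12$ quoted above:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# bisection of Eq. (tctrans) at relative half filling (mu = 0), compared
# with the small-T estimate Eq. (tcapprox)
def rhs(T, rho_i, mu=0.0, n=4000):
    th = (np.arange(n) + 0.5) * np.pi / n        # midpoint grid on [0, pi]
    arg = (2.0 * np.sqrt(1.0 - rho_i) * np.cos(th) - mu) / (2.0 * T)
    f = np.cos(th) * np.tanh(arg)
    return (rho_i * np.sqrt(1.0 - rho_i) / (2.0 * np.pi * T)
            * f.sum() * np.pi / n)

for rho_i in (0.3, 0.5, 0.65):
    Tc = brentq(lambda T: rhs(T, rho_i) - 1.0, 1e-4, 1.0)
    Tc_est = rho_i * np.sqrt(1.0 - rho_i) / np.pi   # Eq. (tcapprox), mu = 0
    print(rho_i, Tc, Tc_est)
\end{verbatim}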
Since the chemical potential will scale with $\sqrt{1-\rho_i}$ for the same
relative electron filling in the lower band $[\rho_e/(1-\rho_i)]$, as shown
in Eq.~(\ref{eq: gs_mu}), this form motivates a scaling
plot of $T_c/(\rho_i\sqrt{1-\rho_i})$ versus $\rho_e/(1-\rho_i)$, which
appears in Figure~\ref{fig: bethe_tc}(b). As can be seen there, the data nearly
collapse on top of each other for $T_c$ {\it which is usually a nonuniversal
quantity}. In fact, the variation in $T_c$ is less than 10\% for all different
cases.
The susceptibility analysis shows that the system orders in a uniform
charge-density-wave, which indicates that the system will phase separate
(or segregate) into two regions, one with a higher concentration of electrons
and one with a lower concentration (as we already showed at $T=0$).
Such a phase separation is usually
associated with a first-order phase transition, rather than a second-order
transition. Hence, it is important to perform a Maxwell construction of
the free energy that includes mixtures of two states with different electron
and ion concentrations such that $\rho_e=\alpha\rho_e^{\prime}+(1-\alpha)
\rho_e^{\prime\prime}$,
$\rho_i=\alpha\rho_i^{\prime}+(1-\alpha)\rho_i^{\prime\prime}$, and that the
free energy of the
mixture $F(\rho_e^{\prime},\rho_i^{\prime};\rho_e^{\prime\prime},
\rho_i^{\prime\prime})=\alpha
F(\rho_e^{\prime},\rho_i^{\prime})+(1-\alpha)F(\rho_e^{\prime\prime},
\rho_i^{\prime\prime})$ is lower in energy than
the pure-phase free energy $F(\rho_e,\rho_i)$. The second-order phase
transition is the spinodal-decomposition temperature, below which the
free energy becomes locally unstable in the region of $(\rho_e,\rho_i)$;
in most cases the global free energy is minimized by the Maxwell construction
at a temperature above this spinodal-decomposition temperature. The
spinodal-decomposition temperature marks the lowest temperature that the system
can be supercooled to before it must undergo a phase transition.
We can calculate the free energy $F(\rho_e,\rho_i)$ for a homogeneous phase
with electron filling $\rho_e$ and ion concentration $\rho_i$ in two
equivalent ways. The first method is from Brandt and
Mielsch\cite{brandt_mielsch}, which expresses the free energy in terms of
a summation over Matsubara frequencies as follows:
\begin{eqnarray}
F(\rho_e,\rho_i)&=&-T\ln\frac{1+e^{\beta\mu}}{1-\rho_i}+\int_{-\infty}^{\infty}
d\epsilon\rho(\epsilon)T\sum_{n=-\infty}^{\infty}\ln\left [
\frac{i\omega_n+\mu}{(1-\rho_i)(i\omega_n+\mu-\Sigma_n-\epsilon)}\right ]\cr
&+&\mu\rho_e-\left (T\ln\frac{\rho_i}{1+\rho_i}+T\ln(1+e^{\beta\mu})+T
\sum_{n=-\infty}^{\infty}\ln\left [\frac{1-\rho_i}{(i\omega_n+\mu)G_n}\right ]
\right )\rho_i.
\label{eq: free_bm}
\end{eqnarray}
Similarly, we can evaluate the free energy in the same fashion as Falicov
and Kimball\cite{falicov_kimball} did
\begin{equation}
F(\rho_e,\rho_i)=T\int_{-\infty}^{\infty}d\epsilon
\rho^{int}(\epsilon)\ln\left [ \frac{1}{1+e^{-\beta(\epsilon-\mu)}}\right ]
+T\left [\rho_i\ln\rho_i+(1-\rho_i)\ln(1-\rho_i)\right ]+\mu\rho_e,
\label{eq: free_fk}
\end{equation}
where $\rho^{int}(\epsilon)=\sqrt{4(1-\rho_i)-\epsilon^2}/(2\pi)$ is
the interacting density of states for the Bethe lattice. We find that both
forms (\ref{eq: free_bm}) and (\ref{eq: free_fk}) are numerically equal
to each other, but are unable to show this result analytically. Since the
interacting density of states is known for the Bethe lattice, we use the
computationally simpler form in Eq.~(\ref{eq: free_fk}) in our calculations.
For the hypercubic lattice evaluated in Section III, we employ
Eq.~(\ref{eq: free_bm}) in the free-energy analysis.
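As an illustration, the following Python sketch (illustrative parameters) evaluates Eq.~(\ref{eq: free_fk}) on the Bethe lattice, fixing $\mu$ from the electron filling, and compares one homogeneous state with one (non-optimized) two-phase mixture at the same average concentrations:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit

# homogeneous free energy of Eq. (free_fk) on the Bethe lattice; mu is
# fixed by the electron filling at each (rho_e, rho_i, T)
def free_energy(rho_e, rho_i, T, n=4000):
    b = 2.0 * np.sqrt(1.0 - rho_i)                 # half-bandwidth
    eps = (np.arange(n) + 0.5) * 2.0 * b / n - b
    de = 2.0 * b / n
    dos = np.sqrt(np.maximum(b * b - eps * eps, 0.0)) / (2.0 * np.pi)
    filling = lambda m: np.sum(dos * expit(-(eps - m) / T)) * de - rho_e
    mu = brentq(filling, -b - 30.0 * T, b + 30.0 * T)
    band = -T * np.sum(dos * np.logaddexp(0.0, -(eps - mu) / T)) * de
    mix = T * (rho_i * np.log(rho_i) + (1.0 - rho_i) * np.log(1.0 - rho_i))
    return band + mix + mu * rho_e

T = 0.05
F_hom = free_energy(0.175, 0.5, T)
F_mix = 0.5 * free_energy(0.30, 0.1, T) + 0.5 * free_energy(0.05, 0.9, T)
print(F_hom, F_mix)   # F_mix < F_hom signals the two-phase instability
\end{verbatim}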
The numerical minimization proceeds in four phases: (i) First a coarse grid is
established for $\rho_i^{\prime}$ and $\rho_i^{\prime\prime}$ and the free
energy is minimized
over this grid [the electron fillings are determined by the constraints
that the chemical potential is the same in region 1 and region 2 and that
$\rho_e=\alpha\rho_e^{\prime}+(1-\alpha)\rho_e^{\prime\prime}$, with
$\alpha$ already determined
from $\rho_i=\alpha\rho_i^{\prime}+(1-\alpha)\rho_i^{\prime\prime}$];
(ii) The filling $\rho_i^{\prime\prime}$
is fixed at its coarse-grid minimal value, and $\rho_i^{\prime}$ is varied on
a finer
grid to determine the new minimum; (iii) $\rho_i^{\prime}$ is fixed at the new
minimum and $\rho_i^{\prime\prime}$ is now varied on a fine grid to yield a
new minimal
$\rho_i^{\prime\prime}$; (iv) $\rho_i^{\prime}$ and $\rho_i^{\prime\prime}$
are varied together on the same fine
grid to determine the final minimization of the Maxwell construction. We
found that the minimal values of $\rho_i^{\prime}$ and $\rho_i^{\prime\prime}$
rarely changed
in step (iv) confirming the convergence of this method.
We plot our results in Fig.~\ref{fig: bethe_free}. The first case considered
in Fig.~\ref{fig: bethe_free}(a)
is the case of relative half filling $\rho_e=(1-\rho_i)/2$. In this case the
chemical potential is always at zero, and the relative electron filling
remains unchanged for all $\rho_i$. The solid line is the first-order
transition line and the dotted line is the spinodal-decomposition
temperature. The horizontal distance between the solid lines at a fixed
temperature is a measure of the order parameter $\rho_i^{\prime}-\rho_i^{\prime
\prime}$. Notice
how the first-order transition temperature is always close to the
spinodal-decomposition temperature, but that the difference becomes largest at
concentrations close to zero and one. Note further how the two curves meet
at the maximum (as they must) where the first-order transition disappears
and becomes a second-order transition at a classical critical point.
In Fig.~\ref{fig: bethe_free}(b) we plot the phase diagram for the case
with $\rho_i=0.65$ and $\rho_e=(1-\rho_i)/4$. In this case the chemical
potential changes as a function of temperature, and as $T\rightarrow 0$
and the system is phase separating into regions with ion densities close
to zero and one, we find that the chemical potential will lie outside of the
bandwidth of the interacting density of states as $\rho_i^{\prime\prime}
\rightarrow 1$ because the bandwidth $(4\sqrt{1-\rho_i})$
becomes narrowed to zero. In that case, the electron density approaches
zero exponentially fast, which is why the spinodal-decomposition temperature
approaches zero so rapidly in that regime. For this reason, we find that
the relative electron filling is not a constant in this phase diagram,
as it approaches zero exponentially fast near $\rho_i^{\prime\prime}=1$ and it
is somewhat larger than quarter filled near $\rho_i^{\prime}=0$. The two
phase diagrams in Fig.~\ref{fig: bethe_free}(a) and (b)
look similar in the low-density regime, however. This may
imply that there is an analogous scaling regime for the first-order $T_c$,
but it does not look like there would be a universal curve for the region
close to $\rho_i^{\prime\prime}=1$. The numerical effort required to perform
the free-energy analysis is significant, so a thorough analysis of any possible
scaling forms for $T_c$ was not performed.
\section{Results for the hypercubic lattice}
The formalism for the hypercubic lattice is essentially unchanged from
the Bethe lattice. The main differences are that the integrals can no longer
be performed analytically, so the results must be worked out numerically,
which requires more computational effort. The basic framework
in Eqs.~(\ref{eq: greendef})---(\ref{eq: sigmainfinity}) is the same
as before, except that now the noninteracting density of states is a Gaussian
for the hypercubic lattice. The integral for the local Green's function
is no longer elementary, and so one needs to solve the problem iteratively
as was done previously for the Falicov-Kimball model: (i) first the
self energy is set equal to zero; (ii) next the local Green's function is
determined from Eq.~(\ref{eq: greenlocal}); (iii) then the self energy is
determined from Eq.~(\ref{eq: sigmainfinity}); (iv) then steps (ii) and (iii)
are repeated until the equations converge. We can compare the results
of the interacting Green's function to the form found before for the Bethe
lattice, by approximating the interacting density of states in the same
fashion as before: we narrow the Gaussian by the factor $\sqrt{1-\rho_i}$
and have a total weight of $1-\rho_i$ in the density of states. When we
compare the Green's function along the imaginary axis at half filling
in Fig.~\ref{fig: hyp_green}(a), we find that this approximation works well
at high energies, but begins to fail near zero frequency (we chose $\rho_e=1/6$,
$\rho_i=2/3$, and $T=0.1$). The solid line is the exact result and the
dotted line is the approximate (band-narrowed) result. The infinite-$U$
Green's function on the hypercubic lattice is more complicated than on the Bethe
lattice and the simple form that describes it for the Bethe lattice
no longer holds. This is the main reason why the hypercubic lattice
is more complicated to deal with than the Bethe lattice. To see this more
fully, we examine the interacting density of states in
Fig.~\ref{fig: hyp_green}(b). The interacting density of states is determined
by solving the same equations for the Green's function, but this time on the
real axis, rather than the imaginary axis. Notice how the band-narrowed form
$\sqrt{1-\rho_i}\exp(-\epsilon^2/[1-\rho_i])/\sqrt{\pi}$ (dotted line)
is a reasonable approximation to the interacting density of states (solid line)
but that it is too narrow, and it overestimates the peak height.
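The iteration itself is compact. In the sketch below (Python/SciPy; the chemical potential and the other parameters are illustrative), the Gaussian-DOS integral of step (ii) is evaluated with the Faddeeva function $w(z)$ through the identity $G(z)=-i\sqrt{\pi}\,w(z)$ for ${\rm Im}\,z>0$, and the converged solution is compared with the band-narrowed approximation:
\begin{verbatim}
import numpy as np
from scipy.special import wofz

# steps (i)-(iv) for the hypercubic lattice; the Hilbert transform of
# the Gaussian DOS is G(z) = -i sqrt(pi) w(z) for Im z > 0
rho_i, mu, T, n_max = 2.0 / 3.0, -0.5, 0.1, 512
z = 1j * np.pi * T * (2 * np.arange(n_max) + 1) + mu

Sigma = np.zeros_like(z)                        # step (i)
for _ in range(400):
    G = -1j * np.sqrt(np.pi) * wofz(z - Sigma)  # step (ii)
    Sigma = 0.5 * Sigma - 0.5 * rho_i / G       # step (iii), mixed update

# band-narrowed approximation: weight 1 - rho_i, width sqrt(1 - rho_i)
s = np.sqrt(1.0 - rho_i)
G_bn = -1j * np.sqrt(np.pi) * s * wofz(z / s)
print(np.abs(G - G_bn)[:4])   # the approximation is worst at low w_n
\end{verbatim}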
Since we do not know an analytic form for
the interacting density of states, we cannot
perform the same kind of analysis that we did before at zero temperature to
see if the system is phase separated. But we can examine the finite-temperature
phase diagrams in the same manner. The susceptibility diverges whenever
Eq.~(\ref{eq: chitc}) is satisfied. The bare susceptibility now
takes the form
\begin{equation}
\chi_n^0(X)=-\frac{1}{2\pi}\int_{-\infty}^{\infty}dy\,\rho(y)\frac{1}{i\omega_n
+\mu-\Sigma_n-y}\int_{-\infty}^{\infty}dz\rho(z)\frac{1}{i\omega_n+\mu-\Sigma_n
-yX-z\sqrt{1-X^2}},
\label{eq: chi0_hyp}
\end{equation}
where $X=\lim_{d\rightarrow\infty}\frac{1}{d}\sum_{j=1}^d\cos(k_j)$ for the ordering
wavevector {\bf k}. The bare susceptibility continues to assume a simple
form for the same three cases: (i) $X=0$ the local susceptibility where
$\chi_n^0(0)=-G_n^2$; (ii) $X=-1$ the ``antiferromagnetic'' susceptibility,
where $\chi_n^0(-1)=-G_n/(i\omega_n+\mu-\Sigma_n)$; and (iii) $X=1$ the
uniform susceptibility, where
\begin{equation}
\chi_n^0(1)=\frac{\partial G_n}{\partial\mu}=2[1-(i\omega_n+\mu-\Sigma_n)G_n].
\label{eq: chi0_uniform_hyp}
\end{equation}
If we try to approximate the transition temperature by substituting in the
approximate form we have for $G_n$ derived by assuming the interacting
density of states has the same shape, but is band narrowed, we find that
the $T_c$'s generated are not accurate at all. Hence, the simple
band-narrowing approximation
works reasonably well for the Green's function, but is poor for the
susceptibility.
Instead, we simply solve for the transition temperatures numerically. We
find in every case that we examined that the transition temperature is
always highest for $X=1$ and vanishes for all $X\le 0$. This is shown in
Fig.~\ref{fig: tc_x} for the cases of $\rho_e=1/6$, 1/12, 1/24, 1/48, 1/96,
1/192, 1/384, and 1/768 and $\rho_i=2/3$, which ranges from relative
half filling to the low-density regime. We
plot $T_c(X)$ and see that the system always favors the uniform charge-density
wave, signifying that the system wants to phase separate. We calculate the
spinodal-decomposition temperature for phase separation by finding the
temperature at which the uniform susceptibility diverges as a function
of $\rho_e$ and $\rho_i$. These temperatures are plotted in
Fig.~\ref{fig: tc_hyp_spinodal}(a). This plot looks similar to
what we found for the Bethe lattice before, so we try the same scaling
form in Fig.~\ref{fig: tc_hyp_spinodal}(b),
plotting $T_c/(\rho_i\sqrt{1-\rho_i})$ versus $\rho_e/(1-\rho_i)$.
Once again we see a data collapse, but the spread in the $T_c$'s is somewhat
larger than that seen in the Bethe lattice.
Finally, we calculate the full phase diagram for the case of relative
half filling $\rho_e=(1-\rho_i)/2$ where $\mu=0$ in Fig.~\ref{fig: tc_hyp_free}.
The form of this result is similar to what was seen in the Bethe lattice.
The first-order transition temperature and the spinodal-decomposition
temperature meet at the peak of the curve where the first-order transition
becomes second order. We did not perform a free-energy calculation at
other relative
fillings here, because the numerical solution was significantly more
difficult due to the fact that we needed to use Eq.~(\ref{eq: free_bm}) rather
than the computationally simpler Eq.~(\ref{eq: free_fk}).
\section{Conclusions and Discussion}
We have provided an exact solution to the spinless Falicov-Kimball model in
the strongly correlated limit of $U=\infty$. We only considered the
less-than-unit-density cases $0<\rho_e<1-\rho_i$, because they all satisfy
a similar functional form. On the Bethe lattice we found that the
system always phase separated at $T=0$ to states where the electrons all moved
to one part of the lattice, and the ions moved to the other part. The
spinodal-decomposition temperature for segregation solved a simple
transcendental equation, which we showed collapsed onto a scaling curve.
In addition, we solved for the first-order transition temperature for
a select number of cases and discovered that the first-order transition usually
occurred quite close to the spinodal-decomposition temperature. On the
hypercubic lattice, we found similar results, but had to carry the analysis out
numerically for all cases considered. We were able to explicitly show that
phase separation precluded incommensurate (or commensurate) charge-density-wave
order for the hypercubic lattice.
These results show that when the screened Coulomb interaction is large,
or in the alloy picture, when the A ions are extremely different from the
B ions, then the system will segregate at low temperatures. This proves
the segregation principle for the infinite-dimensional limit, and leads
us to believe that it holds for all dimensions (since it has also been
demonstrated in one dimension). Future problems to be investigated include
a study of the case $\rho_e+\rho_i=1$, as well as finite Coulomb interaction
$U$. In the unit-density
case, we expect charge-density-wave order to be more prevalent, perhaps
precluding the segregated phase for all $U$. When the strength of
the Coulomb interaction is reduced, we expect the segregated phase to
gradually disappear and be taken over by other phase-separated or
charge-density-wave ordered phases.
It is possible that the phenomenon described here incorporates the relevant
physics of the charge-stripe phases in the cuprate materials: that the
stripes occurred because of the strong propensity towards phase separation
in the strongly correlated limit. The analogy would stem from considering
the down-spin electrons of the Hubbard model to be frozen in a particular
configuration, and then examining how the mobile up-spin electrons react
to the down spins. The quantum fluctuations of the Hubbard model are
replaced by the thermal fluctuations of the Falicov-Kimball model, and
it can be viewed as a simplifying approximation to the charge dynamics of
the strongly correlated Hubbard model, but not incorporating the spin
dynamics. In this case, as postulated by
Emery and Kivelson,\cite{emery_kivelson} the stripes would form from a
balance between the desire for the system to phase separate, and the
long-range Coulomb interaction, which would prevent the electrons from
completely separating from the ions. There is evidence for alternative
points of view, however. White and collaborators\cite{white} have shown
that the Hubbard model on a ladder displays charge-stripe order even without
the long-range Coulomb interaction. This order arises from the correlation
of the spins and the holes and a desire to reduce the frustration induced
by the hole motion. In their picture, the stripe ordering arises completely
from a model that includes no long-range forces. Nevertheless,
it is our belief that
the phase separation exhibited here will be an important element of a
complete description of the charge-stripe order in the cuprates and
nickelates, because it must occur if $U$ becomes large enough.
\acknowledgments
J.K.F. acknowledges support of this work from
the Office of Naval Research Young Investigator Program N000149610828.
J.K.F. also acknowledges the hospitality he received at the Institut de
Physique Theorique in Lausanne, where this work was initiated.
|
1,108,101,565,765 | arxiv | \section{Introduction}
Synchronization of rhythmic dynamical elements exhibiting periodic oscillations is widely observed in the real world~\cite{winfree80,pikovsky01,strogatz03}. Collective oscillations arising from the mutual synchronization of dynamical elements often play important functional roles, such as the synchronized secretion of insulin from pancreatic beta cells~\cite{winfree80,sherman91} and synchronized oscillation of power generators~\cite{strogatz03,motter13,dorfler}. Clarifying the mechanisms of collective synchronization and devising efficient methods of mutual synchronization are thus both fundamentally and practically important.
The stable periodic dynamics of rhythmic elements are often modeled as limit-cycle oscillators~\cite{winfree80,pikovsky01,strogatz03,strogatz15}.
When mutual interactions between limit-cycle oscillators are weak, synchronization dynamics of the oscillators can be analyzed using the phase reduction theory~\cite{winfree80,kuramoto84,hoppensteadt97,ermentrout10,nakao16,ashwin16}.
In this approach, nonlinear multi-dimensional dynamics of an oscillator is reduced to a simple approximate phase equation, characterized by the natural frequency and phase sensitivity of the oscillator.
The phase reduction theory facilitates systematic and detailed analysis of synchronization dynamics. It has been used to explain nontrivial synchronization dynamics of
coupled oscillators, such as the collective synchronization transition of an ensemble of coupled oscillators~\cite{winfree80,kuramoto84,hoppensteadt97,ermentrout10,nakao16,ashwin16}.
Generalization of the method for non-conventional limit-cycling systems, such as time-delayed oscillators~\cite{novicenko12,kotani12}, hybrid oscillators~\cite{shirasaka17a,park18}, collectively oscillating networks~\cite{kawamura08}, and rhythmic spatiotemporal patterns~\cite{kawamura13,nakao14}, has also been discussed.
Recently, the phase reduction theory has been applied for the control and optimization of synchronization dynamics in oscillatory systems.
For example, Moehlis {\it et al.}~\cite{moehlis06}, Harada {\it et al.}~\cite{harada10}, Dasanayake and Li~\cite{dasanayake11}, Zlotnik {\it et al.}~\cite{zlotnik12,zlotnik13,zlotnik16}, Pikovsky~\cite{pikovsky15}, Tanaka {\it et al.}~\cite{tanaka14a,tanaka14b,tanaka15}, Wilson {\it et al.}~\cite{wilson2015}, Pyragas {\it et al.}~\cite{pyragas2018}, and Monga {\it et al.}~\cite{monga2018a,monga2018b} have used the phase reduction theory (as well as the phase-amplitude reduction theory) to derive optimal driving signals for the stable entrainment of nonlinear oscillators in various physical situations.
In a similar spirit, in our previous study~\cite{shirasaka17b}, we considered a problem of improving the linear stability of synchronized state between a pair of limit-cycle oscillators by optimizing a cross-diffusion coupling matrix between the oscillators, where different components of the oscillators are allowed to interact. We also considered a pair of mutually interacting reaction-diffusion systems exhibiting rhythmic spatiotemporal patterns and derived optimal spatial filters for stable mutual synchronization~\cite{kawamura17}.
In this study, we consider this problem in a more general setting, whereby the oscillators can interact not only by their present states but also through the time sequences of their past states.
We first consider linear coupling with time delay or temporal filtering and derive the optimal delay time or linear filter.
We then consider general nonlinear coupling with a mutual drive-response configuration and derive the optimal response function and driving function for stable synchronization. We argue that, although we consider general coupling that can depend on the past time sequences of the oscillators, the optimal mutual coupling can be obtained as a function of the present phase values of the oscillators in the framework of the phase-reduction approximation.
The results are illustrated by numerical simulations using Stuart-Landau and FitzHugh-Nagumo oscillators as examples.
This paper is organized as follows. In Sec. II, we introduce a general model of coupled limit-cycle oscillators and reduce it to coupled phase equations. In Sec. III, we consider the case with linear coupling and derive the optimal time delay and optimal linear filter for coupling signals. In Sec. IV, we consider nonlinear coupling of the drive-response type and derive the optimal response function and driving function. In Sec. V, a summary is provided.
\section{Model}
\subsection{Pair of weakly coupled oscillators}
In this study, we consider a pair of weakly and symmetrically coupled limit-cycle oscillators with identical properties, where the oscillators can mutually interact not only through their present states but also through their past time sequences.
We assume that the oscillators are generally described by the following equations:
\begin{align}
\label{eq1}
\dot{\bm X}_1(t) &= {\bm F}({\bm X}_1(t)) + \epsilon \hat{\bm H}\{ {\bm X}_1^{(t)}(\cdot), {\bm X}_2^{(t)}(\cdot) \},
\cr
\dot{\bm X}_2(t) &= {\bm F}({\bm X}_2(t)) + \epsilon \hat{\bm H}\{ {\bm X}_2^{(t)}(\cdot), {\bm X}_1^{(t)}(\cdot) \},
\end{align}
where ${\bm X}_{1, 2} \in {\mathbb R}^N$ are $N$-dimensional state vectors of the oscillators $1$ and $2$ at time $t$, ${\bm F} : {\mathbb R}^N \to {\mathbb R}^N$ is a sufficiently smooth vector field representing the dynamics of individual oscillators,
and
$\epsilon \hat{\bm H}$ represents weak mutual coupling between the oscillators.
Here, $\hat{\bm H} : C \times C \to {\mathbb R}^{N}$ ($C$ is a function space of the time sequences of length $T$) is a functional of two vector functions; i.e., the past time sequences of ${\bm X}_{1, 2}(t)$, and $0 < \epsilon \ll 1$ is a small parameter representing the smallness of the mutual coupling.
We assume that an isolated system, $\dot{\bm X} = {\bm F}({\bm X})$, has an exponentially stable limit cycle $\tilde{\bm X}_0(t) = \tilde{\bm X}_0(t+T)$ of period $T$ and frequency $\omega = 2\pi / T$, and the system state deviates only slightly from this limit cycle if weak perturbations are applied.
The time sequences ${\bm X}_{i}^{(t)}(\cdot)$ ($i=1,2$) in the coupling functional $\hat{\bm H}$ represent functions ${\bm X}_{i}(\tau)$ on the interval $[t-T, t]$ and are defined as
\begin{align}
{\bm X}_{i}^{(t)}(\tau) = {\bm X}_{i}(t + \tau) \quad (-T \leq \tau \leq 0).
\end{align}
In this study, we fix the length of the time sequences used for the coupling as the natural period $T$ of the limit-cycle oscillator.
Under the assumptions stated above, we can employ the standard method of phase reduction~\cite{winfree80,kuramoto84,hoppensteadt97,ermentrout10,nakao16,ashwin16} and introduce a phase function $\Theta({\bm X}) : {\mathbb R}^N \to [0, 2\pi)$, which satisfies $\dot\Theta({\bm X}) = {\bm F}({\bm X}) \cdot \nabla \Theta({\bm X}) = \omega$ in the basin of the limit cycle. Using $\Theta({\bm X})$, the phase variable of this oscillator can be defined as $\theta = \Theta( {\bm X})$, which constantly increases with time as $\dot{\theta} = \omega$, both on and in the basin of the limit cycle ($2\pi$ is identified with $0$ as usual).
The oscillator state on the limit cycle can be represented as a function of $\theta$ as ${\bm X}_0(\theta) = \tilde{\bm X}_0(t = \theta / \omega)$, which is a $2\pi$-periodic function of $\theta$, ${\bm X}_0(\theta) = {\bm X}_0(\theta+2\pi)$.
Similarly, to represent a time sequence on the limit cycle, we also introduce the notation
\begin{align}
{\bm X}_0^{(\theta)}(\tau) = {\bm X}_0(\theta + \omega \tau) \quad (- T \leq \tau \leq 0)
\end{align}
and abbreviate this as ${\bm X}_0^{(\theta)}(\cdot)$.
We hereafter use these notations to represent the system states and their time sequences on the limit cycle.
\subsection{Phase reduction and averaging}
Assuming that perturbations applied to the oscillator are sufficiently weak, the state vector of the oscillator can be represented approximately by ${\bm X}(t) \approx {\bm X}_0(\theta(t))$ as a function of the phase $\theta(t)$.
Then, the phase sensitivity function (PSF), defined by ${\bm Z}(\theta) = \nabla \Theta({\bm X})|_{{\bm X} = {\bm X}_0(\theta)} : [0, 2\pi) \to {\mathbb R}^N$, characterizes the linear response property of the oscillator phase to weak perturbations.
When the oscillator is weakly driven by a perturbation, ${\bm p}$, as $\dot{\bm X} = {\bm F}({\bm X}) + \epsilon {\bm p}$, the reduced phase equation is given by $\dot{\theta} = \omega + \epsilon {\bm Z}(\theta) \cdot {\bm p}$ up to $O(\epsilon)$.
The PSF can be calculated as a $2\pi$-periodic solution to an adjoint equation $\omega d{\bm Z}(\theta)/d\theta = - {\rm J}^{\dag}({\bm X}_0(\theta)) {\bm Z}(\theta)$, with a normalization condition ${\bm Z}(\theta) \cdot d{\bm X_0(\theta)}/{d\theta} = 1$, where ${\rm J}({\bm X}) : {\mathbb R}^N \to {\mathbb R}^{N \times N}$ is a Jacobian matrix of ${\bm F}$ at ${\bm X} = {\bm X}_0(\theta)$ and $\dag$ denotes transpose.
In the numerical analysis, ${\bm Z}(\theta)$ can be calculated easily using the backward time-evolution of the adjoint equation as proposed by Ermentrout~\cite{ermentrout10}.
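As a minimal illustration of this backward scheme (our own sketch, not the code used for the figures), the following Python script integrates the adjoint equation backward in time for the Stuart-Landau oscillator introduced below ($a=2$, $b=1$); for this model one can verify the closed form $Z(\theta)=(-\sin\theta-b\cos\theta,\ \cos\theta-b\sin\theta)$, which provides a check:
\begin{verbatim}
import numpy as np

# backward RK4 integration of  omega dZ/dtheta = -J^T Z  on the
# Stuart-Landau limit cycle X0(theta) = (cos theta, sin theta)
a, b = 2.0, 1.0
omega = a - b

def minus_JT_Z(theta, Z):
    x, y = np.cos(theta), np.sin(theta)
    J = np.array([[1 - 3*x*x - y*y + 2*b*x*y,
                   -a - 2*x*y + b*x*x + 3*b*y*y],
                  [a - 3*b*x*x - b*y*y - 2*x*y,
                   1 - x*x - 3*y*y - 2*b*x*y]])
    return -J.T @ Z

dX0dt = lambda th: omega * np.array([-np.sin(th), np.cos(th)])

n, periods = 2000, 5
dt = -2.0 * np.pi / omega / n                 # negative: backward in time
theta, Z = 0.0, np.array([0.0, 1.0])
for _ in range(periods * n):
    k1 = minus_JT_Z(theta, Z)
    k2 = minus_JT_Z(theta + 0.5*omega*dt, Z + 0.5*dt*k1)
    k3 = minus_JT_Z(theta + 0.5*omega*dt, Z + 0.5*dt*k2)
    k4 = minus_JT_Z(theta + omega*dt, Z + dt*k3)
    Z = Z + dt/6.0 * (k1 + 2*k2 + 2*k3 + k4)
    theta = (theta + omega*dt) % (2.0*np.pi)
    Z = Z * omega / (Z @ dX0dt(theta))        # enforce Z . dX0/dt = omega

Z_exact = np.array([-np.sin(theta) - b*np.cos(theta),
                    np.cos(theta) - b*np.sin(theta)])
print(Z, Z_exact)                             # agree after transients
\end{verbatim}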
Defining the phase values of the oscillators $1, 2$ as $\theta_{1, 2} = \Theta({\bm X}_{1, 2})$,
the following pair of equations can be derived from Eq.~(\ref{eq1}):
\begin{align}
\dot{\theta}_1(t) &= \omega + \epsilon {\bm Z}(\theta_1(t)) \cdot \hat{\bm H}\{ {\bm X}_1^{(t)}(\cdot), {\bm X}_2^{(t)}(\cdot) \},
\cr
\dot{\theta}_2(t) &= \omega + \epsilon {\bm Z}(\theta_2(t)) \cdot \hat{\bm H}\{ {\bm X}_2^{(t)}(\cdot), {\bm X}_1^{(t)}(\cdot) \},
\label{phase00}
\end{align}
which are correct up to $O(\epsilon)$.
Next, we use the fact that the deviation of each oscillator state from the limit cycle is small and of $O(\epsilon)$:
\begin{align}
{\bm X}_{1,2}^{(t)}(\tau) = {\bm X}_{0}^{(\theta_{1,2}(t))}(\tau) + O(\epsilon)
\quad
(-T \leq \tau \leq 0).
\end{align}
Substituting into Eq.~(\ref{phase00}) and ignoring errors of $O(\epsilon^2)$, we obtain a pair of reduced phase equations,
\begin{align}
\dot{\theta}_1 &= \omega + \epsilon {\bm Z}(\theta_1) \cdot \hat{\bm H}\{ {\bm X}_0^{(\theta_1)}(\cdot), {\bm X}_0^{(\theta_2)}(\cdot) \},
\cr
\dot{\theta}_2 &= \omega + \epsilon {\bm Z}(\theta_2) \cdot \hat{\bm H}\{ {\bm X}_0^{(\theta_2)}(\cdot), {\bm X}_0^{(\theta_1)}(\cdot) \},
\end{align}
which are also correct up to $O(\epsilon)$.
The coupling term, $\hat{\bm H}\{ {\bm X}_0^{(\theta_1)}(\cdot), {\bm X}_0^{(\theta_2)}(\cdot) \}$, is a functional of the two time sequences ${\bm X}_0^{(\theta_1)}(\cdot)$ and ${\bm X}_0^{(\theta_2)}(\cdot)$.
However, because we can neglect the deviations of the oscillator states from the limit cycle at the lowest order approximation, the functionals of ${\bm X}_0^{(\theta_1)}(\cdot)$ and ${\bm X}_0^{(\theta_2)}(\cdot)$ are actually determined solely by the two phase values $\theta_1$ and $\theta_2$.
Therefore, we can regard the coupling term as an ordinary function of $\theta_1$ and $\theta_2$, defined by
\begin{align}
{\bm H}(\theta_1, \theta_2) = \hat{\bm H}\{ {\bm X}_0^{(\theta_1)}(\cdot), {\bm X}_0^{(\theta_2)}(\cdot) \},
\end{align}
and rewrite the phase equations as
\begin{align}
\label{eqphase}
\dot{\theta}_1 &= \omega + \epsilon {\bm Z}(\theta_1) \cdot {\bm H}( \theta_1, \theta_2 ),
\cr
\dot{\theta}_2 &= \omega + \epsilon {\bm Z}(\theta_2) \cdot {\bm H}( \theta_2, \theta_1 ).
\end{align}
Thus, though we started from Eq.~(\ref{eq1}) with general coupling functionals that depend on the past time sequences of the oscillators,
the coupled system reduces to a pair of simple ordinary differential equations that depend only on the present phase values $\theta_1$ and $\theta_2$ of the oscillators within the phase-reduction approximation.
Following the standard averaging procedure~\cite{kuramoto84,hoppensteadt97}, we introduce slow phase variables $\phi_{1, 2} = \theta_{1, 2} - \omega t$, rewrite the equations as
\begin{align}
\dot{\phi}_1 &= \epsilon {\bm Z}(\phi_1+\omega t) \cdot {\bm H}( \phi_1+\omega t, \phi_2+\omega t ),
\cr
\dot{\phi}_2 &= \epsilon {\bm Z}(\phi_2+\omega t) \cdot {\bm H}( \phi_2+\omega t, \phi_1+\omega t ),
\end{align}
and average the small right-hand side of these equations over one-period of oscillation.
This yields the following averaged phase equations, which are correct up to $O(\epsilon)$:
\begin{align}
\label{eq7}
\dot{\theta}_1 &= \omega + \epsilon \Gamma( \theta_1 - \theta_2 ),
\cr
\dot{\theta}_2 &= \omega + \epsilon \Gamma( \theta_2 - \theta_1 ),
\end{align}
where $\Gamma(\phi)$ is the phase coupling function defined as
\begin{align}
\Gamma(\phi)
&= \frac{1}{2\pi} \int_0^{2\pi} {\bm Z}(\phi + \psi) \cdot {\bm H}( \phi +\psi, \psi) d\psi
\cr
&=
\la {\bm Z}(\phi + \psi) \cdot {\bm H}( \phi +\psi, \psi) \ra_\psi
\cr
&= \la {\bm Z}(\psi) \cdot {\bm H}( \psi, \psi - \phi ) \ra_\psi.
\end{align}
Here, we have defined an average of a function $f(\psi)$ over one period of oscillation as
\begin{align}
\la f(\psi) \ra_{\psi} = \frac{1}{2\pi} \int_0^{2\pi} f(\psi) d\psi.
\end{align}
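Given numerical samples of ${\bm Z}$ and ${\bm H}$, this average is straightforward to evaluate on a grid; a minimal sketch (Python; \texttt{Z} and \texttt{H} are assumed to be callables returning ${\bm Z}(\theta)$ and ${\bm H}(\theta_1, \theta_2)$):
\begin{verbatim}
import numpy as np

def make_gamma(Z, H, n_grid=400):
    """Return Gamma(phi) = < Z(phi+psi) . H(phi+psi, psi) >_psi,
    evaluated by averaging over a uniform grid of psi."""
    psis = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    def Gamma(phi):
        return np.mean([Z(phi + p) @ H(phi + p, p) for p in psis])
    return Gamma
\end{verbatim}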
\subsection{Linear stability of the in-phase synchronized state}
From the coupled phase equations~(\ref{eq7}), the dynamics of the phase difference $\phi = \theta_1 - \theta_2$ obeys
\begin{align}
\label{eq10}
\dot{\phi} = \epsilon [ \Gamma(\phi) - \Gamma(-\phi) ],
\end{align}
where the right-hand side is the antisymmetric part of the phase coupling function $\Gamma(\phi)$.
This equation always has a fixed point at $\phi=0$ corresponding to the in-phase synchronized state. In a small vicinity of $\phi=0$, the above equation can be linearized as
\begin{align}
\dot{\phi} \approx 2 \epsilon \Gamma'(0) \phi.
\end{align}
The derivative of the phase coupling function is given by
\begin{align}
\Gamma'(\phi) =& - \la {\bm Z}(\psi) \cdot {\bm H}'_2(\psi, \psi-\phi) \ra_\psi,
\end{align}
where the partial derivative of ${\bm H}$ with respect to the second argument is denoted as
\begin{align}
{\bm H}_2'(\psi_1, \psi_2) &= \frac{\partial {\bm H}(\psi_1, \psi_2)}{\partial \psi_2}.
\end{align}
Thus, the linear stability of this state is characterized by the exponent $2 \epsilon \Gamma'(0)$, and a larger $-\Gamma'(0)$ yields a higher linear stability of the in-phase synchronized state.
In this study, we consider optimization of the mutual coupling term ${\bm H}$, or of the parameters or functions included in it, so that the linear stability
\begin{align}
- \Gamma'(0)
=& \la {\bm Z}(\psi) \cdot {\bm H}_2'(\psi, \psi) \ra_\psi
\end{align}
of the in-phase synchronized state, $\phi = 0$, is maximized under appropriate constraints on the intensity of the mutual coupling.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\hsize,clip]{fig1.pdf}
\caption{Limit-cycle orbit and phase sensitivity function of the FitzHugh-Nagumo oscillator. Time sequences of the $x$ and $y$ components are plotted for one period of oscillation, $0 \leq \theta < 2\pi$. (a) Limit cycle $(x_0(\theta), y_0(\theta))$. (b) Phase sensitivity function $(Z_x(\theta), Z_y(\theta))$.}
\label{fig1}
\end{figure}
\subsection{Examples of limit-cycle oscillators}
We use the Stuart-Landau (SL) oscillator in the following numerical examples; it is the normal form of the supercritical Hopf bifurcation and is
described by
\begin{align}
{\bm X} = \begin{pmatrix} x \\ y \end{pmatrix} \in {\mathbb R}^2,
\end{align}
\begin{align}
{\bm F} = \begin{pmatrix} F_x \\ F_y \end{pmatrix} =
\begin{pmatrix}
{ x - a y - \left( x ^ { 2 } + y ^ { 2 } \right) ( x - b y )}
\\
{ a x + y - \left( x ^ { 2 } + y ^ { 2 } \right) ( b x + y )}
\end{pmatrix}
,
\end{align}
where the parameters are fixed at $a=2$ and $b=1$. This oscillator has a stable limit cycle with a natural frequency $\omega = a - b = 1$ and period $T = 2 \pi$, represented by
\begin{align}
{\bm X}_0(\theta) = \begin{pmatrix} x_0(\theta) \\ y_0(\theta) \end{pmatrix} = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix}, \quad (0 \leq \theta < 2\pi).
\end{align}
The phase function can be explicitly represented by
\begin{align}
\Theta(x, y) = \arctan \frac{y}{x} - \frac{b}{2} \ln ( x^2 + y^2 )
\end{align}
on the whole $xy$-plane except $(0, 0)$, and the PSF is given by
\begin{align}
{\bm Z}(\theta) = \begin{pmatrix} Z_x \\ Z_y \end{pmatrix} = \left( \begin{array} { c } { - \sin \theta - b \cos \theta } \\ { \cos \theta - b \sin \theta } \end{array} \right).
\end{align}
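Indeed, one can check directly that this PSF satisfies the normalization condition ${\bm Z}(\theta) \cdot d{\bm X}_0(\theta)/d\theta = 1$:
\begin{align}
{\bm Z}(\theta) \cdot \frac{d {\bm X}_0(\theta)}{d\theta}
&= (-\sin \theta - b \cos \theta)(-\sin \theta) + (\cos \theta - b \sin \theta) \cos \theta
\cr
&= \sin^2 \theta + \cos^2 \theta + b \sin\theta \cos\theta - b \sin\theta \cos\theta = 1.
\end{align}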
As another example, we use the FitzHugh-Nagumo (FHN) oscillator, described by
\begin{align}
{\bm X} = \begin{pmatrix} x \\ y \end{pmatrix} \in {\mathbb R}^2,
\end{align}
\begin{align}
{\bm F} = \begin{pmatrix} F_x \\ F_y \end{pmatrix} =
\left( \begin{array} { c } { x ( x - c ) ( 1 - x ) - y }
\\
\mu^{-1} ( x - d y )
\end{array}
\right),
\end{align}
where the parameters are fixed at $c = - 0.1$, $d = 0.5$, and $\mu = 100$. As $\mu$ is large, this oscillator is a slow-fast system
whose $x$ variable evolves much faster than the $y$ variable, leading to relaxation oscillations. With these parameters, the natural period of the
oscillation is $T \approx 126.7$ and the natural frequency is $\omega \approx 0.0496$.
The limit cycle ${\bm X}_0(\theta) = (x_0(\theta), y_0(\theta))^{\dag}$ and phase function $\Theta({\bm X})$ cannot be obtained analytically for this model, but the PSF ${\bm Z}(\theta)$ can be obtained by numerically solving the
adjoint equation.
Figure~\ref{fig1} shows the time sequences of the limit-cycle orbit ${\bm X}_0(\theta)$
and PSF ${\bm Z}(\theta)$ for one period of oscillation, $0 \leq \theta < 2\pi$.
\section{Linear coupling}
\subsection{Time-delayed coupling}
First, we consider a simple time-delayed coupling, where each oscillator is driven by the past state of the other oscillator. The model is given by
\begin{align}
\label{eq3}
\dot{\bm X}_1 &= {\bm F}({\bm X}_1) + \epsilon \sqrt{P} {\rm K} {\bm X}_2(t-\tau),
\cr
\dot{\bm X}_2 &= {\bm F}({\bm X}_2) + \epsilon \sqrt{P} {\rm K} {\bm X}_1(t-\tau),
\end{align}
where $0 < \epsilon \ll 1$ is a small parameter representing the strength of the interaction, $P > 0$ is a real constant that controls the norm of the coupling signal, ${\rm K} \in {\mathbb R}^{N \times N}$ is a constant matrix specifying which components of the oscillator states ${\bm X}_{1, 2}(t)$ are coupled, and $\tau$ ($0 \leq \tau \leq T$) is a time delay.
In our previous study~\cite{shirasaka17b}, we considered optimization of the matrix ${\rm K}$ for the case where the two oscillators are nearly identical and coupled without time delay.
Here, we consider two oscillators with identical properties, keep the matrix ${\rm K}$ fixed, and vary the time delay $\tau$ to improve the linear stability of the in-phase synchronized state.
In this case, the coupling functionals are given by
\begin{align}
\hat{\bm H}\{ {\bm X}_1^{(t)}, {\bm X}_2^{(t)} \} = \sqrt{P} {\rm K} {\bm X}_2(t-\tau),
\cr
\hat{\bm H}\{ {\bm X}_2^{(t)}, {\bm X}_1^{(t)} \} = \sqrt{P} {\rm K} {\bm X}_1(t-\tau),
\end{align}
which can be expressed as functions of $\theta_1$ and $\theta_2$ as
\begin{align}
{\bm H}( \theta_1, \theta_2 ) = \sqrt{P} {\rm K} {\bm X}_0(\theta_2-\omega\tau),
\cr
{\bm H}( \theta_2, \theta_1 ) = \sqrt{P} {\rm K} {\bm X}_0(\theta_1-\omega\tau),
\end{align}
after phase reduction. The phase coupling function is
\begin{align}
\Gamma(\phi)
&= \la {\bm Z}(\psi) \cdot {\bm H}( \psi, \psi - \phi ) \ra_\psi
\cr
&= \la {\bm Z}(\psi) \cdot \sqrt{P} {\rm K} {\bm X}_0(\psi-\phi - \omega \tau) \ra_\psi,
\end{align}
and the linear stability is characterized by
\begin{align}
- \Gamma'(0)
&= \la {\bm Z}(\psi) \cdot {\bm H}_2'(\psi, \psi) \ra_\psi
\cr
&=
\la {\bm Z}(\psi) \cdot \sqrt{P} {\rm K} {\bm X}_0'(\psi-\omega \tau) \ra_\psi,
\label{delay-stability}
\end{align}
where ${\bm X}_0'(\theta) = d{\bm X}_0(\theta) / d\theta$.
The maximum stability is attained only when $\tau$ satisfies
\begin{align}
\frac{\partial}{\partial \tau} \{ - \Gamma'(0) \}
&=
- \omega \la {\bm Z}(\psi) \cdot \sqrt{P} {\rm K} {\bm X}_0''(\psi-\omega \tau) \ra_\psi
= 0,
\label{delay-cond}
\end{align}
where ${\bm X}_0''(\theta) = d^2 {\bm X}_0(\theta)/d\theta^2$. We denote the value of $\tau$ satisfying the above equation as $\tau^*$, i.e.,
\begin{align}
\la {\bm Z}(\psi) \cdot \sqrt{P} {\rm K} {\bm X}_0''(\psi-\omega \tau^*) \ra_\psi = 0.
\end{align}
By partial integration using the $2\pi$-periodicity of ${\bm Z}(\theta)$ and ${\bm X}_0(\theta)$, this can also be expressed as
\begin{align}
\label{eq21}
\la {\bm Z}'(\psi) \cdot \sqrt{P} {\rm K} {\bm X}_0'(\psi-\omega \tau^*) \ra_\psi = 0,
\end{align}
which has the form of a cross-correlation function between ${\bm Z}'(\theta)$ and $\sqrt{P} {\rm K}{\bm X}_0'(\theta)$.
Because both of these functions are $2\pi$-periodic with zero-mean, the left-hand side of Eq.~(\ref{eq21}) is also $2\pi$-periodic with zero mean. Thus, there are at least two values of $\tau$ satisfying the above equation, as long as ${\bm Z}'(\theta)$ and $\sqrt{P} {\rm K}{\bm X}_0'(\theta)$ are non-constant functions (which holds generally for ordinary limit cycles). By choosing an appropriate value of $\tau^*$, the maximum stability is given by
\begin{align}
- \Gamma'(0)
&= \sqrt{P \la {\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(\psi-\omega \tau^*) \ra_\psi^{2}}.
\end{align}
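In practice, the optimal delay can be found by evaluating Eq.~(\ref{delay-stability}) on a grid of $\tau$ and taking the maximizer; a minimal sketch (Python; \texttt{Z} and \texttt{dX0} are assumed to be callables returning ${\bm Z}(\theta)$ and ${\bm X}_0'(\theta)$, e.g., constructed from the adjoint computation sketched above):
\begin{verbatim}
import numpy as np

def optimal_delay(Z, dX0, K, omega, P=1.0, n_tau=2000, n_psi=400):
    """Scan -Gamma'(0) of Eq. (delay-stability) over tau in [0, T)
    and return the maximizing delay tau* and the attained stability."""
    T = 2.0 * np.pi / omega
    psis = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    taus = np.linspace(0.0, T, n_tau, endpoint=False)
    stab = np.array([
        np.mean([Z(p) @ (np.sqrt(P) * K @ dX0(p - omega * tau))
                 for p in psis])
        for tau in taus])
    i = int(np.argmax(stab))
    return taus[i], stab[i]
\end{verbatim}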
\subsection{Coupling via linear filtering}
Generalizing the time-delayed coupling, we consider a case in which the past time sequences of both oscillator states are linearly filtered and used as driving signals for the other oscillators. The model is given by
\begin{align}
\dot{\bm X}_1 &= {\bm F}({\bm X}_1) + \epsilon \int_{0}^{T} h(\tau) {\rm K} {\bm X}_2(t-\tau) d\tau,
\cr
\dot{\bm X}_2 &= {\bm F}({\bm X}_2) + \epsilon \int_{0}^{T} h(\tau) {\rm K} {\bm X}_1(t-\tau) d\tau,
\end{align}
where $h(\tau) : [0, T] \to {\mathbb R}$ is a $T$-periodic real scalar function representing a linear filter, and ${\rm K} \in {\mathbb R}^{N \times N}$ is a constant matrix specifying which components of ${\bm X}$ are coupled. We optimize the linear filter $h(\tau)$ for a given coupling matrix ${\rm K}$ under a constraint specified below.
The coupling functionals are given by
\begin{align}
\hat{\bm H}\{ {\bm X}_1^{(t)}, {\bm X}_2^{(t)} \} = \int_{0}^{T} h(\tau) {\rm K} {\bm X}_2(t-\tau) d\tau,
\cr
\hat{\bm H}\{ {\bm X}_2^{(t)}, {\bm X}_1^{(t)} \} = \int_{0}^{T} h(\tau) {\rm K} {\bm X}_1(t-\tau) d\tau,
\end{align}
which simplify to ordinary functions
\begin{align}
{\bm H}( \theta_1, \theta_2 ) = \int_{0}^{T} h(\tau) {\rm K} {\bm X}_0(\theta_2-\omega \tau) d\tau,
\cr
{\bm H}( \theta_2, \theta_1 ) = \int_{0}^{T} h(\tau) {\rm K} {\bm X}_0(\theta_1-\omega \tau) d\tau,
\end{align}
after phase reduction. The phase coupling function is given by
\begin{align}
\Gamma(\phi)
&= \la {\bm Z}(\phi + \psi) \cdot \int_0^T h(\tau) {\rm K}
{\bm X}_0(\psi - \omega \tau) d\tau \ra_\psi
\cr
&= \la \int_0^T {\bm Z}(\psi) \cdot h(\tau) {\rm K} {\bm X}_0(\psi - \omega \tau - \phi) d\tau \ra_\psi
\label{filter-phasecoupling}
\end{align}
and the linear stability of the in-phase synchronized state is characterized by
\begin{align}
- \Gamma'(0)
&=
\la
\int_0^T
{\bm Z}(\psi) \cdot h(\tau) {\rm K} {\bm X}_0'(\psi - \omega \tau)
d\tau\ra_\psi.
\end{align}
We constrain the $L^2$-norm $\| h(\tau) \| = \sqrt{ \int_0^T h(\tau)^2 d\tau }$ of the linear filter, $h(\tau)$, as $\| h(\tau) \|^2 = Q$, where $Q > 0$ controls the overall coupling intensity, and seek the optimal $h(\tau)$ that maximizes the linear stability, $-\Gamma'(0)$. That is, we consider an optimization problem:
\begin{align}
\mbox{maximize} \quad - \Gamma'(0) \quad \mbox{subject to} \quad \| h(\tau) \|^2 = Q.
\end{align}
To this end, we define an objective functional as
\begin{align}
S\{ h, \lambda \}
=& - \Gamma'(0) + \lambda ( \| h(\psi) \|^2 - Q )
\cr
=&
\la \int_0^T {\bm Z}(\psi) \cdot h(\tau) {\rm K} {\bm X}_0'(- \omega \tau + \psi) d\tau \ra_\psi
\cr
&+ \lambda \left( \int_0^{T} h(\tau)^2 d\tau - Q \right),
\end{align}
where $\lambda$ is a Lagrange multiplier. From the extremum condition of $S$, the functional derivative of $S$ with respect to $h(\tau)$ should satisfy
\begin{align}
\frac{\delta S}{\delta h(\tau)} = \la {\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(- \omega \tau + \psi) \ra_\psi + 2\lambda h(\tau) = 0
\end{align}
and the partial derivative of $S$ by $\lambda$ should satisfy
\begin{align}
\frac{\partial S}{\partial \lambda} =
\int_0^{T} h(\tau)^2 d\tau - Q = 0.
\end{align}
Thus, the optimal linear filter $h(\tau)$ is given by
\begin{align}
h(\tau)
= - \frac{1}{2 \lambda} \la {\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(\psi - \omega \tau) \ra_{\psi}.
\label{optimalfilter}
\end{align}
The Lagrange multiplier $\lambda$ is determined from the constraint $\| h(\tau) \|^2 = Q$, i.e.,
\begin{align}
\frac{1}{4 \lambda^2} \int_0^T \la
{\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(\psi - \omega \tau) \ra_{\psi}^2 d\tau = Q,
\end{align}
as
\begin{align}
\lambda = - \sqrt{ \frac{1}{4 Q} \int_0^T \la {\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(\psi - \omega \tau) \ra_{\psi}^2 d\tau },
\end{align}
where the negative sign should be chosen for the in-phase synchronized state to be linearly stable, $-\Gamma'(0) > 0$. The maximum linear stability with the optimized $h(\tau)$ is
\begin{align}
- \Gamma'(0)
&=
\sqrt{Q
\int_0^T \la {\bm Z}(\psi) \cdot {\rm K} {\bm X}_0'(\psi - \omega \tau) \ra_{\psi}^2 d\tau}.
\label{filter-linearstab}
\end{align}
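A direct numerical construction of the optimal filter from Eq.~(\ref{optimalfilter}), with the normalization $\| h \|^2 = Q$ enforced explicitly, might look as follows (a sketch; \texttt{Z} and \texttt{dX0} as above):
\begin{verbatim}
import numpy as np

def optimal_filter(Z, dX0, K, omega, Q=1.0, n_tau=1000, n_psi=400):
    """Optimal linear filter h(tau) of Eq. (optimalfilter) on a grid,
    rescaled so that the L2-norm constraint ||h||^2 = Q holds."""
    T = 2.0 * np.pi / omega
    psis = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    taus = np.linspace(0.0, T, n_tau, endpoint=False)
    # h(tau) is a positive multiple (since lambda < 0) of the
    # correlation < Z(psi) . K X0'(psi - omega tau) >_psi.
    h = np.array([np.mean([Z(p) @ (K @ dX0(p - omega * t))
                           for p in psis])
                  for t in taus])
    h *= np.sqrt(Q / (np.sum(h ** 2) * T / n_tau))
    return taus, h
\end{verbatim}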
\subsection{Numerical examples}
\subsubsection{Time-delayed coupling}
We use the SL and FHN oscillators in the following numerical illustrations. For both models, the coupling matrix is assumed to be
\begin{align}
{\rm K} =
\begin{pmatrix}
1 & 0\\
0 & 0
\end{pmatrix}.
\label{k-matrix}
\end{align}
We compare the optimized case with the non-optimized case, i.e.,
\begin{align}
\dot{\bm X}_1 &= {\bm F}({\bm X}_1) + \epsilon \sqrt{P} {\rm K} {\bm X}_2(t),
\cr
\dot{\bm X}_2 &= {\bm F}({\bm X}_2) + \epsilon \sqrt{P} {\rm K} {\bm X}_1(t),
\label{non-optimized}
\end{align}
where $\epsilon$ is a small parameter that determines the coupling strength and $P$ controls the norm of the coupling signal. The mean square of the coupling term over one period of oscillation is the same in both cases,
that is,
\begin{align}
\la | \sqrt{P} {\rm K} {\bm X}_0(\psi) |^2 \ra_\psi = \la | \sqrt{P} {\rm K} {\bm X}_0(\psi-\omega\tau^*) |^2 \ra_\psi.
\label{kx-norm}
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=\hsize,clip]{fig2.pdf}
\caption{Synchronization of two Stuart-Landau oscillators coupled with time delay. The results with the optimal time delay are compared with those without time delay. (a) Evolution of the difference $\Delta x$ between $x$ variables of the two oscillators. (b) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize,clip]{fig3.pdf}
\caption{Synchronization of two FitzHugh-Nagumo oscillators coupled with time delay. In (b--d), the results with the optimal time delay are compared with those without time delay. (a) Linear stability $- \Gamma'(0)$ and its derivative $-\partial \Gamma'(0) / \partial \tau$ vs. time delay $\tau$. The crosses indicate the values of $\tau$ where $-\partial \Gamma'(0) / \partial \tau = 0$. (b) Antisymmetric part of the phase coupling function, $\Gamma(\phi) - \Gamma(-\phi)$. (c) Evolution of the difference $\Delta x$ between $x$ variables of the two oscillators. (d) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig3}
\end{figure}
First, for the SL oscillator, we can analytically calculate the optimal time delay. The linear stability of the in-phase synchronized state, Eq.~(\ref{delay-stability}), is given by
\begin{align}
- \Gamma'(0) =
\sqrt{P} \la Z_x(\psi) x_0'(\psi - \omega \tau) \ra_{\psi}
= \frac{\sqrt{P}}{2} [ \cos (\omega \tau) - b \sin (\omega \tau) ].
\label{sl-delay-stab}
\end{align}
The optimal time delay $\tau = \tau^*$ is determined from Eq.~(\ref{delay-cond}), or equivalently from
\begin{align}
\la Z_x(\psi) x_0''(\psi - \omega \tau) \ra_{\psi} = \frac{b \cos (\omega \tau ) + \sin (\omega \tau)}{2} = 0.
\end{align}
For the parameter values $b=1$ and $\omega = 1$, this equation is satisfied when $\tau = 3\pi / 4$ or $\tau = 7\pi / 4$. Substituting this into Eq.~(\ref{sl-delay-stab}), we find that $\tau^*=7\pi / 4$ should be chosen, and the maximum linear stability is given by $-\Gamma'(0) = \sqrt{P} / \sqrt{2}$. For the case with no time delay, the linear stability is $-\Gamma'(0) = \sqrt{P} / 2$. Thus, by appropriately choosing the time delay, the linear stability improves by a factor of $\sqrt{2}$ in this case.
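Indeed, substituting $\tau^* = 7\pi/4$ (with $\omega = 1$) into Eq.~(\ref{sl-delay-stab}) gives
\begin{align}
-\Gamma'(0) = \frac{\sqrt{P}}{2} \left[ \cos \frac{7\pi}{4} - \sin \frac{7\pi}{4} \right]
= \frac{\sqrt{P}}{2} \left[ \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \right] = \frac{\sqrt{P}}{\sqrt{2}},
\end{align}
whereas $\tau = 3\pi/4$ gives $-\Gamma'(0) = -\sqrt{P}/\sqrt{2}$, for which the in-phase state would be unstable.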
Figure~\ref{fig2} shows synchronization of two SL oscillators for the cases with the optimal time delay and without time delay, where $\epsilon = 0.02$, $P=1$, and the initial phase difference is $\phi(0) = \pi / 4$. In Fig.~\ref{fig2}(a), the difference $\Delta x$ between the $x$ variables of the two oscillators, obtained by direct numerical simulations of the coupled SL oscillators, is plotted as a function of $t$. It can be seen that the in-phase synchronized state is established faster in the optimized case because of the higher linear stability. Figure~\ref{fig2}(b) shows the convergence of the phase difference $\phi$ to $0$. It can be seen that the results of the reduced phase equation agree well with those of direct numerical simulations.
Figure~\ref{fig3} shows the results for two FHN oscillators, where $\epsilon = 0.003$, $P=1$, and the initial phase difference is $\phi = \pi / 4$. Figure~\ref{fig3}(a) plots the linear stability $-\Gamma'(0)$ and its derivative $-\partial \Gamma'(0) / \partial \tau$ as functions of the time delay $\tau$, where there are two extrema of $-\Gamma'(0)$. We choose the larger extremum, which is attained at the optimal time delay $\tau^* \approx 117.6$. The antisymmetric part of the phase coupling function, $\Gamma(\phi) - \Gamma(-\phi)$, is shown in Fig.~\ref{fig3}(b) for the cases with the optimal delay and without delay. It can be seen that the stability of the in-phase synchronized state $\phi=0$ is improved, as indicated by the straight lines in Fig.~\ref{fig3}(b), where $-\Gamma'(0)\approx0.654$ with the optimized time delay and $-\Gamma'(0)\approx0.221$ without the time delay. The evolution of the difference $\Delta x$ between the $x$ variables of the oscillators is plotted as a function of $t$ in Fig.~\ref{fig3}(c). The phase differences $\phi$ converging toward $0$, obtained from the phase equation and direct numerical simulations of the original model, are shown in Fig.~\ref{fig3}(d). It can be seen that the convergence to in-phase synchronization is faster with the optimized time delay, and the results of the reduced phase equation agree well with direct numerical simulations.
\subsubsection{Coupling via linear filtering}
We also assume that the coupling matrix ${\rm K}$ is given by Eq.~(\ref{k-matrix}), and compare the results for the optimized case with linear filtering with those for the non-filtered case given by Eq.~(\ref{non-optimized}).
We choose the parameter $Q$ that constrains the norm of the linear filter so that the squared average of the coupling term over one-period of oscillation becomes equal to that in the non-filtered case given by Eq.~(\ref{non-optimized}), i.e.,
\begin{align}
\la \left| \int_0^T h(\tau) {\rm K} {\bm X}_0(\psi-\omega\tau) d\tau \right|^2 \ra_\psi = \la | \sqrt{P} {\rm K} {\bm X}_0(\psi) |^2 \ra_\psi.
\label{kxh-norm}
\end{align}
For the SL oscillators, the optimal filter $h(\tau)$, Eq.~(\ref{optimalfilter}), is explicitly calculated as
\begin{align}
h(\tau) = \sqrt{\frac{Q \omega }{\pi (1 + b^2 ) }} [ \cos ( \omega \tau ) - b \sin ( \omega \tau ) ].
\end{align}
The optimal phase coupling function, Eq.~(\ref{filter-phasecoupling}), and optimized linear stability, Eq.~(\ref{filter-linearstab}), are expressed as
\begin{align}
\Gamma(\phi) = -\frac{1}{2} \sqrt{\frac{\pi \left(1+b^2\right) Q}{\omega }} \sin \phi,
\end{align}
and
\begin{align}
-\Gamma'(0) = \frac{1}{2} \sqrt{\frac{\pi \left(1+b^2\right) Q}{\omega }},
\end{align}
respectively. We take $Q = \omega P / \pi$ so that Eq.~(\ref{kxh-norm}) is satisfied.
The linear stability is then $-\Gamma'(0) = \sqrt{\left(1+b^2\right) P} / 2$ when the optimized linear filter is used and $-\Gamma'(0) = \sqrt{P}/2$ when no filtering of the oscillator state is performed.
Thus, the linear stability is improved by a factor of $\sqrt{2}$ when $b=1$.
It is important to note that, in the SL oscillator case, ${\bm X}_0(\psi)$, ${\bm Z}(\psi)$,
and hence the linear filter $h(\tau)$ contain only the fundamental frequency, i.e., they are purely sinusoidal. Thus, the linear filtering can only shift the phase of the coupling signal and gives the same result as the previous case with the simple time delay. It is also interesting to note that the stability cannot be improved (it is already optimal without filtering) when the parameter $b$, which characterizes non-isochronicity of the limit cycle, is zero.
Figure~\ref{fig4} shows the synchronization of two SL oscillators, with and without linear filtering, where $\epsilon = 0.02$, $P=1$, and the initial phase difference is $\phi = \pi / 4$. Figure~\ref{fig4}(a) shows evolution of the difference $\Delta x$ between the $x$ variables of the oscillators, and Fig.~\ref{fig4}(b) shows the convergence of the phase difference $\phi$ to $0$. We can see that the in-phase synchronized state is established faster in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
For the FHN oscillators, the optimal linear filter can be calculated from the time sequences of the limit-cycle solution and PSF obtained numerically. Figure~\ref{fig5} shows the synchronization of two coupled FHN oscillators, with and without linear filtering, where
$\epsilon = 0.003$, $P=1$, $Q \approx 0.0522$, and the initial phase difference is $\phi = \pi / 4$.
Figure~\ref{fig5}(a) shows the optimal filter, (b) antisymmetric part $\Gamma(\phi) - \Gamma(-\phi)$ of the phase coupling function $\Gamma(\phi)$, (c) evolution of the difference $\Delta x$ between the $x$ variables of the oscillators, and (d) convergence of the phase difference $\phi$ toward $0$.
The linear stability is given by
$-\Gamma'(0)\approx0.844$
for the case with the optimal filter and by
$-\Gamma'(0)\approx0.221$
for the case without filtering, as shown by the straight lines in Fig.~\ref{fig5}(b).
The in-phase synchronized state is established faster in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
Because the FHN oscillator has higher harmonic components in ${\bm X}_0(\psi)$ and ${\bm Z}(\psi)$, the optimal filter $h(\tau)$ can exploit these components, and hence the improvement in the linear stability is larger than that for the case with the simple time delay.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize,clip]{fig4.pdf}
\caption{Synchronization of two Stuart-Landau oscillators coupled with linear filtering. The results with the optimal filtering are compared with those without filtering. (a) Evolution of the difference $\Delta x$ in the $x$ variables between the oscillators. (b) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize,clip]{fig5.pdf}
\caption{Synchronization of two FitzHugh-Nagumo oscillators coupled with linear filtering. In (b--d), the results with the optimal filtering are compared with those without filtering. (a) Optimal linear filter $h(\tau)$. (b) Antisymmetric part of the phase coupling function $\Gamma(\phi) - \Gamma(-\phi)$. (c) Evolution of the difference $\Delta x$ in the $x$ variables between the oscillators. (d) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig5}
\end{figure}
\section{Nonlinear coupling}
\subsection{Mutual drive-response coupling}
In this section, we consider the case of oscillators interacting through nonlinear coupling functionals.
We assume that the coupling is of a drive-response type, i.e., it can be written as a product of a response matrix of the driven oscillator and a driving function that transforms the signal from the other oscillator.
The model is given by
\begin{align}
\label{eq44}
\dot{\bm X}_1(t) &= {\bm F}({\bm X}_1(t)) + \epsilon \hat{\rm A}\{ {\bm X}_1^{(t)}(\cdot) \} \hat{\bm G} \{ {\bm X}_2^{(t)}(\cdot) \},
\cr
\dot{\bm X}_2(t) &= {\bm F}({\bm X}_2(t)) + \epsilon \hat{\rm A}\{ {\bm X}_2^{(t)}(\cdot) \} \hat{\bm G} \{ {\bm X}_1^{(t)}(\cdot) \},
\end{align}
where the matrix $\hat{\rm A} : C \to {\mathbb R}^{N \times N}$ is a functional of the time sequence of each oscillator representing its response properties and $\hat{\bm G} : C \to {\mathbb R}^N$ is a functional that transforms the time sequence of the other oscillator to a driving signal.
It should be noted that we may also include self-coupling terms of the form $\epsilon \hat{\bm I}\{{\bm X}_{1,2}^{(t)}(\cdot)\}$ in each equation, which allows the analysis, for example, of diffusive coupling that depends on the state difference between the oscillators. As explained in Appendix A, the inclusion of such self-coupling terms does not alter the results, and the linear stability remains the same in the phase-reduction approximation. We thus analyze Eq.~(\ref{eq44}) hereafter.
The coupling functionals in this case are given by
\begin{align}
\hat{\bm H}\{ {\bm X}_1^{(t)}, {\bm X}_2^{(t)} \} = \hat{\rm A}\{ {\bm X}_1^{(t)}(\cdot) \} \hat{\bm G} \{ {\bm X}_2^{(t)}(\cdot) \},
\cr
\hat{\bm H}\{ {\bm X}_2^{(t)}, {\bm X}_1^{(t)} \} = \hat{\rm A}\{ {\bm X}_2^{(t)}(\cdot) \} \hat{\bm G} \{ {\bm X}_1^{(t)}(\cdot) \},
\end{align}
and, as argued in Sec.~IIB, at the lowest-order phase reduction, these functionals can be expressed as ordinary functions of the phase $\theta_1$ and $\theta_2$ as
\begin{align}
{\bm H}( \theta_1, \theta_2 ) = {\rm A}( \theta_1 ) {\bm G} ( \theta_2 ),
\cr
{\bm H}( \theta_2, \theta_1 ) = {\rm A}( \theta_2 ) {\bm G} ( \theta_1 ),
\end{align}
where we introduced ordinary $2\pi$-periodic functions ${\rm A}$ and ${\bm G}$ of $\theta_1$ and $\theta_2$.
Using these functions, the phase coupling function is given by
\begin{align}
\Gamma(\phi)
&= \la {\bm Z}(\psi) \cdot {\rm A}(\psi){\bm G}(\psi-\phi) \ra_\psi,
\end{align}
and the linear stability is characterized by
\begin{align}
-\Gamma'(0)
=& \la {\bm Z}(\psi) \cdot {\rm A}(\psi){\bm G}'(\psi) \ra_\psi
\cr
=& \la {\rm A}^{\dag}(\psi) {\bm Z}(\psi) \cdot {\bm G}'(\psi) \ra_\psi
\cr
=& - \la \frac{d}{d\psi} \left[ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) \right] \cdot {\bm G}(\psi) \ra_\psi,
\end{align}
where the last expression is obtained by partial integration using $2\pi$-periodicity of ${\rm A}(\psi)$, ${\bm Z}(\psi)$, and ${\bm G}(\psi)$.
Therefore, although we started from Eq.~(\ref{eq44}) with a general drive-response coupling that depends on the past time sequences of the oscillators, the linear stability can be represented only by the present phase values of the oscillators at the lowest-order phase reduction.
In the following subsections, we consider the optimization of the response matrix ${\rm A}(\psi)$ or the driving function ${\bm G}(\psi)$, represented as functions of the phase $\psi$.
\subsection{Optimal response matrix}
In the first case, we optimize the response matrix ${\rm A}(\psi)$ as a function of the phase $\psi$, assuming that the driving functional $\hat{\bm G}$ is given. We constrain the squared Frobenius norm of ${\rm A}(\psi)$, averaged over one period of oscillation, as $\la \| {\rm A}(\psi) \|^2 \ra_\psi = P$, and consider the optimization problem:
\begin{align}
\mbox{maximize} \quad -\Gamma'(0) \quad \mbox{subject to} \quad \la \| {\rm A}(\psi) \|^2 \ra_\psi = P,
\end{align}
where $ \| {\rm A} \| = \sqrt{ \sum_{i,j} A_{ij}^2 }$ represents the Frobenius norm of the matrix ${\rm A} = ( A_{ij} )$.
By defining an objective functional,
\begin{align}
S\{ {\rm A}, \lambda \}
&= - \Gamma'(0) + \lambda \left( \la \| {\rm A}(\psi) \|^2 \ra_\psi - P \right)
\cr
&= \la {\bm Z}(\psi) \cdot {\rm A}(\psi) {\bm G}'(\psi) \ra_\psi + \lambda \left( \la \| {\rm A}(\psi) \|^2 \ra_\psi - P \right),
\cr
\end{align}
where $\lambda$ is a Lagrange multiplier, and by taking the functional derivative with respect to each component, $A_{ij}$, of ${\rm A}$, we obtain the extremum condition. In this case,
\begin{align}
\frac{\delta S}{\delta A_{ij}(\psi)} = \frac{1}{2\pi} Z_i(\psi) G_j'(\psi) + \frac{\lambda}{\pi} A_{ij}(\psi) = 0,
\end{align}
and we obtain
\begin{align}
A_{ij}(\psi) = - \frac{1}{2\lambda} Z_i(\psi) G_j'(\psi),
\end{align}
i.e.,
\begin{align}
{\rm A}(\psi) = - \frac{1}{2\lambda} {\bm Z}(\psi) {\bm G}'(\psi)^{\dag},
\end{align}
and the Lagrange multiplier is determined from the constraint,
\begin{align}
\la \| {\rm A}(\psi) \|^2 \ra_{\psi}
= \frac{1}{4\lambda^2} \la \| {\bm Z}(\psi) {\bm G}'(\psi)^{\dag} \|^2 \ra_\psi
= P
\end{align}
as
\begin{align}
\lambda
&= - \sqrt{ \frac{1}{4P} \la \| {\bm Z}(\psi) {\bm G}'(\psi)^{\dag} \|^2 \ra_\psi },
\end{align}
where the negative sign is chosen so that $-\Gamma'(0) > 0$. The maximum stability of the in-phase synchronized state is
\begin{align}
- \Gamma'(0)
&= \sqrt{ P \la \| {\bm Z}(\psi) {\bm G}'(\psi)^{\dag} \|^2 \ra_\psi }.
\end{align}
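Numerically, the optimal response matrix can be assembled directly from samples of ${\bm Z}$ and ${\bm G}'$; a minimal sketch (Python; \texttt{Z} and \texttt{dG} are assumed to be callables returning ${\bm Z}(\psi)$ and ${\bm G}'(\psi)$):
\begin{verbatim}
import numpy as np

def optimal_response_matrix(Z, dG, P=2.0, n_psi=400):
    """Optimal response matrix A(psi), a positive multiple of the
    outer product Z(psi) G'(psi)^T (since lambda < 0), rescaled so
    that < ||A(psi)||_F^2 >_psi = P."""
    psis = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    A = np.array([np.outer(Z(p), dG(p)) for p in psis])  # (n_psi,N,N)
    norm2 = np.mean(np.sum(A ** 2, axis=(1, 2)))         # < ||A||_F^2 >
    return psis, A * np.sqrt(P / norm2)
\end{verbatim}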
\subsection{Optimal driving functional}
We can also seek the function ${\bm G}(\psi)$ that provides the optimal driving signal as a function of the phase $\psi$, assuming that the response matrix $\hat{\rm A}$ is given.
We constrain the squared average of ${\bm G}(\psi)$ over one period of oscillation as $\la | {\bm G}(\psi) |^2 \ra_\psi = P$, and maximize the linear stability of the in-phase state:
\begin{align}
\mbox{maximize} \quad -\Gamma'(0) \quad \mbox{subject to} \quad \la | {\bm G}(\psi) |^2 \ra_\psi = P.
\end{align}
We define an objective functional,
\begin{align}
S\{ {\bm G}, \lambda \}
=& - \Gamma'(0) + \lambda \left( \la | {\bm G}(\psi) |^2 \ra_\psi - P \right)
\cr
=& - \la \frac{d}{d\psi} \left[ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) \right] \cdot {\bm G}(\psi) \ra_\psi
\cr
&+ \lambda \left( \la | {\bm G}(\psi) |^2 \ra_\psi - P \right),
\cr
\end{align}
where $\lambda$ is a Lagrange multiplier. From the extremum condition for $S$, we obtain
\begin{align}
\frac{\delta S}{\delta {\bm G}(\psi)} = - \frac{1}{2\pi} \frac{d}{d\psi} [ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) ] + \frac{\lambda}{\pi} {\bm G}(\psi) = 0
\end{align}
and the constraint on ${\bm G}$. The optimal driving function is given by
\begin{align}
{\bm G}(\psi) = \frac{1}{2\lambda} \frac{d}{d\psi} [ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) ],
\label{opt-driv}
\end{align}
where the Lagrange multiplier $\lambda$ should be chosen to satisfy the norm constraint,
\begin{align}
\frac{1}{4 \lambda^2} \la \left| \frac{d}{d\psi} [ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) ] \right|^2 \ra_\psi = P.
\end{align}
This yields
\begin{align}
\lambda = - \sqrt{ \frac{1}{4 P} \la \left| \frac{d}{d\psi} [ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) ] \right|^2 \ra_\psi },
\end{align}
where the negative sign is taken to satisfy $\Gamma'(0)<0$. The maximum stability is
\begin{align}
- \Gamma'(0)
&=
\sqrt{ P \la \left| \frac{d}{d\psi} [ {\rm A}^{\dag}(\psi) {\bm Z}(\psi) ] \right|^2 \ra_\psi }.
\end{align}
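A numerical construction of this optimal driving function might look as follows (a sketch; \texttt{A} and \texttt{Z} are assumed to be callables, and note that, because $\lambda < 0$, ${\bm G}$ is a negative multiple of $\frac{d}{d\psi}[{\rm A}^{\dag}(\psi){\bm Z}(\psi)]$):
\begin{verbatim}
import numpy as np

def optimal_driving_function(A, Z, P=1.0, n_psi=400):
    """Optimal driving function of Eq. (opt-driv): a negative multiple
    of d/dpsi [A(psi)^T Z(psi)], rescaled so that < |G|^2 >_psi = P."""
    psis = np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False)
    AtZ = np.array([A(p).T @ Z(p) for p in psis])
    dpsi = 2.0 * np.pi / n_psi
    # Periodic central differences for d/dpsi [A^T Z].
    G = -(np.roll(AtZ, -1, axis=0)
          - np.roll(AtZ, 1, axis=0)) / (2.0 * dpsi)
    return psis, G * np.sqrt(P / np.mean(np.sum(G ** 2, axis=1)))
\end{verbatim}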
\subsection{Numerical examples}
\subsubsection{Optimal response matrix}
As an example, we assume that the driving functional $\hat{\bm G} \{ {\bm X}^{(t)}(\cdot) \}$ is simply given by $\hat{\bm G} \{ {\bm X}^{(t)}(\cdot) \} = {\bm X}(t)$, and seek the optimal response matrix ${\rm A}(\psi)$ satisfying $\la \| {\rm A}(\psi) \|^2 \ra_\psi = P$. For comparison, we also consider an identity response matrix, ${\rm A}_I = \mbox{diag}(\sqrt{P/2}, \sqrt{P/2})$, normalized to satisfy $\la \| {\rm A}_I \|^2 \ra_\psi = P$. Note that both the $x$ and $y$ components are coupled, in contrast to the previous section where only the $x$ component is coupled.
For the SL oscillator, the optimal response matrix can be analytically expressed as
\begin{align}
&
{\rm A}(\psi) = \sqrt{\frac{P}{1+b^2}}
\cr
&\times
\left(
\begin{array}{rr}
\sin \psi (b \cos \psi +\sin \psi ) & -\cos \psi (b \cos \psi +\sin \psi ) \\
\sin \psi (b \sin \psi -\cos \psi ) & \cos \psi (\cos \psi -b \sin \psi )
\end{array}
\right),
\cr
\end{align}
and the phase coupling function is given by $\Gamma(\phi) = - \sqrt{ (1 + b^2) P } \sin \phi$,
which gives the optimal linear stability $-\Gamma'(0) = \sqrt{ ( 1 + b^2 ) P }$.
In contrast, for the identity matrix ${\rm A}_I$, the phase coupling function is
$\Gamma(\phi) = -\sqrt{P/2} (b \cos \phi +\sin \phi )$
and the linear stability is $-\Gamma'(0) = \sqrt{P}/\sqrt{2}$.
Thus, the linear stability is improved by a factor of $\sqrt{ 2 (1+b^2) }$.
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize,clip]{fig6.pdf}
\caption{Synchronization of Stuart-Landau oscillators with the optimal response matrix. (a) Evolution of the difference $\Delta x$ between the $x$ variables of the oscillators. (b) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig6}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\hsize,clip]{fig7.pdf}
\caption{Synchronization of FitzHugh-Nagumo oscillators with the optimal response matrix. In (b--d), the results with the optimal response matrix are compared with those with the identity response matrix. (a) Four components of the optimal response matrix ${\rm A}(\psi)$. (b) Antisymmetric part of the phase coupling function, $\Gamma(\phi) - \Gamma(-\phi)$. (c) Evolution of the difference $\Delta x$ between the $x$ variables of the oscillators. (d) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig7}
\end{figure}
Figure~\ref{fig6} shows synchronization of two SL oscillators for the cases with the optimal response matrix ${\rm A}(\psi)$ and with the identity response matrix ${\rm A}_I$, where $b=1$, $\epsilon = 0.01$, $P=2$,
and the initial phase difference is $\phi = \pi / 4$. Figure~\ref{fig6}(a) shows the evolution of the difference $\Delta x$ in the $x$ variables between the two oscillators, and Fig.~\ref{fig6}(b) shows the convergence of the phase difference $\phi$ to $0$. The in-phase synchronized state is more quickly established in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
For the FHN oscillator, the optimal response matrix can be calculated numerically. Figure~\ref{fig7} compares the synchronization dynamics of two coupled FHN oscillators with the optimal and identity response matrices, where $\epsilon = 0.0002$, $P=2$, and the initial phase difference is $\phi = \pi / 4$. Figure~\ref{fig7}(a) shows four components of the optimal response matrix ${\rm A}(\psi)$ for $0 \leq \psi < 2\pi$.
It is notable that the magnitude of $A_{21}(\psi)$ is much larger than the other components, indicating that driving the $y$ component of each oscillator by using the $x$ component of the other oscillator is efficient in synchronizing the oscillators in this case.
Figure~\ref{fig7}(b) plots the antisymmetric part of the phase coupling functions for the optimal and identity response matrices, which shows that a much higher stability is attained in the optimized case ($-\Gamma'(0)\approx10.1$ for the optimized response matrix and $-\Gamma'(0)\approx0.999$ for the identity response matrix).
Figure~\ref{fig7}(c) shows the time evolution of the difference $\Delta x$ between the two oscillators, and Fig.~\ref{fig7}(d) shows the convergence of the phase difference $\phi$ to $0$.
In order to use the optimal response matrix, instantaneous phase values of the oscillators are necessary. In the direct numerical simulations shown here, we approximately evaluated the phase value by linearly interpolating two consecutive crossing times of the oscillator state at an appropriate Poincar\'e section, and this value was used to generate the driving signal.
It can be seen from the figures that the in-phase synchronized state is established much faster in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
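One possible implementation of this phase estimate is sketched below (Python; the crossing times are assumed to have been detected beforehand, e.g., by locating sign changes of the section coordinate):
\begin{verbatim}
import numpy as np

def estimate_phase(t, crossing_times):
    """Approximate phase at time t from recorded crossing times of a
    fixed Poincare section (phase defined to be 0 at each crossing):
    estimate the local period from the last two crossings and
    interpolate linearly."""
    t_last, t_prev = crossing_times[-1], crossing_times[-2]
    T_est = t_last - t_prev               # local period estimate
    return (2.0 * np.pi * (t - t_last) / T_est) % (2.0 * np.pi)
\end{verbatim}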
\subsubsection{Optimal driving functional}
For the numerical simulations, we assume that ${\rm A}(\psi)$ is simply given by an identity matrix, $\mbox{diag}(1, 1)$.
The optimal driving function ${\bm G}(\psi)$ is then simply given as ${\bm G}(\psi) \propto {\bm Z}'(\psi)$ from Eq.~(\ref{opt-driv}), with the norm constraint $\la | {\bm G}(\psi) |^2 \ra_{\psi} = P$.
For the SL oscillator, the optimal driving function is explicitly given by
\begin{align}
{\bm G}(\psi) =
\sqrt{\frac{P}{1+b^2}}
\left(
\begin{array}{c}
\cos \psi - b \sin \psi \\
b \cos \psi + \sin \psi \\
\end{array}
\right).
\end{align}
Figure~\ref{fig8} shows synchronization of two SL oscillators coupled through the optimal driving function, and coupled without transformation of the oscillator state, i.e., $\hat{\bm G} \{ {\bm X}^{(t)}(\cdot) \} = {\bm X}(t)$, where $b=1$, $\epsilon = 0.01$, $P=1$, and the initial phase difference is $\phi = \pi / 4$.
Figure~\ref{fig8}(a) shows the evolution of the difference $\Delta x$ between the $x$ variables of the two oscillators, and Fig.~\ref{fig8}(b) shows the convergence of the phase difference $\phi$ to $0$. It is confirmed that the linear stability of the in-phase synchronized state is improved in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
For the FHN oscillator, the norm of ${\bm X}_0(\psi)$ is $\la | {\bm X}_0(\psi) |^2 \ra_{\psi} \approx 0.221$, and we fix the norm $P$ of ${\bm G}(\psi)$ to this value. The optimal driving function can be calculated from ${\bm X}_0(\psi)$ and ${\bm Z}(\psi)$ obtained numerically.
Figure~\ref{fig9} shows synchronization of two FHN oscillators coupled with the optimal driving function, as well as comparison with the non-transformed case, where $\epsilon = 0.0002$, $P \approx 0.221$, and the initial phase difference is $\phi = \pi / 4$.
Figure~\ref{fig9}(a) shows the optimal driving function ${\bm G}(\psi)$ for $0 \leq \psi < 2\pi$, which is proportional to the derivative ${\bm Z}'(\psi)$. Figure~\ref{fig9}(b) plots the antisymmetric part of the phase coupling function for the optimal driving function ${\bm G}(\psi)$ and for the case without transformation, indicating a much higher linear stability in the optimized case
($-\Gamma'(0)\approx12.8$ with the optimized driving function and $-\Gamma'(0)\approx0.999$ without optimization).
Figure~\ref{fig9}(c) shows a plot of the evolution of the difference $\Delta x$ between $x$ variables of the oscillators, and Fig.~\ref{fig9}(d) shows the convergence of the phase difference $\phi$ to $0$.
Similar to the previous case with the optimal response matrix, instantaneous phase values of the oscillators are approximately evaluated by linear interpolation and used to generate the optimal driving signal in the direct numerical simulations.
We can confirm that the in-phase synchronized state is established much faster in the optimized case, and the results of the reduced phase model and direct numerical simulations agree well.
\begin{figure}[t]
\centering
\includegraphics[width=\hsize,clip]{fig8.pdf}
\caption{Synchronization of Stuart-Landau oscillators coupled with the optimal driving function. (a) Evolution of the difference $\Delta x$ between the $x$ variables of the oscillators. (b) Evolution of the phase difference $\phi$ between the oscillators.}
\label{fig8}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\hsize,clip]{fig9.pdf}
\caption{Synchronization of FitzHugh-Nagumo oscillators coupled with the optimal driving function. In (b--d), the results with the optimal driving function are compared with those without transformation. (a) Optimal driving function ${\bm G}(\psi) = (G_1(\psi), G_2(\psi))$. (b) Antisymmetric part of the phase coupling function, $\Gamma(\phi) - \Gamma(-\phi)$. (c) Evolution of the difference $\Delta x$ between the $x$ variables of the oscillators. (d) Evolution of the phase difference $\phi$.}
\label{fig9}
\end{figure}
\section{Discussion}
We have shown that, by optimizing the mutual coupling between coupled oscillators, the linear stability of the in-phase synchronized state can be improved and faster convergence to the synchronization can be achieved.
We have shown that, even if we start from a system of coupled oscillators with general coupling functionals that depend on the past time sequences of the oscillators, the system can be approximately reduced to a pair of simple ordinary differential equations that depend only on the present phase values of the oscillators within the phase reduction theory, and the optimal coupling function between the oscillators can be obtained as a function of the phase values.
Though we have considered only the simplest cases where two oscillators with identical properties are
symmetrically coupled without noise, the theory can also be extended to include heterogeneity
of the oscillators or noise.
The linear coupling with time delay or linear filtering can be realized without measuring the phase values of the oscillators, once the correlation functions of the PSF and the limit-cycle orbit (or their derivatives) are obtained.
The nonlinear coupling requires the measurement of the phase values of the oscillators, but can further improve the linear stability of the synchronized state. We have shown that a simple approximate evaluation of the phase values by a linear interpolation gives reasonable results even though it may yield a somewhat incorrect evaluation of the true phase values.
It is interesting to compare the present analysis for stable synchronization between the two oscillators with the optimization of driving signals for injection locking of a single oscillator, which has been analyzed by Zlotnik {\it et al.}~\cite{zlotnik13} and others (briefly explained in Appendix B for a simple case).
In Sec. IV C, we have obtained the optimal driving functional. In particular, when ${\rm A}(\psi) = {\rm K}$, where ${\rm K}$ is a constant matrix, the optimal driving signal is
\begin{align}
{\bm G}(\psi) = \frac{1}{2\lambda} {\rm K}^{\dag} {\bm Z}'(\psi)
\end{align}
and the maximum stability is
\begin{align}
- \Gamma'(0)
&=
\sqrt{ P \la \left| {\rm K}^{\dag} {\bm Z}'(\psi) \right|^2 \ra_\psi }.
\end{align}
This result coincides with the optimal injection signal for stable synchronization of a single oscillator, obtained by Zlotnik {\it et al.}~\cite{zlotnik13}.
Thus, the optimal coupling between the oscillators is realized by measuring the present phase $\psi$ of the other oscillator and applying a driving signal that is proportional to ${\rm K}^{\dag} {\bm Z}'(\psi)$ to the oscillator.
It is also interesting to note that we have obtained similar expressions for the maximum stability in all examples,
$-\Gamma'(0) = \sqrt{ P \la \cdots \ra_\psi^2 }$,
where $\cdots$ depends on the quantity to be optimized.
This is because we are essentially maximizing the inner product of
the PSF with the derivative of the driving
signal under a mean-square constraint on the parameters or functions
included in the driving signal in all cases.
The linear coupling schemes in Sec.~III would be easy to realize experimentally.
The nonlinear coupling schemes in Sec.~IV require the evaluation of the phase values
from the oscillators, but can yield an even higher linear stability.
These methods may be useful when higher stability of the in-phase synchronized state
between oscillators is desirable in technical applications.
It would also be interesting to study interactions between rhythmic elements, e.g., in biological systems, from the viewpoint of synchronization efficiency.
\acknowledgements
This work is financially supported by JSPS KAKENHI Grant Numbers JP16K13847, JP17H03279, 18K03471, JP18H032, and 18H06478.
\section{Introduction}
Distributed convex optimization has attracted intense research attention in recent years, due to its theoretical significance and broad applications in many research fields, such as sensor networks, smart grids, and social networks.
Various models of distributed optimization have been proposed and studied in the literature. Most works have focused on consensus-based formulations, where each agent estimates the entire optimal solution via a variety of discrete-time algorithms (e.g., see \cite{Nedich2016Achieving, Zhu2012Distributed} and the references therein). Recently, increasing effort has also been devoted to distributed continuous-time algorithms (see \cite{Shi2013reaching,Gharesifard2014Distributed,Liu2015Second, Zeng2017Distributed} for instance), partly due to the development of hardware implementations \cite{Forti2004Generalized} and their flexible application in continuous-time physical systems \cite{Zhang2017Distributed}.
Here we consider distributed optimizations with separable cost functions and coupled constraints. In the presence of a coupled constraint, the feasible region of one agent's decision variable is influenced by some other agents' decision variables. If such a constraint is known by all the related agents, various algorithms were obtained, such as dual gradient algorithms \cite{Nedic2009Approximate, Necoara2015Linear}, primal-dual algorithms \cite{Nedic2009Subgradient, Feijer2010Stability,Cherukuri2016Asymptotic}, the saddle-point-like algorithm \cite{Niederlander2016Distributed}, and the distributed Newton-type algorithm \cite{Wei2013Distributed}. However, coupled constraints may not be available to each agent in practice, and then the aforementioned algorithms may not work if there is no central coordinator in the network. To deal with the challenges, \cite{Cherukuri2016Initialization,Yi2016Initialization} developed distributed initialization-free algorithms for the optimal resource allocation, while \cite{Zeng2016Continuous} proposed a distributed algorithm for the extended monotropic optimization. Note that \cite{Cherukuri2016Initialization,Yi2016Initialization,Zeng2016Continuous} considered coupled equality constraints. Moreover, \cite{Chang2014Distributed} proposed a distributed algorithm for coupled inequality constraints, based on the average consensus technique to estimate the constraint functions along with a local primal-dual perturbed subgradient method. These distributed algorithms adopted local dynamics to evaluate the optimal dual solution instead of the original centralized one. On the other hand, all of them have to further employ auxiliary dynamics in order to guarantee the correctness and convergence, whereas the distributed design may become quite complicated in the case with coupled inequality constraints.
The objective of this note is to develop a distributed algorithm for nonsmooth convex optimization with coupled inequality constraints. We propose a modified Lagrangian function such that not only its saddle point yields the correct optimal solution to the original problem, but also its primal-dual subgradient dynamics is fully distributed. Particularly, we introduce local multipliers to decouple the constraints and employ a nonsmooth penalty function for the correctness. Based on this modified Lagrangian function, we propose a continuous-time projected subgradient algorithm for saddle-point computation. Our algorithm is fully distributed since each agent updates its local variables according to its local data and the information of its neighbors, without requiring any center in the network. Moreover, our algorithm only involves the primal variables and local multipliers, which yields a lower order dynamics than those in existing algorithms.
The rest organization is as follows: Section 2 provides necessary preliminaries, while Section 3 formulates the problem. Then Section 4 presents the main results to prove the convergence of our nonsmooth algorithm, and Section 5 gives two numerical examples. Finally, Section 6 gives some concluding remarks.
{\em Notations}: Denote $\mathbb{R}^n$ as the
$n$-dimensional real vector space and $\mathbb{R}^n_+$ as the nonnegative orthant in $\mathbb{R}^n$. Denote $\bm{0}$ as a vector with each component being zero. For a vector $a\in \mathbb{R}^n$, $a\leq \bm{0}$ (or $a<\bm{0}$) means that each component of $a$ is less than or equal to zero (or less than zero). Denote $\|\cdot\|$ and $|\cdot|$ as the $\ell_2$-norm and $\ell_1$-norm for vectors, respectively. Denote $col(x_1,...,x_N) = (x_1^{T}, ... , x_N^{T})^{T}$ as the column vector stacked with column vectors $x_1,...,x_N$. For a set $\Omega\subset \mathbb{R}^n$, $\rint(\Omega)$ is the relative interior and
$d_\Omega(x) \triangleq \inf_{y\in \Omega}\|y-x\|$ is the distance function between point $x$ and set $\Omega$.
\section{Preliminaries}
In this section, we introduce relevant preliminary knowledge about convex analysis, differential inclusions, and graph theory.
A set $C \subseteq \mathbb{R}^n$ is {\em convex} if $\lambda z_1
+(1-\lambda)z_2\in C$ for any $z_1, z_2 \in C$ and $\lambda\in [0,\,1]$. For $x\in C$, the {\em tangent cone} to $C$ at $x$, denoted by $\mathcal{T}_C(x)$, is defined as
\begin{equation*}
\mathcal{T}_C(x) \triangleq \big\{\lim_{k\to\infty}\frac{x_k-x}{t_k}\,|\,x_k\in C, t_k>0,\text{ and }x_k \to x, t_k\to 0\big\},
\end{equation*}
while the {\em normal cone} to $C$ at $x$, denoted by $\mathcal{N}_C(x)$, is defined as
\begin{equation*}
\mathcal{N}_C(x) \triangleq \{v\in \mathbb{R}^n \,|\, v^T(y -x) \leq 0, \text{ for all } y\in C\}\text{.}
\end{equation*}
A projection operator is defined as $P_C(z) \triangleq \mathop{\argmin}_{x\in C}\|x-z\|$, and an operator that projects a point $z\in\mathbb{R}^n$ (or a set) onto the tangent cone $\mathcal{T}_C(x)$ is $\Pi_C(x;z) \triangleq P_{\mathcal{T}_C(x)}(z)$.
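For instance, for a box $C = \{x \,|\, l \leq x \leq u\}$ both operators admit simple closed forms; a small illustrative sketch (Python; the box constraint is our choice of example):
\begin{verbatim}
import numpy as np

def project_box(z, l, u):
    """Euclidean projection P_C(z) onto the box C = {x: l <= x <= u}."""
    return np.clip(z, l, u)

def project_tangent_box(x, z, l, u, tol=1e-12):
    """Projection Pi_C(x; z) onto the tangent cone of the box at x:
    zero out components of z that point outward at active bounds."""
    v = np.asarray(z, dtype=float).copy()
    v[(np.abs(x - l) <= tol) & (v < 0)] = 0.0  # blocked at lower bound
    v[(np.abs(x - u) <= tol) & (v > 0)] = 0.0  # blocked at upper bound
    return v
\end{verbatim}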
A function $f: C\to \mathbb{R}$ is said to be {\em convex} (or {\em strictly convex}) if $f(\lambda z_1
+(1-\lambda)z_2) \leq \text{ (or $<$) } \lambda f(z_1) + (1-\lambda)f(z_2)$ for any $z_1, z_2 \in C, z_1 \neq z_2$ and $\lambda\in (0,\,1)$. A function $g$ is said to be a {\em concave function} if $-g$ is a convex function.
A set-valued map $\mathcal{F}$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ is a map that associates with any $x\in \mathbb{R}^n$ a subset $\mathcal{F}(x)$ of $\mathbb{R}^n$. $\mathcal{F}$ is said to be {\em upper semicontinuous} at $x_0\in \mathbb{R}^n$ if for any open set $E$ containing $\mathcal{F}(x_0)$, there exists a neighborhood $D$ of $x_0$ such that $\mathcal{F}(D)\subset E$. We say that $\mathcal{F}$ is upper semicontinuous if it is so at every $x_0\in \mathbb{R}^n$. The graph of $\mathcal{F}$, denoted by $\gph\mathcal{F}$, is the set consisting of all pairs $(x,y)$ satisfying $y\in \mathcal{F}(x)$.
A differential inclusion can be expressed as follows:
\begin{equation}\label{eq:DI}
\dot{x} \in \mathcal{F}(x), \quad x(0) = x_0.
\end{equation}
A map $x(t): [0, +\infty) \rightarrow \mathbb{R}^n$ is said to be a {\em solution} to \eqref{eq:DI} if it is absolutely continuous and satisfies the inclusion for almost all $t \in [0, +\infty)$.
A graph of a network is denoted by $\mathcal{G}=(\mathcal{V},\,\mathcal{E})$, where $\mathcal{V} = \{1,...,N\}$ is a set of nodes and $\mathcal{E} \subseteq
\mathcal{V}\times \mathcal{V} $ is a set of edges. Node $j$ is said to be a {\em neighbor} of node $i$ if $\{i,j\}\in \mathcal{E}$. The set of all the neighbors of node $i$ is denoted by $\mathcal{N}_i$. $\mathcal{G}$ is said to be {\em undirected} if $(i,\,j)\in \mathcal{E} \Leftrightarrow (j,\,i)\in \mathcal{E}$. A path of $\mathcal{G}$ is a sequence of distinct nodes where any pair of consecutive nodes in the sequence
has an edge of $\mathcal{G}$. Node $j$ is said
to be {\em connected} to node $i$ if there is a path from $j$ to $i$. $\mathcal{G}$ is said to be connected if any two nodes are
connected. The detailed knowledge about graph theory can be found in \cite{Godsil01}.
The following lemma collects some results given in \cite{Clarke1998Nonsmooth} that will be used in our analysis.
\begin{lemma}\label{lem:subdifferential}
Let $f: X \to \mathbb{R}$ be locally Lipschitz continuous and let $\Omega \subset X$ be a closed convex subset, where $X\subset \mathbb{R}^n$. Then the following statements hold.
\begin{enumerate}[leftmargin=*]
\item \label{item:subdifferential2} $\gph\partial f$ is closed.
\item \label{item:subdifferential3} If $f$ is convex, then $x^*\in \mathop{\argmin}_{x\in \Omega}f(x)$ if and only if $0\in \partial f(x^*) + \mathcal{N}_\Omega(x^*)$.
\item \label{item:subdifferential4} Suppose $f$ has a Lipschitz constant $K_0$ on an open set that contains $\Omega$. When $K>K_0$, $x^*\in \mathop{\argmin}_{x\in \Omega}f(x)$ if and only if $x^*\in \mathop{\argmin}_{x\in X}f(x)+Kd_\Omega(x)$.
\end{enumerate}
\end{lemma}
A collection of results in \cite{Aubin1984Differential} with respect to set-valued maps and differential inclusions are given below.
\begin{lemma}\label{lem:setvalue}
The following statements hold.
\begin{enumerate}[leftmargin=*]
\item \label{item:setvalue1}A set-valued map $\mathcal{F}$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ is upper semicontinuous if it has compact values and $\gph\mathcal{F}$ is closed.
\item \label{item:setvalue3}Let $\mathcal{F}_0$ from $\mathbb{R}^n$ to $\mathbb{R}^n$ be a set-valued map and $C\subset \mathbb{R}^n$ be a closed convex subset. Consider the following two differential inclusions
\begin{align}
\label{eq:di1}
\dot x &\in \mathcal{F}_0(x) - \mathcal{N}_{C}(x), \quad x(0) = x_0\in C,\\
\label{eq:di2}
\dot x & \in \Pi_{C}(x,\mathcal{F}_0(x)), \quad x(0) = x_0 \in C.
\end{align}
Then $x(\cdot)$ is a solution to \eqref{eq:di1} if and only if it is a solution to \eqref{eq:di2}.
\item \label{item:setvalue4} For any $x_0\in C$, there is a solution to the differential inclusion \eqref{eq:di1} if $\mathcal{F}_0$ is upper semicontinuous and $C$ is compact and convex.
\end{enumerate}
\end{lemma}
Moreover, we introduce a lemma from \cite[Lemma 4.2]{Zeng2017Distributed} which will be used in the convergence analysis.
\begin{lemma}\label{lem:semistability}
Let $x(t)$ be a solution to the differential inclusion \eqref{eq:DI}. If $z$ is a Lyapunov stable equilibrium of \eqref{eq:DI} and is also a cluster point of $x(\cdot)$, then $\lim_{t\to+\infty}x(t) = z$.
\end{lemma}
\section{Problem Formulation}
Consider a multi-agent network with $N$ agents, whose label set is denoted as $\mathcal{V} = \{1,...,N\}$, cooperating over a graph $\mathcal{G} = \{\mathcal{V},\mathcal{E}\}$. For each agent $i$, there are a local decision variable $x_i\in \mathbb{R}^{n_i}$ and a local constraint set $\Omega_i\subset \mathbb{R}^{n_i}$ for $i\in \mathcal{V}$. Define $\bm{x}\triangleq col(x_1,...,x_N)$, and define
the total cost function of the network as $f(\bm{x}) \triangleq \sum_{i\in \mathcal{V}}f_i(x_i)$, where $f_i:\Omega_i\to \mathbb{R}$ is a (nonsmooth) local cost function of agent $i$. In addition, the agents are subject to coupled inequality constraints in the form of $\bm{g}(\bm{x}) \triangleq \sum_{i\in\mathcal{V}} \bm{g}_i(x_i)\leq \bm{0}$, where $\bm{g}_i:\Omega_i\to\mathbb{R}^M$ are continuous mappings for $i\in \mathcal{V}$ (that is, $\bm{g}_i = (g_{i1},...,g_{iM})^T$ and $g_{ik}:\Omega_i \to \mathbb{R}$ are continuous functions for all $i$ and $k$). To be precise, the optimization problem can be formulated as:
\begin{equation}\label{eq:optimizationProblem}
\min_{\bm{x}\in\Omega} f(\bm{x}), \text{ s.t. } \bm{g}(\bm{x})\leq \bm{0},
\end{equation}
where $\Omega\triangleq \prod_{i\in \mathcal{V}}\Omega_i \subset \mathbb{R}^{n_1+\cdots+n_N}$ denotes the local constraints of $N$ agents.
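To make the setup concrete, consider an illustrative toy instance (our choice, not part of the problem data above) with $N=2$ scalar agents, $f_i(x_i) = (x_i - c_i)^2$, $\Omega_i = [0,1]$, and a single coupled constraint $g(\bm{x}) = x_1 + x_2 - 1 \leq 0$; a centralized reference solution can be obtained with SciPy:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

c = np.array([0.9, 0.8])                 # local cost parameters
f = lambda x: np.sum((x - c) ** 2)       # f(x) = f_1(x_1) + f_2(x_2)
g = lambda x: x[0] + x[1] - 1.0          # coupled constraint g(x) <= 0

res = minimize(f, x0=np.array([0.5, 0.5]),
               bounds=[(0.0, 1.0)] * 2,  # local sets Omega_i = [0, 1]
               constraints=[{'type': 'ineq',
                             'fun': lambda x: -g(x)}])
print(res.x)  # reference optimum for checking a distributed algorithm
\end{verbatim}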
The following assumption is needed to ensure the well-posedness of problem \eqref{eq:optimizationProblem}.
\begin{assumption}\label{assum:1}
~
\begin{enumerate}[leftmargin=*]
\item (Convexity and continuity) For all $i\in\mathcal{V}$, $\Omega_i$ is compact and convex. On an open set containing $\Omega_i$, $f_i$ is strictly convex and $\bm{g}_i$ is convex, and $f_i$ and $\bm{g}_i$ are locally Lipschitz continuous.
\item (Slater's constraint qualification) There exists $\bar{\bm{x}}\in \rint(\Omega)$ such that $\bm{g}(\bar{\bm{x}})< \bm{0}$.
\item (Communication topology) The graph $\mathcal{G}$ is connected and undirected.
\end{enumerate}
\end{assumption}
This assumption is quite mild and similar ones are widely used in the literature (e.g., \cite{Yi2016Initialization}).
The goal of this note is to develop a {\em distributed continuous-time algorithm} for solving \eqref{eq:optimizationProblem} with each agent communicating with its neighbors. Moreover, for every $i\in\mathcal{V}$, agent $i$ can access only $\bm{g}_i(x_i)$, not the full $\bm{g}(\bm{x})$.
The differences between our problem and those in existing literature are as follows.
\begin{itemize}[leftmargin=*]
\item The decision variables can be heterogeneous with possibly different dimensions, in contrast to those consensus-based models.
\item The cost and constraint functions can be nonsmooth, while some projected dynamics \cite{Feijer2010Stability,Cherukuri2016Asymptotic,Yi2016Initialization} and Newton type method \cite{Wei2013Distributed} depend on smoothness.
\item The coupled constraints may be unavailable to local agents, different from \cite{Niederlander2016Distributed}. Also, coupled inequality constraints are considered as in \cite{Chang2014Distributed}, different from the coupled (affine) equality ones studied in \cite{Cherukuri2016Initialization,Yi2016Initialization,Zeng2016Continuous}.
\end{itemize}
\section{Main results}
In this section, we first propose a modified Lagrangian function and then propose a distributed continuous-time algorithm for the considered optimization problem. Moreover, we prove the existence of the solution to the nonsmooth algorithm along with the discussions on its convergence and the rate.
\subsection{Lagrangian Function and Distributed Algorithm Design}
Consider the following dual problem with respect to the primal one \eqref{eq:optimizationProblem},
\begin{equation}\label{eq:dualProblem}
\max_{\lambda\geq \bm{0}}q(\lambda), \quad q(\lambda) \triangleq \min_{\bm{x}\in \Omega}\mathcal{L}(\bm{x},\lambda),
\end{equation}
where $\mathcal{L}:\Omega\times \mathbb{R}^M_+ \to \mathbb{R}$ is the Lagrangian function defined as
\begin{equation}\label{eq:Lagrangian}
\mathcal{L}(\bm{x},\lambda) \triangleq f(\bm{x}) + \lambda^T \bm{g}(\bm{x}).
\end{equation}
It has been shown in \cite{Nedic2009Approximate} that the optimal dual solution $\lambda^*$ of \eqref{eq:dualProblem} lies in a compact set $\mathcal{D}\subset \mathbb{R}^M_+$, given by
\begin{equation}\label{eq:dualSet}
\mathcal{D} \triangleq \{\lambda\in \mathbb{R}^M_+\,|\, \|\lambda\|\leq \frac{f(\bar{\bm{x}}) - \tilde{q}}{\gamma}\},
\end{equation}
where $\bar{\bm{x}}$ is a Slater point of \eqref{eq:optimizationProblem}, $\tilde{q} = \min_{\bm{x}\in \Omega}\mathcal{L}(\bm{x},\tilde{\lambda})$ is a dual function value for an arbitrary $\tilde{\lambda}\geq \bm{0}$, and $\gamma = \min_{k = 1,...,M}\{-\sum_{i\in\mathcal{V}}g_{ik}(\bar{x}_i)\}$.
We present the following lemma, which is a well-known convex optimization result \cite{Rockafellar1998Variational}.
\begin{lemma}\label{lem:collection}
Under Assumption \ref{assum:1}, the following statements are equivalent:
\begin{enumerate}[leftmargin=*]
\item (Primal-dual characterization) $(\bm{x}^*, \lambda^*)$ is a primal-dual solution pair of problems \eqref{eq:optimizationProblem} and \eqref{eq:dualProblem}.
\item (Saddle-point characterization) $(\bm{x}^*, \lambda^*)$ is a {\em saddle point} of Lagrangian function \eqref{eq:Lagrangian}, that is,
\begin{equation*}
\mathcal{L}(\bm{x}^*,\lambda) \leq \mathcal{L}(\bm{x}^*,\lambda^*) \leq \mathcal{L}(\bm{x},\lambda^*), \quad \forall\, \bm{x}\in \Omega,\,\lambda\geq \bm{0}.
\end{equation*}
\item (KKT characterization) $(\bm{x}^*, \lambda^*)$ satisfies
\begin{equation*}
\bm{0} \in \partial f(\bm{x}^*) + \partial\bm{g}(\bm{x}^*)\lambda^{*} + \mathcal{N}_{\Omega}(\bm{x}^*), \quad \bm{0} \leq \lambda^* \perp -\bm{g}(\bm{x}^*) \geq \bm{0}.
\end{equation*}
\item (Minimax characterization) $(\bm{x}^*,\lambda^*)$ is a solution of the minimax problem
\begin{equation*}
\min_{\bm{x}\in \Omega}\{\max_{\lambda\in \mathcal{D}} \mathcal{L}(\bm{x},\lambda)\}.
\end{equation*}
\end{enumerate}
Moreover, since $f(\bm{x})$ is strictly convex, such $\bm{x}^*$ is unique while $\lambda^*$ may not be unique.
\end{lemma}
A centralized projected primal-dual algorithm with respect to $\mathcal{L}(\bm{x},\lambda)$ can be written as (referring to \cite{Cherukuri2016Asymptotic}):
\begin{equation*}
\left\{\begin{aligned}
\dot{\bm{x}} &\in \Pi_{\Omega}(\bm{x}, - \partial f(\bm{x}) - \partial\bm{g}(\bm{x})\lambda), \, &&\bm{x}(0) \in \Omega,\\
\dot{\lambda} &\in \Pi_{\mathbb{R}^{M}_+}(\lambda, \bm{g}_1(x_1) + \cdots + \bm{g}_N(x_N)), \, &&\lambda(0) \in \mathbb{R}^{M}_+,
\end{aligned}\right.
\end{equation*}
which needs a center to broadcast $\lambda$ and gather $\bm{g}_1, ..., \bm{g}_N$ for the update. In order to develop fully distributed algorithms without a center, we employ local multipliers and a nonsmooth penalty function to construct a modified Lagrangian function. To be specific, define
\begin{subequations}
\begin{align}
\bm{\lambda} & \triangleq col(\lambda_1,..., \lambda_N)\in \mathbb{R}^{MN}_+,\\
\label{eq:S}
\mathcal{S} & \triangleq \{\bm{\lambda}\in \mathbb{R}^{MN}_+\,|\, \lambda_1 = \cdots = \lambda_N\},\\
\label{eq:phi}
\phi(\bm{\lambda}) & \triangleq \frac{1}{2} \sum_{i\in \mathcal{V}} \sum_{j\in \mathcal{N}_i}|\lambda_i-\lambda_j|\text{,}\\
\label{eq:tildeL}
\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) & \triangleq \sum_{i\in \mathcal{V}}f_i(x_i)+\lambda_i^T\bm{g}_i(x_i) - K\phi(\bm{\lambda}),
\end{align}
\end{subequations}
where $\bm{\lambda}$ is a collection of {\em local multipliers} employed for distributed design and $\mathcal{S}$ is a cone for the local multipliers to reach a consensus there. Moreover, $\phi(\bm{\lambda})$ serves as a {\em metric of consensus} for the multipliers and $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ is a modified Lagrangian function with a constant $K>0$.
The following lemma reveals that the nonsmooth $K\phi(\bm{\lambda})$ plays a role as an exact penalty function for the consensus of multipliers.
\begin{lemma}\label{lem:exactPenalty}
Under Assumption \ref{assum:1}, for any $\bm{x}\in \Omega$, there holds
\begin{equation*}
\mathop{\argmax}_{\bm{\lambda}\in \mathbb{R}^{MN}_+} \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) = \mathop{\argmax}_{\bm{\lambda}\in \mathcal{S}} \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}),
\end{equation*}
provided $K > \sqrt{N}K_0$, where $K_0\triangleq \max_{\bm{x}\in \Omega}\|col(\bm{g}_1(x_1),...,\bm{g}_N(x_N))\|$.
\end{lemma}
\begin{IEEEproof}
It follows from part \ref{item:subdifferential4}) in Lemma \ref{lem:subdifferential} that, for any $K_d>K_0$,
\begin{equation*}
\mathop{\argmax}_{\bm{\lambda}\in \mathcal{S}}\sum_{i\in \mathcal{V}}f_i(x_i)+\lambda_i^T\bm{g}_i(x_i) = \mathop{\argmax}_{\bm{\lambda}\in \mathbb{R}^{MN}_+} \sum_{i\in \mathcal{V}}f_i(x_i)+\lambda_i^T\bm{g}_i(x_i) - K_dd_{\mathcal{S}}(\bm{\lambda}).
\end{equation*}
Since $\phi(\bm{\lambda}) = d_{\mathcal{S}}(\bm{\lambda}) = 0, \forall\, \bm{\lambda} \in \mathcal{S}$, it suffices to prove $\sqrt{N}\phi(\bm{\lambda}) > d_{\mathcal{S}}(\bm{\lambda})$ for all $\bm{\lambda}\in \mathbb{R}^{MN}_+\setminus \mathcal{S}$.
On one hand,
\begin{equation*}
d_{\mathcal{S}}^2(\bm{\lambda}) = \min_{\tilde{\bm{\lambda}}\in \mathcal{S}} \|\tilde{\bm{\lambda}} - \bm{\lambda}\|^2= \sum_{k=1}^N \big\|\lambda_k - \frac{\lambda_1 + \cdots + \lambda_N}{N}\big\|^2 \leq \frac{1}{N}\sum_{k=1}^N \sum_{l=1}^N\|\lambda_k-\lambda_l\|^2 \leq \frac{1}{N}\sum_{k=1}^N \sum_{l=1}^N|\lambda_k-\lambda_l|^2.
\end{equation*}
On the other hand, since the graph is connected and undirected, there is a path $\mathcal{P}_{kl}\subset \mathcal{E}$ connecting nodes $k$ and $l$ for any $k,l\in\mathcal{V}$. Then
\begin{align*}
\phi(\bm{\lambda}) = \frac{1}{2}\sum_{(i,j)\in \mathcal{E}}|\lambda_i - \lambda_j| \geq \frac{1}{2}\sum_{(i,j)\in \mathcal{P}_{kl}}|\lambda_i - \lambda_j| \geq |\lambda_k-\lambda_l|.
\end{align*}
Thus, $d_{\mathcal{S}}^2(\bm{\lambda}) \leq N\phi^2(\bm{\lambda})$ and the equality holds if and only if $\bm{\lambda} \in \mathcal{S}$, which implies the conclusion.
\end{IEEEproof}
The following result shows that the modified Lagrangian function $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ correctly captures problem \eqref{eq:optimizationProblem}.
\begin{theorem}\label{thm:equilibrium}
Under Assumption \ref{assum:1}, the following statements are equivalent:
\begin{enumerate}[leftmargin=*]
\item $(\bm{x}^*, \bm{\lambda}^*) \in \Omega \times \mathbb{R}^{MN}_+$ renders the following equations
\begin{subequations}\label{eq:equilibrium}
\begin{align}
0 &\in \Pi_{\Omega}(\bm{x}^*, - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x}^*,\bm{\lambda}^*)),\\
0 &\in \Pi_{\mathbb{R}^{MN}_+}(\bm{\lambda}^*, -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x}^*,\bm{\lambda}^*)).
\end{align}
\end{subequations}
\item $(\bm{x}^*, \bm{\lambda}^*)$ is a saddle point of $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ in $\Omega \times \mathbb{R}^{MN}_+$.
\item $\bm{\lambda^*} = col(\lambda^*,...,\lambda^*)$ and $(\bm{x}^*, \lambda^*)$ is a saddle point of $\mathcal{L}(\bm{x},\lambda)$ in $\Omega \times \mathbb{R}^{M}_+$.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
1) $\Rightarrow$ 2): Let $(\bm{x}^*, \bm{\lambda}^*)\in \Omega \times \mathbb{R}^{MN}_+$ satisfy \eqref{eq:equilibrium}. Then
\begin{subequations}\label{eq:equlibriumCondition}
\begin{align}
\label{eq:equlibriumCondition1}
0 &\in - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x}^*,\bm{\lambda}^*) - \mathcal{N}_\Omega(\bm{x}^*),\\
\label{eq:equlibriumCondition2}
0 &\in -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x}^*,\bm{\lambda}^*) - \mathcal{N}_{\mathbb{R}^{MN}_+}(\bm{\lambda}^*).
\end{align}
\end{subequations}
Since $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ is convex in $\bm{x}$ and concave in $\bm{\lambda}$ (or equivalently, $-\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ is convex in $\bm{\lambda}$), it follows from part \ref{item:subdifferential3}) in Lemma \ref{lem:subdifferential} that $\bm{x}^*$ is the minimum point of $\tilde{\mathcal{L}}(\cdot,\bm{\lambda}^*)$ in $\Omega$ and $\bm{\lambda}^*$ is a maximum point of $\tilde{\mathcal{L}}(\bm{x}^*,\cdot)$ in $\mathbb{R}^{MN}_+$, which implies statement 2).
2) $\Rightarrow$ 3): Let $(\bm{x}^*, \bm{\lambda}^*)$ be a saddle point of $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ in $\Omega \times \mathbb{R}^{MN}_+$. Then
\begin{equation}\label{eq:equlibriumCondition3}
\bm{\lambda}^* \in \mathop{\argmax}_{\bm{\lambda}\geq \bm{0}} \tilde{\mathcal{L}}(\bm{x}^*,\bm{\lambda}).
\end{equation}
It follows from Lemma \ref{lem:exactPenalty} that $\bm{\lambda}^* = col(\lambda^*,...,\lambda^*)$. Substituting this $\bm{\lambda}^*$ into the saddle point inequalities with respect to $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ yields
\begin{equation}\label{eq:equlibriumCondition4}
\bm{x}^* \in \mathop{\argmin}_{\bm{x}\in \Omega}\mathcal{L}(\bm{x},\lambda^*)\text{ and } \lambda^* \in \mathop{\argmax}_{\lambda\geq \bm{0}} \mathcal{L}(\bm{x}^*,\lambda),
\end{equation}
because of the identity $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}^*) = \mathcal{L}(\bm{x},\lambda^*)$. Therefore, the conclusion follows.
3) $\Rightarrow$ 1): Suppose $\bm{\lambda^*} = col(\lambda^*,...,\lambda^*)$ and $(\bm{x}^*, \lambda^*)$ is a saddle point of $\mathcal{L}(\bm{x},\lambda)$. According to Lemma \ref{lem:exactPenalty}, condition \eqref{eq:equlibriumCondition3} holds. Again, from part \ref{item:subdifferential3}) in Lemma \ref{lem:subdifferential}, condition \eqref{eq:equlibriumCondition} holds, which implies statement 1).
\end{IEEEproof}
By Theorem \ref{thm:equilibrium} and Lemma \ref{lem:collection}, the saddle points of $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$ match exactly the saddle points of $\mathcal{L}(\bm{x},\lambda)$, which are in accordance with the optimal primal-dual solutions.
Based on $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$, we present a distributed continuous-time algorithm to solve \eqref{eq:optimizationProblem} as follows:
\begin{equation}\label{eq:distributedAlgorithm}
\forall\,i\in\mathcal{V}:\, \left\{\begin{aligned}
\dot{x}_i &\in \Pi_{\Omega_i}(x_i, - \partial_{x_i}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})), \, && x_i(0) \in \Omega_i\\
\dot{\lambda}_i &\in \Pi_{\mathbb{R}^M_+}(\lambda_i, -\partial_{\lambda_i}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda})), \, && \lambda_i(0) \in \mathbb{R}^M_+,
\end{aligned}\right.
\end{equation}
where
\begin{equation*}
\partial_{x_i}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) = \partial f_i(x_i) + \partial \bm{g}_{i}(x_i)\lambda_i, \quad -\partial_{\lambda_i}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}) = \bm{g}_i(x_i) - K\sum_{j\in \mathcal{N}_i}\sign(\lambda_i-\lambda_j),
\end{equation*}
(the second equality follows because graph $\mathcal{G}$ is undirected) and $\sign(\cdot)$ is the set-valued sign function with each component defined as
\begin{equation*}
\sign(y) \triangleq \partial |y| = \left\{\begin{aligned}
& \{1\} &&\text{ if } y>0\\
& \{-1\} &&\text{ if } y<0\\
& [-1,\,1] &&\text{ if } y=0\\
\end{aligned}\right.\text{.}
\end{equation*}
For simplicity, we rewrite algorithm \eqref{eq:distributedAlgorithm} in a compact form
\begin{equation}\label{eq:distributedAlgorithmCompact}
\left\{\begin{aligned}
\dot{\bm{x}} &\in \Pi_{\Omega}(\bm{x}, - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})), \, && \bm{x}(0) \in \Omega\\
\dot{\bm{\lambda}} &\in \Pi_{\mathbb{R}^{MN}_+}(\bm{\lambda}, -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda})), \, && \bm{\lambda}(0) \in \mathbb{R}^{MN}_+.
\end{aligned}\right.
\end{equation}
Algorithm \eqref{eq:distributedAlgorithm} is fully distributed since each agent $i\in\mathcal{V}$ only updates its local variables $x_i$ and $\lambda_i$ according to its local functions $f_i, \bm{g}_i$ and the information of its neighbors $\lambda_j, j\in \mathcal{N}_i$.
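For illustration, the following Python sketch applies a simple forward-Euler discretization of \eqref{eq:distributedAlgorithm} to a hypothetical toy instance (the scalar decisions, ring graph, quadratic costs and single coupled budget constraint are our own assumptions, not the examples of the sequel); the tangent-cone projection of the continuous-time dynamics is approximated by a Euclidean projection after each Euler step.
\begin{verbatim}
import numpy as np

# Hypothetical toy instance: N agents on a ring, scalar decisions x_i,
# f_i(x_i) = (x_i - i)^2, Omega_i = [0, 10], one coupled constraint (M = 1):
# g(x) = sum_i x_i - c <= 0, split as g_i(x_i) = x_i - c/N.
N, c, K, dt, steps = 4, 5.0, 5.0, 1e-3, 200000
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring graph

x = np.full(N, 5.0)   # x_i(0) in Omega_i
lam = np.zeros(N)     # lambda_i(0) >= 0

for _ in range(steps):
    xn, ln = x.copy(), lam.copy()
    for i in range(N):
        dx = 2.0 * (x[i] - i) + lam[i]                # subgradient in x_i
        cons = sum(np.sign(lam[i] - lam[j]) for j in nbrs[i])
        dl = (x[i] - c / N) - K * cons                # local multiplier dynamics
        xn[i] = np.clip(x[i] - dt * dx, 0.0, 10.0)    # project onto Omega_i
        ln[i] = max(lam[i] + dt * dl, 0.0)            # project onto R_+
    x, lam = xn, ln

print("x =", x, "  sum(x) - c =", x.sum() - c, "  lambda =", lam)
\end{verbatim}
With a sufficiently small step size, the iterates are expected to mimic the continuous-time behavior established later in Theorem \ref{thm:convergence}: the local multipliers reach consensus and the primal variables settle near the optimal solution.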
\begin{remark}
Some discussions about our method are given below.
\begin{itemize}[leftmargin=*]
\item In the original $\mathcal{L}(\bm{x},\lambda)$, each $\bm{g}_i(x_i)$ shares a common $\lambda$ and the multiplier $\lambda$ performs on the coupled $\bm{g}(\bm{x})$, while, in the modified one $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$, all the local parts of cost and constraint functions are gathered in a decoupled way.
\item Our distributed algorithm involves only primal variables and local multipliers without auxiliary dynamics, while some existing distributed algorithms such as those given in \cite{Cherukuri2016Initialization,Yi2016Initialization,Zeng2016Continuous,Chang2014Distributed} employed auxiliary dynamics for the convergence.
\item From the estimation $K_0 \leq \sum_{i=1}^N \max_{x_i \in \Omega_i}\{\|\bm{g}_i(x_i)\|\}$ for Lemma \ref{lem:exactPenalty}, parameter $K$ can be assigned via local estimation of each $\max_{x_i \in \Omega_i}\|\bm{g}_i(x_i)\|$ and calculating the sum in a distributed manner.
\end{itemize}
\end{remark}
\subsection{Existence and Convergence}
Dynamics \eqref{eq:distributedAlgorithmCompact} is nonsmooth due to the projection operator and subgradients of the nonsmooth Lagrangian function. Thus, we need to check the existence of its solution (trajectory).
\begin{theorem}\label{thm:existence}
Under Assumption \ref{assum:1}, for any initial value $\bm{x}(0)\in \Omega, \bm{\lambda}(0) \in \mathbb{R}^{MN}_+$, there exists a solution to \eqref{eq:distributedAlgorithmCompact}.
\end{theorem}
\begin{IEEEproof}
Let $\tilde{\mathcal{D}}$ be the convex hull of $\bm{\lambda}(0)$ and $\prod_{i=1}^N\mathcal{D}$, where $\mathcal{D}$ is in \eqref{eq:dualSet}. Then $\tilde{\mathcal{D}}$ is compact and convex. Consider the following differential inclusion
\begin{equation}\label{eq:compactDI}
\left\{\begin{aligned}
\dot{\bm{x}} &\in \Pi_{\Omega}(\bm{x}, - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}))\\
\dot{\bm{\lambda}} &\in \Pi_{\tilde{\mathcal{D}}}(\bm{\lambda}, -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}))
\end{aligned}\right.\text{.}
\end{equation}
Since $\mathcal{T}_{\tilde{\mathcal{D}}}(\bm{\lambda}) \subset \mathcal{T}_{\mathbb{R}^{MN}_+}(\bm{\lambda}), \forall\, \bm{\lambda}\in \tilde{\mathcal{D}}$, any solution to \eqref{eq:compactDI} is also a solution to \eqref{eq:distributedAlgorithmCompact}. Thus, it suffices to prove the existence of solution for \eqref{eq:compactDI}.
Let $\mathcal{F}(\bm{x},\bm{\lambda}) \triangleq col(- \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}), -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}))$, and $C \triangleq \Omega\times \tilde{\mathcal{D}}$. We claim that $\mathcal{F}$ is upper semicontinuous over $C$. The locally Lipschitz continuity of $f(\bm{x}), \bm{g}(\bm{x}), \phi(\bm{\lambda})$ implies that $\mathcal{F}$ has compact values over the compact set $C$. Then it suffices to prove $\gph\mathcal{F}$ is closed due to part \ref{item:setvalue1}) in Lemma \ref{lem:setvalue}. Let $\{\bm{x}_k, \bm{\lambda}_k\}$ and $\{\zeta_k,\eta_k\}$ be sequences in $C$ and $\mathbb{R}^{n_1+\cdots+n_N}\times \mathbb{R}^{MN}$ such that (a) $col(\zeta_k,\eta_k) \in \mathcal{F}(\bm{x}_k,\bm{\lambda}_k)$, (b) $(\bm{x}_k, \bm{\lambda}_k)$ converges to $(\bm{x}, \bm{\lambda})$, and (c) $(\zeta,\eta)$ is a cluster point of the sequence $(\zeta_k,\eta_k)$. We can extract a subsequence of $(\zeta_k,\eta_k)$ (without relabeling) such that $\lim_{k\to+\infty}(\zeta_k,\eta_k) = (\zeta,\eta)$. Since $- \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) = - \partial f(\bm{x}) - \diag\{\partial \bm{g}_1(\bm{x}), ..., \partial \bm{g}_N(\bm{x})\}\bm{\lambda}$, $\zeta_k = -\alpha_k - \beta_k\bm{\lambda}_k$, where $\alpha_k\in \partial f(\bm{x}_k)$ and $\beta_k \in \diag\{\partial \bm{g}_1(\bm{x}_k), ..., \partial \bm{g}_N(\bm{x}_k)\}$. It follows from part \ref{item:subdifferential2}) in Lemma \ref{lem:subdifferential} that $\lim_{k\to+\infty} \alpha_k = \alpha \in \partial f(\bm{x})$ and $\lim_{k\to+\infty} \beta_k = \beta \in \diag\{\partial \bm{g}_1(\bm{x}), ..., \partial \bm{g}_N(\bm{x})\}$ after extracting subsequences of $\{\alpha_k\}$ and $\{\beta_k\}$ without relabeling. Therefore, $\zeta \in - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$. Similarly, $\eta \in -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda})$. Thus, $(\zeta,\eta) \in \mathcal{F}(\bm{x},\bm{\lambda})$, i.e., $\gph\mathcal{F}$ is closed.
Finally, according to part \ref{item:setvalue3}) and part \ref{item:setvalue4}) in Lemma \ref{lem:setvalue}, there exists a solution to system \eqref{eq:compactDI}, which is also a solution to \eqref{eq:distributedAlgorithmCompact}.
\end{IEEEproof}
We are now ready to show the convergence of our algorithm.
\begin{theorem}\label{thm:convergence}
Under Assumption \ref{assum:1}, algorithm \eqref{eq:distributedAlgorithmCompact} is stable and any of its solutions converges to the set of saddle points of $\tilde{\mathcal{L}}$. Moreover, for any solution $(\bm{x}(t),\bm{\lambda}(t))$, there exists a saddle point $(\bm{x}^*,\tilde{\bm{\lambda}}^*)$ of $\tilde{\mathcal{L}}$ such that
\begin{equation}\label{eq:convergence}
\lim_{t\to+\infty} (\bm{x}(t),\bm{\lambda}(t)) = (\bm{x}^*,\tilde{\bm{\lambda}}^*).
\end{equation}
\end{theorem}
\begin{IEEEproof}
For all $\bm{x}, \underline{\bm{x}}\in \Omega, \bm{\lambda}, \underline{\bm{\lambda}} \in \mathbb{R}_+^{MN}$, the following basic conditions hold, according to the definitions of projection, normal cone and the convexity-concavity of $\tilde{\mathcal{L}}$.
\begin{itemize}[leftmargin=*]
\item (projection)
\begin{align*}
&\Pi_{\Omega}(\bm{x}, - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})) \subset - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) - \mathcal{N}_\Omega(\bm{x}), \\ &\Pi_{\mathbb{R}^{MN}_+}(\bm{\lambda}, -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda})) \subset -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}) - \mathcal{N}_{\mathbb{R}^{MN}_+}(\bm{\lambda}),
\end{align*}
\item (normal cone)
\begin{equation*}
(\underline{\bm{x}}-\bm{x})^Tu_x \leq 0, \, \forall\,u_x \in \mathcal{N}_\Omega(\bm{x}), \quad
(\underline{\bm{\lambda}} -\bm{\lambda})^Tu_\lambda \leq 0, \, \forall\,u_\lambda \in \mathcal{N}_{\mathbb{R}^{MN}_+}(\bm{\lambda}),
\end{equation*}
\item (convexity-concavity)
\begin{align*}
&(\underline{\bm{x}}-\bm{x})^Tv_x \leq \tilde{\mathcal{L}}(\underline{\bm{x}},\bm{\lambda}) - \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}), \quad \forall\, v_x\in \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda}),\\
&(\underline{\bm{\lambda}}-\bm{\lambda})^Tv_\lambda \leq \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) - \tilde{\mathcal{L}}(\bm{x},\underline{\bm{\lambda}}), \quad \forall\, v_\lambda\in \partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}).
\end{align*}
\end{itemize}
Therefore,
\begin{subequations}\label{eq:condition1}
\begin{align}
(\bm{x}-\underline{\bm{x}})^Tw_x &\leq \tilde{\mathcal{L}}(\underline{\bm{x}},\bm{\lambda}) - \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}), \quad \forall\, w_x\in \Pi_{\Omega}(\bm{x}, - \partial_{\bm{x}}\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})), \\
(\bm{\lambda}-\underline{\bm{\lambda}})^Tw_\lambda & \leq \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}) - \tilde{\mathcal{L}}(\bm{x},\underline{\bm{\lambda}}), \quad \forall\, w_\lambda\in \Pi_{\mathbb{R}^{MN}_+}(\bm{\lambda}, -\partial_{\bm{\lambda}}(-\tilde{\mathcal{L}})(\bm{x},\bm{\lambda}))\text{.}
\end{align}
\end{subequations}
Let $(\bm{x}^*,\bm{\lambda}^*)$ be an equilibrium point of \eqref{eq:distributedAlgorithmCompact}, which satisfies \eqref{eq:equilibrium}. From Lemma \ref{lem:collection} and Theorem \ref{thm:equilibrium}, $\bm{x}^*$ coincides with the unique solution of the primal problem and $(\bm{x}^*,\bm{\lambda}^*)$ is a saddle point of $\tilde{\mathcal{L}}(\bm{x},\bm{\lambda})$. Define
\begin{equation*}
W(\bm{x},\bm{\lambda}) \triangleq \tilde{\mathcal{L}}(\bm{x},\bm{\lambda}^*) - \tilde{\mathcal{L}}(\bm{x}^*,\bm{\lambda}),\, \forall\, \bm{x} \in \Omega, \bm{\lambda} \geq \bm{0}.
\end{equation*}
Obviously, $W(\cdot)$ is a locally Lipschitz continuous function. Moreover, since $(\bm{x}^*, \bm{\lambda}^*)$ is a saddle point of $\tilde{\mathcal{L}}$, $W(\bm{x},\bm{\lambda}) \geq 0$ and $W(\bm{x},\bm{\lambda}) = 0$ if and only if $(\bm{x},\bm{\lambda}) = (\bm{x}^*,\tilde{\bm{\lambda}}^*)$ for some saddle point $(\bm{x}^*,\tilde{\bm{\lambda}}^*)$ of $\tilde{\mathcal{L}}$.
Consider a Lyapunov function
\begin{equation*}
V(\bm{x},\bm{\lambda}) = \frac{1}{2}\|\bm{x} - \bm{x}^*\|^2+\frac{1}{2}\|\bm{\lambda} - \bm{\lambda}^*\|^2.
\end{equation*}
Let $(\bm{x}(t),\bm{\lambda}(t))$ be any solution to \eqref{eq:distributedAlgorithmCompact}. Since $\dot{\bm{x}} \in \mathcal{T}_\Omega(\bm{x})$ and $\dot{\bm{\lambda}} \in \mathcal{T}_{\mathbb{R}^{MN}_+}(\bm{\lambda})$, we have $\bm{x}(t) \in \Omega, \, \bm{\lambda}(t)\geq \bm{0},\, \forall\,t\geq 0$. Moreover, it follows from \eqref{eq:condition1} that
\begin{equation}\label{eq:dV}
\frac{d}{dt}V(\bm{x}(t),\bm{\lambda}(t)) = (\bm{x}(t)-\bm{x}^*)^T\dot{\bm{x}}(t) + (\bm{\lambda}(t)-\bm{\lambda}^*)^T\dot{\bm{\lambda}}(t) \leq - W(\bm{x}(t),\bm{\lambda}(t)) \leq 0,
\end{equation}
for almost all $t\geq 0$. Therefore, \eqref{eq:distributedAlgorithmCompact} is stable.
Furthermore, since $W(\cdot)$ is locally Lipschitz continuous and $\bm{x}(t),\bm{\lambda}(t)$ are absolutely continuous, $W(t)$ (shorthand for $W(\bm{x}(t),\bm{\lambda}(t))$) is uniformly continuous in $t$. We claim that $W(t)$ is Riemann integrable over the infinite interval $[0,+\infty)$. In fact, the Riemann integral of the continuous function $W$ over any finite interval $[0,t)$ equals the corresponding Lebesgue integral. Moreover, $\int_{0}^tW(\tau)d\tau$ is monotonically increasing since $W$ is nonnegative, and it follows from \eqref{eq:dV} that it is bounded above by $V(\bm{x}(0),\bm{\lambda}(0))$.
As a result, $\int_0^{+\infty}W(\tau)d\tau$ exists and is finite. Then, by Barbalat's lemma, $(\bm{x}(t),\bm{\lambda}(t))$ converges to the zero set of $W$, which is exactly the set of saddle points of $\tilde{\mathcal{L}}$.
Let $(\bm{x}^*,\tilde{\bm{\lambda}}^*)$ be a cluster point of $(\bm{x}(t),\bm{\lambda}(t))$ as $t\to +\infty$. Then $(\bm{x}^*,\tilde{\bm{\lambda}}^*)$ is a saddle point of $\tilde{\mathcal{L}}$. Define
\begin{equation*}
\tilde V(\bm{x},\bm{\lambda}) = \frac{1}{2}\|\bm{x} - \bm{x}^*\|^2+\frac{1}{2}\|\bm{\lambda} - \tilde{\bm{\lambda}}^*\|^2.
\end{equation*}
It follows from similar arguments that $\dot {\tilde V }(t)\leq -\tilde{\mathcal{L}}(\bm{x}(t),\tilde{\bm{\lambda}}^*)+ \tilde{\mathcal{L}}(\bm{x}^*,\bm{\lambda}(t))\leq 0$ for almost all $t>0$. Hence, $(\bm{x}^*,\tilde {\bm{\lambda}}^*)$ is Lyapunov stable. It follows from Lemma \ref{lem:semistability} that \eqref{eq:convergence} holds.
\end{IEEEproof}
Finally, we discuss the convergence rate. Define
\begin{equation}\label{eq:hatxlambda}
\hat{\bm{x}}(t) \triangleq \frac{1}{t}\int_0^t\bm{x}(\tau)d\tau,\quad \hat{\bm{\lambda}}(t) \triangleq \frac{1}{t}\int_0^t\bm{\lambda}(\tau)d\tau\text{,}
\end{equation}
where trajectories $\bm{x}(\cdot),\bm{\lambda}(\cdot)$ are in Theorem \ref{thm:convergence}.
\begin{theorem}\label{thm:rate}
Under Assumption \ref{assum:1}, there exists a constant $\theta_0>0$ such that
\begin{equation}\label{eq:rate}
\|\tilde{\mathcal{L}}(\hat{\bm{x}}(t),\hat{\bm{\lambda}}(t)) - \tilde{\mathcal{L}}(\bm{x}^*,\tilde{\bm{\lambda}}^*)\| \leq \frac{\theta_0}{t}, \quad \forall\, t>0\text{.}
\end{equation}
\end{theorem}
\begin{IEEEproof}
Since $\Omega$ is convex and $\bm{x}(\cdot) \in \Omega, \bm{\lambda}(\cdot)\in \mathbb{R}_+^{MN}$, $\hat{\bm{x}}(t)\in \Omega, \hat{\bm{\lambda}}(t)\in \mathbb{R}_+^{MN}, \, \forall\, t>0$. It follows from Jensen's inequality for the convex-concave $\tilde{\mathcal{L}}$ that, for any $\underline{\bm{x}}\in \Omega, \underline{\bm{\lambda}} \in \mathbb{R}_+^{MN}$,
\begin{equation}\label{eq:Jensen}
\frac{1}{t}\int_0^t \tilde{\mathcal{L}}(\bm{x}(\tau),\underline{\bm{\lambda}})d\tau \geq \tilde{\mathcal{L}}(\hat{\bm{x}}(t),\underline{\bm{\lambda}}), \quad
\frac{1}{t}\int_0^t \tilde{\mathcal{L}}(\underline{\bm{x}},\bm{\lambda}(\tau))d\tau \leq \tilde{\mathcal{L}}(\underline{\bm{x}},\hat{\bm{\lambda}}(t))\text{.}
\end{equation}
Moreover, it follows from \eqref{eq:condition1} that, for almost all $\tau>0$,
\begin{equation}\label{eq:derivativecondition}
\begin{aligned}
\frac{d}{d\tau}(\frac{1}{2}\|\bm{x}(\tau) - \underline{\bm{x}}\|^2) \leq \tilde{\mathcal{L}}(\underline{\bm{x}},\bm{\lambda}(\tau)) - \tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau)),\\
\frac{d}{d\tau}(\frac{1}{2}\|\bm{\lambda}(\tau) - \underline{\bm{\lambda}}\|^2)\leq \tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau)) - \tilde{\mathcal{L}}(\bm{x}(\tau),\underline{\bm{\lambda}}).
\end{aligned}
\end{equation}
For any fixed $t>0$, averaging \eqref{eq:derivativecondition} over the interval $[0,t]$ and applying \eqref{eq:Jensen} yields
\begin{equation}\label{eq:ratecondition1}
\begin{aligned}
-\frac{\|\bm{x}(0) - \underline{\bm{x}}\|^2}{2t} \leq \tilde{\mathcal{L}}(\underline{\bm{x}},\hat{\bm{\lambda}}(t)) -\frac{1}{t} \int_0^t \tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau))d\tau, \\
-\frac{\|\bm{\lambda}(0) - \underline{\bm{\lambda}}\|^2}{2t} \leq \frac{1}{t} \int_0^t\tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau))d\tau - \tilde{\mathcal{L}}(\hat{\bm{x}}(t),\underline{\bm{\lambda}})\text{.}
\end{aligned}
\end{equation}
Replacing $(\underline{\bm{x}}, \underline{\bm{\lambda}})$ by $(\hat{\bm{x}}(t), \hat{\bm{\lambda}}(t))$ in \eqref{eq:ratecondition1} yields
\begin{equation*}
\big\|\frac{1}{t} \int_0^t \tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau))d\tau - \tilde{\mathcal{L}}(\hat{\bm{x}}(t),\hat{\bm{\lambda}}(t))\big\|\leq \frac{\theta_1}{2t},
\end{equation*}
where $\theta_1 \triangleq \max_{t>0}\{\|\bm{x}(0) - \hat{\bm{x}}(t)\|^2, \|\bm{\lambda}(0) - \hat{\bm{\lambda}}(t)\|^2\}$. Note that $(\hat{\bm{x}}(t), \hat{\bm{\lambda}}(t))$ are uniformly bounded for $t\in [0,+\infty)$ due to \eqref{eq:convergence}. Similarly, since $\tilde{\mathcal{L}}(\bm{x}^*,\hat{\bm{\lambda}}(t)) \leq \tilde{\mathcal{L}}(\bm{x}^*,\tilde{\bm{\lambda}}^*) \leq \tilde{\mathcal{L}}(\hat{\bm{x}}(t),\tilde{\bm{\lambda}}^*)$, we have from replacing $(\underline{\bm{x}}, \underline{\bm{\lambda}})$ by $(\bm{x}^*,\tilde{\bm{\lambda}}^*)$ in \eqref{eq:ratecondition1} that
\begin{equation*}
\big\|\frac{1}{t} \int_0^t\tilde{\mathcal{L}}(\bm{x}(\tau),\bm{\lambda}(\tau))d\tau - \tilde{\mathcal{L}}(\bm{x}^*,\tilde{\bm{\lambda}}^*) \big\|\leq \frac{\theta_2}{2t},
\end{equation*}
where $\theta_2 \triangleq \max\{\|\bm{x}(0) - \bm{x}^*\|^2, \|\bm{\lambda}(0) - \tilde{\bm{\lambda}}^*\|^2\}$. Thus, \eqref{eq:rate} holds with $\theta_0 \triangleq (\theta_1 + \theta_2)/2$.
\end{IEEEproof}
Theorems \ref{thm:equilibrium}--\ref{thm:rate} provide a complete procedure to prove that algorithm \eqref{eq:distributedAlgorithmCompact} solves problem \eqref{eq:optimizationProblem}. In particular, Theorem \ref{thm:rate} indicates that the value of the Lagrangian function with respect to time average trajectories converges to the value at saddle points with the convergence rate $O(\frac{1}{t})$.
\section{Numerical Examples}
In this section, we first take a simple example for illustration and then consider a more practical example to assess the performance of our algorithm.
\begin{example}
Consider four agents for the optimization problem \eqref{eq:optimizationProblem} with nonsmooth cost and constraint functions:
\begin{equation*}
f_i(x_i) = (x_{i,1} + a_{i,1}x_{i,2})^2 + x_{i,1} + a_{i,2}x_{i,2} + \sqrt{x_{i,1}^2+x_{i,2}^2},\; i = 1,2,3,4
\end{equation*}
and
\begin{equation*}
g_{i,1}(x_i) = \sqrt{x_{i,1}^2 + x_{i,2}^2} - d_{i,1},\quad g_{i,2}(x_i) = -x_{i,1} - x_{i,2} + d_{i,2},
\end{equation*}
where $x_i = (x_{i,1}, x_{i,2}) \in \mathbb{R}^2$, $d_i = (d_{i,1}, d_{i,2}) \in \mathbb{R}^2$ and $a_i = (a_{i,1},a_{i,2})\in \mathbb{R}^2$ for $i = 1,2,3,4$. The local constraint sets of the four agents are
\begin{equation*}
\begin{aligned}
\Omega_1 & = \{x_1 \in \mathbb{R}^2\,|\,(x_{1,1}-2)^2 + (x_{1,2}-3)^2 \leq 25\},\\
\Omega_2 & = \{x_2 \in \mathbb{R}^2\,|\,x_{2,1} \geq 0, x_{2,2} \geq 0, x_{2,1} + 2x_{2,2}\leq 4\},\\
\Omega_3 & = \{x_3 \in \mathbb{R}^2\,|\,4\leq x_{3,1} \leq 6, 2\leq x_{3,2} \leq 5\},\\
\Omega_4 & = \{x_4 \in \mathbb{R}^2\,|\,0 \leq x_{4,1} \leq 15, 0\leq x_{4,2} \leq 20\}.
\end{aligned}
\end{equation*}
The communication graph is shown in Fig. \ref{fig:topology} and algorithm parameters are listed in Table \ref{table:parameters}. Both the centralized primal-dual algorithm and our distributed algorithm are utilized to solve this problem and the results are shown in Figs. \ref{fig:Fa}--\ref{fig:Fc}. The trajectories of the primal variables of both algorithms remain within their local constraint sets as shown in Figs. \ref{fig:Fa} and \ref{fig:Fb}, while the Lyapunov functions of the algorithms decrease monotonically as shown in Fig. \ref{fig:Fc}.
\begin{figure}
\centering
\begin{tikzpicture}[scale = 2]
\tikzstyle{every node}=[shape=circle,draw,minimum size = 20 pt,ball color = red!40]
\path (180 : 1.5cm) node (1) {$\,1\,$};
\path (180 : 0.5cm) node (2) {$\,2\,$};
\path (0 : 0.5cm) node (3) {$\,3\,$};
\path (0 : 1.5cm) node (4) {$\,4\,$};
\draw [ultra thick] [blue] (1) -- (2) -- (3) -- (4);
\draw[ultra thick] [blue] (1) to [out=30,in=150] (4);
\end{tikzpicture}
\caption{The communication graph of the four agents.
}\label{fig:topology}
\end{figure}
\begin{table}
\begin{center}
\caption{Parameters setting}\label{table:parameters}
\begin{tabular}{cccc}
\hline
& $d_i$ & $a_i$ & $x_i(0)$ \\ \hline
$i=1$ & $(6,2)$ & $(8,2)$ & $(2,6)$ \\
$i=2$ & $(6,3)$ & $(4,7)$ & $(1,1)$ \\
$i=3$ & $(6,4)$ & $(0.13,8)$ & $(5,4)$ \\
$i=4$ & $(6,5)$ & $(4,20)$ & $(10,5)$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{Fa.eps}
\caption{The trajectories of agent 1 and agent 2} \label{fig:Fa}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{Fb.eps}
\caption{The trajectories of agent 3 and agent 4} \label{fig:Fb}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{Fc.eps}
\caption{The Lyapunov functions} \label{fig:Fc}
\end{center}
\end{figure}
\end{example}
\begin{example}
Consider problem \eqref{eq:optimizationProblem} with each local constraint as $x_i \in [0,1]$, where each cost function is
\begin{equation*}
f_i(x_i) = a_ix_i^2 + \ln(1+ b_ix_i) + c_i|x_i-d_i| + e_i x_i,
\end{equation*}
and the coupled inequality constraints are $\bm{g}(\bm{x}) = P\bm{x} - q\leq \bm{0}$. We randomly generate coefficients $a_i, b_i, c_i, d_i, e_i \in [0,1]$, a matrix $\bm{0}_{M\times N} \leq P \leq \bm{1}_M\bm{1}_N^T$ and a vector $q \geq \bm{0}_M$ such that a strictly feasible point exists. We choose the network size as $N = 10, 20, 50$ and the number of coupled constraints as $M = 5$. For each problem setting, we randomly generate 100 communication graphs. Over each graph, we conduct the numerical experiment and take the relative error $e(t) = \frac{\max_{i =1,...,N}|x_i(t) - x_i^*|}{\max_{i =1,...,N}|x_i^*|}$ for $t = 20, 60, 100$. The average results are shown in Table \ref{tab:ourAlgorithm}, which indicates the effectiveness of our distributed algorithm.
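A minimal generator for such random instances might look as follows (our own reconstruction for illustration; the seeds, margins and feasibility checks actually used are not specified here).
\begin{verbatim}
import numpy as np

# Hypothetical generator for the random instances of this example:
# coefficients in [0,1], 0 <= P <= 1 entrywise, and q >= 0 chosen so
# that a strictly feasible (Slater) point exists by construction.
rng = np.random.default_rng(0)
N, M = 10, 5
a, b, c, d, e = (rng.uniform(0.0, 1.0, N) for _ in range(5))
P = rng.uniform(0.0, 1.0, (M, N))
x0 = rng.uniform(0.0, 1.0, N)            # candidate Slater point in [0,1]^N
q = P @ x0 + rng.uniform(0.1, 1.0, M)    # ensures P @ x0 - q < 0

def f_total(x):
    # sum_i a_i x_i^2 + ln(1 + b_i x_i) + c_i |x_i - d_i| + e_i x_i
    return np.sum(a * x**2 + np.log1p(b * x) + c * np.abs(x - d) + e * x)
\end{verbatim}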
\begin{table}
\begin{center}
\caption{Relative error vs. network size}
\label{tab:ourAlgorithm}
\begin{tabular}{cccc}
\hline
& $t = 20$ & $t = 60$ & $t = 100$ \\ \hline
$N=10$ & $0.1982$ & $0.0711$ & $0.0143$ \\
$N=20$ & $0.5530$ & $0.0290$ & $0.0042$ \\
$N=50$ & $0.1391$ & $0.0170$ & $0.0105$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\end{example}
\section{Conclusion}
In this note, a distributed nonsmooth convex optimization problem with coupled inequality constraints has been studied. Based on a modified Lagrangian function constructed via local multipliers and nonsmooth penalty technique, a distributed continuous-time algorithm has been proposed. Also, the convergence of the nonsmooth dynamics has been proved and the convergence rate has been analyzed. Additionally, the effectiveness of the algorithm has been illustrated by two numerical examples.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Deep inelastic scattering (DIS) of electrons and positrons
on protons at HERA has been central to the exploration
of proton structure and quark-gluon interaction dynamics as
described by perturbative Quantum Chromo Dynamics (pQCD).
HERA was operated at a centre-of-mass energy
of up to $\sqrt{s} \simeq 320\,$GeV.
This enabled the two collaborations, H1 and ZEUS, to explore a large
phase space in Bjorken $x$, $x_{Bj}$,
and negative four-momentum-transfer squared, $Q^2$.
Cross sections for neutral current (NC) interactions were
published for
$0.045 \leq Q^2 \leq 50000 $\,GeV$^2$
and $6 \cdot 10^{-7} \leq x_{Bj} \leq 0.65$.
HERA was operated in two phases: HERA\,I, from 1992 to 2000, and HERA\,II,
from 2002 to 2007, with an electron beam energy of
$E_e \simeq 27.5$\,GeV.
For most of HERA\,I and~II, the proton beam energy
was $E_p = 920$\,GeV, resulting in the highest centre-of-mass energy of
$\sqrt{s} \simeq 320\,$GeV.
During all of HERA running,
the H1 and ZEUS collaborations
collected total integrated luminosities of
approximately 500\,pb$^{-1}$ each,
divided about equally between $e^+p$ and $e^-p$ scattering.
The data presented here are the final combination of HERA inclusive data
based on all published H1 and ZEUS measurements corrected to zero beam polarisation. This includes data taken
with proton beam energies of
$E_p = 920$, 820, 575 and 460\,GeV
corresponding to
$\sqrt{s}\simeq$\,320, 300, 251 and 225\,GeV.
The combination was performed using the
package HERAverager~\cite{HERAverager} and the pQCD analysis using
HERAFitter~\cite{HERAFitter}.
The correlated systematic uncertainties
and global normalisations were treated
such that one coherent data set was obtained.
The combination leads to a significantly reduced uncertainty compared to the
original inputs and to the previous combination of HERA-I data,
particularly in the electron sector, see Fig.~\ref{fig:d15039f4f7}.
The combined data demonstrate electroweak unification beautifully and allow an extraction of $xF_3^{\gamma Z}$, see Fig.~\ref{fig:d15039f74f78}.
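For orientation, we recall the standard decomposition on which this extraction rests (standard HERA notation, not defined elsewhere in this contribution): the NC reduced cross sections can be written as
\begin{equation*}
\sigma_{r,NC}^{\pm} \simeq F_2 \mp \frac{Y_-}{Y_+}\,xF_3 - \frac{y^2}{Y_+}\,F_L, \qquad Y_{\pm} = 1 \pm (1-y)^2,
\end{equation*}
so that $xF_3$, dominated by its $\gamma Z$ interference part, follows from the difference between the $e^-p$ and $e^+p$ reduced cross sections at high $Q^2$.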
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f4.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f7.eps,width=0.7\linewidth}}
\caption {HERA combined NC $e^+p$ reduced
cross sections as a function of
$Q^2$ for selected $x_{\rm Bj}$-bins compared to the individual
H1 and ZEUS data (top); and HERA combined NC $e^-p$ reduced cross sections compared to the HERA-I combination (bottom).
}
\label{fig:d15039f4f7}
\end{figure}
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f74.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f78.eps,width=0.7\linewidth}}
\caption {NC and CC $e^-p$ and $e^+p$ cross sections (top);
the structure function $xF_3^{\gamma Z}$ at $Q^2=1000\,$GeV$^{2}$ (bottom).
The data are compared with the prediction from HERAPDF2.0 NLO.
}
\label{fig:d15039f74f78}
\end{figure}
Within the framework of pQCD,
the proton is described
by parton distribution functions (PDFs) which provide probabilities
for a particle
to scatter off partons, gluons or quarks,
carrying the fraction $x$ of the proton momentum.
Perturbative QCD determines the evolution of the PDFs to any scale once
they are provided at a starting scale. The name HERAPDF stands for a pQCD
analysis, within the
DGLAP formalism, to determine the PDFs at the starting scale by fitting the $x_{Bj}$ and $Q^2$ dependences of the combined HERA NC and CC DIS
cross sections. The name
HERAPDF2.0 refers to this analysis based on
the newly combined inclusive DIS cross sections from all of HERA~I and HERA~II.
The strength of the HERAPDF approach is that
one coherent high-precision data set containing NC and CC cross sections
is used as input.
The newly combined data entering the HERAPDF2.0 analysis span
four orders of magnitude in $Q^2$ and $x_{Bj}$.
The availability of precision NC and CC cross sections
over such a large phase space allows HERAPDF to be based on
$ep$ scattering data only and makes HERAPDF independent
of any nuclear corrections.
The difference between the NC $e^+p$ and $e^-p$ cross sections
at high $Q^2$, together with the high-$Q^2$ CC data,
constrain the valence quark distributions.
The CC data also constrain
the down sea-quark distribution in the proton without assuming
isospin symmetry.
The lower-$Q^2$ NC data
constrain the low-$x$ sea-quark distributions.
The precisely measured $Q^2$ variations
of the DIS cross sections
in different bins of $x_{Bj}$
constrain the gluon distribution. Measurement of cross sections at
different beam energies constrains the longitudinal structure function, $F_L$,
and thus provides independent information on the gluon distribution.
The consistency of the input data allowed the determination of the
experimental uncertainties on the HERAPDF2.0 parton distributions
using rigorous statistical methods.
Uncertainties resulting from model assumptions
and from the choice of the parameterisation of the PDFs
are considered separately.
Both H1 and ZEUS also published charm-production cross sections,
which were combined and analysed previously, as well
as jet-production cross sections.
These data were included to obtain a variant HERAPDF2.0Jets.
The inclusion of jet cross-sections made it possible to simultaneously
determine the PDFs and the strong coupling constant $\alpha_s(M_Z^2)$.
Full details of the analysis are given in ref.~\cite{thepaper}.
\section{HERAPDF2.0 and its variations}
Fig.~\ref{fig:d15039f21f23} shows summary plots at $\mu_{\rm f}^{2}=10\,$GeV$^{2}$ of the
valence, total Sea and
gluon PDFs for HERAPDF2.0 analysed at NLO and at NNLO, for the standard cut
$Q^2 > 3.5\,$GeV$^{2}$. The experimental uncertainties are shown in red. Model
uncertainties are shown in yellow. These are due to variation of the central
choices for: the $Q^2$ cut; the values of the pole-masses of the charm and beauty
quarks; the fractional contribution and the shape of the strange-PDF. HERA data on charm and beauty
production determine the central choices of the heavy quark masses and their model
variations. Parametrization uncertainties are shown in green. These are due
to: variation of the starting scale; addition of extra parameters. The central
choice of parametrization is determined as usual for HERAPDF analyses by saturation of the
$\chi^2$, but the addition of extra parameters sometimes results in close-by
but distinct minima. Additionally seen on these figures is the result of an
alternative gluon parametrisation HERAPDF2.0AG, for which the gluon must be
positive definite for all $Q^2$ above the starting scale. These PDFs are similar
in $\chi^2$ to the standard ones at NLO but are disfavoured at NNLO.
An LO set is also available using the alternative gluon parametrisation.
It is shown compared to the NLO set in Fig.~\ref{fig:d15039f26}. The standard fits use a
value of the strong coupling constant, $\alpha_s(M_Z^2)=0.118$, at NLO and NNLO, and,
$\alpha_s(M_Z^2)=0.130$, at LO, but sets using a range of values from
$0.110$ to $0.130$ are also available.
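For orientation, the HERAPDF parameterisations of the individual parton distributions at the starting scale are all variants of the generic form
\begin{equation*}
xf(x) = A\,x^{B}\,(1-x)^{C}\,(1 + D\,x + E\,x^{2}),
\end{equation*}
up to distribution-specific extra terms, and the parameterisation variation consists essentially of switching the $D$- and $E$-type parameters on and off. This is only a sketch of the form used; ref.~\cite{thepaper} should be consulted for the exact expressions.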
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f21.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f23.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of
HERAPDF2.0 NLO(top) and NNLO(bottom), $xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2} = 10\,$GeV$^{2}$.
The gluon and sea distributions are scaled down
by a factor of $20$.
The experimental, model and parameterisation
uncertainties are shown
separately.
The dotted lines represent HERAPDF2.0AG NLO with an alternative
gluon parameterisation.
}
\label{fig:d15039f21f23}
\end{figure}
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f26.eps,width=0.7\linewidth}}
\caption {HERAPDF2.0 PDFs at LO and NLO are compared using experimental uncertaities only. The alternative gluon form of the parametrisation is used for both.
}
\label{fig:d15039f26}
\end{figure}
A more extreme variation of the $Q^2$ cut, $Q^2 > 10\,$GeV$^2$, has also been
considered, resulting in the HERAPDF2.0HiQ2 PDFs, since it was observed that
the $\chi^2$ per degree of freedom of
the fit decreases steadily up to $Q^2_{\rm min} = 10\,$GeV$^2$. This is true for both NLO and
NNLO fits and it is
true independent of the heavy quark scheme used to analyse the data, see Fig.~\ref{fig:d15039f20af20b}. In fact,
it depends mostly on the order to which $F_L$ is evaluated. The fits do not
favour the evaluation of $F_L$ to $O(\alpha_s^2)$. It is also somewhat counter-intuitive that the $\chi^2$ is not improved when going from NLO to NNLO within the same scheme:
the fits do not favour the faster
NNLO evolution.
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f20a.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f20b.eps,width=0.7\linewidth}}
\caption {The dependence of $\chi^2/d.o.f$ on $Q^2_{min}$ for HERAPDF2.0 fits:
(top) using the RTOPT and FONLL schemes at NLO and NNLO; (bottom) using RTOPT,
ACOT and FONLL-B schemes and fixed flavour number schemes at NLO.}
\label{fig:d15039f20af20b}
\end{figure}
HERA kinematics are such that low $Q^2$ is also low $x$. Thus HERAPDF2.0HiQ2 PDFs are
used to assess any bias resulting from the inclusion of low-$Q^2$, low-$x$ data
which might require analysis beyond the DGLAP formalism, such as: resummation
of $\ln(1/x)$ terms, non-linear evolution equations and non-perturbative
effects. Fig.~\ref{fig:d15039f55bf57b} shows that there is no bias at high scales
due to the inclusion of the lower $Q^2$ data.
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f55b.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f57b.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of
HERAPDF2.0, $xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2} = 10000\,$GeV$^{2}$ with $Q^{2}_{\rm min} = 3.5$\,GeV$^{2}$
compared to the PDFs of HERAPDF2.0HiQ2 with $Q^2_{\rm min}=10$\,GeV$^2$.
Top: the NLO fits; bottom: the NNLO fits.
}
\label{fig:d15039f55bf57b}
\end{figure}
Fig.~\ref{fig:d15039f47af47b} compares the HERAPDF2.0 NLO fit to the
HERAPDF1.0 NLO fit. One can see the reduction in the high-$x$ uncertainties and
the fact that the high-$x$ Sea is now much less hard.
Fig.~\ref{fig:d15039f49af49b} makes the same comparison for the HERAPDF2.0 NNLO and the HERAPDF1.5 NNLO fits.
Again the high-$x$ uncertainty is reduced and, in particular, the high-$x$ gluon has a much
reduced uncertainty band and its central value moves towards the lower end of the
HERAPDF1.5 uncertainty band.
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f47a.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f47b.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of
HERAPDF2.0 NLO,
$xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2}=10\,$GeV$^{2}$
compared to HERAPDF1.0NLO on log (top)
and linear (bottom) scales.
}
\label{fig:d15039f47af47b}
\end{figure}
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f49a.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f49b.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of
HERAPDF2.0 NNLO,
$xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2}=10\,$GeV$^{2}$
compared to HERAPDF1.5 NNLO on log (top)
and linear (bottom) scales.
}
\label{fig:d15039f49af49b}
\end{figure}
Two sets of PDFs using a fixed flavour number scheme have been extracted, as
shown in Fig.~\ref{fig:d15039f60af60b}. These differ from each other in three respects: the order to
which $F_L$ is evaluated, $O(\alpha_s^2)$ (FF3A) or $O(\alpha_s)$ (FF3B); whether $\alpha_s$ runs with three flavours (FF3A)
or with a variable flavour number (FF3B); and the use of pole masses (FF3A) or current masses (FF3B).
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f60a.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f60b.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of HERAPDF2.0FF3A NLO
and HERAPDF2.0FF3B NLO,
$xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2} = 10\,$GeV$^{2}$.
The experimental, model
and parameterisation uncertainties are shown separately.
}
\label{fig:d15039f60af60b}
\end{figure}
Heavy-flavour data from the charm combination have also been added to the fit,
but they do not make much difference once their constraining effect on the charm mass has been taken into account.
Adding data on jet production also does not make much difference if the value
of $\alpha_s(M_Z^2)$ is kept fixed. However, if $\alpha_s(M_Z^2)$ is free, then jet data have a
dramatic effect in constraining its value, see Fig.~\ref{fig:d15039f65}, where
the $\chi^2$ profiles vs $\alpha_s(M_Z^2)$ are shown for the NLO and NNLO fits
to inclusive data alone and the same profile is shown for the NLO fit including
jets. (Note that we cannot include jets in an NNLO fit since jet production cross sections in DIS have not been calculated to NNLO).
A simultaneous fit of the PDFs and the value of $\alpha_s(M_Z^2)$
can be made once the jet data are included resulting in the value,
$\alpha_s(M_Z^2) = 0.1183\pm 0.0009({\rm exp}) \pm 0.0005({\rm model/param}) \pm 0.0012({\rm had}) \,^{+0.0037}_{-0.0030}({\rm scale})$, where ``had'' indicates extra uncertainties due to the hadronisation of the jets.
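The components other than the scale uncertainty, added in quadrature, give the combined uncertainty quoted in the conclusions,
\begin{equation*}
\sqrt{0.0009^2 + 0.0005^2 + 0.0012^2} \simeq 0.0016\,;
\end{equation*}
this is our reading of how the summary number is obtained.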
The gluon PDF is strongly correlated to the value of $\alpha_s(M_Z^2)$ and
thus, in a fit where $\alpha_s(M_Z^2)$ is free, the gluon uncertainty increases.
However, provided that
jet data are included in the fit this increase is not dramatic, see Figs~\ref{fig:d15039f66af66b}.
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f65.eps,width=0.7\linewidth}}
\caption {$\Delta \chi^2$ vs. $\alpha_s(M_Z^2)$ for pQCD fits with different $Q^2_{\rm min}$ using
data on (a) inclusive, charm and jet production at NLO,
(b) inclusive $ep$ scattering only at NLO, and
(c) inclusive $ep$ scattering only at NNLO.
}
\label{fig:d15039f65}
\end{figure}
\begin{figure}[tbp]
\vspace{-0.5cm}
\centerline{
\epsfig{figure=f66a.eps,width=0.7\linewidth}}
\centerline{
\epsfig{figure=f66b.eps,width=0.7\linewidth}}
\caption {The parton distribution functions of
HERAPDF2.0Jets NLO, $xu_v$, $xd_v$, $xS=2x(\bar{U}+\bar{D})$, $xg$,
at $\mu_{\rm f}^{2} = 10\,$GeV$^{2}$
with fixed $\alpha_s(M_Z^2)=0.118$ (top) and free $\alpha_s(M_Z^2)$ (bottom).
The experimental, model and parameterisation
uncertainties are shown separately.
The hadronisation uncertainty is also included, but it is
only visible for the fit with free $\alpha_s(M_Z^2)$.
}
\label{fig:d15039f66af66b}
\end{figure}
\section{Conclusions}
The H1 and ZEUS data on inclusive $e^{\pm}p$ neutral and charged current cross sections have been
combined into a data set with a total integrated luminosity of $\sim 1\,$fb$^{-1}$. This data set
spans six orders of magnitude in both $x$ and $Q^2$. The combined cross sections were used as
input to a pQCD analysis to extract the parton distribution functions HERAPDF2.0 at LO, NLO and NNLO.
The effect of using various different heavy flavour schemes and different $Q^2$ cuts on the data
was investigated. All heavy-flavour schemes show some sensitivity to the minimum $Q^2$ cut;
however, the choice of this cut does not significantly bias the PDFs at high scales. For the standard fits the value of $\alpha_s(M_Z^2)$ is fixed, but a measurement of $\alpha_s(M_Z^2)$ can be made if jet data are included in the fit, resulting in the value
$\alpha_s(M_Z^2) = 0.1183 \pm 0.0016$ at NLO, excluding scale uncertainties.
The data and the PDFs are available at https://www.desy.de/h1zeus/herapdf20.
\section{Introduction}
The attractive Hubbard model represents an invaluable tool to
understand properties of pairing and superconductivity in systems
with attractive interactions. The simplifications introduced in this model
allow a comprehensive study of the evolution from the weak-coupling
regime, where superconductivity is due to BCS pairing in
a Fermi liquid phase, to a strong-coupling regime, in which the system
is better described in terms of bosonic pairs, whose condensation
gives rise to superconductivity (Bose Einstein (BE) superconductivity)
\cite{micnas}.
It has been convincingly shown that such an evolution is a smooth crossover
and the highest critical temperature is achieved in the intermediate
regime where none of the limiting approaches is rigorously
valid\cite{micnas,bcsbeqmc}.
A realization of such a crossover scenario has been
recently obtained through the development of experiments on the condensation
of ultracold trapped fermionic atoms\cite{atomi}.
In these systems the strength of the
attraction can be tuned by means of a Fano-Feshbach
resonance, and the whole crossover can be described\cite{xoveratomi}.
In the context of high-temperature superconductivity,
the intermediate-strong coupling regime in which incoherent pairs
are formed well above the critical temperature has been invoked as an
interpretation of the pseudogap phase\cite{bcsbeqmc}. Moreover, since
the early days of the discovery of these materials,
the evolution with the doping level of both the normal- and the
superconducting-phase properties
induced some authors\cite{bcsbemix,bcsbestrin} to recognize the fingerprints
of a crossover between a relatively standard BCS-like superconductivity in
the overdoped materials and a strong-coupling superconductivity associated
to Bose-Einstein condensation (BE) in the underdoped materials.
Indeed at optimal doping the zero-temperature coherence
length is estimated to be around
10$-$20 \AA \cite{pan,iguchi}, i.e., much smaller than
for conventional superconductors but still large enough to
exclude the formation of local pairs.\cite{crossover,uemura}
It is understood that the attractive Hubbard model should not be taken as
a microscopic model for the cuprates, since
a realistic description of the copper-oxygen planes of these materials
unavoidably requires a proper treatment of strong Coulomb repulsion.
This simplified model represents instead an ideal framework where the
evolution from weak to strong coupling can be studied
by simply tuning the strength of the attraction.
The main aim of the present work is to identify if, and to which extent,
at least some aspects of the phenomenology of the cuprates can be interpreted
simply in terms of a crossover from weak to strong coupling.
The main simplifications introduced by the attractive Hubbard model
can be summarized as: {\it (i)} Neglect of repulsion. Even if some attraction
has to develop at
low energy, the large short-range Coulomb repulsion implies that
the interaction must become repulsive at high-energy in real systems.
In some sense,
an attractive Hubbard model picture can at most be applied to the
low-energy quasiparticles.
{\it{(ii)}} The model naturally presents s-wave superconductivity, as opposed
to the d-wave symmetry observed in the cuprates
{\it{(iii)}} Neglect of retardation effects. The Hubbard model describes
instantaneous interactions, while every physical pairing is expected to
present a typical energy scale.
The model is written as
\begin{eqnarray}
\label{hubbard}
{\cal H} &=& -t \sum_{<ij>\sigma} c_{i\sigma}^{\dagger} c_{j\sigma}
-U\sum_{i}\left ( n_{i\uparrow}-\frac{1}{2}\right )
\left ( n_{i\downarrow}-\frac{1}{2}\right )+\nonumber\\
& & -\mu\sum_i (n_{i\uparrow}+n_{i\downarrow})
\end{eqnarray}
where $c_{i\sigma}^{\dagger}$ ($c_{i\sigma}$) creates (destroys)
an electron with spin $\sigma$ on the site $i$ and $n_{i\sigma} =
c_{i\sigma}^{\dagger}c_{i\sigma}$ is the number operator;
$t$ is the hopping amplitude and $U$ is the Hubbard on-site attraction
(we take $U > 0$, with an explicit minus sign in the Hamiltonian).
Notice that, with this notations, the Hamiltonian is explicitly
particle-hole symmetric for $\mu = 0$, which therefore corresponds to
$n=1$ (half-filling).
Despite its formal simplicity, this model can be solved exactly only in
$d = 1$, while in larger dimensionality analytical calculations
are typically limited to weak ($U \ll t$)
or strong ($U \gg t$) coupling, where the BCS and the BE approaches
are reliable approximations.
It is anyway known that for $d \ge 1$,
the ground state of (\ref{hubbard})
is superconducting for all values of $U$ and all densities $n$, with the
only exception of the one-dimensional half-filled case.
At half-filling the model has an extra-symmetry and the superconducting
and the charge-density-wave order parameters become degenerate.
A reliable description of the evolution of the physics as a function of
$U$ requires treating the two limiting regimes on an equal footing,
overcoming the drawbacks of perturbative expansions.
Quantum Monte Carlo (QMC) simulations represent a valuable tool in this regard,
and they have been applied to the two\cite{bcsbeqmc,moreo,singerold,
singernew,angleres} and
three\cite{sewer} dimensional attractive Hubbard
model. Even if the sign problem does not affect these simulations, finite
size effects and memory requirements still partially limit the potential
of this approach.
A different non perturbative approach is the Dynamical Mean-Field Theory
(DMFT),
that neglects the spatial correlations beyond the
mean field level in order to fully retain the local quantum
dynamics, and becomes exact in the limit of infinite dimensions\cite{dmft}.
Due to the local nature of the interaction in the attractive Hubbard model,
we expect that the physics of local pairing is well described in DMFT.
Moreover, this approach is not biased toward metallic or insulating states,
and it is therefore particularly useful to analyze the BE-BCS crossover.
On the other hand, the simplifications introduced by the DMFT are
rigorously valid only in the infinite dimensionality limit, and even if
the DMFT has obtained many successes for three dimensional systems,
its relevance to lower dimensionality like $d=2$ is much less
established, and represents a fourth limitation of our study in light
of a comparison with the physics of the cuprates.
In particular, the role of dimensionality in determining
the pseudogap properties
of the attractive Hubbard model has been discussed in Refs.
\cite{dimensionalita}.
The study of the attractive Hubbard model can greatly benefit
from a mapping onto a repulsive model in a magnetic field.
The mapping is realized in a bipartite
lattice\cite{auerbach} by a ``staggered'' particle-hole transformation on
the down spins $c_{i\downarrow} \to (-1)^i {c}_{i\downarrow}^\dagger$.
The attractive model with
a finite density $n$ transforms into a half-filled repulsive model with a
finite magnetization $m = n-1$. The
chemical potential is transformed, accordingly, into a
magnetic field $h = \mu$.
In the $n=1$ case (half-filling) the two models are therefore
completely equivalent. We notice that the above mapping does not
only hold for the normal phases, but extends to the broken symmetry
solutions. The three components of the antiferromagnetic order
parameter of the repulsive Hubbard model are in fact mapped onto
a staggered charge-density-wave parameter ($z$ component of the spin)
and an s-wave superconducting order parameter ($x-y$ components).
The above mapping is extremely useful, since it allows one to exploit
all the known results for the
repulsive model and for the Mott-Hubbard transition
to improve our understanding of the attractive model.
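Explicitly, since the transformation sends $n_{i\downarrow} = c_{i\downarrow}^{\dagger}c_{i\downarrow} \to c_{i\downarrow}c_{i\downarrow}^{\dagger} = 1 - n_{i\downarrow}$, the interaction and chemical-potential terms of (\ref{hubbard}) transform as
\begin{eqnarray*}
-U\left ( n_{i\uparrow}-\frac{1}{2}\right )\left ( n_{i\downarrow}-\frac{1}{2}\right ) &\to& +U\left ( n_{i\uparrow}-\frac{1}{2}\right )\left ( n_{i\downarrow}-\frac{1}{2}\right ),\\
-\mu\,(n_{i\uparrow}+n_{i\downarrow}) &\to& -\mu\,(n_{i\uparrow}-n_{i\downarrow}) - \mu,
\end{eqnarray*}
while the hopping term is left invariant on a bipartite lattice, since the factor $(-1)^{i+j}=-1$ on nearest-neighbor bonds compensates the sign produced by the fermionic reordering. The attraction thus becomes a repulsion, and the chemical potential a uniform Zeeman field $h=\mu$ coupled to $n_{i\uparrow}-n_{i\downarrow}$, up to an irrelevant constant.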
In recent works the DMFT has been used to study the normal phases
of the attractive Hubbard model. In particular, a phase transition
has been found both at finite\cite{keller} and at zero temperature\cite{prl}
between a metallic solution and an insulating phase of preformed pairs.
The insulating pairs phase is nothing but a realization of a superconductor
without phase coherence, i.e., a collection of independent pairs.
As discussed in Refs. \cite{prl,keller}, this phase is the
'negative-$U$' counterpart of the paramagnetic Mott insulator found for the
repulsive Hubbard model.
We notice that the insulating character of the pairing phase is a
limitation of the DMFT approach, in which the residual kinetic energy
of the preformed pairs is not described.
The pairing transition has been first identified in Ref. \cite{keller}
by means of a finite temperature QMC solution of the
DMFT.
The $T=0$ study of Ref. \cite{prl} has clarified that the
pairing transition is always of first order except for the half-filled
case, and that it takes
place with a finite value of the
quasiparticle weight $Z = \left(1-\left.\partial \Sigma(\omega)/\partial \omega\right|_{\omega=0}\right)^{-1}$,
associated with a finite spectral weight at the Fermi level.
In the latter paper, it has also been shown that the pairing transition
gives rise to phase separation.
As far as the onset of superconductivity is concerned, a DMFT calculation of the
critical temperature $T_c$ has been
performed for the case of $n=0.5$ in the same Ref. \cite{keller}.
The $T_c$ curve, extracted from the divergence of the pair-correlation
function
in the normal phase, displays a clear maximum at intermediate coupling and
reproduces correctly both the BCS and the BE predictions in
the asymptotic limits, remaining finite for all $U \neq 0$.
In this work we complement the analysis of Ref. \cite{prl}, by extending
our phase diagram to finite temperature, still using Exact
Diagonalization (ED) to solve the impurity model associated with
the DMFT of the Hubbard model\cite{caffarel}.
We also compare the normal state solutions with the
superconducting solutions which are stable at low temperatures.
The use of ED allows us to reach arbitrarily small temperatures which
are hardly accessible by means of QMC. Quite naturally,
the extension of ED to finite temperature requires a more severe
truncation of the Hilbert space. We have checked that all the
thermodynamical quantities we show are only weakly dependent on the
truncation.
The plan of the paper is the following: in Sec. II we briefly introduce the
DMFT method and its generalization to the superconducting phase;
in Sec. III we discuss the finite-temperature phase diagram in the normal
phase, characterizing the low-temperature pairing transition; in Sec. IV we
analyze the superconducting solutions; in Sec. V we compare different
estimators of the pseudogap temperature in the high-temperature normal phase.
Sec. VI contains our concluding remarks.
\section{Method}
The DMFT extends the concept of classical mean-field theories to quantum
problems, by describing a lattice model in terms of an effective
dynamical local theory.
The latter can be represented through an impurity model subject
to a self-consistency condition, which contains all the information
about the original lattice structure through the non-interacting
density of states (DOS)\cite{dmft}.
Starting from the Hubbard model (\ref{hubbard}), we obtain an attractive
Anderson impurity model
\begin{eqnarray}
\label{aim}
{\cal H}_{AM} &=& -\sum_{k,\sigma} \left( V_k c^{\dagger}_{k,\sigma} c_{0,\sigma} +
H.c. \right) + \sum_{k,\sigma} \epsilon_k c^{\dagger}_{k,\sigma} c_{k,\sigma}
\nonumber\\
&-& U\left ( n_{0\uparrow}-\frac{1}{2}\right )
\left ( n_{0\downarrow}-\frac{1}{2}\right ) - \mu n_0.
\end{eqnarray}
The self-consistency is expressed by requiring the identity between
the local self-energy of the lattice model and the impurity self-energy
\begin{equation}
\label{sigma}
\Sigma(i\omega_n) = {\cal G}^0(i\omega_n)^{-1} - G(i\omega_n)^{-1},
\end{equation}
where $G(i\omega_n)$ is the local Green's function of (\ref{aim}), and
${\cal G}^0(i\omega_n)^{-1}$ is the dynamical Weiss field, related to the
parameters in (\ref{aim}) by
\begin{equation}
\label{g0}
{\cal G}^0(i\omega_n)^{-1}= i\omega_n +\mu -\sum_k\frac{V_k^2}{i\omega_n -
\epsilon_k}.
\end{equation}
By expressing the local component of the Green's function in terms of the
lattice Green's function, namely $G(r=0,i\omega_n)= \sum_k G(k,i\omega_n)$,
Eq. (\ref{sigma}) implies
\begin{equation}
\label{selfconsistence}
{\cal G}^0(i\omega_n)^{-1} = \left ( \int d\epsilon
\frac{D(\epsilon)}{i\omega_n + \mu -\epsilon - \Sigma(i\omega_n)}
\right )^{-1} + \Sigma(i\omega_n),
\end{equation}
where $D(\epsilon)$ is the non-interacting density of states of the original
lattice.
We consider the infinite-coordination Bethe lattice,
with semicircular DOS of half-bandwidth $D$ (i.e.,
$D(\epsilon) = (2/\pi D^2) \sqrt{D^2 - \epsilon^2}$), for which
Eq. (\ref{selfconsistence})
is greatly simplified and becomes
\begin{equation}
\label{selfbethe}
{\cal G}^0(i\omega_n)^{-1} = i\omega_n + \mu - \frac{D^2}{4}G(i\omega_n).
\end{equation}
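For illustration, the resulting self-consistency loop has a particularly
simple structure for the Bethe lattice. The following Python sketch shows
this structure on the Matsubara axis; the impurity solver is replaced by a
trivial Hartree-like placeholder (our own naming, for illustration only),
whereas in the actual calculations this step is performed by the ED solver
described below.
\begin{verbatim}
import numpy as np

def solve_impurity(G0, U):
    # Toy Hartree-level placeholder for the impurity solver, assuming a
    # fixed impurity occupation; the real calculation diagonalizes the
    # discretized Anderson model (ED) instead.
    return 1.0 / (1.0 / G0 - U * 0.5)

def dmft_bethe(U, mu, T, D=1.0, n_iw=512, n_loops=100, tol=1e-6, mix=0.5):
    # Fermionic Matsubara frequencies i*w_n = i*pi*T*(2n+1)
    iw = 1j * np.pi * T * (2 * np.arange(n_iw) + 1)
    G = 1.0 / (iw + mu)                    # initial guess
    for _ in range(n_loops):
        # Bethe-lattice self-consistency: G0^{-1} = iw + mu - (D^2/4) G
        G0 = 1.0 / (iw + mu - 0.25 * D**2 * G)
        G_new = solve_impurity(G0, U)
        if np.max(np.abs(G_new - G)) < tol:
            break
        G = mix * G_new + (1.0 - mix) * G  # linear mixing for stability
    return G
\end{verbatim}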
In this work we also consider solutions with explicit s-wave superconducting
order, by allowing for local anomalous Green's functions
$F(\tau) = -\langle T_{\tau}\, c_{0\uparrow}(\tau)\, c_{0\downarrow}(0)\rangle$.
The whole DMFT formalism can then be recast in the Nambu-Gor'kov spinorial
representation\cite{dmft}, and Eqs. (\ref{sigma}) and (\ref{selfconsistence})
must be read as matrix identities in the Nambu space.
As far as the impurity model is concerned, we need to describe an
Anderson impurity model with a superconducting bath or, equivalently,
with an anomalous hybridization in which Cooper pairs are created and
destroyed in the electronic bath, i.e., a term
$\sum_k V^s_k (c_{k\uparrow}c_{k\downarrow} + H.c.)$ is added to (\ref{aim}).
The heaviest step of the DMFT approach is to compute $G(i\omega_n)$
for the Anderson
model (\ref{aim}). This solution requires either a numerical approach or
some approximation. Here we use Exact Diagonalization.
Namely, we discretize the Anderson model by truncating the sums over $k$
in Eqs. (\ref{aim}) and (\ref{g0}) to a finite number of levels
$N_s$. It has been shown that even rather small values of $N_s$ provide
very accurate results for thermodynamic properties and reliable results
for spectral functions.
In this work we use the ED approach at finite temperature, where it is
not possible to use the Lanczos algorithm, which allows one to find the
ground state of extremely large matrices. To obtain the full spectrum
of the Hamiltonian, needed to compute the finite-temperature properties,
we are restricted to a rather small number of levels, up to $N_s = 6$
(the dimension of the Hilbert space grows as $4^{N_s}$).
All the results presented here are for $N_s = 6$, and
we have always checked that changing $N_s$ from 5 to 6 does not affect the
relevant observables discussed in the present work, except for the
real-frequency spectral properties.
\section{The Pairing Transition}
In this section we limit our analysis to normal phase paramagnetic solutions
in which no superconducting ordering is allowed.
Even if the s-wave superconducting solution is expected to be the stable one
at low temperatures, our normal state
solutions are representative of the normal phase above the critical
temperature.
The region in which the normal state is stable may of course be enlarged by
frustrating superconductivity through, e.g., a magnetic field.
Moreover, the nature of the normal phase gives important indications
on the nature of the pairing in the different regions of the phase diagram.
As mentioned above, it has been shown that the normal phase of the
attractive Hubbard model is characterized by a ``pairing'' transition
between a Fermi-liquid phase and a phase in which the electrons are paired,
but without any phase coherence among the pairs.
The pairing transition has been first discussed at finite temperature
in Ref. \cite{keller}, and a complete characterization at $T=0$ has been
given in Ref. \cite{prl}.
In this paper we complete the finite temperature study of the
transition and connect it to the zero-temperature phase diagram,
finally drawing a complete phase diagram in the attraction-temperature
plane for a density $n=0.75$, taken as representative
of a generic density (except for the peculiar particle-hole
symmetric $n=1$ case). This situation would
correspond to a repulsive model at half-filling in an external magnetic
field tuned to give a finite magnetization $m=0.25$.
The $T=0$ DMFT solution of the attractive Hubbard model is characterized by
the existence of two distinct solutions, a metallic one with
a finite spectral weight at the Fermi level and an insulating solution
formed by pairs, with no weight at the Fermi level.
The previous study has also clarified that the quasiparticle weight
$Z = \left(1-\left.\partial\Sigma(\omega)/\partial\omega\right|_{\omega=0}\right)^{-1}$, which may be used
as a sort of order parameter for the Mott transition at half-filling,
loses this role for the doped attractive Hubbard model, since it remains
finite in both the metallic and the pairing phases.
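In practice, at finite temperature $Z$ can be estimated directly from the
Matsubara self-energy, replacing the real-frequency derivative at
$\omega=0$ by a finite difference at the first Matsubara frequency. A
minimal sketch of this standard estimator (function name and data layout
are ours):
\begin{verbatim}
import numpy as np

def quasiparticle_weight(sigma_iw, T):
    # sigma_iw[n] = Sigma(i w_n) with w_n = pi*T*(2n+1), n = 0, 1, ...
    # Approximate dSigma/dw at w = 0 by Im Sigma(i w_0) / w_0.
    w0 = np.pi * T
    return 1.0 / (1.0 - sigma_iw[0].imag / w0)
\end{verbatim}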
At $T=0$, the metallic solution exists only for $U < U_{c2}$, and
the insulating one for $U > U_{c1}$, with $U_{c1} < U_{c2}$. In other
words, a coexistence region is present where both solutions exist, and
where the actual ground state is determined by minimizing the
internal energy.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6.5cm]{fig1.eps}
\end{center}
\caption{Evolution of the imaginary part of the Green's function as a
function of temperature for $U/D = 2.4$.
Each panel shows the metallic ($+$) and insulating ($\times$) solutions;
the chosen value of the attraction lies in the coexistence region.}
\label{img}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=6.5cm]{fig2.eps}
\end{center}
\caption{Average double occupation as a function of the attraction strength
for different temperatures. The first order transition at low temperatures
becomes a continuous evolution at high temperatures, where there is no more
distinction between metallic and pairing solutions.}
\label{doubocc}
\end{figure}
The clear-cut $T=0$ characterization of the two solutions based on the
low-energy spectral weight
is lost at finite temperature, where both solutions have a finite weight at the
Fermi level. Nonetheless, two families of solutions can still be defined, each
family being obtained by continuous evolution of the corresponding $T=0$ phase.
The two solutions are still clearly identified
at relatively low temperatures; upon further increasing the
temperature, the differences between the two solutions are gradually washed
out, as shown in Fig. \ref{img}, where we plot the temperature
evolution of the imaginary part of the Green's function in imaginary
frequency for $U= 2.4D$, which lies in the $T=0$ coexistence region.
While at $T=1/75D$ and $T=1/50D$ the difference between the two solutions is
still clear, at $T=1/31D$ the two solutions become basically
indistinguishable.
This result suggests that, as intuitively expected, the temperature
reduces the difference between the solutions and consequently, the size of
the coexistence region, which is expected to close at some finite
temperature critical point (the attractive counterpart of the
endpoint of the line of metal-insulator transitions in the
repulsive model \cite{lange}).
Similar information is provided by the analysis of the average
double occupancy $n_d = \langle n_{\uparrow} n_{\downarrow}\rangle$.
This quantity naturally discriminates between a pairing phase with
a large value of $n_d$ and a metal with a smaller value.
As shown in Fig. \ref{doubocc}, at low temperature we have two solutions with
a different value of $n_d$ in the coexistence region, and a jump in this
quantity at the transition. Upon increasing the temperature, the two solutions
tend to merge smoothly into each other, signaling again the closure
of the coexistence region, which is replaced by a crossover region.
Analogous behavior is displayed by the quasiparticle weight $Z$.
Repeating the same analysis for a wide range of coupling constants
and temperatures,
we are able to construct a finite-temperature phase diagram for the
pairing transition, shown in Fig. \ref{phd}.
For temperatures smaller than a critical temperature $T_{pairing}$,
we compute the finite temperature extensions of $U_{c1}$ and $U_{c2}$,
which mark the boundary of the coexistence region.
The two lines (depicted as dashed lines in Fig. \ref{phd}) converge into
a finite temperature critical point at $U = U_{pairing} \simeq 2.3 D$ and
$T=T_{pairing} \simeq 0.03 D $.
Despite the closure of the coexistence
region, a qualitative difference between weak-coupling and strong-coupling
solutions can still be identified for $T > T_{pairing}$,
determining a crossover region in which
the character of the solution smoothly evolves from one limit to the other
as the attraction is tuned.
At this stage, the crossover region
is ``negatively'' defined as the range in which the Green's function does not
resemble either of the two low-temperature phases. The crossover lines are
estimated as the points at which it becomes impossible to infer from the
Matsubara-frequency Green's function whether the low-energy behavior is
metallic or insulating.
It has been shown for the repulsive Hubbard model that this kind of
crossover is accompanied by a qualitative difference in transport
properties. In the region on the left of the crossover, the conduction is
metallic and the resistivity increases with temperature. In the
intermediate crossover region the system behaves like a semiconductor with a
resistivity which decreases upon heating, and finally in the phase on the right
of the crossover region the system behaves like a heated insulator\cite{dmft}.
Coming from the left, the first
crossover occurs when $G(i\omega_n)$ no longer has a clear metallic behavior
with a finite value at zero frequency, while the second crossover line delimits
the region in which the gap of the paired solution is closed by thermal
excitations. We will come back later to the crossover region and compare the
above defined lines with physically sensible estimators of the
pseudogap temperature, like the specific heat and the spin susceptibility.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8.5cm]{fig3.eps}
\end{center}
\caption{Phase diagram in the $U$-$T$ plane. At low temperature two critical
lines $U_{c1}(T)$ and $U_{c2}(T)$ delimit the coexistence region.
The two lines converge in a finite-temperature critical point. At higher
temperatures we can still define two crossover lines.
The superconducting critical temperature is also drawn as a solid line (cf.
Fig. \ref{tc}).}
\label{phd}
\end{figure}
Turning to the coexistence region, we can also ask which of the two
phases is the stable one.
This requires a comparison between the Gibbs free energies of the
two phases. At half-filling, where the attractive and the repulsive model
are equivalent, it has been shown that at $T=0$ the
metallic solution is stable in the whole coexistence region\cite{moeller}.
At finite temperature it has been shown numerically that
the insulator becomes stable in a large
portion of the coexistence region due to its large entropy\cite{dmft}.
The transition is therefore of first order for all temperatures below the
critical temperature, except for the two second-order endpoints
at $T=0$ and $T=T_{pairing}$.
For densities away from half-filling it has been shown in Ref. \onlinecite{prl}
that the transition is of first order
already at $T=0$ and it is accompanied by a small phase separation region.
For $n=0.75$, the $T=0$ first-order transition occurs quite close to $U_{c2}$.
Analogously to the half-filled case, a finite temperature almost
immediately favors the pairing phase. Indeed, computing the free
energy following, e.g., Ref. \cite{gabilandau}, we find
the pairing phase to be stable at almost every point of the coexistence region.
We had to use an extremely dense mesh of points in the $U$ direction
to identify a small section where the metallic phase is stable at finite
temperature.
Therefore the finite-temperature first-order transition occurs
extremely close to the $U_{c1}$ line, and it approaches
$U_{c2}$ only at very low temperatures.
\section{The Superconducting Phase}
The above stability analysis has been restricted to normal phase solutions.
Indeed the superconducting solution is expected to be the stable one
at $T=0$ for all densities and values of the interaction $U$.
The critical temperature $T_c$ is obtained directly as the highest temperature
for which a non-vanishing anomalous Green's function $F(\omega)$ exists.
The DMFT critical temperature $T_c$ for $n=0.75$ as a function of $U$ is
reported in
Fig. \ref{tc} (full dots); it qualitatively
reproduces the limiting behavior,
with an exponential BCS-like behavior at small $U$ and a $1/U$ decrease at
large $U$, according to the expression
for the BE condensation temperature of a hard-core boson system\cite{kellerlow}.
As a result, $T_c$ assumes its maximum value of about $0.1 D$ for an
intermediate coupling strength $U_{max} \simeq 2.1 D$.
Interestingly, the maximum $T_c$ occurs
almost exactly at the coupling for which the pairing transition in the
normal phase would take place in the absence of superconductivity.
It should be noticed, however, that while the BE result (open triangles in Fig.
\ref{tc}) basically falls on top of the DMFT results, the BCS formula
(open circles) only qualitatively follows the full solution. This
``asymmetry'' in recovering the BCS behavior
arises from the partial screening of the bare attraction due to
second-order polarization terms\cite{viverit}.
Because of these corrections the attraction is renormalized as
$U_{eff} \simeq U - A U^2/t$, so that
$1/U_{eff} \simeq (1/U)(1+AU/t) = 1/U + A/t$.
When this correction is plugged into the BCS formula for $T_c$, it results
in a correction to the prefactor only.
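Indeed, inserting $1/U_{eff} \simeq 1/U + A/t$ into the weak-coupling
expression $T_c \propto \exp[-1/(\rho_0 U_{eff})]$, where $\rho_0$ denotes
the DOS at the Fermi level, one obtains
\begin{equation}
T_c \propto e^{-1/(\rho_0 U_{eff})} \simeq
e^{-A/(\rho_0 t)}\, e^{-1/(\rho_0 U)},
\end{equation}
i.e., the exponential dependence on $U$ is untouched, and the screening only
rescales the BCS result by a $U$-independent prefactor.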
If we extract the rescaling factor at a given small value of $U$
($T_c/T_c^{BCS} \simeq 0.32$) and simply rescale the whole weak-coupling
curve by this
factor, we obtain the points marked with asterisks, whose agreement
with the DMFT results
requires no further comment. It is interesting, instead, to compare
the DMFT estimates of $T_c$ with the QMC results:
despite the presence of many factors (such as the exact shape of the DOS
of the model or finite-dimension effects) capable of introducing
relevant variations in the values of $T_c$, some general similarities appear
clearly. Indeed, when the data are simply rescaled in terms of
the half-bandwidth $D$, the $T_c$ and $U_{max}$ estimates of both the
two\cite{singerold,singernew} and three\cite{sewer} dimensional QMC
are lower than the DMFT evaluation (i.e.,
$T_c \sim 0.04 D $, $U_{max} \sim 0.7 D$ for the $d=2$ case, and
$T_c \sim 0.05 D$, $U_{max} \sim 1.3 D$ in $d=3$,
even if for a lower density of $n=0.5$); one can nevertheless observe,
quite surprisingly, that the ratio between $T_c$ and $U_{max}$ is around
$0.04 \div 0.05$ in both the DMFT and the two QMC cases.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8.5cm]{fig4.eps}
\end{center}
\caption{Critical temperature as a function of $U$ at $n=0.75$:
the DMFT data (black circles) are compared with the bare (empty circles)
and rescaled (stars) BCS predictions, and with the BE mean-field
prediction (empty triangles) for a hard-core boson
system (see Refs. \onlinecite{micnas,kellerlow}).}
\label{tc}
\end{figure}
Coming back to our DMFT results, the simplest and most important
observation is that
the critical temperature is always higher than the critical temperature
for the pairing transition.
For example, in our $n=0.75$ case,
$T_c^{max}$ is about $0.1 D$, against a $T_{pairing}$ of $0.03D$.
As a result, the whole pairing transition is hidden by superconductivity,
which remains the only real instability of the system (cf. Fig. \ref{phd}).
Nevertheless, the crossover lines at higher temperature survive the
onset of superconductivity. Therefore the normal phase reached for
$T > T_c$ differs substantially according to the coupling regime.
At weak coupling, the normal phase is essentially a regular Fermi liquid,
and superconductivity occurs as the standard BCS instability. In the
strong-coupling regime, the normal phase is instead a more correlated phase
which presents a pseudogap in the spectrum.
At intermediate coupling, where the superconducting critical temperature
reaches its maximum, the normal phase is in a crossover region between
the two limiting behaviors.
\section{The Pseudogap Phase: Spin Susceptibility, Specific Heat and
Spectral Functions}
Even if the onset of superconductivity completely hides the pairing
transition, the fingerprints of the low-temperature normal phase
are still visible in the high-temperature phase diagram, in which
a crossover from a metallic phase to a gapped phase is still present.
It is tempting to associate the region in which the system behaves
as a collection of incoherent pairs with the pseudogap regime of the cuprates.
It is important to underline that, in this framework, the definition of the
pseudogap phase is somewhat tricky, and it implies a certain degree of
arbitrariness. In this section we come back to this region and compute
various observables whose anomalies have been used to identify the
pseudogap phase, and compare the related estimates of the pseudogap
temperature $T^*$.
Our first estimate is based on the evaluation of the uniform spin
susceptibility $\chi_s$
as a function of temperature for different attraction strengths.
The opening of a gap in the spin excitation spectrum, not associated with
any long-range order, was in fact among the first indications of the
existence of the pseudogap phase in high-temperature superconductors.
The DMFT calculation of $\chi_s$ can be performed
by evaluating the derivative of the magnetization $m$ with respect to a uniform
magnetic field $h$ in the limit of vanishing $h$. In terms of the local
Green's functions,
\begin{equation}
\chi_s= \lim_{h \rightarrow 0} \frac{T}{2h}\sum_{\omega_n}
\left[G_{\uparrow}(i\omega_n) -G_{\downarrow}(i\omega_n)\right].
\end{equation}
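Numerically, the $h \to 0$ limit is taken by converging the DMFT equations
in a small symmetry-breaking field and forming a finite difference. A
minimal sketch of this procedure (the driver function and the field value
are illustrative, not part of any specific code):
\begin{verbatim}
import numpy as np

def spin_susceptibility(solve_dmft, T, h=1e-4):
    # solve_dmft(h, T) is a hypothetical driver returning the converged
    # spin-resolved local Green's functions on the positive fermionic
    # Matsubara frequencies.
    G_up, G_dn = solve_dmft(h, T)
    # G(-i w_n) = G(i w_n)^*, so the full Matsubara sum equals twice the
    # real part of the positive-frequency sum; the 1/(i w_n) tails cancel
    # in the spin difference, making the plain sum convergent.  The
    # factor 1/2 of the text combines with this factor of 2.
    return T * np.sum((G_up - G_dn).real) / h
\end{verbatim}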
This calculation has been performed by varying the temperature over a wide
range ($0 <T< 2 D$) for four different values of the pairing interaction
($U/D= 0.8, 1.8, 2.4$, and $3.6$), and it represents an extension of
the results reported in Ref. [\onlinecite{keller}].
The results of our
calculation are summarized in Fig. \ref{chispin}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8.5cm]{fig5.eps}
\end{center}
\caption{Spin susceptibility in the normal phase as a function of
temperature for $U/D= 0.0, 0.8,1.8,2.4,3.6$.
These values of $U$ are representative of all
the interesting region of the phase diagram in Fig. \ref{phd}, moving from
the metallic to the paired side. The values of the
superconducting critical temperature are marked by small black arrows.}
\label{chispin}
\end{figure}
On the weak-coupling side ($U=0.8 D$) we find a conventional metallic
behavior of $\chi_s$, which increases monotonically with decreasing
temperature. The interaction reduces the
zero-temperature extrapolated value with respect to the non-interacting result
$\chi_s= \rho(0)$. On the opposite side of the phase diagram, in the
strong-coupling regime ($U= 3.6 D$)
the standard high-temperature behavior of $\chi_s$
extends only down to a certain temperature $T_M^*$, where a maximum of
$\chi_s$ is reached. When the temperature is further reduced
$\chi_s$ starts to decrease, exponentially approaching zero
in the $T=0$ limit, signaling the opening of a gap in the
spin-excitation spectrum. A qualitatively similar behavior is found also
in the intermediate-coupling regime, at least as long as the value of $U$
stays larger than $U_{pairing}$ (e.g., $U=2.4 D$), or,
in other words, as long as the left line defining the crossover
region in Fig. \ref{phd} is not crossed.
The behavior of $\chi_s$ becomes richer for $U = 1.8 D < U_{pairing}$.
At high temperatures $\chi_s$ closely resembles the insulating case,
displaying
a clear maximum at a temperature $T_M^*$. Upon approaching $T=0$,
$\chi_s$ no longer vanishes; instead, it rises at small temperatures,
displaying a minimum at a temperature lower than $T_M^*$:
a metallic behavior is therefore recovered,
associated with the narrow resonance at the Fermi level\cite{keller}. Such
a behavior naturally defines a different temperature scale $T_m^*$,
which is associated with the minimum of $\chi_s$ and represents the lower
border of the pairing zone or, in a sense, of the ``pseudogap'' region.
Conversely, this low-temperature
behavior has, to our knowledge, not been observed in finite-dimensional QMC
simulations \cite{bcsbeqmc,moreo,singerold,singernew,sewer,angleres}.
In practice, the system displays a pseudogap behavior in the region
between $T_m^*$ and $T_M^*$, whose boundary, labeled as $T_s^*$, is
represented in Fig. \ref{tstar}.
We finally mention that the temperature $T_M^*$ for which $\chi_s$ is maximum
scales with $U$. This finding is in qualitative agreement
with QMC simulations\cite{singerold,sewer},
where $T_M^*(U)$ is taken as the definition of the temperature below
which the pseudogap appears.
From a more quantitative point of view, as happens for
$T_c$ and $U_{max}$, the values of $T_M^*(U)$ of the QMC simulations are
lower than our DMFT results (i.e., $T_M^*(U_{max}) \sim 0.15 D$ when
$d=2$ and $\sim 0.45 D$ for $d=3$, against the DMFT estimate of $\sim 0.7 D$).
However, also in this case the ratio between $T_M^*(U_{max})$ and $U_{max}$
has a more universal value, around $0.2 \div 0.3$.
Another relevant quantity is the specific heat
$C_V=\partial E/\partial T = - T\,\partial^2 F/\partial T^2$,
which we obtain by differentiating a fit to the DMFT internal energy $E(T)$
for the same attraction strengths, and report in Fig. \ref{cv}.
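In practice this amounts to fitting a smooth function to the discrete
$E(T)$ points and differentiating the fit; a minimal version of this step
(the polynomial degree is an illustrative choice, any smooth fit such as a
spline would do) reads:
\begin{verbatim}
import numpy as np

def specific_heat(T_vals, E_vals, deg=6):
    # C_V = dE/dT from a polynomial fit to the internal energy E(T).
    coeffs = np.polyfit(T_vals, E_vals, deg)
    return np.polyval(np.polyder(coeffs), T_vals)
\end{verbatim}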
Also for this quantity the weak-coupling case ($U/D=0.8$) behaves as a regular
metal, with a linear behavior at low temperatures ($C_V = \gamma T$,
with $\gamma \propto m^*$, $m^*$ being the effective mass),
followed by a smooth decrease when the temperature exceeds the
typical electronic energy scale. The same qualitative result is found for
the non-interacting system, whose low-$T$ slope is however smaller, since
the interacting system has a larger effective mass.
In the opposite strong-coupling limit we
observe the typical activated behavior
of gapped systems at small temperatures, with an exponential
dependence of $C_V(T)$ which extends up to a temperature $T^*_{hM}$
large enough to wipe out the effect of the gap.
It is therefore natural to associate this temperature with the closure of
the pseudogap.
In the most interesting $U = 1.8D$
case, two features are clearly present in the $C_V(T)$ curve.
The first, low-temperature feature is the evolution of the small-$U$
metallic feature, which acquires a larger slope as $U/D$ is increased due
to the enhancement of the effective mass, and shrinks as a consequence
of the reduced coherence temperature of the metal.
The second feature is instead the evolution of the large-$U$ insulating
one, and it shows an activated behavior partially hidden by the
low-$T$ metallic peak. Thus, the system behaves like a metal in the
small-temperature range, while it has a pseudogap at intermediate
temperatures. We estimate the lower boundary of the pseudogap region in
this intermediate coupling regime through the maximum of the low-temperature
feature, which is controlled by the effective coherence scale of the metal.
The upper bound is naturally defined as the temperature at which the
activated behavior disappears. As a result, the specific-heat analysis
determines a pseudogap region with a shape very similar to the one determined
through the spin susceptibility, with a re-entrance of metallic behavior
in the intermediate-coupling regime at low temperatures.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig6.eps}
\caption{Specific heat as a function of temperature for
$U/D=0.8,1.8,2.4,3.6$. All the $C_V$ lines are obtained by differentiating
the internal energy $E_{int}(T)$, whose expression is obtained directly by
fitting the DMFT data.}
\label{cv}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=8cm]{fig7.eps}
\end{center}
\caption{Density of states $\rho(\omega)$ for three different values
of the interaction, $U= 0.8 D, 2.4 D$, and $3.6 D$ (from the upper to the
lower row), both at low temperature ($T=0.1 D$, left panels)
and at high temperature ($T=1 D$, right panels).}
\label{spectral}
\end{figure}
An inspection of the spectral functions can strengthen our insight into the
pseudogap phase.
In principle the ED algorithm allows one to directly compute finite-frequency
spectral functions $\rho(\omega) = -\frac{1}{\pi}\,\mathrm{Im}\, G(\omega)$,
avoiding the problems and ambiguities
intrinsic to analytic-continuation techniques. Unfortunately,
the discretization of the Hilbert space which allows for an ED solution
results in ``spiky'' local spectral functions formed by a collection of
$\delta$-functions. In this light, we find it useful to compute
$\rho(\omega)$ by analytically continuing Eq. (\ref{sigma}) and evaluating
the local retarded Green's function
$G_{loc}(\omega)= \int d\epsilon\, D(\epsilon)
(\omega - \epsilon +\mu - \Sigma_{ret}(\omega))^{-1}$,
from which $\rho(\omega) = -\frac{1}{\pi}\,\mathrm{Im}\, G_{loc}(\omega)$.
This procedure provides a more ``realistic''
description both of the non-interacting DOS and of the strong-coupling
pairing phase. However, even though the spectral functions are
smoothed by this procedure, we can only extract information about the
gross features of the spectra, such as the amplitude of the gap $\Delta(T)$.
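For the semicircular DOS, the energy integral defining $G_{loc}(\omega)$ is
easily evaluated numerically once $\Sigma_{ret}(\omega)$ is known. A minimal
sketch (assuming arrays of real frequencies and of the retarded self-energy
on the same grid, with a small positive broadening $\eta$):
\begin{verbatim}
import numpy as np

def local_spectral_function(w, sigma_ret, mu, D=1.0, eta=1e-2, n_eps=2000):
    # rho(w) = -(1/pi) Im G_loc(w) for the semicircular DOS of
    # half-bandwidth D; `w` and `sigma_ret` are same-length arrays.
    eps = np.linspace(-D, D, n_eps)
    dos = 2.0 / (np.pi * D**2) * np.sqrt(D**2 - eps**2)
    zeta = w + 1j * eta + mu - sigma_ret
    # Hilbert transform of the DOS via the trapezoidal rule
    G_loc = np.trapz(dos / (zeta[:, None] - eps[None, :]), eps, axis=1)
    return -G_loc.imag / np.pi
\end{verbatim}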
Keeping these limitations in mind, some results for $\rho(\omega)$ are plotted
in Fig. \ref{spectral}: for the weak- ($U=0.8 D$),
the intermediate- ($U=2.4 D$) and strong-coupling ($U=3.6 D$) case
a low and high-temperature set of data are shown.
Apart from the obvious appearance and widening of a gap in $\rho(\omega)$
with
increasing $U$, evident in the low-temperature data, it should be
noticed that in both the intermediate- and strong-coupling regimes there is
apparently no tendency toward a 'closure' of such a gap when the temperature
is raised. Indeed, as shown in the second and third rows of Fig.
\ref{spectral}, the gap starts to fill at some temperature
($T \sim 0.45 D$ for $U =2.4 D$ and $T \sim 1.5\div 2.0 D$
for $U=3.6 D$), but for these values of $U$
much of the spectral weight remains in the high-energy Hubbard bands,
and the gapped structure does not completely
vanish up to the highest temperature reached in our calculation
($T \simeq 2D$).
On the other hand, QMC results in $d=2$\cite{singernew}
obtained through maximum entropy show a closure of the gap in $\rho(\omega)$
at a temperature lower than our threshold.
Further investigation is needed to understand whether the discrepancy is due
to a different behavior between $d=2$ and the infinite dimensionality limit,
or it is determined by the technical difficulties involved in the calculation
of real frequency spectra in both approaches.
The persistence of the gap structure at high temperature
that we find in DMFT is also obtained within a perturbative analysis of
superconducting fluctuations at strong coupling in $d=2$\cite{pera}.
In Fig. \ref{tstar} we compare our estimates of the pseudogap
temperature obtained through different physical quantities.
We draw the borders of the pseudogap region as determined from
the spin susceptibility ($T^*_s(U)$) and from the specific-heat behavior
($T_h^*(U)$), together with the value of the superconducting gap $\Delta_0$
at zero temperature.
The upper borders of the spin and specific-heat ``pseudogap'' regions scale
roughly with $U$, as $\Delta_0$ does, so that both $T_h^*(U)$ and $T_s^*(U)$
are proportional to $\Delta_0$, as is the experimentally determined pseudogap.
At low temperature, the pseudogap region boundary as extracted from
thermodynamic response functions displays a clear re-entrance, which
can be associated with the onset of the low-temperature quasi-particle
peak. We also notice that the low-temperature curve qualitatively follows the
behavior of the $U_{c2}(T)$ line. As mentioned above, the slope of
$U_{c2}(T)$ is easily interpreted in terms
of entropy balance between the two phases, which favors the preformed pairs
phase.
Our phase diagram also represents a warning regarding attempts to extrapolate
the low-temperature behavior from the high-temperature data in order to
compare with finite-dimensional QMC calculations.
If one extrapolated the high-temperature behavior down to $T=0$ in order
to estimate the metal-insulator point, as done, e.g., in Ref.
\onlinecite{sewer}, one would obtain an estimate
of $U^*$ significantly lower than the actual $U_{c2}$.
This finding emphasizes how the high-temperature properties of
the attractive Hubbard model are only weakly dependent on dimensionality,
as indicated by the similarity between DMFT and finite-dimension QMC,
while the low-temperature behavior may well be dependent on the dimensionality,
as well as on the details of the bandstructure of the underlying lattice.
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{fig8.eps}
\caption{Different estimates of the pseudogap temperature. The filled
circles indicate $T_s^*(U)$, i.e., the temperatures of both the
maxima and the minima of $\chi_s(T)$, while the empty circles mark $T_h^*(U)$,
i.e., the temperatures associated with the maxima and
the minima of $C_V(T)$. The regions to the left of these two lines can be
interpreted as the zones of ``pseudogap'' behavior for the spin and the
specific heat, respectively. These lines are then compared with the behavior
of the anomalous part of the self-energy at zero temperature
($\Delta_0$, empty triangles).}
\label{tstar}
\end{center}
\end{figure}
\section{Conclusions}
In this paper we have investigated the finite temperature aspects
of pairing and superconductivity in the attractive Hubbard model
by means of DMFT, considering both normal and superconducting solutions.
In the normal phase we have identified two families of solutions, a
Fermi-liquid metallic phase and a preformed-pair phase with
insulating character.
The latter phase is formed by local pairs without phase coherence.
A finite region of the coupling-temperature
phase diagram is characterized by the simultaneous presence of both
solutions. In the low temperature regime a first-order transition occurs
within this region when
the free energies of the two solutions cross, and the region closes
at a certain temperature ($T_{pairing} =0.03 D$) in a critical point.
Interestingly, some trace
of the two solutions survives even for temperatures larger than the
critical one, and two crossover lines can be defined
separating a normal metal, a sort of semiconductor in which the gap is
closed by temperature, and the preformed-pair phase with
a well defined gap.
When superconductivity is allowed, the superconducting solution is stable
for all values of the attraction and the critical temperature is
always larger than the pairing transition temperature in the normal phase.
In the superconducting state, we find an evolution from a weak-coupling
BCS-like regime, in which an exponentially small $T_c$ marks the
instability of the normal metal toward superconductivity, to a
strong-coupling regime
in which superconductivity is associated with the onset of phase coherence
among the preformed pairs, occurring at $T_c \propto t^2/U$.
The highest $T_c$ is obtained in the intermediate region between these
two limiting cases, namely for $U \simeq 2.1 D$, which is extremely
close to the zero-temperature critical point of the normal phase.
The presence of the pairing transition affects the normal phase above $T_c$
even when superconductivity sets in.
In particular, one could be tempted to identify the phase of preformed
pairs obtained at strong coupling with
the pseudogap behavior observed in cuprates.
In order to test the adequacy of such an identification, we computed
different observables, whose anomalies can identify the appearance of the
pseudogap, like the spin susceptibility, the specific heat and the
single particle spectral functions.
In the intermediate region of coupling, where the pairing transition
occurs and the superconducting critical temperature reaches its
maximum, the pseudogap region presents a re-entrance at low
temperatures associated with a small coherent peak in the spectral
function. At temperatures smaller than this coherence temperature
the system behaves like a normal metal with renormalized effective
mass.
On the other hand, the high-temperature boundary of the pseudogap
region scales with $U$ regardless of the criterion we use to estimate it.
The estimates of the pseudogap temperature from the specific heat and the
spin susceptibility both scale with the zero-temperature gap, as
in the cuprates.
The most striking difference between our pseudogap
phase-diagram and the experiments in the cuprates is
that the pseudogap phase in the attractive Hubbard model is much larger
than the experimental one, as measured by the large
value of $T^*_{s,h}/T_c \simeq 5$ at the optimal value of the attraction.
The experimental $T^*$ around optimal doping is instead very close to
$T_c$, and, according to some authors, the pseudogap line tends to zero
at optimal doping.
Moreover, the pseudogap temperature observed in the cuprates is
definitely much smaller than the one found within our DMFT of the
attractive Hubbard model.
This inadequacy of the attractive Hubbard model in describing
some features of the pseudogap phase descends from the
above-mentioned strong simplifications of the model
(neglect of retardation effects, of the Coulomb repulsion, and of the d-wave
symmetry of the gap) and from our DMFT treatment, which is exact only in the
infinite-dimensionality limit.
One could be tempted to maintain an attractive Hubbard model description
for the quasiparticles alone, but it is important to point out
that this interpretation cannot be pushed too far. As an example,
it is clear that such a description would fail for
temperatures larger than the renormalized quasiparticle bandwidth.
A better description of the pseudogap phase would require models
in which both an attraction and a repulsion are present.
This is for instance the case of the models introduced in Refs.
\onlinecite{science,exe}, where the superconducting phenomenon
only involves heavy quasiparticles which experience an unscreened
attraction and a richer behavior of the pseudogap
(which in this case closes around optimal doping) is found.
\section{Acknowledgments}
This work is also supported by MIUR Cofin 2003.
We acknowledge useful discussions with S. Ciuchi, M. Grilli,
and G. Sangiovanni.
\section{AerialMPTNet}\label{sec:method}
In this section we explain our proposed AerialMPTNet tracking algorithm with its different configurations. Parts of its architecture and configurations have been presented in~\cite{kraus2020aerialmptnet}.
As stated in~\autoref{sec:preExperiments}, a pedestrian's movement trajectory is influenced by its movement history, its motion relationships to its neighbours, and the scene arrangement. The same holds for vehicles in traffic scenarios. Vehicles are subject to additional constraints, such as moving along predetermined paths (e.g., streets, highways, railways) most of the time.
Different objects have different motion characteristics such as speed and acceleration. For example, several studies have shown that the walking speeds of pedestrians are strongly influenced by age, gender, temporal variations, and distractions (e.g., cell phone usage), by whether the individual moves in a group, and even by the size of the city where the event takes place~\cite{rastogi2011design, finnis2006field}. Regarding road traffic, similar factors can influence driving behaviors and movement characteristics (e.g., cell phone usage, age, stress level, and fatigue)~\cite{strayer2004profiles, rakha2007characterizing}. Furthermore, similarly to pedestrians, the maneuvers of a vehicle can directly affect the movements of neighbouring vehicles: for example, if a vehicle brakes, all the following vehicles must brake, too.
The understanding of individual motion patterns is crucial for tracking algorithms, especially when only limited visual information about the target objects is available. However, current regression-based tracking methods such as GOTURN and SMSOT-CNN do not incorporate movement histories or relationships between adjacent objects. These networks locate the next position of an object by monitoring a search area in its immediate proximity. Thus, the contextual information provided to the network is limited. Additionally, during the training phase, the networks do not learn how to differentiate the targets from similarly looking objects within the search area. Thus, as discussed in~\autoref{sec:preExperiments}, ID switches and track losses happen often for these networks in crowded situations or at object intersections.
In order to tackle the limitations of previous works, we propose fusing visual features, track history, and the movement relationships of adjacent objects in an end-to-end fashion within a regression-based DNN, which we refer to as AerialMPTNet. \autoref{fig:modelOverview} shows an overview of the network architecture. AerialMPTNet takes advantage of a Siamese Neural Network (SNN) for visual features, a Long Short-Term Memory (LSTM) module for movement histories, and a GraphCNN for movement relationships.
The network takes two local image patches cropped from two consecutive frames (previous and current), called the target and search patch, in which the object location is known and has to be predicted, respectively.
Both patches are centered at the object coordinates known from the previous frame.
Their size (determining the degree of contextual information) is correlated with the size of the objects, and the patches are resized to $227\times 227$ pixels to be compatible with the network's input.
Both patches are then given to the SNN module (retained from~\cite{bahmanyar2019multiple}), composed of two weight-sharing branches, each comprising five 2D convolutional layers, two local response normalization layers, and three max-pooling layers.
Afterwards, the two output features \(Out_{SNN}\) are concatenated and given to three 2D convolutional layers and, finally, four fully connected layers regressing the object position in search patch coordinates. We use ReLU activations for all these convolutional layers.
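A PyTorch-style sketch of one such branch is shown below; the channel layout follows the AlexNet-like trunk of GOTURN/SMSOT-CNN and should be read as a plausible configuration rather than the exact one used in our implementation.
\begin{verbatim}
import torch.nn as nn

class SiameseBranch(nn.Module):
    # One branch of the SNN module (sketch): five conv layers, two local
    # response normalizations, three max-poolings.  Both branches use
    # this same module and hence share their weights.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(5), nn.MaxPool2d(3, stride=2),
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2),
        )

    def forward(self, x):           # x: (B, 3, 227, 227)
        return self.features(x)     # (B, 256, 6, 6)
\end{verbatim}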
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/se_model.pdf}
\caption[Overview of AerialMPTNet's architecture.]{Overview of the network architecture, comprising an SNN, an LSTM, and a GraphCNN module. The inputs are two consecutive image crops centered on a target object, while the output is the object location in search crop coordinates.}
\label{fig:modelOverview}
\end{figure*}
The network output is a vector of four values indicating the $x$ and $y$ coordinates of the top-left and bottom-right corners of the objects' bounding boxes. These coordinates are then transformed into image coordinates.
In our network, the LSTM module and the GraphCNN module use the object coordinates in the search patch and image domain, respectively.
\subsection{Long Short-Term Memory Module}
In order to encode movement histories and predict object trajectories, recent works mainly relied on LSTM- and RNN-based structures~\cite{alahi2016social, xue2018ss, vemula2018social}. While these structures have mostly been used for individual objects, the large number of objects in our scenarios prevents us from applying them directly. Thus, we propose using a single model which treats all objects jointly and predicts movements (movement vectors) instead of positions.
In order to test our idea, we built an LSTM comprising two bidirectional LSTM layers with a hidden size of 64, a dropout layer with \(p=0.5\) in between, and a linear layer which generates two-dimensional outputs representing the \(x\) and \(y\) values of the movement vector.
The inputs of the LSTM module are two-dimensional movement vectors of dynamic length, covering up to five steps of the objects' movement histories.
We applied this module to our pedestrian tracking datasets. The results of this experiment show that our LSTM module can predict the next movement vector of multiple pedestrians with a precision of about 3.6 pixels (0.43~m), which is acceptable for our scenarios.
Therefore, training a single LSTM on multiple objects is sufficient for predicting the objects' movement vectors.
We embed a similar LSTM module into our network, as shown in~\autoref{fig:modelOverview} and sketched below. For the training of the module, the network first generates a sequence of object movement vectors based on the object location predictions. In our experiments, each track has a dynamic history of up to its five last predictions. As tracks are not assumed to start at the same time, the length of each track history can differ. Thus, we use zero-padding to equalize the lengths of the track histories, allowing us to process them together as a batch. These sequences are fed into the first LSTM layer with a hidden size of 64. A dropout with \(p = 0.5\) is then applied to the hidden state of the first LSTM layer, and the result is passed to the second LSTM layer. The output features of the second LSTM layer are fed into a linear layer of size 128. The 128-dimensional output of the LSTM module \(Out_{LSTM}\) is then concatenated with \(Out_{SNN}\) and \(Out_{Graph}\), the output of the GraphCNN module. The concatenation allows the network to predict object locations more precisely based on a fusion of appearance and movement features.
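A minimal PyTorch-style sketch of this module follows; the hyperparameters match the description above, while details such as the directionality of the layers are implementation choices (the sketch uses unidirectional layers for simplicity).
\begin{verbatim}
import torch.nn as nn

class TrackHistoryLSTM(nn.Module):
    # Sketch of the LSTM module: two LSTM layers (hidden size 64) with
    # dropout in between, and a linear layer producing Out_LSTM.
    def __init__(self, hidden=64, out_dim=128, p_drop=0.5):
        super().__init__()
        self.lstm1 = nn.LSTM(input_size=2, hidden_size=hidden,
                             batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.lstm2 = nn.LSTM(input_size=hidden, hidden_size=hidden,
                             batch_first=True)
        self.fc = nn.Linear(hidden, out_dim)

    def forward(self, moves):              # moves: (B, 5, 2), zero-padded
        h1, _ = self.lstm1(moves)          # (B, 5, 64)
        h2, _ = self.lstm2(self.drop(h1))  # (B, 5, 64)
        return self.fc(h2[:, -1])          # last step -> (B, 128)
\end{verbatim}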
\subsection{GraphCNN Module}
The GraphCNN module consists of three 1D convolution layers with $1\times1$ kernels and 32, 64, and 128 channels, respectively.
We generate each object's adjacency graph based on the location predictions of all objects. To this end, the eight closest neighbors within a radius of 7.5~m of the object are considered and modeled as a directed graph by a set of vectors $v_i$ pointing from the neighbouring objects to the target object's position $(x,y)$. The resulting graph is represented as \([x, y, x_{v_1}, y_{v_1}, ..., x_{v_8}, y_{v_8}]\).
If fewer than eight neighbors exist, we zero-pad the remaining vectors.
The GraphCNN module also uses historical information by considering the five previous graph configurations. Similarly to the LSTM module, we use zero-padding if fewer than five previous configurations are available.
The resulting graph sequences are described by an $18 \times 5$ matrix which is fed into the first convolution layer (see the sketch below). In our setup, the graph sequences of multiple objects are given to the network as a batch of matrices. The output of the last convolutional layer is passed through global average pooling in order to generate the final 128-dimensional output of the module \(Out_{Graph}\), which is concatenated with \(Out_{SNN}\) and \(Out_{LSTM}\). The features of the GraphCNN module enable the network to better understand group movements.
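A PyTorch-style sketch of this module is given below; the ReLU activations between the convolutions are our assumption, consistent with the rest of the network.
\begin{verbatim}
import torch.nn as nn

class GraphModule(nn.Module):
    # Sketch of the GraphCNN module: three 1x1 Conv1d layers (32, 64,
    # and 128 channels) over the five zero-padded graph configurations,
    # followed by global average pooling to the 128-d feature Out_Graph.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(18, 32, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv1d(32, 64, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(inplace=True),
        )

    def forward(self, graphs):      # graphs: (B, 18, 5)
        feats = self.net(graphs)    # (B, 128, 5)
        return feats.mean(dim=-1)   # global average pooling -> (B, 128)
\end{verbatim}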
\subsection{Squeeze-And-Excitation Layers}\label{sec:squeeze}
During our preliminary experiments in~\autoref{sec:preExperiments}, we observed a high variation in the quality of the activation maps produced by the convolution layers in DCFNet and SMSOT-CNN. This variation shows the direct impact of single channels and their importance for the final result of the network. In order to take this factor into account in our approach, we model the dominance of the single channels by Squeeze-And-Excitation (SE) layers~\cite{hu2018squeeze}.
CNNs extract image information by sliding spatial filters across the inputs to different layers. While the lower layers extract detailed features such as edges and corners, the higher layers can extract more abstract structures such as object parts. In this process, each filter at each layer has a different relevance to the network output. However, all filters (channels) are usually weighted equally. Adding SE layers to a network helps weighting each channel adaptively based on its relevance. In an SE layer, each channel is squeezed to a single value by global average pooling~\cite{lin2013network}, resulting in a vector with \(k\) entries. This vector is given to a fully connected layer reducing the size of the output vector by a certain ratio, followed by a ReLU activation function. The result is fed into a second fully connected layer scaling the vector back to its original size, followed by a sigmoid activation. In the final step, each channel of the convolution block is multiplied by the corresponding result of the SE layer. This channel-weighting step adds less than 1\% to the overall computational cost.
As can be seen in~\autoref{fig:modelOverview}, we add one SE layer after each branch of the SNN module, and one SE layer after the fusion of \(Out_{SNN}\), \(Out_{LSTM}\), and \(Out_{Graph}\). A sketch of such a layer follows.
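The following is a minimal PyTorch-style implementation sketch (the reduction ratio of 16 is the default of Ref.~\cite{hu2018squeeze}, not a tuned value):
\begin{verbatim}
import torch.nn as nn

class SELayer(nn.Module):
    # Squeeze-and-Excitation: squeeze each channel by global average
    # pooling, excite through a two-layer bottleneck, and rescale the
    # input channels by the resulting weights.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze + excite -> (B, C)
        return x * w.view(b, c, 1, 1)      # channel-wise rescaling
\end{verbatim}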
\subsection{Online Hard Example Mining}\label{sec:ohem}
In the object detection domain, datasets usually contain a large number of easy cases compared to the cases which are challenging for the algorithms. Several strategies have been developed in order to account for this, such as sample-aware loss functions (e.g., Focal Loss \cite{lin2017focal}), where the easy and hard samples are weighted based on their frequencies, and online hard example mining (OHEM)~\cite{Shrivastava2016ohem}, which feeds hard examples back to the network if it previously failed to predict them correctly. Selecting and focusing on such hard examples can make the training more effective.
However, in the multi-object tracking domain, such strategies have rarely been used, although tracking datasets suffer from the same sample imbalance problem as detection datasets. To the best of our knowledge, none of the previous works on regression-based tracking used OHEM during the training process.
Thus, in order to deal with the sample imbalance problem of our datasets, we propose adapting and employing OHEM for our training process, as sketched below. To this end, if the tracker loses an object during training, we reset the object to its original starting position and starting frame, and feed it to the network again in the next iteration. If the tracker fails again, we discard the sample by removing it from the batch.
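The following Python sketch summarizes this policy; the tracker interface and attribute names are schematic placeholders rather than an actual API.
\begin{verbatim}
def ohem_step(tracker, batch):
    # One training iteration of the adapted OHEM policy: a lost sample
    # is reset to its starting frame and re-fed once in the next
    # iteration; if it is lost again, it is dropped from the batch.
    keep, retry = [], []
    for sample in batch:
        if tracker.track(sample):            # hypothetical call
            keep.append(sample)
        elif not getattr(sample, "retried", False):
            sample.reset_to_start()          # hypothetical call
            sample.retried = True
            retry.append(sample)             # re-fed next iteration
        # second failure: sample is silently removed from the batch
    return keep, retry
\end{verbatim}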
\subsection{Computer Vision and Remote Sensing}
Computer vision and aerial imagery are related fields, both aiming at computer understanding of images. The goal of traditional computer vision is to make a computer autonomously perform some of the tasks the human visual system can perform and to infer something about the environment in which a picture was taken \cite{prince2012computer}. In contrast to pure image processing, a computer vision algorithm returns information about the given image rather than a new picture. This information can be anything, such as object detection and recognition results, camera position, 3D models, and image segmentations. Although computer vision tasks often seem trivial for humans, they remain highly complex for computers, since visual perception is not bound to any specific environment, and since different types and amounts of occlusion can limit the view on a target. Additionally, a target can move, and hence can be seen from different views and under different lighting conditions \cite{szeliski2010computer}. Nevertheless, there has been much progress in the field, especially with the rise of machine learning and deep learning, the affordable and easy access to computing power, and the availability of mobile technologies offering massive amounts of photo and video data. Recently, computers surpassed the reported human-level performance on the ImageNet dataset~\cite{imagenet_cvpr09} for object classification \cite{he2015delving}. Medical imaging, machine inspection, facial recognition, pattern detection, surveillance, motion capturing, and feature matching are other fields of application for computer vision~\cite{szeliski2010computer}.
In contrast to computer vision, remote sensing deals with different scenarios, such as observing the earth and the environment from space or very high altitudes and retrieving information from these observations. Throughout its rapid development, remote sensing has provided us with a better understanding of weather and climate, leading to more precise weather forecasts; it offers a cheap and effective way of collecting information over vast spatial regions and provides methods to monitor ground objects over time~\cite{barrett2013introduction, remoteSensing2008}. Such information is useful to analyze the development of rural and urban areas or the evolution of agricultural processes by providing repetitive knowledge of crop status at different times of a season. These are only a few examples. Others include atmospheric research, data collection at different scales, resolutions, and sensors, analysis of ecological systems, mapping of wildfires and natural resources, and coordination of emergency responses \cite{remoteSensing2008, everaerts2008use}. In recent years, the development of better camera systems has made very high-resolution ground image data available. The higher level of detail opened the field for new areas of application, especially for remote sensing tracking approaches dealing with small objects such as vehicles, ships, or pedestrians~\cite{reilly2010detection, meng2012object, bahmanyar2019multiple}.
\subsection{Machine Learning}
This section gives an overview of machine learning and introduces deep learning. Machine learning aims to provide knowledge to computers through data and observations, allowing the computer to generalize to new situations. Machine learning tasks are divided into three categories: supervised learning, unsupervised learning, and reinforcement learning \cite{dey2016machine}.
In supervised learning, a function maps inputs to specific outputs. Such inputs are provided as datasets \(D = (x_1, x_2, \dots)\), where \(x_i\) can be almost any kind of data, such as images, video files, sound files, point clouds, or time-series data \cite{schmidhuber2015deep}. Each sample \(x_i\) is paired with a desired output value \(y_i\), which is called the ground truth. An algorithm can then learn a function by analyzing the data and map unseen samples correctly. The ground truth can either be a discrete label or a continuous variable, dividing supervised learning into classification and regression tasks. Common datasets for classification are the MNIST dataset \cite{lecun-mnisthandwrittendigit-2010} containing handwritten digits, CIFAR-10 \cite{krizhevsky2009learning}, which consists of more than 60,000 images for image classification, as well as MS-COCO \cite{lin2014microsoft} containing images for object detection and segmentation. Popular regression datasets cover, for example, house price prediction given the details of houses and their neighborhood, as well as predicting the quality of wine given specific attributes \cite{harrison1978hedonic, cortez2009modeling}.
The goal of unsupervised learning is to find previously unknown patterns or learn the underlying distribution of data given a dataset \(D = (x_1,x_2,..)\). In contrast to supervised learning, the samples \(x_i\) are not paired with an output value \(y_i\). Instead, unsupervised learning aims to cluster similar samples within the data, perform density estimation, or solve association problems, for example.
Reinforcement learning mainly optimizes the actions of software agents in a given environment. The agent has no knowledge of which actions to take until it has been in a specific situation. Based on its own decisions, it receives a reward depending on the outcome of its action. Future decisions are affected by this reward. The final goal is to maximize the reward in order to maximize the agent's performance.
This thesis includes approaches based on supervised learning and deep learning. Hence, we will deepen the basics of these in the next sections.
\subsubsection{Deep Learning \& Neural Networks}
A neural network is a computer system that can learn to perform task-specific jobs without being explicitly programmed \cite{goodfellow2016deep}. It consists of an input layer, one or multiple hidden layers, and an output layer. The input of such a network is a vector which is transformed by the hidden layers of the network. In each of these layers operate neurons, producing an output based on a received input. Initially, random weights are assigned to each connection between the neurons, and a bias is assigned to each neuron. These weights and biases are updated during the training process of the network, leading to more accurate outputs; the underlying gradient computation is known as backpropagation \cite{hecht1992theory}. The term deep learning emerged as networks got more and more layers, and hence became ``deeper''. \autoref{fig:neuralnet} shows an example network with one hidden layer.
However, for computer vision tasks dealing with image or video data, networks based on such fully connected layers do not scale well. For example, a single neuron in the first hidden layer with an input image of size \(300 \times 300 \times 3\) (i.e., a width of 300 pixels, a height of 300 pixels, and 3 color channels) has \(300 \times 300 \times 3 = 270,000\) weights. Convolutional neural networks (CNNs) deal with this problem by using a combination of convolutional, pooling, and fully connected layers. Convolutional layers consist of \(k\) filters of different sizes \(W \times H\). The filters are slid across the input image and compute dot products between the entries in the filters and the entries in the input, resulting in \(k\) 2D activation maps, which are stacked to obtain the output \cite{o2015introduction}. Pooling layers reduce the spatial size of the output to decrease the amount of computation and parameters in the network. Networks built upon such structures play a key role in this thesis.
\begin{figure}[htpb]
\centering
\includegraphics[width=0.45\textwidth]{figures/neural_net.jpeg}
\caption[Neural network architecture.]{Neural network with hidden layers. Retrieved January 3, 2019: \url{http://cs231n.github.io/neural-networks-1/}} \label{fig:neuralnet}
\end{figure}
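To make this concrete, the following minimal \textit{PyTorch} sketch (with filter counts and sizes chosen purely for illustration) stacks a convolutional, a pooling, and a fully connected layer for the \(300 \times 300 \times 3\) input from the example above:
\begin{verbatim}
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # 3 input channels, k = 16 filters of size 3x3
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)  # halves the spatial size
        self.fc = nn.Linear(16 * 150 * 150, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))  # (N, 16, 150, 150)
        return self.fc(x.flatten(1))

x = torch.randn(1, 3, 300, 300)  # one 300x300 RGB image
print(SmallCNN()(x).shape)       # torch.Size([1, 10])
\end{verbatim}
Note that each convolutional filter has only \(3 \cdot 3 \cdot 3\) weights regardless of the input size, which is precisely why convolutions scale so much better on images than fully connected layers.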
\subsubsection{Layer Types}
During the experiments of this thesis, we used various layer types. While explaining every layer type in detail is beyond the scope of this thesis, we introduce the less familiar ones in the following.
\subsubsubsection{Local Response Normalization}
\gls{lrn} layers are not trainable. They perform lateral inhibition by applying a squared normalization over an input with multiple channels:
\begin{equation}
b_c = a_c \Bigg( k + \frac{\alpha}{n} \sum_{c'=\max(0,\, c-n/2)}^{\min(N-1,\, c+n/2)} a_{c'}^2 \Bigg) ^{-\beta},
\end{equation}
where \(a_c\) and \(b_c\) are the channel values before and after normalization, respectively, \(k\) is a constant providing numeric stability, \(n\) is the number of neighboring channels used for normalization, \(N\) is the total number of channels, \(\alpha\) is a normalization constant, and \(\beta\) is the exponent.
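\textit{PyTorch} provides this operation as a built-in layer; the following snippet applies it to a random input, with all hyperparameter values chosen for illustration:
\begin{verbatim}
import torch
import torch.nn as nn

# size = n neighboring channels; k, alpha, and beta correspond to
# the constants in the equation above
lrn = nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)
x = torch.randn(1, 64, 32, 32)  # (batch, channels, height, width)
y = lrn(x)                      # same shape, laterally normalized
print(y.shape)                  # torch.Size([1, 64, 32, 32])
\end{verbatim}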
\subsubsubsection{Squeeze-And-Excitation Layers}
CNNs extract image information by sliding spatial filters across the input on different layer levels. While lower layers extract information such as edges and basic shapes, higher layers can detect more advanced structures such as cars or text. Nevertheless, each filter has a different relevance concerning the final output of the network. Within any layer, the number of filters is equal to the output depth; however, every output channel is weighted equally. \gls{se} layers \cite{hu2018squeeze} change this behavior by weighting each channel adaptively while adding less than one percent of computational cost. Each channel is squeezed to a single value by global average pooling \cite{lin2013network}, resulting in a vector with \(k\) entries. This vector is fed into a fully connected layer reducing the size of the output vector by a certain ratio, followed by a \gls{relu} activation function. The result is fed into a second fully connected layer scaling the vector back to its original size, followed by a sigmoid activation. In a final step, each channel of the original convolution output is multiplied by the corresponding entry of this vector.
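A minimal \textit{PyTorch} sketch of such an SE block could look as follows; the channel count and the reduction ratio are example values:
\begin{verbatim}
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, ratio=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // ratio),
            nn.ReLU(inplace=True),
            nn.Linear(channels // ratio, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.squeeze(x).view(n, c)       # squeeze: (N, C)
        w = self.excite(w).view(n, c, 1, 1)  # channel weights in (0, 1)
        return x * w                         # reweight each channel

x = torch.randn(2, 64, 16, 16)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 16, 16])
\end{verbatim}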
\subsubsection{Activation Functions}
The activation function calculates the output of a neuron based on its input. In order to make neural networks capable of approximating any arbitrary function, activation functions need to be non-linear \cite{cybenko1989approximation}. Since neural networks are optimized with gradient-based methods during the training process, activation functions also need to be differentiable. Commonly used activation functions are the sigmoid function, the hyperbolic tangent, or \gls{relu}.
The sigmoid and the hyperbolic tangent have recently been losing popularity since both suffer from the vanishing gradient problem. ReLU avoids vanishing gradients by being linear on the positive x-axis. Nevertheless, since the output of all negative values is zero, neurons can ``die'' once their inputs become negative. In order to solve this issue, ReLU was extended by Leaky ReLU, which has a predefined negative slope, and PReLU, which learns the slope of the negative part \cite{maas2013rectifier, he2015delving}. Other activation functions include ELU, Maxout, and Swish, which was recently proposed by Google \cite{ramachandran2017swish}.
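The following short \textit{PyTorch} snippet contrasts these ReLU variants on negative inputs (the values are chosen for illustration):
\begin{verbatim}
import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(nn.ReLU()(x))           # negative inputs are zeroed out
print(nn.LeakyReLU(0.01)(x))  # fixed negative slope of 0.01
print(nn.PReLU()(x))          # learnable slope (initialized to 0.25)
\end{verbatim}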
\subsubsection{Loss Functions}
Loss functions measure how well neural networks model the given data. They compare the output of the network with the ground truth and calculate the cost, or error. The network's weights \(w\) are optimized to minimize this loss until an error threshold is met. It is also essential that the loss function is differentiable, since neural networks update all parameters by applying backpropagation. This subsection introduces the losses relevant to this master thesis.
\subsubsubsection{L1 \& L2 Loss}
The L1\footnote{https://pytorch.org/docs/stable/\_modules/torch/nn/modules/loss.html\#L1Loss} and L2\footnote{https://pytorch.org/docs/stable/\_modules/torch/nn/modules/loss.html\#MSELoss} losses are useful when dealing with regression problems. The L1 loss measures the \gls{mae} between the output of the network \(x\) and the ground truth \(y\). It is calculated as follows:
\begin{equation}
L1(x,y) = \sum_i|x_i-y_i|
\end{equation}
The L1 loss is less affected by outliers than the L2 loss. The L2 loss calculates the error of the squared distance between the network output and the true value:
\begin{equation}
L2(x,y) = \sum_i(x_i-y_i)^2
\end{equation}
\subsubsubsection{Huber Loss}
The Huber loss \cite{huber1992robust} is a mixture of the L1 and the L2 losses and combines their good properties. It is given as the \gls{mse} when the error is small and as the \gls{mae} when the error is large.
\begin{equation}
L_H(x,y) = \sum_i z_i
\end{equation}
\begin{equation}
z_i = \begin{cases}
0.5(x_i-y_i)^2, & \text{if } |x_i-y_i| < 1\\
|x_i-y_i|-0.5, & \text{otherwise}
\end{cases}
\end{equation}
The Huber loss is more robust to outliers than the L2 loss and improves upon the L1 loss, whose constant gradient can hinder finding the minima at the end of the training.
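All three losses are available as built-in \textit{PyTorch} modules; the snippet below (with made-up values) illustrates their behavior, where SmoothL1Loss corresponds to the Huber loss above with a threshold of 1:
\begin{verbatim}
import torch
import torch.nn as nn

x = torch.tensor([2.5, 0.0, 1.0])   # network output
y = torch.tensor([3.0, -0.5, 1.0])  # ground truth

l1 = nn.L1Loss(reduction="sum")           # MAE
l2 = nn.MSELoss(reduction="sum")          # squared error
huber = nn.SmoothL1Loss(reduction="sum")  # Huber with threshold 1

print(l1(x, y).item())     # 1.0
print(l2(x, y).item())     # 0.5
print(huber(x, y).item())  # 0.25
\end{verbatim}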
\section{Conclusion and Future Works}\label{sec:conclusion}
In this paper, we investigate the challenges posed by the tracking of pedestrians and vehicles in aerial imagery by applying a number of traditional and DL-based \gls{sot} and \gls{mot} methods on three aerial \gls{mot} datasets. We also describe our proposed DL-based aerial \gls{mot} method, the so-called AerialMPTNet. Our proposed network fuses appearance, temporal, and graphical information for a more accurate and stable tracking by employing a \gls{snn}, a \gls{lstm}, and a \gls{gcnn} module.
The influence of \gls{se} and \gls{ohem} on the performance of AerialMPTNet is investigated, as well as the impact of adopting an $L1$ rather than a Huber loss function.
An extensive qualitative and quantitative evaluation shows that the proposed AerialMPTNet outperforms both traditional and state-of-the-art DL-based \gls{mot} methods for the pedestrian datasets, and achieves competitive results for the vehicle dataset. On the one hand, it is verified that \gls{lstm} and \gls{gcnn} modules enhance the tracking performance; on the other hand, the use of \gls{se} and \gls{ohem} significantly helps only in some cases, while degrading the tracking results in other cases. The comparison of $L1$ and Huber loss shows that $L1$ is a better option for most of the scenarios in our experimental datasets.
We believe that the present paper can promote research on aerial \gls{mot} by providing a deep insight into its challenges and opportunities, and pave the path for future works in this domain.
In the future, within the framework of AerialMPTNet, the search area size can be adapted to the image GSDs and object velocities and accelerations. Additionally, the \gls{snn} module can be modified in order to improve the appearance feature extraction.
The training process of most DL-based tracking methods relies on common loss functions, which do not correlate with tracking evaluation metrics such as MOTA and MOTP, as these metrics are usually not differentiable. Recently, differentiable proxies of MOTA and MOTP have been proposed~\cite{xutrain}, which can also be investigated for the aerial \gls{mot} scenarios.
\section{Datasets}\label{sec:datasets}
In this section, we introduce the datasets used in our experiments, namely the KIT~AIS (pedestrian and vehicle sets), the Aerial Multi-Pedestrian Tracking (AerialMPT)~\cite{kraus2020aerialmptnet}, and DLR's Aerial Crowd Dataset (DLR-ACD)~\cite{bahmanyar2019mrcnet}.
All these datasets are the first of their kind and aim at promoting pedestrian and vehicle detection and tracking based on aerial imagery.
The images of all these datasets have been acquired by the German Aerospace Center~(DLR) using the 3K camera system, comprising a nadir-looking and two side-looking DSLR cameras, mounted on an airborne platform flying at different altitudes.
The different flight altitudes and camera configurations allow capturing images with multiple spatial resolutions (ground sampling distances - GSDs) and viewing angles.
For the tracking datasets, since the camera is continuously moving, in a post-processing step, all images were orthorectified with a digital elevation model, co-registered, and geo-referenced with a GPS/IMU system. Afterwards, images taken at the same time were fused into a single image and cropped to the region of interest.
This process caused small errors visible in the frame alignments. Moreover, the frame rate of all sequences is 2~Hz.
The image sequences were captured during different flight campaigns and differ significantly in object density, movement patterns, qualities, image sizes, viewing angles, and terrains. Furthermore, different sequences are composed of a varying number of frames, ranging from 4 to 47. The number of frames per sequence depends on the image overlap in flight direction and the camera configuration.
\subsection{KIT AIS}
The KIT~AIS dataset is generated for two tasks, vehicle and pedestrian tracking. The data have been annotated manually by human experts and suffer from a few human errors. Vehicles are annotated by the smallest enclosing rectangle (i.e., bounding box) oriented in the direction of their travel, while individual pedestrians are marked by point annotations on their heads. In our experiments, we used bounding boxes of sizes $4 \times 4$ and $5 \times 5$ pixels for the pedestrians according to the GSDs of the images, ranging from 12 to 17 cm. As objects may leave the scene or be occluded by other objects, the tracks are not labeled continuously for all cases. For the vehicle set, cars, trucks, and buses are annotated if they lie entirely within the image region and more than \(\frac{2}{3}\) of their bodies are visible.
In the pedestrian set, only pedestrians are labeled. Due to crowded scenarios or adverse atmospheric conditions in some frames, pedestrians can be barely visible. In these cases, the tracks have been estimated by the annotators as precisely as possible. \autoref{tab:KITAISPED} and \autoref{tab:KITAISVEH} present the statistics of the pedestrian and vehicle sets of the KIT~AIS dataset, respectively.
The KIT~AIS pedestrian set is composed of 13 sequences with 2,649 pedestrians (Pedest.), annotated by 32,760 annotation points (Anno.) throughout the frames (see \autoref{tab:KITAISPED}). The dataset is split into 7 training and 6 testing sequences with 104 and 85 frames (Fr.), respectively. The sequences are characterized by different lengths, ranging from 4 to 31 frames.
The image sequences come from different flight campaigns over Allianz Arena (Munich, Germany), Rock am Ring concert (Nuremberg, Germany), and Karlsplatz (Munich, Germany).
\KITAISPedestrian
KIT~AIS vehicle comprises 9 sequences with 464 vehicles annotated by 10,817 bounding boxes throughout 239 frames. It has no pre-defined train/test split. For our experiments, we split the dataset into 5 training and 4 testing sequences with 131 and 108 frames, respectively, similarly to~\cite{bahmanyar2019multiple}. According to~\autoref{tab:KITAISVEH}, the lengths of the sequences vary between 14 and 47 frames.
The image sequences have been acquired from a few highways, crossroads, and streets in Munich and Stuttgart, Germany. The dataset presents several tracking challenges such as lane change, overtaking, and turning maneuvers as well as partial and total occlusions by big objects (e.g., bridges). \autoref{fig:vehicleSamples} demonstrates sample images from the KIT~AIS vehicle dataset.
\KIAISVehicle
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/vehicleSamples_1.jpg}
\caption{Sample images from the KIT~AIS vehicle dataset acquired at different locations in Munich and Stuttgart, Germany.}
\label{fig:vehicleSamples}
\end{figure*}
\subsection{AerialMPT}
The Aerial Multi-Pedestrian Tracking (AerialMPT) dataset~\cite{kraus2020aerialmptnet} is newly introduced to the community, and deals with the shortcomings of the KIT~AIS dataset such as the poor image quality and limited diversity.
AerialMPT consists of 14 sequences with 2,528 pedestrians annotated by 44,740 annotation points throughout 307 frames (see \autoref{tab:MPTDataset}).
Since the images have been acquired by a newer version of the DLR's 3K camera system, their quality and contrast are much better than the images of KIT~AIS dataset. \autoref{fig:KITAErialMPTCompare} compares a few sample images from the AerialMPT and KIT~AIS datasets.
\AerialMOTPedestrian
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/pedestrianSamples.jpg}
\caption{Sample images from the AerialMPT and KIT AIS datasets. ``Bauma3", ``Witt", ``Pasing1" are from AerialMPT. ``Entrance\_01", ``Walking\_02", and ``Munich02" are from KIT~AIS.}
\label{fig:KITAErialMPTCompare}
\end{figure*}
AerialMPT is split into 8 training and 6 testing sequences with 179 and 128 frames, respectively. The lengths of the sequences vary between 8 and 30 frames. The image sequences were selected from different crowd scenarios, e.g., from moving pedestrians at mass events and fairs to sparser crowds in the city centers.
\autoref{fig:overview} demonstrates an image from the AerialMPT dataset with the overlaid annotations.
\subsubsection{AerialMPT vs. KIT~AIS}
The AerialMPT dataset has been generated in order to mitigate the limitations of the KIT~AIS pedestrian dataset. In addition to the higher quality of the images, the minimum number of annotations per frame and the total number of annotations of AerialMPT are significantly larger than those of the KIT~AIS dataset. All sequences in AerialMPT contain at least 50 pedestrians, while more than 20\% of the sequences of KIT~AIS include less than ten pedestrians.
Based on our visual inspection, not only are the pedestrian movements in AerialMPT more complex and realistic, but the diversity of the crowd densities is also greater than in KIT~AIS.
The sequences in AerialMPT differ in weather conditions and visibility, incorporating more diverse kinds of shadows as compared to KIT~AIS.
Furthermore, the sequences of AerialMPT are longer on average, with 60\% being longer than 20 frames (compared to less than 20\% in KIT~AIS).
Further details on these datasets can be found in~\cite{kraus2020aerialmptnet}.
\subsection{DLR-ACD}
DLR-ACD, the first aerial crowd image dataset~\cite{bahmanyar2019mrcnet}, comprises 33 large aerial RGB images with an average size of $3619\times 5226$ pixels from different mass events and urban scenes containing crowds, such as sports events, city centers, open-air fairs, and festivals. The GSDs of the images vary between 4.5 and 15 cm/pixel.
In DLR-ACD, 226,291 pedestrians have been manually labeled by point annotations, with the number of pedestrians ranging from 285 to 24,368 per image. In addition to its unique viewing angle, the large number of pedestrians in most of the images ($>$2K) makes DLR-ACD stand out among the existing crowd datasets. Moreover, the crowd density can vary significantly within each image due to the large field of view of the images. \autoref{fig:dlr_acd_samples} shows example images from the DLR-ACD dataset. For further details on this dataset, the interested reader is referred to~\cite{bahmanyar2019mrcnet}.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=.48\textwidth]{figures/I_16.jpg}}
\subfloat[]{\includegraphics[width=.48\textwidth]{figures/I_26.jpg}}
\caption{Example images of the DLR-ACD dataset, taken at (a) an open-air festival and (b) a music concert.}
\label{fig:dlr_acd_samples}
\end{figure*}
\section{Experimental Setup} \label{sec:exp_setup}
For all of our experiments, we used \textit{PyTorch} and \textit{Nvidia Titan XP} GPUs.
We trained all networks with an SGD optimizer and an initial learning rate of \(10^{-6}\).
For all training setups, unless indicated otherwise, we used the $L1$ loss, $L(x, \hat{x}) =|x - \hat{x}|$, where \(x\) and $\hat{x}$ represent the output of the network and ground truth, respectively.
The batch size of all our experiments is 150; however, during offline feedback training, the batch size can differ due to unsuccessful tracking cases and subsequent removal of the object from the batch.
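As a sketch, with a placeholder model standing in for the networks described in \autoref{sec:aerialMPTNet}, one training step of this setup could look as follows:
\begin{verbatim}
import torch

model = torch.nn.Linear(8, 4)  # placeholder for the actual network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
criterion = torch.nn.L1Loss()

x = torch.randn(150, 8)       # batch size of 150
target = torch.randn(150, 4)  # ground truth coordinates

optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()
optimizer.step()
\end{verbatim}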
In our experiments, we consider SMSOT-CNN as baseline network and compare different parts of our approach to it.
The original SMSOT-CNN is implemented in \textit{Caffe}. In order to make it completely comparable to our approach, we re-implemented it in \textit{PyTorch} and trained it.
For the training of SMSOT-CNN, we assign different fractions of the initial learning rate to each layer, as in the original \textit{Caffe} implementation, inspired by GOTURN's implementation.
In more detail, we assign the initial learning rate to each convolutional layer and a learning rate 10 times larger to the fully connected layers. Weights are initialized by Gaussians with different standard deviations, while biases are initialized with constant values (zero or one), as in the \textit{Caffe} version.
The training process of SMSOT-CNN is based on a so-called \textit{Example Generator}. Provided with one target image with known object coordinates, it generates multiple training examples by creating and shifting the search crop to simulate different kinds of movements. It is also possible to provide the true target and search images. A hyperparameter, set to 10 in our case, controls the number of examples generated for each image.
For the pedestrian tracking, we also use DLR-ACD to increase the number of available training samples. SMSOT-CNN is trained completely offline and learns to regress the object location based on only the previous location of the object.
For AerialMPTNet, we train the SNN module and the fully connected layers as in SMSOT-CNN. After that, the layers are initialized with the learnt weights, and the remaining layers are initialized with the standard \textit{PyTorch} initialization. Moreover, we decay the learning rate by a factor of 0.1 for every twenty thousand iterations and train AerialMPTNet in an end-to-end fashion by using feedback loops to integrate previous movement and relationship information between adjacent objects. In contrast to the training process of SMSOT-CNN, which is based on artificial movements created by the example generator, we train our networks based on real tracks.
In the training process, a batch of 150 random tracks (i.e., objects from random sequences of the training set) is first selected starting at a random time step between 0 and the track end \(t_{end}-1\). We give the network the target and search patches for these objects. The network's goal is to regress each object position in the search patches consecutively until either the object is lost or the track ends. The target and search patches are generated based on the network predictions in consecutive frames. The object will remain in the batch as long as the network tracks it successfully. If the ground truth object position lies outside of the predicted search area or the track reaches its end frame, we remove the object from the batch and replace it with a new randomly selected object.
For each track and each time step, the network's prediction is stored and used by the LSTM and GraphCNN modules. For each object in the batch, the LSTM module is given the object's movement vectors from the latest time steps up to a maximum of five, as explained in~\autoref{sec:method}. This process provides the network with an understanding of each object's movement characteristics through a prediction of the next movement. As a result, our network uses its predictions as feedback to improve its performance.
Furthermore, we perform gradient clipping for the LSTM during training to prevent exploding gradients. The neighbor calculation of the GraphCNN module is also based on the network's prediction of each object's position, as mentioned in~\autoref{sec:method}. Based on the network's prediction of the object position, we search for the nearest neighbors in the ground truth annotation of that frame. However, during the testing phase, we search nearest neighbors based on the network's prediction of the object positions.
For the pedestrian dataset, we set the context factor to 4, so that each object with a bounding box size of $4 \times 4$ pixels results in an image patch of $16\times 16$ pixels. For vehicle tracking, however, due to the larger sizes of the bounding boxes, we reduce the context factor to 3. This helps avoid having multiple vehicles in a single image patch, which could cause track confusion.
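The patch extraction can be sketched as follows; the helper function is hypothetical and only illustrates how the context factor scales the crop around the object center:
\begin{verbatim}
import numpy as np

def crop_patch(frame, cx, cy, box_size, context_factor):
    half = (box_size * context_factor) // 2
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    return frame[y0:cy + half, x0:cx + half]

frame = np.zeros((1024, 1024, 3), dtype=np.uint8)
# a 4x4 pedestrian box with context factor 4 gives a 16x16 patch
patch = crop_patch(frame, cx=500, cy=400, box_size=4, context_factor=4)
print(patch.shape)  # (16, 16, 3)
\end{verbatim}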
\section{Introduction}
\IEEEPARstart{V}{isual} Object Tracking, i.e., locating objects in video frames over time, is a dynamic field of research with a wide variety of practical applications such as in autonomous driving, robot aided surgery, security, and safety.
\begin{figure}[!ht]
\centering
\subfloat[]{\includegraphics[width=\columnwidth]{figures/mu_18.pdf}}\\
\subfloat[]{\includegraphics[width=\columnwidth]{figures/bauma3_10.pdf}}
\caption{Multi-Pedestrian tracking results of AerialMPTNet on the frame 18 of the ``Munich02" (top) and frame 10 of the ``Bauma3" (bottom) sequences of the AerialMPT dataset. Different pedestrians are depicted in different colors with the corresponding trajectories.}
\label{fig:overview}
\vspace{-5pt}
\end{figure}
The recent advances in machine and deep learning techniques have drastically boosted the performance of \gls{vot} methods by solving long-standing issues such as modeling appearance feature changes and relocating the lost objects~\cite{wojke2017simple, bergmann2019tracking, xiang2015learning, bertinetto2016fully}.
Nevertheless, the performance of the existing \gls{vot} methods is not always satisfactory due to hindrances such as heavy occlusions, difference in scales, background clutter or high-density in the crowded scenes. Thus, developing more sophisticated \gls{vot} methods overcoming these challenges is highly demanded.
The \gls{vot} methods can be categorized into \gls{sot} and \gls{mot} methods, which track single and multiple objects throughout subsequent video frames, respectively. The \gls{mot} scenarios are often more complex than the \gls{sot} ones because the trackers must handle a larger number of objects in a reasonable time (e.g., ideally real-time).
Most of previous \gls{vot} works using traditional approaches such as Kalman and particle filters~\cite{cuevas2005kalman, cuevas2007particle}, \gls{dcf}~\cite{bolme2010visual}, or silhouette tracking~\cite{boudoukh2009visual}, simplify the tracking procedure by constraining the tracking scenarios with, for example, stationary cameras, limited number of objects, limited occlusions, or absence of sudden background or object appearance changes. These methods usually use handcrafted feature representations (e.g., \gls{hog}~\cite{dalal2005histograms}, color, position) and their target modeling is not dynamic~\cite{marvasti2019deep}.
In real-world scenarios, however, such constraints are often not applicable and \gls{vot} methods based on these traditional approaches perform poorly.
The rise of \gls{dl} offered several advantages in object detection, segmentation, and classification \cite{he2016deep, szegedy2016rethinking, ren2015faster}. Approaches based on \gls{dl} have also been successfully applied to \gls{vot} problems, significantly enhancing the performance, especially in unconstrained scenarios. Examples include the \gls{cnn}~\cite{wang2015visual, zhang2016robust}, \gls{rnn}~\cite{kim2018residual}, \gls{snn}~\cite{li2018high, held_learning_2016,bahmanyar2019multiple,kraus2020aerialmptnet}, \gls{gan}~\cite{song2018vital}, and several customized architectures~\cite{zhang2017deep}.
Despite the large progress made for \gls{vot} in ground imagery, in the remote sensing domain, \gls{vot} has not been fully exploited due to the limited available volume of images with high enough resolution and level of details. In recent years, the development of more advanced camera systems and the availability of very high-resolution aerial images have opened new opportunities for research and applications in the aerial \gls{vot} domain, ranging from the analysis of ecological systems to aerial surveillance~\cite{remoteSensing2008, everaerts2008use}.
Aerial imagery allows collecting very high-resolution data from wide open areas in a cost- and time-efficient manner. Performing \gls{mot} based on such images (e.g., with \gls{gsd} $<$ 20 cm/pixel) allows us to track and monitor the movement behaviours of multiple small objects such as pedestrians and vehicles for numerous applications such as disaster management and predictive traffic and event monitoring.
However, few works have addressed aerial \gls{mot}~\cite{reilly2010detection, meng2012object, bahmanyar2019multiple}, and the aerial \gls{mot} datasets are rare.
The large number and the small sizes of moving objects compared to ground imagery scenarios, together with large image sizes, moving cameras, multiple image scales, low frame rates, as well as various visibility levels and weather conditions, make \gls{mot} in aerial imagery especially complicated.
Existing drone or ground surveillance datasets frequently used as \gls{mot} benchmarks, such as MOT16 and MOT17~\cite{milan2016mot16}, are very different from aerial \gls{mot} scenarios with respect to their image and object characteristics. For example, the objects are bigger and the scenes are less crowded, with the objects' appearance features usually being discriminative enough to distinguish the objects. Moreover, the videos have higher frame rates and better qualities and contrasts.
In this paper, we aim at investigating various existing challenges in the tracking of multiple pedestrian and vehicles in aerial imagery through intensive experiments with a number of traditional and DL-based \gls{sot} and \gls{mot} methods. This paper extends our recent work~\cite{kraus2020aerialmptnet}, in which we introduced a new \gls{mot} dataset, the so-called \gls{aerialmpt}, as well as a novel DL-based \gls{mot} method, the so-called AerialMPTNet, that fuses appearance, temporal, and graphical information for a more accurate \gls{mot}. In this paper, we also extensively evaluate the effectiveness of different parts of AerialMPTNet and compare it to traditional and state-of-the-art DL-based \gls{mot} methods.
We believe that our paper can promote research on aerial \gls{mot} (esp. for pedestrians and vehicles) by providing a deep insight into its challenges and opportunities.
We conduct our experiments on three aerial \gls{mot} datasets, namely \gls{aerialmpt} and KIT~AIS\footnote{https://www.ipf.kit.edu/code.php} pedestrian and vehicle datasets. All image sequences were captured by an airborne platform during different flight campaigns of the \gls{dlr}\footnote{https://www.dlr.de} and vary significantly in object density, movement patterns, and image size and quality. \autoref{fig:overview} shows sample images from the \gls{aerialmpt} dataset with the tracking results of our AerialMPTNet.
The images were captured at different flight altitudes and their \gls{gsd} (reflecting the spatial size of a pixel) varies between 8~cm and 13~cm.
The total number of objects per sequence ranges up to 609. Pedestrians in these datasets appear as small points, hardly exceeding an area of 4$\times$4 pixels. Even for human experts, distinguishing multiple pedestrians based on their appearance is laborious and challenging. Vehicles appear as bigger objects and are easier to distinguish based on their appearance features. However, different vehicle sizes and fast movements, together with the low frame rates (e.g., 2 fps) and occlusions by bridges, trees, or other vehicles, present challenges to the vehicle tracking algorithms, as illustrated in~\autoref{fig:zoom}.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=4.45cm, height=4.45cm]{figures/pedestrian_zoom.pdf}}
\subfloat[]{\includegraphics[width=4.45cm, height=4.45cm]{figures/pedestrian_zoom2.pdf}}
\subfloat[]{\includegraphics[width=4.45cm, height=4.45cm]{figures/car_zoom.pdf}}
\subfloat[]{\includegraphics[width=4.45cm, height=4.45cm]{figures/car_zoom2.pdf}}
\caption{Illustrations of some challenges in aerial \gls{mot} datasets. The examples are from the KIT~AIS pedestrian (a), AerialMPT (b), and KIT AIS vehicle datasets (c,d). Multiple pedestrians which are hard to distinguish due to their similar appearance features and low image contrast (a). Multiple pedestrians at a trade fair walking closely together with occlusions, shadows, and strong background colors (b). Multiple vehicles at a stop light where the shadow on the right hand side can be problematic (c). Multiple vehicles with some of them occluded by trees (d).}
\label{fig:zoom}
\end{figure*}
AerialMPTNet is an end-to-end trainable regression-based neural network comprising a \gls{snn} module which takes two image patches as inputs, a target and a search patch, cropped from a previous and a current frame, respectively. The object location is known in the target patch and should be predicted for the search patch. In order to overcome the tracking challenges of the aerial \gls{mot} such as the objects with similar appearance features and densely moving together, AerialMPTNet incorporates temporal and graphical information in addition to the appearance information provided by the \gls{snn} module.
Our AerialMPTNet employs a \gls{lstm} for temporal information extraction and movement prediction, and a \gls{gcnn} for modeling the spatial and temporal relationships between adjacent objects (graphical information).
AerialMPTNet outputs four values indicating the coordinates of the top-left and bottom-right corners of each object's bounding box in the search patch.
In this paper, we also investigate the influence of \gls{se} and \gls{ohem}~\cite{Shrivastava2016ohem} on the tracking performance of AerialMPTNet. To the best of our knowledge, this is the first work applying adaptive weighting of convolutional channels by \gls{se} and employing \gls{ohem} for the training of a DL-based tracking-by-regression method.
According to the results, our AerialMPTNet outperforms all previous methods for the pedestrian datasets and achieves competitive results for the vehicle dataset. Furthermore, the \gls{lstm} and \gls{gcnn} modules add value to the tracking performance. Moreover, while using \gls{se} and \gls{ohem} can significantly help in some scenarios, in other cases they may degrade the tracking results.
The rest of the paper is organized as follows. Section~\ref{sec:related_work} presents an overview of related works; \autoref{sec:datasets} introduces the datasets used in our experiments; \autoref{sec:metrics} presents the metrics used for our quantitative evaluations; \autoref{sec:preExperiments} provides a comprehensive study of previous traditional and DL-based tracking methods on the aerial \gls{mot} datasets, with \autoref{sec:aerialMPTNet} explaining our AerialMPTNet with all its configurations; \autoref{sec:exp_setup} presents our experimental setups; \autoref{sec:evaluation} provides an extensive evaluation of our AerialMPTNet and compares it to the other methods; and \autoref{sec:conclusion} concludes our paper and gives ideas for future works.
\section{Evaluation Metrics}\label{sec:metrics}
In this section we introduce the most important metrics we use for our quantitative evaluations. We adopted widely-used metrics in the MOT domain based on~\cite{milan2016mot16} which are listed in~\autoref{tab:metrics}. In this table, $\uparrow$ and $\downarrow$ denote higher or lower values being better, respectively.
The objective of MOT is finding the spatial positions of \(p\) objects as bounding boxes throughout an image sequence (object trajectories). Each bounding box is defined by the \(x\) and \(y\) coordinates of its top-left and bottom-right corners in each frame.
Tracking performances are evaluated based on true positives (TP), correctly predicting the object positions, false positives (FP), predicting the position of another object instead of the target object's position, and false negatives (FN), where an object position is totally missed. In our experiments, a prediction (tracklet) is considered as TP if the intersection over union (IoU) of the predicted and the corresponding ground truth bounding boxes is greater than \(0.5\). Moreover, an identity switch (IDS) occurs if an annotated object \(a\) is associated with a tracklet \(t\), and the assignment in the previous frame was \(a \neq t\). The fragmentation metric shows the total number of times a trajectory is interrupted during tracking.
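To illustrate how strict this criterion is for small objects, the following sketch evaluates the IoU of two hypothetical \(4 \times 4\) bounding boxes offset by a single pixel in each direction:
\begin{verbatim}
def iou(box_a, box_b):
    # boxes given as (x1, y1, x2, y2): top-left, bottom-right corners
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a one-pixel shift already drops the IoU below the 0.5 threshold
print(iou((0, 0, 4, 4), (1, 1, 5, 5)))  # 9/23, about 0.39
\end{verbatim}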
\begin{table}
\centering
\caption{Description of the metrics used for quantitative evaluations.}
\rowcolors{2}{gray!20}{white}
\begin{tabular}{cc|c}
Metric& & Description \\
\hline
IDF1 &$\uparrow$ & ID F1-Score\\
IDP &$\uparrow$ & ID Global Min-Cost Precision\\
IDR &$\uparrow$ & ID Global Min-Cost Recall\\
Rcll &$\uparrow$ & Recall\\
Prcn &$\uparrow$ & Precision\\
FAR &$\downarrow$ & False Acceptance Rate\\
MT &$\uparrow$ & Ratio of Mostly Tracked Objects\\
PT &$\uparrow$ & Ratio of Partially Tracked Objects\\
ML &$\downarrow$ & Ratio of Mostly Lost Objects\\
FP &$\downarrow$ & False Positives \\
FN &$\downarrow$ & False Negatives\\
IDS &$\downarrow$ & Number of Identity Switches\\
FM &$\downarrow$ & Number of Fragmented Tracks\\
MOTA &$\uparrow$ & Multiple Object Tracker Accuracy \\
MOTP &$\uparrow$ & Multiple Object Tracker Precision\\
MOTAL&$\uparrow$ & Multiple Object Tracker Accuracy Log\\
\hline
\end{tabular}
\label{tab:metrics}
\end{table}
Among these metrics, the crucial ones are the multiple object tracker accuracy (MOTA) and the multiple object tracker precision (MOTP).
MOTA represents the ability of trackers in following the trajectories throughout the frames \(t\), independently from the precision of the predictions:
\begin{equation}
MOTA = 1- \frac{\sum_t (FN_t + FP_t + ID_t)}{\sum_t GT_t}.
\end{equation}
The multiple object tracker accuracy log (MOTAL) is similar to MOTA; however, ID switches are considered on a logarithmic scale.
\begin{equation}
MOTAL = 1 - \frac{\sum_t \left( FN_t + FP_t + \log_{10}(ID_t+1) \right)}{\sum_t GT_t}.
\end{equation}
MOTP measures the performance of the trackers in precisely estimating object locations:
\begin{equation}
MOTP = \frac{\sum_{t,i}d_{t,i}}{\sum_t c_t},
\end{equation}
where \(d_{t,i}\) is the distance between a matched object \(i\) and the ground truth annotation in frame \(t\), and \(c_t\) is the number of matched objects in frame \(t\).
Each tracklet can be considered as mostly tracked (MT), partially tracked (PT), or mostly lost (ML), based on how successful an object is tracked during its whole lifetime. A tracklet is mostly lost if it is only tracked less than 20\% of its lifetime and mostly tracked if it is tracked more than 80\% of its lifetime. Partially tracked applies to all remaining tracklets. We report MT, PT, and ML as percentages of the total amount of tracks. The false acceptance rate (FAR) for an image sequence with \(f\) frames describes the average amount of FPs per frame:
\begin{equation}
FAR = \frac{\sum FP_t}{f}.
\end{equation}
In addition, we use recall and precision measures, defined as follows:
\begin{equation}
Rcll = \frac{\sum TP_t}{\sum (TP_t + FN_t)},
\end{equation}
\begin{equation}
Prcn = \frac{\sum TP_t}{\sum (TP_t + FP_t)}.
\end{equation}
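As a sketch, the scores defined above can be computed from per-frame counts as follows (the counts are made-up values for illustration):
\begin{verbatim}
# per-frame counts of a hypothetical three-frame sequence
tp  = [40, 42, 41]   # true positives
fp  = [2, 1, 3]      # false positives
fn  = [3, 2, 2]      # false negatives
ids = [0, 1, 0]      # identity switches
gt  = [43, 44, 43]   # ground truth objects

rcll = sum(tp) / (sum(tp) + sum(fn))
prcn = sum(tp) / (sum(tp) + sum(fp))
mota = 1 - (sum(fn) + sum(fp) + sum(ids)) / sum(gt)
far  = sum(fp) / len(fp)  # average false positives per frame
print(f"Rcll={rcll:.3f} Prcn={prcn:.3f} MOTA={mota:.3f} FAR={far:.2f}")
\end{verbatim}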
Identification precision (IDP), identification recall (IDR), and IDF1 are similar to precision and recall; however, they take into account how long the tracker correctly identifies the targets. IDP and IDR are the ratios of computed and ground-truth detections that are correctly identified, respectively.
IDF1 is calculated as the ratio of correctly identified detections over the average number of computed and ground-truth detections. IDF1 allows ranking different trackers based on a single scalar value. For any further information on these metrics, the interested reader is referred to~\cite{ristani2016performance}.
\section{Preliminary Experiments}\label{sec:preExperiments}
This section empirically shows the existing challenges in aerial pedestrian tracking.
We study the performance of a number of existing tracking methods including KCF~\cite{henriques2014high}, MOSSE~\cite{bolme2010visual}, CSRT~\cite{lukezic2017discriminative}, Median Flow~\cite{kalal2010forward}, SORT, DeepSORT~\cite{wojke_simple_2017}, Stacked-DCFNet~\cite{wang2017dcfnet}, Tracktor++~\cite{bergmann2019tracking}, SMSOT-CNN~\cite{bahmanyar2019multiple}, and Euclidean Online Tracking on aerial data, and show their strengths and limitations. Since in the early phase of our research, only the KIT AIS pedestrian dataset was available to us, the experiments of this section have been conducted on this dataset. However, our findings also hold for the AerialMPT dataset.
The tracking performance is usually correlated with the detection accuracy for both detection-free and detection-based methods. As our main focus is the tracking performance, in most of our experiments we assume perfect detection results and use the ground truth data. While the detection-free methods are only given the object locations in the first frame, the detection-based methods are provided with the object locations in every frame. Therefore, for the detection-based methods, the most substantial measure is the number of ID switches, while for the other methods all metrics are considered in our evaluations.
\subsection{From Single- to Multi-Object Tracking}
Many tracking methods have been initially designed to track only single objects. However, according to~\cite{bahmanyar2019multiple}, most of them can be extended to handle MOT. Tracking management is an essential function in MOT which stores and exploits multiple active tracks at the same time, in order to remove and initialize the tracks of objects leaving from and entering into the scenes. For our experiments we developed a tracking management module for extending the SOT methods to MOT. It unites memory management, including the assignment of unique track IDs and individual object position storage, with track initialization, aging, and removing functionalities.
\subsubsection{KCF, MOSSE, CSRT, and Median Flow}
OpenCV provides several built-in object tracking algorithms. Among them, we investigate the KCF, MOSSE, CSRT, and Median Flow SOT methods. We extend them to the MOT scenarios within the OpenCV framework. We initialize the trackers by the ground truth bounding box positions. Moreover, we remove the objects if they leave the scene and their track ages are greater than 3 frames.
\autoref{tab:emprExp} shows the tracking results of these methods on the KIT AIS dataset.
Results indicate the poor performance of all of these methods, with total MOTA scores varying between -85.8 and -55.9. The results of KCF and MOSSE are very similar. However, the use of HOG features and non-linear kernels in KCF improves MOTA by 0.9 and MOTP by 0.5 points, respectively, compared to MOSSE. Moreover, both methods mostly track only about 1~\% of the pedestrians on average.
CSRT (which is also DCF-based) outperforms both prior methods significantly, reaching a total MOTA and MOTP of -55.9 and 78.4, respectively. It mostly tracks about 10~\% of the pedestrians on average, which proves the effectiveness of the channel and reliability scores. According to the table, Median Flow achieves comparable results to CSRT with total MOTA and MOTP scores of -63.8 and 77.7, respectively.
Comparing the results on different sequences indicates that all algorithms perform significantly better on the \textit{RaR\_Snack\_Zone\_02} and \textit{RaR\_Snack\_Zone\_04} sequences. Based on visual inspection, we argue that this is due to their short length. Additionally, we argue that the overall low performance of these methods can be caused by the use of handcrafted features.
\preliminaryExperimentsSOT
\subsubsection{Stacked-DCFNet}
DCFNet \cite{wang2017dcfnet} is also an SOT method based on a DCF. However, the DCF is implemented as part of a DNN and uses the features extracted by a light-weight CNN. Therefore, DCFNet is a perfect choice to study whether deep features improve the tracking performance compared to handcrafted ones.
For our experiments, we took the \textit{PyTorch} implementation\footnote{https://github.com/foolwood/DCFNet\_pytorch} of DCFNet and modified its network structure to handle multi-object tracking, and we refer to it as ``Stacked-DCFNet".
From the KIT AIS pedestrian training set we crop a total of 20,666 image patches centered at every pedestrian. The patch size is the bounding box size multiplied by 10 in order to consider contextual information to some degree. Then we scale the patches to 125$\times$125 pixels to match the network input size. Using the patches, we retrain the convolutional layers of the network for 50 epochs with ADAM \cite{kingma2014adam} optimizer, MSE loss, initial learning rate of 0.01, and a batch size of 64. Moreover, we set the spatial bandwidth to 0.1 for both online tracking and offline training. Furthermore, in order to adapt it to MOT, we use our developed \textit{Python} module, and remove the objects if they leave the scene while their track ages are greater than 3 frames. Multiple targets are given to the network within one batch. For each target object, the network receives two image patches, from previous and current frames, centered on the known previous position of the object. The network output is the probability heatmap in which the highest value represents the most likely object location in the image patch of the current frame (search patch). If this value is below a certain threshold, we consider the object as lost.
Furthermore, we propose a simple linear motion model and set the center point of the search patch to the position estimate of this model instead of the position of the object in the previous frame patch (as in the original work). Based on the latest movement \(v_t(x,y)\) of a target, we estimate its position as:
\begin{equation}
p_{est}(x,y) = p(x,y) + k \cdot v_t(x,y),
\end{equation}
where \(k\) determines the influence of the last movement.
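In code, this estimate is straightforward; the position, movement vector, and \(k\) below are arbitrary example values:
\begin{verbatim}
import numpy as np

def estimate_position(p, v, k=1.0):
    # center the search patch at p + k * v instead of p
    return p + k * v

p = np.array([120.0, 85.0])     # last known position (x, y)
v = np.array([3.0, -1.5])       # latest movement vector
print(estimate_position(p, v))  # [123.   83.5]
\end{verbatim}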
The tracking results in~\autoref{tab:emprExp} demonstrate that Stacked-DCFNet significantly outperforms the methods with handcrafted features, with a MOTA score of -37.3 (18.6 points higher than that of CSRT). The MT and ML rates also improve, with only 23.6~\% of all tracks mostly lost and 13.8~\% of the pedestrians mostly tracked. According to the results, Stacked-DCFNet performs better on the longer sequences (\textit{AA\_Crossing\_02}, \textit{AA\_Walking\_02}, and \textit{Munich02}), which shows the ability of the method to track objects for a longer time.
Altogether, the results indicate that deep features outperform the handcrafted ones by a large margin.
\subsection{Multi-Object Trackers}
In this section, we study a number of MOT methods including SORT, DeepSORT, and Tracktor++. Additionally, we propose a new tracking algorithm called Euclidean Online Tracking (EOT) which uses the Euclidean distance for object matching.
\subsubsection{DeepSORT and SORT}
DeepSORT \cite{wojke_simple_2017} is a MOT method comprising deep features and an IoU-based tracking strategy. For our experiments, we use the \textit{PyTorch} implementation\footnote{https://github.com/ZQPei/deep\_sort\_pytorch} of DeepSORT and adapt it to the KIT AIS dataset by changing the bounding box size and IoU threshold, as well as fine-tuning the network on the training set of the KIT AIS dataset. As mentioned, for the object locations we use the ground truth and do not use DeepSORT's object detector. \autoref{tab:emprExpMOT} shows the tracking results of our experiments, in which Rcll, Prcn, FAR, MT, PT, ML, FN, MOTP, and FM are not important in our evaluations, as the ground truth is used instead of the detection results.
\preliminaryExperimentsMOT
In the first experiment, we employ DeepSORT with its original parameter settings. As the results show, this configuration is not suitable for tracking small objects (pedestrians) in aerial imagery. DeepSORT utilizes deep appearance features to associate objects to tracklets; however, for the first few frames, it relies on the IoU metric until enough appearance features are available. The original IoU threshold is \(0.5\). The standard DeepSORT uses a Kalman filter for each object to estimate its position in the next frame. However, due to small IoU overlaps between most predictions and detections, many tracks cannot be associated with any detection, making it impossible to use the deep features afterwards. The main cause of the minor overlaps is the small size of the bounding boxes. For example, if the Kalman filter estimates the object position only 2 pixels off the detection's position, for a bounding box of 4$\times$4 pixels, the overlap would be below the threshold and, consequently, the tracklet and the object cannot be matched. These mismatches result in a large number of falsely initiated new tracks, leading to a total amount of 8,627 ID switches, an average amount of 8.27 ID switches per person, and an average amount of 0.71 ID switches per detection.
We tackle this problem by enlarging the bounding boxes by a factor of two in order to increase the IoU overlaps, increase the number of matched tracklets and detections, and enable the use of appearance features. According to~\autoref{tab:emprExpMOT}, it results in a 41.19\% decrease in the total number of ID switches (from 8,627 to 5,073), a 56.38\% decrease in the average number of ID switches per person (from 8.62 to 4.86), and a 59.15\% decrease in the average number of ID switches per detection (from 0.71 to 0.42).
We further analyze the impact of using different IoU thresholds on the tracking performance. \autoref{fig:id_switches} illustrates the number of ID switches with different IoU thresholds. It can be observed that by increasing the threshold (minimizing the required overlap for object matching) the number of ID switches reduces. The least number of ID switches (738 switches) is achieved by the IoU threshold of 0.99. More details can be seen in~\autoref{tab:emprExpMOT}. Based on the results, enlarging the bounding boxes and changing the IoU threshold significantly improves the tracking results of DeepSORT as compared to its original settings (ID switches by 91.44\% and MOTA by 3.7 times). This confirms that the missing IoU overlap is the main issue with the standard DeepSORT.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/id_switches.pdf}%
\caption{ID Switches versus IoU thresholds in DeepSORT. From left to right: total, average per person, and average per detection ID Switches.}
\label{fig:id_switches}
\end{figure}
After adapting the IoU object matching, the deep appearance features play a prominent role in the object tracking after the first few frames. Thus, fine-tuning DeepSORT's neural network on the training set of the KIT AIS pedestrian dataset can further improve the results. Originally, the network was trained on a large person re-identification dataset, which is very different from our scenario, especially in the looking angle and the object sizes, as the bounding boxes in aerial images are much smaller than in the person re-identification dataset (\(4 \times 4\) vs. \(128 \times 64\) pixels). Scaling the bounding boxes of our aerial dataset to fit the network input size leads to considerable interpolation errors. For our experiments, we initialize the last re-identification layers from scratch and the rest of the network with the pre-trained weights and biases. We also change the number of classes to 610, representing the number of different pedestrians after cropping the images into patches of the size of the bounding boxes and ignoring the patches located at the image border. Instead of scaling the patches to \(128 \times 64\) pixels, we only scale them to \(50 \times 50\). We train the classifier for 20 epochs with an SGD optimizer, a Cross-Entropy loss function, a batch size of 128, and an initial learning rate of \(0.01\). Moreover, we double the bounding box sizes for this experiment.
The results in~\autoref{tab:emprExpMOT} show that the total number of ID switches only decreases from 738 to 734. This indicates that the deep appearance features of DeepSORT are not useful for our problem. While for a large object a small deviation of the bounding box position is tolerable (as the bounding box still mostly contains object-relevant areas), for our very small objects this can cause significant changes in object relevance. The extracted features mostly contain background information. Consequently, in the appearance matching step, the object features from its previous and currently estimated positions can differ significantly. Additionally, the appearance features of different pedestrians in aerial images are often not discriminative enough to distinguish them.
In order to better demonstrate this effect, we evaluate DeepSORT without any appearance features, also known as SORT. \autoref{tab:emprExpMOT} shows the tracking results with original and doubled bounding box sizes and an IoU threshold of 0.99. According to the results, SORT outperforms the fine-tuned DeepSORT with 438 ID switches. Nevertheless, the number of ID switches is still high, given that we use the ground truth object positions. This could be due to the low frame rate of the dataset and the small sizes of the objects. Although enlarging the bounding boxes improved the performance significantly, it leads to a poor localization accuracy.
\subsubsection{Tracktor++}\label{subsec:tracktor}
Tracktor++~\cite{bergmann2019tracking} is an MOT method based on deep features. It employs a Faster-RCNN to perform object detection and tracking through regression. We use its \textit{PyTorch} implementation\footnote{https://github.com/phil-bergmann/tracking\_wo\_bnw} and adapt it to our aerial dataset.
We tested Tracktor++ with the ground truth object positions instead of using its detection module; however, it totally failed the tracking task with these settings. Faster-RCNN has been trained on datasets which are very different from our aerial dataset, for example, in looking angle as well as the number and size of the objects. Therefore, we fine-tune Faster-RCNN on the KIT AIS dataset. To this end, we had to adjust the training procedure to the specification of our dataset.
We use Faster-RCNN with a ResNet50 backbone, pre-trained on the ImageNet dataset. We change the anchor sizes to \{2, 3, 4, 5, 6\} and the aspect ratios to \{0.7, 1.0, 1.3\}, enabling it to detect small objects. Additionally, we increase the maximum detections per image to 300, set the minimum size of an image to be rescaled to 400 pixels, the region proposal non-maximum suppression (NMS) threshold to 0.3, and the box predictor NMS threshold to 0.1. The NMS thresholds influence the amount of overlap for region proposals and box predictions. Instead of SGD, we use an ADAM optimizer with an initial learning rate of 0.0001 and a weight decay of 0.0005. Moreover, we decrease the learning rate every 40 epochs by a factor of 10 and set the number of classes to 2, corresponding to background and pedestrians. We also apply substantial online data augmentation including random flipping of every second image horizontally and vertically, color jitter, and random scaling in a range of 10\%.
The tracking results of Tracktor++ with the fine-tuned Faster-RCNN are presented in~\autoref{tab:emprExpMOT}. The detection precision and recall of Faster-RCNN are 25~\% and 31~\%, respectively, with this poor detection performance potentially propagated to the tracking part. According to the table, Tracktor++ only achieves an overall MOTA of 5.3 and 2,188 ID switches even when we use ground truth object positions.
We conclude by assuming that Tracktor++ has difficulties with the low frame rate of the dataset and the small object sizes.
\subsubsection{SMSOT-CNN}
SMSOT-CNN~\cite{bahmanyar2019multiple} is the first DL-based method for multi-object tracking in aerial imagery. It is an extension to GOTURN~\cite{held2016learning}, an SOT regression-based method using CNNs to track generic objects at high speed. SMSOT-CNN adapts GOTURN for MOT scenarios by three additional convolution layers and a tracking management module. The network receives two image patches from the previous and current frames, where both are centered at the object position in the previous frame. The size of the image patches (the amount of contextual information) is adjusted by a hyperparameter. The network regresses the object position in the coordinates of the current frame's image patch. SMSOT-CNN has been evaluated on the KIT~AIS pedestrian dataset in~\cite{bahmanyar2019multiple}, where the objects' first positions are given based on the ground truth data. The tracking results can be seen in~\autoref{tab:emprExpMOT}. Due to the use of a deep network and the local search for the next position of the objects, the number of ID switches by SMSOT-CNN is 157, which is small relative to the other methods. Moreover, this algorithm achieves an overall MOTA and MOTP of -29.8 and 71.0, respectively. Based on our visual inspections, SMSOT-CNN has some difficulties in densely crowded situations where the objects share similar appearance features. In these cases, multiple similarly looking objects can be present in an image patch, resulting in ID switches and losing track of the target objects.
\subsubsection{Euclidean Online Tracking}
Inspired by the tracking results of SORT and its simplicity, we propose EOT, which is based on the architecture of SORT. EOT uses a Kalman filter similarly to SORT. Then, it calculates the Euclidean distance between all predictions ($x_i,y_i$) and detections ($x_j,y_j$) and normalizes them w.r.t. the GSD of the frame to construct a cost matrix as follows:
\begin{equation}
D_{i,j} = GSD \cdot \sqrt{(x_i-x_j)^2 + (y_i-y_j)^2}.
\label{eq:formel}
\end{equation}
After that, as in SORT, we use the Hungarian algorithm to look for global minima. However, if objects enter or leave the scene, the Hungarian algorithm can propagate an error to the whole prediction-detection matching procedure; therefore, we constrain the cost matrix so that all distances greater than a certain threshold are ignored and set to an infinite cost. We set the threshold to $17\cdot GSD$ empirically. Furthermore, only objects successfully tracked in the previous frame are considered for the matching process.
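The matching step can be sketched as follows, using the Hungarian algorithm implementation from \textit{SciPy}; the GSD value and the coordinates are made up for illustration:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

GSD = 0.12  # meters per pixel (example value)
preds = np.array([[100.0, 50.0], [200.0, 80.0]])   # Kalman predictions
dets  = np.array([[102.0, 51.0], [400.0, 300.0]])  # current detections

# GSD-normalized Euclidean cost matrix, gated at 17 * GSD
cost = GSD * np.linalg.norm(preds[:, None, :] - dets[None, :, :], axis=2)
gate = 17 * GSD
cost[cost > gate] = 1e6  # "infinite" cost for implausible pairs

rows, cols = linear_sum_assignment(cost)
matches = [(int(r), int(c)) for r, c in zip(rows, cols)
           if cost[r, c] <= gate]
print(matches)  # [(0, 0)]: only the plausible pair is matched
\end{verbatim}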
According to~\autoref{tab:emprExpMOT}, while the total MOTA score is competitive with the previously studied methods, EOT achieves the least ID switches (only 37). Compared to SORT, as EOT keeps better track of the objects, the deviations in the Kalman filter predictions are smaller. Therefore, Euclidean distance is a better option as compared to IoU for our aerial image sequences.
\subsection{Conclusion of the Experiments}
In this section, we conclude our preliminary study. According to the results, our EOT is the best performing tracking method.
\autoref{fig:successfullTrackingEOT} illustrates a major case of success by our EOT method. We can observe that almost all pedestrians are tracked successfully, even though the sequence is crowded and people walk in different directions. Furthermore, the significant cases of false positives and negatives are caused by the limitation of the evaluation approach. In other words, while EOT tracks most of the objects, since the evaluation approach is constrained to the minimum 50\% overlap (4 pixels), the correctly tracked objects with smaller overlaps are not considered.
\begin{figure*}%
\centering
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS25.jpg}
\put(2,70){\color{red}25}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS26.jpg}
\put(2,70){\color{red}26}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS27.jpg}
\put(2,70){\color{red}27}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS28.jpg}
\put(2,70){\color{red}28}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS29.jpg}
\put(2,70){\color{red}29}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/MOS30.jpg}
\put(2,70){\color{red}30}
\end{overpic}}
\caption{A success case of our EOT method on the sequence ``Munich02". The tracking results and ground truth are depicted in green and black, respectively.}%
\label{fig:successfullTrackingEOT}%
\end{figure*}
\autoref{fig:failureCaseDCFNet1} shows a typical failure case of the Stacked-DCFNet method. In the first two frames, most of the objects are tracked correctly; however, after that, the diagonal line in the patch center is confused with the people walking across it. We assume that the line shares similar appearance features with the crossing people.
\autoref{fig:failureCaseDCFNet2} illustrates another typical failure case of DCFNet. The image includes several people walking closely in different directions, introducing confusion into the tracking method due to the people's similar appearance features.
We closely investigate these failure cases in~\autoref{fig:activationDCFNet}. In this figure, we visualize the activation map of the last convolution layer of the network. Although the convolutional layers of Stacked-DCFNet are supposed to be trained only for people, the line and the people (considering their shadows) appear indistinguishable. Moreover, based on the features, different people cannot be discriminated.
\autoref{fig:successfullTrackingDCFNet} demonstrates a successful tracking case by Stacked-DCFNet. People are not walking closely together and the background is more distinguishable from the people.
We also evaluated SMSOT-CNN and found that it shares similar failure and success cases with Stacked-DCFNet, as both take advantage of convolutional layers for extracting appearance features.
Altogether, the Euclidean distance paired with trajectory information in EOT works better than IoU for tracking in aerial imagery. However, detection-based trackers such as EOT require object detection in every frame. As shown for Tracktor++, the detection accuracy of the object detectors is very poor for pedestrians in aerial images. Thus, detection-based methods are not appropriate for our scenarios. Moreover, the approaches which employ deep appearance features for re-identification share similar problems with object detectors: their features discriminate poorly between similarly looking objects, leading to ID switches and losing track of objects. The tracking methods based on regression and correlation (e.g., Stacked-DCFNet and SMSOT-CNN) show, in general, better performances than the methods based on re-identification because they track objects within local image patches, which prevents errors from being propagated to the whole image.
Furthermore, according to our investigations, the path taken by every pedestrian is influenced by three factors: 1) the pedestrian's path history, 2) the positions and movements of the surrounding people, 3) the arrangement of the scene.
We conclude that both regression- and correlation-based tracking methods are good choices for our scenario. They can be improved by considering trajectory information and the pedestrians' movement relationships.
\begin{figure*}%
\centering
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0054.jpg}
\put(2,5){54}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0055.jpg}
\put(2,5){55}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0056.jpg}
\put(2,5){56}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0057.jpg}
\put(2,5){57}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0058.jpg}
\put(2,5){58}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0059.jpg}
\put(2,5){59}
\end{overpic}}
\caption{A failure case by Stacked-DCFNet on the sequence ``AA\_Walking\_02". The tracking results and ground truth are depicted in green and black, respectively.}%
\label{fig:failureCaseDCFNet1}%
\end{figure*}
\begin{figure*}%
\centering
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0180.jpg}
\put(2,5){180}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0181.jpg}
\put(2,5){181}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0182.jpg}
\put(2,5){182}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0183.jpg}
\put(2,5){183}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0184.jpg}
\put(2,5){184}
\end{overpic}}
\subfloat{\begin{overpic}[width=2.9cm]{figures/ON0185.jpg}
\put(2,5){185}
\end{overpic}}
\caption{A success case by Stacked-DCFNet on the sequence ``AA\_Crossing\_02". The tracking results and ground truth are depicted in green and black, respectively.}
\label{fig:successfullTrackingDCFNet}%
\end{figure*}
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ON0141.jpg}
\put(2,80){141}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ON0142.jpg}
\put(2,80){142}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ON0143.jpg}
\put(2,80){143}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ON0144.jpg}
\put(2,80){144}
\end{overpic}}
\caption{A failure case by Stacked-DCFNet on the test sequence ``RaR\_Snack\_Zone\_04". The tracking results and the ground truth are depicted in green and black, respectively.}%
\label{fig:failureCaseDCFNet2}%
\end{figure}
\begin{figure}%
\centering
\subfloat[]{\includegraphics[width=.48\columnwidth]{figures/input.pdf} }%
\subfloat[]{\includegraphics[width=.48\columnwidth]{figures/activation.pdf} }
\caption{(a) An input image patch to the last convolutional layer of Stacked-DCFNet and (b) its corresponding activation map.}%
\label{fig:activationDCFNet}%
\end{figure}
\section{Related Works} \label{sec:related_work}
This section introduces various categorizations of \gls{vot} as well as related previous works.
\subsection{Visual Object Tracking} \label{section:tracking}
Visual object tracking is defined as locating one or multiple objects in videos or image sequences over time. The traditional tracking process comprises four phases including initialization, appearance modeling, motion modeling, and object finding. During initialization, the targets are detected manually or by an object detector. In the appearance modeling step, visual features of the region of interest are extracted by various learning-based methods for detecting the target objects. The variety of scales, rotations, shifts, and occlusions makes this step challenging.
Image features play a key role in the tracking algorithms. They can be mainly categorized into handcrafted and deep features. In recent years, research studies and applications have focused on developing and using deep features based on DNNs which have shown to be able to incorporate multi-level information and more robustness against appearance variations~\cite{fiaz2019handcrafted}.
Nevertheless, DNNs require sufficiently large training datasets, which are not always available. Thus, for many applications, the handcrafted features are still preferable.
The motion modeling step aims at predicting the object movement in time and estimating the object locations in the next frames. This procedure effectively reduces the search space and, consequently, the computation cost. Widely used methods for motion modeling include the Kalman filter~\cite{kalman1960new}, Sequential Monte Carlo methods~\cite{montecarlo2014}, and RNNs.
In the last step, object locations are determined as the ones closest to the locations estimated by the motion model.
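As a concrete illustration of the motion-modeling step, a minimal constant-velocity Kalman filter for a 2D point could look as follows (a generic \textit{NumPy} sketch, not tied to any particular tracker discussed in this work; the noise parameters are illustrative):
\begin{verbatim}
# A generic constant-velocity Kalman filter for a 2D point.
import numpy as np

class ConstantVelocityKF:
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])    # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                # state covariance
        self.F = np.array([[1., 0., 1., 0.],     # transition, dt = 1 frame
                           [0., 1., 0., 1.],
                           [0., 0., 1., 0.],
                           [0., 0., 0., 1.]])
        self.H = np.array([[1., 0., 0., 0.],     # only x, y are observed
                           [0., 1., 0., 0.]])
        self.Q = np.eye(4) * 0.01                # process noise
        self.R = np.eye(2)                       # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                        # predicted position

    def update(self, z):                         # z: measured (x, y)
        y = z - self.H @ self.x                  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S) # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
\end{verbatim}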
\subsubsection{SOT and MOT}
Visual object tracking methods can be divided into \gls{sot}~\cite{wang_dcfnet_2017, ma2015hierarchical} and \gls{mot}~\cite{bahmanyar2019multiple, wojke_simple_2017} methods. While \gls{sot}s only track a single predetermined object throughout a video, even if there are multiple objects, \gls{mot}s can track multiple objects at the same time. Thus, \gls{mot}s can face a substantial increase in complexity and runtime with the number of objects as compared to \gls{sot}s.
\subsubsection{Detection-Based and Detection-Free}
Object tracking methods also can be categorized into detection-based~\cite{huang2008robust} and detection-free methods~\cite{lu_deep_nodate}. While the detection-based methods utilize object detectors to detect objects in each frame, the detection-free methods only need the initial object detection. Therefore, detection-free methods are usually faster than the detection-based ones; however, they are not able to detect new objects entering the scene and require manual initialization.
\subsubsection{Online and Offline Learning}
Object tracking methods can be further divided based on their training strategies using either online or offline learning strategy. The methods with an online learning strategy can learn about the tracked objects during runtime. Thus, they can track generic objects~\cite{wang2016stct}. The methods with offline learning strategy are trained beforehand and are therefore faster during runtime~\cite{huang2017learning}.
\subsubsection{Online and Offline Tracking}
Tracking methods can be categorized into online and offline. Offline trackers take advantage of past and future frames, while online ones can only infer from past frames. Although access to all frames can increase the performance of offline tracking methods, future frames are not available in real-world scenarios.
\subsubsection{One- and Two-Stage Tracking}
Most existing tracking approaches are based on a two-stage tracking-by-detection paradigm~\cite{chahyati2017tracking, zhang2017real}. In the first stage, a set of target samples is generated around the previously estimated position using region proposal, random sampling, or similar methods. In the second stage, each target sample is either classified as background or as the target object. In one-stage-tracking, however, the model receives a search sample together with a target sample as two inputs and directly predicts a response map or object coordinates by a previously trained regressor~\cite{held_learning_2016,bahmanyar2019multiple}.
\subsubsection{Traditional and DL-Based Trackers}
Traditional tracking methods mostly rely on the Kalman and particle filters to estimate object locations. They use velocity and location information to perform tracking~\cite{cuevas2005kalman, cuevas2007particle, okuma2004boosted}. Tracking methods only relying on such approaches have shown poor performance in unconstrained environments. Nevertheless, such filters can be advantageous in limiting the search space (decreasing the complexity and computational cost) by predicting and propagating object movements to the following frames.
A number of traditional tracking methods follow a tracking-by-detection paradigm based on template matching~\cite{brunelli2009template}. A given target patch models the appearance of the region of interest in the first frame. Matched regions are then found in the next frame using correlation, normalized cross-correlation, or the sum of squared distances methods~\cite{hager1996real, briechle2001template}. Scale, illumination, and rotation changes can cause difficulties with these methods.
More advanced tracking-by-detection-based methods rely on discriminative modeling, separating targets from their backgrounds within a specific search space. Various methods have been proposed for discriminative modeling, such as boosting methods and Support Vector Machines (SVMs)~\cite{avidan2007ensemble, hare2015struck}. A series of traditional tracking algorithms, such as MOSSE and KCF~\cite{bolme2010visual, henriques2014high}, utilizes correlation filters, which model the target's appearance by a set of filters trained on the images. In these methods, the target object is initially selected by cropping a small patch from the first frame centered at the object. For the tracking, the filters are convolved with a search window in the next frame. The output response map is assumed to have a peak at the target's next location. As the correlation can be computed in the Fourier domain, such trackers achieve high frame rates.
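The core of this localization step can be sketched in a few lines (an illustrative \textit{NumPy} simplification; \texttt{filter\_hat} denotes a conjugated filter already transformed into the Fourier domain):
\begin{verbatim}
# Correlation in the Fourier domain; `filter_hat` is the (conjugated)
# filter already transformed with fft2, `window` the search patch.
import numpy as np

def correlate(filter_hat, window):
    response = np.real(np.fft.ifft2(filter_hat * np.fft.fft2(window)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dx, dy  # response peak ~ estimated target location
\end{verbatim}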
Recently, many research works and applications have focused on using DL-based tracking methods. The great advantage of DL-based features over handcrafted ones such as HOG, raw pixel values, or grey-scale templates has been demonstrated previously for a variety of computer vision applications. These features are robust against appearance changes, occlusions, and dynamic environments. Examples of DL-based tracking methods include re-identification with appearance modeling and deep features~\cite{wojke_simple_2017}, position regression mainly based on SNNs~\cite{held_learning_2016, li2018high}, path prediction based on RNN-like networks~\cite{sadeghian2017tracking}, and object detection with DNNs such as YOLO~\cite{redmon2016you}.
\subsection{SOTs and MOTs}
In this section, we present a few \gls{sot} and \gls{mot} methods.
\subsubsection{\gls{sot} Methods} \label{subsec:sotm}
Kalal \emph{et al.}~\bmvaOneDot proposed Median Flow~\cite{kalal2010forward}, which utilizes point and optical flow tracking. The inputs to the tracker are two consecutive images together with the initial bounding box of the target object. The tracker calculates a set of points from a rectangular grid within the bounding box. Each of these points is tracked by a Lucas-Kanade tracker generating a sparse motion flow. Afterwards, the framework evaluates the quality of the predictions and filters out the worst 50\%. The remaining point predictions are used to calculate the new bounding box positions considering the displacement.
MOSSE~\cite{bolme2010visual}, KCF~\cite{henriques2014high}, and CSRT~\cite{lukezic2017discriminative} are based upon \gls{dcf}s. Bolme~\emph{et al.}~\bmvaOneDot\cite{bolme2010visual} proposed the \gls{mosse} filter, a new type of correlation filter which aims at producing stable filters when initialized using only one frame and grey-scale templates. MOSSE is trained with a set of training images \(f_i\) and training outputs \(g_i\), where \(g_i\) is generated from the ground truth as a 2D Gaussian centered on the target. This method can achieve state-of-the-art performances while running at high frame rates. Henriques~\emph{et al.}~\bmvaOneDot\cite{henriques2014high} replaced the grey-scale templates with HOG features and proposed the idea of \gls{kcf}. \gls{kcf} works with multiple channel-like correlation filters. Additionally, the authors proposed using non-linear regression functions which are stronger than linear functions and provide non-linear filters that can be trained and evaluated as efficiently as linear correlation filters. Similar to \gls{kcf}, dual correlation filters use multiple channels. However, they are based on linear kernels to reduce the computational complexity while maintaining almost the same performance as the non-linear kernels.
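The closed-form MOSSE filter can be sketched as follows (our simplified \textit{NumPy} version of the formulation in~\cite{bolme2010visual}; regularization and the online update are omitted):
\begin{verbatim}
# Closed-form MOSSE filter from training patches f_i and a 2D
# Gaussian target response g (regularization/online update omitted).
import numpy as np

def train_mosse(patches, g, eps=1e-5):
    G = np.fft.fft2(g)
    A = sum(G * np.conj(np.fft.fft2(f)) for f in patches)
    B = sum(np.fft.fft2(f) * np.conj(np.fft.fft2(f)) for f in patches)
    return A / (B + eps)  # the conjugated filter H*
\end{verbatim}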
Recently, Lukezic~\emph{et al.}~\bmvaOneDot~\cite{lukezic2017discriminative} proposed to use channel and reliability concepts to improve tracking based on \gls{dcf}s. In this method, the channel-wise reliability scores weight the influence of the learned filters based on their quality to improve the localization performance. Furthermore, a spatial reliability map concentrates the filters on the relevant part of the object for tracking. This makes it possible to widen the search space and improve the tracking performance for non-rectangular objects.
As we stated before, the choice of appearance features plays a crucial role in object tracking. Most previous DCF-based works utilize handcrafted features such as HOG, grey-scale features, raw pixels, and color names or the deep features trained independently for other tasks. Wang~\emph{et al.}~\bmvaOneDot\cite{wang_dcfnet_2017} proposed an end-to-end trainable network architecture able to learn convolutional features and perform the correlation-based tracking simultaneously. The authors encode a \gls{dcf} as a correlation filter layer into the network, making it possible to backpropagate the weights through it. Since the calculations remain in the Fourier domain, the runtime complexity of the filter is not increased. The convolutional layers in front of the \gls{dcf} encode the prior tracking knowledge learned during an offline training process. The \gls{dcf} defines the network output as the probability heatmaps of object locations.
In the case of generic object tracking, the learning strategy is typically entirely online. However, online training of neural networks is slow due to backpropagation, leading to a high runtime complexity. Held~\emph{et al.}~\bmvaOneDot\cite{held_learning_2016} therefore developed a regression-based tracking method, called GOTURN, based on a \gls{snn}, which uses an offline training approach helping the network to learn the relationship between appearance and motion. This makes the tracking process significantly faster.
This method utilizes the knowledge gained during the offline training to track new unknown objects online. The authors showed that without online backpropagation, GOTURN can track generic objects at 100 fps. The inputs to the network are two image patches cropped from the previous and current frames, centered at the known object position in the previous frame. The size of the patches depends on the object bounding box sizes and can be controlled by a hyperparameter. This determines the amount of contextual information given to the network. The network output is the coordinates of the object in the current image patch, which is then transformed to the image coordinates. GOTURN achieves state-of-the-art performance on common SOT benchmarks such as VOT~2014\footnote{https://www.votchallenge.net/vot2014/}.
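The patch extraction underlying this scheme can be sketched as follows (an illustrative simplification; padding at image borders and resizing to the network input size are omitted):
\begin{verbatim}
# GOTURN-style input preparation: two patches centered at the object's
# previous position; `context` controls the contextual information.
def crop_pair(prev_frame, curr_frame, cx, cy, w, h, context=2.0):
    cw, ch = int(w * context), int(h * context)
    x0, y0 = int(cx - cw / 2), int(cy - ch / 2)
    target = prev_frame[y0:y0 + ch, x0:x0 + cw]  # appearance template
    search = curr_frame[y0:y0 + ch, x0:x0 + cw]  # search region
    return target, search
\end{verbatim}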
\subsubsection{\gls{mot} Methods}
Bewley~\emph{et al.}~\bmvaOneDot~\cite{bewley_simple_2016} proposed a simple multi-object tracking approach, called SORT, based on the Jaccard distance, the Kalman filter, and the Hungarian algorithm~\cite{kuhn1955hungarian}. Bounding box position and size are the only values used for motion estimation and assigning the objects to their new positions in the next frame. In the first step, objects are detected using Faster R-CNN~\cite{ren2015faster}. Subsequently, a linear constant velocity model approximates the movements of each object individually in consecutive frames. Afterwards, the algorithm compares the detected bounding boxes to the predicted ones based on~\gls{iou}, resulting in a distance matrix. The Hungarian algorithm then assigns each detected bounding box to a predicted (target) bounding box. Finally, the states of the assigned targets are updated using a Kalman filter. SORT runs at more than 250~\gls{fps} with almost state-of-the-art accuracy. Nevertheless, occlusion scenarios and re-identification issues are not considered for this method, which makes it inappropriate for long-term tracking.
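The IoU measure underlying SORT's distance matrix can be sketched as follows (a generic implementation with boxes given as corner coordinates; the cost matrix is then built from $1-\mathrm{IoU}$):
\begin{verbatim}
# IoU of two boxes given as (x1, y1, x2, y2); SORT builds its cost
# matrix from 1 - IoU and solves it with the Hungarian algorithm.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
\end{verbatim}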
Wojke~\emph{et al.}~\bmvaOneDot\cite{wojke_simple_2017} extended SORT to DeepSORT and tackled the occlusion and re-identification challenges, keeping the track handling and Kalman filtering modules almost unaltered. The main improvement takes place in the assignment process, in which two additional metrics are used: 1) motion information provided based on the Mahalanobis distance between the detected and predicted bounding boxes, 2) appearance information by calculating the cosine distance between the appearance features of a detected object and the already tracked object. The appearance features are computed by a deep neural network trained on a large person re-identification dataset~\cite{zheng2016mars}. A cascade strategy then determines object-to-track assignments. This strategy effectively encodes the probability spread in the association likelihood. DeepSORT performs poorly if the cascade strategy cannot match the detected and predicted bounding boxes.
Recently, Bergmann~\emph{et al.}~\bmvaOneDot\cite{bergmann2019tracking} introduced Tracktor++ which is based on the Faster R-CNN object detection method. Faster R-CNN classifies region proposals into target and background and fits the selected bounding boxes to object contours by a regression head. The authors trained Faster R-CNN on the MOT17Det pedestrian dataset~\cite{milan2016mot16}. The first step is an object detection by Faster R-CNN. The detected objects in the first frame are then initialized as tracks. Afterwards, the tracks are tracked in the next frame by regressing their bounding boxes using the regression head. In this method, the lost or deactivated tracks can be re-identified in the following frames using a~\gls{snn} and a constant velocity motion model.
\subsection{Tracking in Satellite and Aerial Imagery} \label{subsec:trackingSat}
Visual object tracking for targets such as pedestrians and vehicles in satellite and aerial imagery is a challenging task that has been addressed by only a few works, compared to the huge number addressing pedestrian and vehicle tracking in ground imagery~\cite{wang2015visual, yokoyama2005contour}.
Tracking in satellite and aerial imagery is much more complex. This is due to the moving cameras, large image sizes, different scales, large numbers of moving objects, tiny size of the objects (e.g., 4$\times$4 pixels for pedestrians, 30$\times$15 pixels for vehicles), low frame rates, different visibility levels, and different atmospheric and weather conditions~\cite{jadhav_aerial_2019, milan2016mot16}.
\subsubsection{Tracking by Moving Object Detection}
Most of the previous works in satellite and aerial object tracking are based on moving object detection~\cite{reilly2010detection, meng2012object, benedek2009detection}.
Reilly~\emph{et al.}~\bmvaOneDot\cite{reilly2010detection} proposed one of the earliest aerial object tracking approaches focusing on vehicle tracking mainly on highways. They compensate for camera motion by a correction method based on point correspondence. A median background image is then modeled from ten frames and subtracted from the original frame for motion detection, resulting in the moving object positions. All images are split into overlapping grids, with each one defining an independent tracking problem. Objects are tracked using a bipartite graph, matching a set of label nodes and a set of target nodes. The Hungarian algorithm then solves the cost matrix to determine the assignments. The usage of the grids keeps the number of objects per tracking problem small, which allows tracking a large number of objects despite the \(O(n^3)\) runtime complexity of the Hungarian algorithm.
Meng~\emph{et al.}~\bmvaOneDot\cite{meng2012object} followed the same direction. They addressed the tracking of ships and grounded aircrafts. Their method detects moving objects by calculating an \gls{adi} from frame to frame. Pixels with high values in the \gls{adi} are likely to be moving objects. Each target is afterwards modeled by extracting its spectral and spatial features, where spectral features refer to the target probability density functions and the spatial features to the target geometric areas. Given the target model, matching candidates are found in the following frames via regional feature matching using a sliding window paradigm.
Tracking methods based on moving object detection are not applicable to our pedestrian and vehicle tracking scenarios. For instance, Reilly~\emph{et al.}~\bmvaOneDot\cite{reilly2010detection} use a road orientation estimate to constrain the assignment problem. Such estimations, which may work for vehicles moving along predetermined paths (e.g., highways and streets), do not work for pedestrian tracking with its much more diverse and complex movement behaviors (e.g., crowded situations and multiple crossings). In general, such methods perform poorly in unconstrained environments, are sensitive to illumination change and atmospheric conditions (e.g., clouds, shadows, or fog), suffer from the parallax effect, and cannot handle small or static objects. Additionally, since finding the moving objects requires considering multiple frames, these methods cannot be used for real-time object tracking.
\subsubsection{Tracking by Appearance Features}
The methods based on appearance-like features overcome the issues of the approaches based on moving object detection~\cite{butenuth2011integrating, Schmidt2011, liu2015fast, qi2015unsupervised, bahmanyar2019multiple}, making it possible to detect small and static objects in single images.
Butenuth~\emph{et al.}~\bmvaOneDot\cite{butenuth2011integrating} deal with pedestrian tracking in aerial image sequences. They employ an iterative Bayesian tracking approach to track numerous pedestrians, where each pedestrian is described by its position, appearance features, and direction. A linear dynamic model then predicts future states. Each link between a prediction and a detection is weighted by evaluating the state similarity and associated with the direct link method described in~\cite{huang2008robust}.
Schmidt~\emph{et al.}~\bmvaOneDot\cite{Schmidt2011} developed a tracking-by-detection framework based on Haar-like features. They use a Gentle AdaBoost classifier for object detection and an iterative Bayesian tracking approach, similar to~\cite{butenuth2011integrating}. Additionally, they calculate the optical flow between consecutive frames to extract motion information. However, due to the difficulties of detecting small objects in aerial imagery, the performance of the method is degraded by a large number of false positives and negatives.
Bahmanyar~\emph{et al.}~\bmvaOneDot\cite{bahmanyar2019multiple} proposed \gls{smsot-cnn} and extended the GOTURN method, a SOT method developed by Held~\emph{et al.}~\bmvaOneDot\cite{held_learning_2016}, by stacking the architecture of GOTURN to track multiple pedestrians and vehicles in aerial image sequences. SMSOT-CNN is the only previous DL-based work dealing with \gls{mot} in aerial imagery. SMSOT-CNN expands the GOTURN network by three additional convolutional layers to improve the tracker's performance in locating the object in the search area. In their architecture, each SOT-CNN is responsible for tracking one object individually, leading to a linear increase in the tracking complexity with the number of objects. They evaluate their approach on the vehicle and pedestrian sets of the KIT~AIS aerial image sequence dataset. Experimental results show that SMSOT-CNN significantly outperforms GOTURN. Nevertheless, SMSOT-CNN performs poorly in crowded situations and when objects share similar appearance features.
In Section~\ref{sec:preExperiments}, we experimentally investigate a set of the reviewed visual object tracking methods on three aerial object tracking datasets.
\section{Evaluation and Discussion}\label{sec:evaluation}
In this section, we evaluate different parts of our proposed AerialMPTNet on the KIT~AIS and AerialMPT datasets through a set of ablation studies. Furthermore, we compare our results to the tracking methods discussed in \autoref{sec:preExperiments}. \autoref{tab:configs} reports the different network configurations for our ablation studies.
\begin{table}
\centering
\caption{Different network configurations.}
\resizebox{\columnwidth}{!}{%
\rowcolors{2}{gray!25}{white}
\begin{tabular}{c|ccccc}
Name & SNN & LSTM & GCNN & SE Layers & OHEM \\
\hline
SMSOT-CNN & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ \\
AerialMPTNet$_{LSTM}$ & \checkmark & \checkmark & $\times$ & $\times$ & $\times$\\
AerialMPTNet$_{GCNN}$ & \checkmark & $\times$ & \checkmark & $\times$ & $\times$\\
AerialMPTNet & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ \\
AerialMPTNet$_{SE}$ & \checkmark & \checkmark & \checkmark & \checkmark & $\times$ \\
AerialMPTNet$_{OHEM}$ & \checkmark & \checkmark & \checkmark & $\times$ & \checkmark \\
\end{tabular}}
\label{tab:configs}
\end{table}
\subsection{SMSOT-CNN (\textit{PyTorch})}
As mentioned in Section~\ref{sec:exp_setup}, for a better comparison we re-implemented SMSOT-CNN in the \textit{PyTorch} framework and trained it on our experimental datasets (as the pretrained weights could not be used).
The tracking results of our \textit{PyTorch} SMSOT-CNN on the AerialMPT and KIT~AIS pedestrian and vehicle datasets are presented in~\autoref{tab:smsotAll}.
Therein, SMSOT-CNN achieves MOTA and MOTP scores of -35.0 and 70.0 on the KIT~AIS pedestrian dataset, and 37.1 and 75.8 on the KIT~AIS vehicle dataset, respectively. On the AerialMPT dataset, it achieves a MOTA and MOTP of -37.2 and 68.0, respectively.
A comparison of the results to~\cite{bahmanyar2019multiple} shows that our \textit{PyTorch} implementation performs similarly to the original \textit{Caffe} version, with only 5.2 and 4.0 points smaller MOTA on the KIT~AIS pedestrian and vehicle datasets, respectively.
For the rest of our experiments, we consider the results of this implementation of SMSOT-CNN as the baseline for our evaluations.
\SMSOTCNNALL
\subsection{AerialMPTNet (LSTM only)}
In this step, we evaluate the influence of the LSTM module on the tracking performance of our AerialMPTNet. \autoref{tab:arialmptnetLSTM} reports the tracking results of AerialMPTNet$_{LSTM}$ on our experimental datasets. We use the pre-trained weights of SMSOT-CNN to initialize the convolutional weights and biases. For the KIT~AIS pedestrian dataset, we evaluate the effects of freezing these weights during the training of the LSTM module. The tracking results with frozen and trainable convolutional weights in~\autoref{tab:arialmptnetLSTM} show that the latter improves MOTA and MOTP values by 8.2 and 0.5 points, respectively. Moreover, the network trained with trainable weights tracks 6.9\% more objects mostly during their lifetimes (MT). We can observe that this increase in performance holds for all sequences with different numbers of frames and objects.
We can also see that the number of ID switches for frozen weights is smaller than for trainable weights (231 vs. 270). Based on our visual inspections, the smaller number of ID switches is caused by the network with frozen weights losing track of the objects. The network with trainable weights can track objects for a longer time; however, when the objects get into crowded scenarios, it loses their track by switching their IDs, leading to an increase in the number of ID switches.
Based on these comparisons, we argue that the features computed by the SNN need a certain degree of fine-tuning in order to work jointly with the LSTM module. This could be the reason why the training with trainable weights outperforms the setting employing frozen weights.
Thus, for the rest of our experiments, we use trainable weights. Consequently,~\autoref{tab:arialmptnetLSTM} shows only the results with trainable weights for the AerialMPT and KIT~AIS vehicle datasets.
\autoref{tab:overallperformance} represents the overall performances of different tracking methods on the KIT~AIS and AerialMPT datasets. According to the table, AerialMPTNet$_{LSTM}$ outperforms SMSOT-CNN with a significantly larger MOTA on all experimental datasets. In particular, based on~\autoref{tab:smsotAll} and~\autoref{tab:arialmptnetLSTM}, the main improvements happen for complex sequences such as the ``AA\_Walking\_02" and ``Munich02" sequences of the KIT~AIS pedestrian dataset, with a 20.8 and 23.8 points larger MOTA, respectively.
On the AerialMPT dataset, the most complex sequences are ``Bauma3" and ``Bauma6", presenting overcrowded scenarios with many pedestrians intersecting. According to the results, using the LSTM module does not help the performance noticeably in these cases. In such complex sequences, the trajectory information of the LSTM module is not enough for distinguishing pedestrians and tracking them within the crowds.
Furthermore, the increase in the number of mostly and partially tracked objects (MT and PT) and the decrease in the number of mostly lost ones (ML) indicate that the LSTM module helps AerialMPTNet in the tracking of the objects for a longer time. This, however, causes a larger number of ID switches as discussed before.
On the KIT~AIS vehicle dataset, although the results show a significant improvement of AerialMPTNet$_{LSTM}$ over SMSOT-CNN, the performance improvements are minor compared to the pedestrian datasets. This could be due to the more distinguishable appearance features of the vehicles, leading to a good performance even when relying solely on the SNN module.
\aerialmptnetLSTM
\TotalResults
\subsection{AerialMPTNet (GCNN only)}
In this step, we focus on the modeling of the movement relationships between adjacent objects by AerialMPTNet$_{GCNN}$. As described in~\autoref{tab:configs}, we only consider the SNN and GCNN modules, and train the network on our experimental datasets. The tracking results on the test sequences of the datasets are shown in~\autoref{tab:gcnn}, and the comparisons to the other methods are provided in~\autoref{tab:overallperformance}.
AerialMPTNet$_{GCNN}$ outperforms SMSOT-CNN significantly, with MOTA improvements of 11.8, 12.0, and 5.7 points on the AerialMPT, KIT~AIS pedestrian, and KIT~AIS vehicle datasets, respectively. Additionally, AerialMPTNet$_{GCNN}$ enhances the MT, PT, and ML values for the pedestrian datasets, while for the vehicle dataset only the MT is enhanced and the PT and ML get worse.
Altogether, these results indicate that the relational information is more important for the pedestrians than the vehicles.
Moreover, according to~\autoref{tab:gcnn}, as in the LSTM results, the use of GCNN helps more for complex sequences. For example, the MOTA scores on the ``AA\_Walking\_02" and ``Munich02" sequences increase by 13.9 and 20.5 points, respectively; however, they decrease by 12.1 and 14.8 points on ``AA\_Crossing\_02" and ``RaR\_Snack\_Zone\_02", respectively. This could be due to the negative impact of the large number of zero paddings in the less crowded sequences with a smaller number of adjacent objects.
Compared to AerialMPTNet$_{LSTM}$, AerialMPTNet$_{GCNN}$ performs slightly better on the AerialMPT dataset, while on the other two datasets it performs worse by a narrow margin. We assume that, due to the higher crowd densities in the AerialMPT dataset, the relationships between adjacent objects are more critical than their movement histories.
\aerialmptnetGCNN
\subsection{AerialMPTNet}\label{sec:aerialMPTNet}
In this step, we evaluate the complete AerialMPTNet by fusing the SNN, LSTM, and GCNN modules.
\autoref{tab:aerialMPTNetresults} represents the tracking results of AerialMPTNet on the test sets of our experimental datasets, and~\autoref{tab:overallperformance} compares its overall performance to the other tracking methods.
According to the results, AerialMPTNet outperforms AerialMPTNet$_{LSTM}$ and AerialMPTNet$_{GCNN}$ on both pedestrian datasets. However, this is not the case for the vehicle dataset.
This is due to the main idea behind the development of the network. Since AerialMPTNet was initially designed for pedestrian tracking, it needs to be further adapted to the domain-specific challenges posed by vehicle tracking. For example, the distance threshold for the modeling of the adjacent object relationships (in GCNN), which considers objects within a distance of 50 pixels from the target object, might miss many neighbouring vehicles, as the distances between vehicles are usually larger than those between pedestrians; a sketch of this neighbor selection is given below.
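For illustration, the neighbor selection implied by this threshold can be sketched as follows (our simplified version; the actual graph construction in AerialMPTNet involves further steps):
\begin{verbatim}
# Neighbor selection within a fixed pixel radius around the target.
import numpy as np

def adjacent_indices(positions, target_idx, radius=50.0):
    # positions: (N, 2) array of object centers in pixels
    d = np.linalg.norm(positions - positions[target_idx], axis=1)
    mask = d < radius
    mask[target_idx] = False  # exclude the target itself
    return np.where(mask)[0]
\end{verbatim}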
Finally, AerialMPTNet achieves better tracking results than SMSOT-CNN on all three datasets.
\aerialMPTNet
\subsubsection{Pedestrian Tracking}
In more detail, AerialMPTNet yields the best MOTA among the studied methods on the ``AA\_Walking\_02", ``Munich02", and ``RaR\_Snack\_Zone\_02" sequences of the KIT~AIS pedestrian dataset (-16.8, -34.5, and 38.9, respectively). These sequences are the most complex ones in this dataset with respect to the length and number of objects, which can significantly influence the MOTA value. Longer sequences and a higher number of objects usually cause the MOTA value to decrease, as it is more probable that the tracking methods lose track of the objects or confuse their IDs in these cases.
\autoref{fig:smsotcnnWalking} illustrates the tracking results on two frames of the ``AA\_Walking\_02" sequence of the KIT~AIS pedestrian dataset by AerialMPTNet and SMSOT-CNN.
Comparing the predictions and ground truth points demonstrates that SMSOT-CNN loses track of a considerably higher number of pedestrians between these two frames. While SMSOT-CNN's predictions get stuck at the diagonal background lines due to their appearance similarity to the pedestrians, AerialMPTNet can easily handle this situation thanks to the LSTM module.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.49\columnwidth]{figures/wa_8_new.pdf}
\put(5,75){8}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\columnwidth]{figures/wa_14_new.pdf}
\put(5,75){14}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.49\columnwidth]{figures/wa_8_old.pdf}
\put(5,75){8}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\columnwidth]{figures/wa_14_old.pdf}
\put(5,75){14}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 8 and 14 of the ``AA\_Walking\_02" sequence of the KIT~AIS pedestrian dataset. The predictions and ground truth are depicted in blue and white, respectively.}
\label{fig:smsotcnnWalking}%
\end{figure}
We also visualized a cropped part of four frames from the ``AA\_Crossing\_02" sequence of the KIT~AIS pedestrian dataset in~\autoref{fig:aacrossing}. As in the previous example, AerialMPTNet clearly outperforms SMSOT-CNN on the tracking of the pedestrians crossing the background lines.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_4_new.pdf}
\put(5,80){4}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_6_new.pdf}
\put(5,80){6}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_8_new.pdf}
\put(5,80){8}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_10_new.pdf}
\put(5,80){10}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_4_old.pdf}
\put(5,80){4}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_6_old.pdf}
\put(5,80){6}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_8_old.pdf}
\put(5,80){8}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/cr_10_old.pdf}
\put(5,80){10}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 4, 6, 8, and 10 of the ``AA\_Crossing\_02" sequence of the KIT~AIS pedestrian dataset. The predictions and ground truth are depicted in blue and white, respectively.}
\label{fig:aacrossing}%
\end{figure}
On the AerialMPT dataset, AerialMPTNet achieves the best MOTA scores among all studied methods in this paper on the ``Bauma3", ``Bauma6", and ``Witt" sequences (-32.0, -28.4, -65.9), which contain the most complex scenarios regarding crowd density, pedestrian movements, variety of the GSDs, and complexity of the terrain.
However, in contrast to the KIT~AIS pedestrian dataset, the MOTA scores are not correlated with the sequence lengths, indicating the impact of other complexities on the tracking results and the better distribution of complexities over the sequences of the AerialMPT dataset as compared to the KIT~AIS pedestrian dataset.
\autoref{fig:aerialmptnetcrossing} exemplifies the role of the LSTM module in enhancing the tracking performance in AerialMPTNet. This figure shows an intersection of two pedestrians in the cropped patches from four frames of the ``Pasing8" sequence of the AerialMPT dataset. According to the results, SMSOT-CNN (bottom row) loses one of the pedestrians after their intersection, leading to an ID switch. However, AerialMPTNet (top row) can track both pedestrians correctly, mainly relying on the pedestrians' movement histories (their movement directions) provided by the LSTM module.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_11_new.pdf}
\put(5,58){11}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_13_new.pdf}
\put(5,58){13}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_15_new.pdf}
\put(5,58){15}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_17_new.pdf}
\put(5,58){17}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_11_old.pdf}
\put(5,58){11}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_13_old.pdf}
\put(5,58){13}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_15_old.pdf}
\put(5,58){15}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/pa_17_old.pdf}
\put(5,58){17}
\end{overpic}}
\caption{Tracking results by the AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 11, 13, 15, and 17 of the ``Pasing8" sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.}
\label{fig:aerialmptnetcrossing}%
\end{figure}
\autoref{fig:aerialmptnetkarlsplatz} illustrates a case in which the advantage of the GCNN module can be clearly observed. The images are cropped from four frames of the ``Karlsplatz" sequence of the AerialMPT dataset. It can be seen that SMSOT-CNN has difficulties in tracking the pedestrians in such crowded scenarios, where the pedestrians move in various directions. However, AerialMPTNet can handle this scenario mainly based on the pedestrian relationship models provided by the GCNN module.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_21_new.pdf}
\put(1,66){\color{red}21}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_23_new.pdf}
\put(1,66){\color{red}23}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_25_new.pdf}
\put(1,66){\color{red}25}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_27_new.pdf}
\put(1,66){\color{red}27}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_21_old.pdf}
\put(1,66){\color{red}21}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_23_old.pdf}
\put(1,66){\color{red}23}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_25_old.pdf}
\put(1,66){\color{red}25}
\end{overpic}}
\subfloat{\begin{overpic}[width=.24\columnwidth]{figures/ka_27_old.pdf}
\put(1,66){\color{red}27}
\end{overpic}}
\caption{Tracking results by the AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 21, 23, 25, and 27 of the ``Karlsplatz" sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.}
\label{fig:aerialmptnetkarlsplatz}%
\end{figure}
In addition, there are sequences where both methods reach their limits and perform poorly. \autoref{fig:witt} illustrates the tracking results of AerialMPTNet (top row) and SMSOT-CNN (bottom row) on two frames of the ``Witt" sequence of the AerialMPT dataset. Comparing the predictions and ground truth object tracks reveals the large number of objects lost by both methods. According to~\autoref{tab:aerialMPTNetresults} and~\autoref{tab:smsotAll}, despite the small number of frames in the ``Witt" sequence, the MOTA scores are low for both methods (-68.6 and -65.9). Further investigations show that these poor performances are caused by the non-adaptive search window size. In the ``Witt" sequence, pedestrians move out of the search window and are consequently lost by the tracker. In order to solve this issue, the GSD of the frames as well as the pedestrian velocities should be considered in determining the search window size.
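A hypothetical adaptation along these lines could scale the window with the frame GSD and the object's recent velocity (a sketch of the idea only; the function and its constants are illustrative, not part of AerialMPTNet):
\begin{verbatim}
# Hypothetical window adaptation: fewer pixels per meter (larger GSD)
# shrink the window; faster objects enlarge it. Constants illustrative.
def search_window_size(base_px, gsd, speed_px, alpha=1.0):
    return int(base_px / max(gsd, 1e-6) + alpha * speed_px)
\end{verbatim}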
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.35\columnwidth, angle =90]{figures/wi_3_new.pdf}
\put(90,64){\color{red}3}
\end{overpic}}
\subfloat{\begin{overpic}[width=.35\columnwidth, angle =90]{figures/wi_6_new.pdf}
\put(90,64){\color{red}6}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.35\columnwidth, angle =90]{figures/wi_3_old.pdf}
\put(90,64){\color{red}3}
\end{overpic}}
\subfloat{\begin{overpic}[width=.35\columnwidth, angle =90]{figures/wi_6_old.pdf}
\put(90,64){\color{red}6}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 3 and 6 of the ``Witt" sequence of the AerialMPT dataset. The predictions and ground truth are depicted in blue and white, respectively.}
\label{fig:witt}%
\end{figure}
In order to show the complexity of the pedestrian tracking task in the AerialMPT dataset, we report the tracking results of AerialMPTNet on the frames 18 and 10 of the ``Munich02" and ``Bauma3" sequences, respectively, in~\autoref{fig:overview}.
\subsubsection{Vehicle Tracking}
According to~\autoref{tab:overallperformance}, AerialMPTNet outperforms SMSOT-CNN also on the KIT~AIS vehicle dataset, although the increase in performance is lower compared to the pedestrian tracking results.
Results on different sequences in~\autoref{tab:aerialMPTNetresults} and~\autoref{tab:smsotAll} show that both methods perform poorly on the ``MunichCrossroad02" sequence. \autoref{fig:crossroad2} visualizes the challenges that the tracking methods face in this sequence. For the visualization, we selected an early and a late frame to demonstrate the strong camera movements and changes in the viewing angle, which affect scene arrangements and object appearances. In addition, vehicles are partly or completely occluded by shadows and other objects such as trees. Finally, in this crossroad the movement patterns of the vehicles are complex.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.54\columnwidth, angle=90]{figures/muCr_4_new_m.pdf}
\put(3,85){\color{red}4}
\end{overpic}}
\subfloat{\begin{overpic}[width=.54\columnwidth, angle=90]{figures/muCr_31_new_m.pdf}
\put(3,85){\color{red}31}
\end{overpic}}
\caption{Tracking results by AerialMPTNet on the frames 4 and 31 of the ``MunichCrossroad02" sequence of the KIT~AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively. Several hindrances such as changing viewing angle, shadows, and occlusions (e.g., by trees) are visible.}
\label{fig:crossroad2}%
\end{figure}
In~\autoref{fig:crossroad02dd}, we compare the performances of AerialMPTNet and SMSOT-CNN on the ``MunichCrossroad02" sequence. AerialMPTNet tracks a few vehicles better than SMSOT-CNN, such as the ones located densely at the traffic lights. However, it loses track of a few vehicles which are tracked correctly by SMSOT-CNN. These failures could be solved by a parameter adjustment in our AerialMPTNet.
\begin{figure}%
\centering
\subfloat{\begin{overpic}[width=.53\columnwidth, angle =90]{figures/muCr_2_new_m.pdf}
\put(3,86){\color{red}2}
\end{overpic}}
\subfloat{\begin{overpic}[width=.53\columnwidth, angle =90]{figures/muCr_8_new_m.pdf}
\put(3,86){\color{red}8}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.53\columnwidth, angle =90]{figures/muCr_2_old_m.pdf}
\put(3,86){\color{red}2}
\end{overpic}}
\subfloat{\begin{overpic}[width=.53\columnwidth, angle =90]{figures/muCr_8_old_m.pdf}
\put(3,86){\color{red}8}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 2 and 8 of the ``MunichCrossroad02" sequence of the KIT~AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.}
\label{fig:crossroad02dd}%
\end{figure}
In~\autoref{fig:stree04}, we compare the performances on the ``MunichStreet04" sequence. In this example, AerialMPTNet tracks the long vehicle much better than SMSOT-CNN.
\begin{figure*}%
\centering
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/st_20_new.pdf}
\put(2,27){\color{red}20}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/st_29_new.pdf}
\put(2,27){\color{red}29}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/st_20_old.pdf}
\put(2,27){\color{red}20}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/st_29_old.pdf}
\put(2,27){\color{red}29}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 20 and 29 of the ``MunichStreet04" sequence of the KIT~AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.}
\label{fig:stree04}%
\end{figure*}
Based on~\autoref{tab:aerialMPTNetresults} and~\autoref{tab:smsotAll}, SMSOT-CNN outperforms our AerialMPTNet on the ``MunichStreet02" sequence. In~\autoref{fig:street02}, we exemplify the existing problems with our AerialMPTNet in this sequence. A background object (in the middle of the scene) has been recognized as a vehicle in frame 7, while the vehicle of interest is lost. A similar failure happens at the intersection. This is due to the parameter configurations of AerialMPTNet. As mentioned before, our method was initially proposed for pedestrian tracking, taking into account the characteristics and challenges of this task. Thus, we believe that by further investigations and parameter tuning, such issues should be solved.
\begin{figure*}%
\centering
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/muSt_1_new.pdf}
\put(2,22){\color{red}1}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/muSt_7_new.pdf}
\put(2,22){\color{red}7}
\end{overpic}}
\\[1ex]
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/muSt_1_old.pdf}
\put(2,22){\color{red}1}
\end{overpic}}
\subfloat{\begin{overpic}[width=.49\textwidth]{figures/muSt_7_old.pdf}
\put(2,22){\color{red}7}
\end{overpic}}
\caption{Tracking results by AerialMPTNet (top row) and SMSOT-CNN (bottom row) on the frames 1 and 7 of the ``MunichStreet02" sequence of the KIT~AIS vehicle dataset. The predictions and ground truth bounding boxes are depicted in blue and white, respectively.}
\label{fig:street02}%
\end{figure*}
\subsubsection{Localization Preciseness}
In order to evaluate the preciseness of the object locations predicted by AerialMPTNet with respect to SMSOT-CNN, we vary the overlap criterion (IoU threshold) of the evaluation and report the Prcn, MOTA, MT, and ML metrics in~\autoref{fig:MOTAKITPedestrian}.
According to the plots, the performance of both methods decreases with increasing IoU threshold, which requires more overlap between the predicted and ground truth bounding boxes (i.e., more precise localization). For all presented metrics, the preciseness of our AerialMPTNet surpasses that of SMSOT-CNN.
However, for the vehicle dataset the performance increase by our AerialMPTNet over SMSOT-CNN is lower than for the case of the pedestrian datasets.
\begin{figure*}%
\centering
\subfloat[]{\includegraphics[width=.485\textwidth]{figures/MotaPrcnPlot_KIT_ped.pdf}}
\subfloat[]{\includegraphics[width=.49\textwidth]{figures/MTMLPlot_KIT_ped.pdf}}
\subfloat[]{\includegraphics[width=.485\textwidth]{figures/MotaPrcnPlot_AerialMPT.pdf}}
\subfloat[]{\includegraphics[width=.49\textwidth]{figures/MTMLPlot_AerialMPT.pdf}}
\subfloat[]{\includegraphics[width=.485\textwidth]{figures/MotaPrcnPlot_Vehicle.pdf}}
\subfloat[]{\includegraphics[width=.49\textwidth]{figures/MTMLPlot_Vehicle.pdf}}
\caption{Comparing the Prcn, MOTA, MT, and ML of the AerialMPTNet and SMSOT-CNN on the KIT~AIS pedestrian (first row), AerialMPT (second row), and KIT~AIS vehicle (third row) datasets by changing the IoU thresholds of the evaluation metrics.}%
\label{fig:MOTAKITPedestrian}
\end{figure*}
\subsection{AerialMPTNet (with Squeeze-and-Excitation layers)}
In this step, we evaluate the improvement achieved by adding SE layers to our AerialMPTNet, as described in Section~\ref{sec:squeeze}. We train the network on our three experimental datasets and report the tracking results in~\autoref{tab:overallperformance}. Using the SE layers in AerialMPTNet$_{SE}$ degrades the results marginally for most of the metrics on the KIT~AIS pedestrian and vehicle datasets as compared to AerialMPTNet. For the vehicle dataset, the SE layers improve the number of mostly lost (ML) and partially tracked (PT) vehicles by 0.9\% and 3.9\%, respectively.
On the AerialMPT dataset, however, the network behaviour is totally different. AerialMPTNet$_{SE}$ outperforms AerialMPTNet for most of the metrics. SE layers improve MOTA and MOTP by 2 and 0.1 points, respectively. Moreover, the number of mostly tracked (MT) pedestrians increases by 1.7\%.
These inconsistent behaviours could be due to the different image quality and contrast of the datasets. Since the images of the AerialMPT dataset are characterized by a higher quality, the adaptive channel weighting would be more meaningful.
\subsection{Training with OHEM}
We evaluate the influence of Online Hard Example Mining (OHEM) on the training of our AerialMPTNet as described in Section~\ref{sec:ohem}. The results are compared to those of the AerialMPTNet with its standard training procedure in~\autoref{tab:overallperformance}.
The use of OHEM in the training procedure reduces the performance marginally on both pedestrian datasets. For example, MOTA decreases by 5 and 1.7 points for the KIT~AIS pedestrian and AerialMPT datasets, respectively. For the KIT~AIS vehicle dataset, however, the results show small improvements. For instance, MOTA rises by 1.8 points and the number of mostly tracked objects increases by 1.4\%.
We argue that pedestrian movement is highly complex and, therefore, presenting similar situations to the tracker multiple times through OHEM does not help the performance. Vehicles, however, mostly move along straight paths, so OHEM can improve the training by retrying the failure cases. To the best of our knowledge, this is the first experiment on the benefits of OHEM in regression-based tracking. Further experiments have to be conducted in order to better understand the underlying reasons.
\subsection{Huber Loss Function}
We assess the effects of the loss function on the tracking performance by using the Huber loss~\cite{huber1992robust} instead of the traditional $L1$ loss function. The Huber loss is a mixture of the $L1$ and $L2$ losses, both commonly used for regression problems, and combines their strengths. The $L1$ loss measures the Mean Absolute Error (MAE) between the output of the network \(x\) and the ground truth \(\hat{x}\):
\begin{equation}
L1(x,\hat{x}) = \sum_i|x_i-\hat{x}_i|
\end{equation}
The $L2$ loss calculates the Mean Squared Error (MSE) between the network output and the ground truth:
\begin{equation}
L2(x,\hat{x}) = \sum_i(x_i-\hat{x}_i)^2
\end{equation}
The $L1$ loss is less affected by outliers than the $L2$ loss.
The Huber loss acts as the MSE when the error is small and as the MAE when the error is large:
\begin{equation}
L_H(x,\hat{x}) = \sum_i z_i, \quad
z_i = \begin{cases}
0.5(x_i-\hat{x}_i)^2, & \text{if}~~|x_i-\hat{x}_i|<1\\
|x_i-\hat{x}_i|-0.5, & \text{otherwise}
\end{cases}
\end{equation}
The Huber loss is more robust to outliers than $L2$ and, unlike $L1$, is differentiable around zero, which stabilizes convergence at the end of the training.
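With $\delta=1$, the definition above coincides with the \texttt{SmoothL1Loss} of \textit{PyTorch}, so the two objectives can be compared by swapping a single line (a minimal sketch with random tensors standing in for network outputs and ground truth):
\begin{verbatim}
# With delta = 1, the Huber loss above equals PyTorch's SmoothL1Loss,
# so swapping the objective is a one-line change.
import torch
import torch.nn as nn

l1_loss = nn.L1Loss(reduction="sum")           # L1 objective
huber_loss = nn.SmoothL1Loss(reduction="sum")  # Huber, delta = 1

x = torch.randn(8, 2)      # e.g., regressed object coordinates
x_hat = torch.randn(8, 2)  # ground truth
print(l1_loss(x, x_hat), huber_loss(x, x_hat))
\end{verbatim}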
\autoref{tab:huber} compares the results obtained with the $L1$ and Huber loss functions. The model trained with the $L1$ loss generally outperforms the one trained with the Huber loss on all three datasets. There are a few metrics for which the Huber loss shows an improvement over $L1$, such as MT on the vehicle dataset or IDS on the AerialMPT dataset; however, these improvements are marginal. Altogether, we conclude that the $L1$ loss is the better option for our method in these tracking scenarios.
\lossresults
\section{Comparing AerialMPTNet to Other Methods}
In this section, we compare the results of our AerialMPTNet with a set of traditional methods including KCF, Median Flow, CSRT, and MOSSE as well as DL-based methods such as Tracktor++, Stacked-DCFNet, and SMSOT-CNN.
\autoref{tab:overallperformance} reports the results of different tracking methods on the KIT~AIS and AerialMPT datasets.
In general, the DL-based methods outperform the traditional ones, with MOTA scores varying between -16.2 and -48.8 rather than between -55.9 and -85.8, respectively. The percentages of mostly tracked and mostly lost objects vary between 0.8\% and 9.6\% for the DL-based methods, while they lie between 36.5\% and 78.3\% for the traditional ones.
\subsection{Pedestrian Tracking}
Among the traditional methods, CSRT is the best performing one on the AerialMPT and KIT~AIS pedestrian datasets, with MOTA values of -55.9 and -64.6. CSRT mostly tracks 9.6\% and 2.9\% of the pedestrians, while it mostly loses 39.4\% and 59.3\% of the objects in these datasets.
The DL-based methods, apart from Tracktor++, mostly track many more pedestrians ($>$13.8\%) and mostly lose far fewer pedestrians ($<$23.6\%) than the traditional methods. The poor performance of Tracktor++ is due to its limitations in working with small objects.
AerialMPTNet outperforms all other methods according to most of the adopted figures of merit on the pedestrian datasets with significantly larger MOTA values (-16.2 and -23.4) and competitive MOTP (69.6 and 69.7) values.
It mostly tracks 5.9\% and 4.6\% more pedestrians and loses 5.2\% and 6.8\% less pedestrians with respect to the best performing previous method, SMSOT-CNN on the KIT~AIS and AerialMPT pedestrian datasets, respectively.
\subsection{Vehicle Tracking}
As~\autoref{tab:overallperformance} demonstrates, the DL-based methods and CSRT outperform KCF, Median Flow, and MOSSE significantly, with an average MOTA value of 42.9 versus -30.9. The DL-based methods and CSRT are also better with respect to the numbers of mostly tracked and mostly lost vehicles, varying between 30.0\% and 69.1\% and between 12.6\% and 22.6\%, respectively. These values for KCF, MOSSE, and Median Flow are between 19.6\% and 32.2\% and between 27.8\% and 50.4\%.
Among the DL-based methods, Stacked-DCFNet has the best performance in terms of MOTA and MOTP, outperforming AerialMPTNet by 4.6 and 5.7 points, respectively. While the number of mostly tracked vehicles by Stacked-DCFNet is 2.6\% larger than in the case of AerialMPTNet, it mostly loses 3.1\% more vehicles.
The performance of Tracktor++ increases significantly compared to the pedestrian scenarios, due to the ability of its object detector to detect vehicles. Tracktor++ achieves a competitive MOTA of 37.1 without any ground truth initialization.
The best performing method in terms of MOTA, MT, and ML is CSRT. It outperforms all other methods with a MOTA of 51.1 and MOTP of 80.7.
We rank the studied tracking methods based on their MOTA and MOTP values in~\autoref{fig:ranking}, with the diagrams offering a clear overview of their performance. AerialMPTNet appears to be the best method in terms of MOTA for both pedestrian datasets, and achieves competitive MOTP values. Median Flow, for example, achieves very high MOTP values; however, because of the low number of matched track-object pairs after the first frame, it is not able to track many objects. Hence, the MOTP value alone is not a good performance indicator. For the KIT~AIS vehicle dataset, AerialMPTNet shows worse performance than the other methods according to the MOTA and MOTP values. CSRT and Stacked-DCFNet, however, perform favorably for vehicle tracking.
\begin{figure*}%
\centering
\subfloat[]{\includegraphics[width=.27\textwidth]{figures/figure_kit_ped.pdf}}
\subfloat[]{\includegraphics[width=.27\textwidth]{figures/figure_aerialMPT.pdf}}
\subfloat[]{\includegraphics[width=.268\textwidth]{figures/figure_vehicle.pdf}}
\subfloat[]{\includegraphics[width=.16\textwidth]{figures/legend.pdf}}
\caption{Ranking the tracking methods based on their MOTA and MOTP values on the (a) KIT~AIS pedestrian, (b) AerialMPT, and (c) KIT~AIS vehicle datasets.}
\label{fig:ranking}
\end{figure*}
\section*{Acknowledgment}
The authors would like to thank...
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\printbibliography
\end{document}
Birkhoff \cite{top_B}, Ore \cite{ore}, and Ward \cite{ward} were among the first to study the set of closure operators which are mappings from a given set to itself or from a complete lattice to itself. Birkhoff \cite{top_B} showed this set of mappings, acting on a set or a complete lattice, forms a complete lattice.
More recently Ranzato \cite{ran-closures} has looked at the set of closure operators which are mappings from a given partially ordered set to itself and sufficient conditions on the partial order which make the set of closure operators a complete lattice.
In the above mentioned cases and others, little, if anything, is said about algebraic or finitary closure operators. On an infinite set $S$, we can consider the set of closure operators, $c.o.(S)$, but also the subset of algebraic closure operators, $a.c.o.(S)$. We will consider these algebraic closure operators and what this set looks like. We would like to look more generally than sets and consider lattices. When looking at closure operators acting on lattices rather than acting on a set we can generalize the idea of algebraic closure operators. To do this we will need elements of the lattice to act as finite subsets do in a set. With this in mind we restrict our lattice to be an algebraic lattice. The compact elements of the algebraic lattice act as the finite subsets do in the lattice of subsets.
Once we have this generalization we can consider the set of closure operators which are mappings of a given algebraic lattice $L$, $c.o.(L)$, and its subset of the algebraic closure operators of $L$, $a.c.o.(L)$.
\vspace{2mm}\noindent{\bf Proposition 3.7}. {\it
Let $L$ be an algebraic lattice. Then $a.c.o.(L)$ is a sublattice of $c.o.(L)$.
}\vspace{2mm}
\vspace{2mm}\noindent{\bf Theorem 3.9}. {\it
Let $L$ be an algebraic lattice. Then $a.c.o.(L)$ is a complete lattice. For a family $(\phi_i| i\in I)$ with $\phi_i \in a.c.o.(L)$ and $x \in L$,
$$
\left( \bigwedge_{i\in I} \phi _i\right) (x) = \bigvee _{k \leq_c x} \left( \bigwedge _{i \in I} \phi _i (k) \right) \hspace{.05in} \textnormal{and} \hspace{.05in} \left( \bigvee_{i \in I}\phi_i \right) (x) = \bigvee_{j \in I^k, k \geq 0} \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x).
$$
}\vspace{2mm}
The final section will look at the complete lattice $a.c.o.(L)$, find compact elements, and then show that these elements generate $a.c.o.(L)$, making $a.c.o.(L)$ an algebraic lattice.
When looking at the lattice of closure operators one might also consider closure systems. The lattice of all closure systems on a given lattice $L$ is the dual of $c.o.(L)$ \cite{ore}. By duality we would find that the closure systems related to algebraic closure operators form a sublattice of the lattice of all closure systems for a given lattice. Duality does not give us the same for showing the lattice of algebraic closure systems is an algebraic lattice. We will save a discussion of this for another time.
\section{Preliminaries}\label{prelims}
We first recall the following useful items from lattice theory and closure operator theory.
A lattice is a non-empty partially ordered set $L$ such that for all
$a$ and $b$ in $L$ both $a\vee b:= \sup\{a,b\}$ and $a \wedge b := \inf \{a,b\}$ exist.
A partially ordered set $L$ is called a complete lattice when for each subset $S$ of $L$ both $\sup\{S\}$ and $\inf\{ S\}$ exist in $L$ \cite[Ch. I Sec. 4]{latticeB}. An element $c$ in a lattice is compact if $ c \leq \bigvee_{i \in I} x_i$ implies $ c \leq \bigvee_{i \in F} x_i$ for some finite $F \subseteq I$. We will let $C(L) := \{c \in L | c \textnormal{ is compact } \}$, and let $k \leq_c x$ denote that $k \leq x$ and $k \in C(L)$. A lattice for which every element is the join of compact elements is called compactly generated or algebraic \cite[Ch. VIII Sec 4$\&$ 5]{latticeB}.
We will note that in an algebraic lattice to show $x \leq y$ it is enough to show for all $k \leq_c x$, we have $k \leq y$.
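As a concrete illustration of these notions, consider the power set lattice $\mathcal{P}(S)$ of a set $S$, ordered by inclusion. A subset is compact precisely when it is finite: if a finite $F$ satisfies $F \subseteq \bigcup_{i \in I} A_i$, then each of the finitely many elements of $F$ lies in some $A_i$, so finitely many of the $A_i$ already cover $F$ (an infinite subset, by contrast, is covered by its singletons but by no finite subfamily of them). Since every subset is the union of its finite subsets, $\mathcal{P}(S)$ is algebraic.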
\begin{definition}\label{closure_lat}\cite[Ch.V Sec. 1]{latticeB} Given a lattice $L$, a mapping $\phi:L \rightarrow L$ is called a \textsl{closure operator} on $L$ if for $a, b \in L$, it satisfies: \\
C1: $a \leq \phi(a)$ \hfill (extensive)\\
C2: $\phi(\phi(a)) = \phi(a)$ \hfill (idempotent)\\
C3: If $a \leq b$ then $\phi(a) \leq \phi(b)$ \hfill (isotone) \\
Let $c.o.(L) := \left\{ \phi | \phi \textnormal{ is a closure operator} \right\}$
\end{definition}
For mappings which act on the power set lattice there is the notion of a finitary mapping. For a set $S$, a mapping $\phi: \mathcal{P}(S) \rightarrow \mathcal{P}(S)$ is called finitary if for all $A \subseteq S$, $\phi(A) = \bigcup_{F \subseteq A,\, F \textnormal{ finite}} \phi(F) $ \cite[Ch. VIII Sec. 4]{latticeB}.
As we wish to look at all algebraic lattices and not just the power set lattices we will extend this definition for such closure operators.
\begin{definition}\label{finitary} Let $L$ be an algebraic lattice. An operator $\phi:L \rightarrow L$ is called finitary if for all $x \in L$:\\
F: $\displaystyle{ \phi(x) = \bigvee_{k \leq_c x} \phi(k)} $.
\\ Let $a.c.o.(L) := \left\{ \phi \in c.o.(L) | \phi \textnormal{ is finitary}\right\}$, which we call the set of algebraic closure operators.
\end{definition}
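For a standard example on the power set lattice, let $L = \mathcal{P}(G)$ for a group $G$ and let $\phi(A) = \langle A \rangle$, the subgroup generated by $A \subseteq G$. Every element of $\langle A \rangle$ is a finite product of elements of $A$ and their inverses, and hence lies in $\langle F \rangle$ for some finite $F \subseteq A$; thus $\phi(A) = \bigcup_{F \subseteq A,\, F \textnormal{ finite}} \phi(F)$ and $\phi$ is finitary. Since the compact elements of $\mathcal{P}(G)$ are exactly the finite subsets, $\phi \in a.c.o.(\mathcal{P}(G))$ in the sense of Definition~\ref{finitary}.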
\begin{remark}\label{geq_alg}
For any isotone operator $\phi$, the inequality $\displaystyle{ \phi(x) \geq \bigvee_{k \leq_c x} \phi(k)} $ always holds.
\end{remark}
\begin{theorem}\label{lattice_comp_L}\cite{top_B} Let $L$ be a complete lattice. Then $c.o.(L)$ is a complete lattice, where $\phi_1 \leq \phi_2$ if and only if $\phi_1 (x) \leq_L \phi_2(x)$ for all $x \in L$, with the meets and the joins defined as follows:
$$
\left( \bigwedge_{i \in I} \phi_i\right) (x) := \bigwedge_{i \in I} \left( \phi_i(x) \right) \ \ \forall \ x \in L
$$
and
$$
\left( \bigvee_{i \in I} \phi_i \right) (x) := \bigwedge \left\{ c \in L | c \geq x \textnormal{ and } \phi_i(c) = c \ \forall \ i \in I \right\}.
$$
\end{theorem}
At this point let us mention that we will be moving between a general lattice $L$ and the lattice of closure operators $c.o.(L)$ without distinguishing in which lattice we are taking the meet or join, as in the definition above. $\left( \bigvee_{i \in I} \phi_i \right) (x)$ denotes taking the join in $c.o.(L)$ and then taking the closure of $x$ under the new operator, whereas $ \bigvee_{i \in I} \left( \phi_i (x) \right)$ is a join in $L$ whose terms, $\phi_i (x)$, are the closures of $x$ under the different closure operators.
\section{Sublattice}\label{sub}
In Theorem~\ref{lattice_comp_L}, we see that the set of all closure operators which act on a lattice forms a complete lattice. In this section we will consider a subset of that lattice, the set of all algebraic closure operators. Note that in a finite lattice the set of closure operators is the same as the set of algebraic closure operators; thus the more interesting case is when the lattice is infinite. We restrict our infinite lattices to algebraic lattices so that we can more easily extend the definition of a finitary operator, or more specifically an algebraic (finitary) closure operator, Definition~\ref{finitary}.
We will first consider what would happen if we take the meet of two algebraic closure operators.
\begin{proposition}\label{meetL}
Let $L$ be an algebraic lattice and let $\phi_1, \ \phi_2 \in a.c.o.(L)$. Then $\phi_1 \wedge \phi_2 \in a.c.o.(L)$.
\end{proposition}
\begin{proof}
Let $L$ be an algebraic lattice and let $\phi_1, \ \phi_2 \in a.c.o.(L)$. Let $x \in L$ and let $y = (\phi_1 \wedge \phi_2)(x) = \phi_1(x) \wedge \phi_2(x) = \left(\bigvee_{k \leq_c x} \phi_1 (k)\right) \wedge \left(\bigvee_{l \leq_c x} \phi_2(l)\right)$.
Since $L$ is algebraic, $ y=\bigvee_{a \leq_c y} a$ and for all $a \leq_c y$,
$ a \leq (\bigvee_{k \leq_c x} \phi_1 (k))$ and $a \leq (\bigvee_{l \leq_c x} \phi_2(l))$.
By the compactness of $a$ these covers can be reduced to finite covers. Let $M_a$ and $N_a$ be such covers. Then
$$
a \leq \bigvee_{m\in M_a} \phi_1 (m), \hspace{.85in} a \leq \bigvee_{n\in N_a} \phi_2(n).
$$
Let $ a^* =\bigvee_{j \in M_a \cup N_a} j $, so that $a^* \leq_c x$. We now use this along with the properties of closure operators to find
$$
a \leq \phi_1(a^*) \wedge \phi_2(a^*) = (\phi_1 \wedge \phi_2)(a^*) \leq \bigvee_{k \leq_c x} (\phi_1 \wedge \phi_2)(k).
$$
We then see
$$
y = \bigvee_{a \leq_c y} a \leq \bigvee_{k \leq_c x} (\phi_1 \wedge \phi_2)(k), \hspace{.35in}
\left(\phi_1 \wedge \phi_2\right)(x) \leq \bigvee_{k \leq_c x } \left( \left(\phi_1 \wedge \phi_2\right) (k) \right).
$$
With this and Remark~\ref{geq_alg} we have $\left(\phi_1 \wedge \phi_2\right) (x) = \bigvee_{k \leq_c x } \left( \left(\phi_1 \wedge \phi_2\right) (k) \right)$, which by the definition of algebraic closure operators makes $\phi_1 \wedge \phi_2 \in a.c.o.(L)$.
\end{proof}
We could ask if we are also closed under arbitrary meets. Below we have an example where the arbitrary meet of algebraic closure operators is not an algebraic closure operator.
Let $L$ be an algebraic lattice with at least one non-compact element which is not the greatest element. We will let 1 denote the greatest element of $L$. Consider the following family of closure operators which act on $L$. Let $a\in L$.
\begin{equation}\label{finite_phi}
\phi_a(x) := \left\{ \begin{array}{cl} \nonumber
a, & \textrm{if } x \, \leq a\\
1, & otherwise. \\
\end{array}\right.
\end{equation}
This forms a family of algebraic closure operators. Indeed, each $\phi_a$ is algebraic: if $x \leq a$, then every $k \leq_c x$ satisfies $k \leq a$, so $\bigvee_{k \leq_c x} \phi_a(k) = a = \phi_a(x)$; if $x \not\leq a$, then some $k \leq_c x$ has $k \not\leq a$ (otherwise $x = \bigvee_{k \leq_c x} k \leq a$), so $\bigvee_{k \leq_c x} \phi_a(k) = 1 = \phi_a(x)$.
Now let us look at the arbitrary meet of $\phi_k$ for $k \in C(L)$.
\begin{equation}\label{meet_finite_phi}
\left( \bigwedge_{k \in C(L)} \phi_k \right) (x) = \left\{ \begin{array}{cl} \nonumber
x, & \textrm{if } x \in C(L) \\
1, & otherwise. \\
\end{array}\right.
\end{equation}
This meet is not algebraic: for a non-compact element $x$ which is not the greatest element, the displayed formula gives $\left( \bigwedge_{k \in C(L)} \phi_k \right)(x) = 1$, whereas $\bigvee_{l \leq_c x} \left( \bigwedge_{k \in C(L)} \phi_k \right)(l) = \bigvee_{l \leq_c x} l = x \neq 1$. Thus, $a.c.o.(L)$ is not closed under arbitrary meets.
We turn our attention to joins. The next two lemmas come in useful when looking at the join of algebraic closure operators, and their proofs are left to the reader.
\begin{lemma}\label{finite_c3}
The Property C3 of Definition~\ref{closure_lat} holds for finitary operators.
\end{lemma}
\begin{lemma}\label{a_lee_mu}
Let $L$ be an algebraic lattice and let $\phi:L \rightarrow L$ be a mapping with Property C3. Then $\phi$ is finitary if and only if
for $x \in L$ and $a \leq_c \phi(x)$ implies $a \leq \phi(l)$ for some $l \leq_c x$.
\end{lemma}
With the use of these lemmas we will build a way to find the arbitrary join of algebraic closure operators. We will start by looking at the composite of many finitary operators.
\begin{corollary}\label{corollary_finitary2_L}
Let $\phi_i$ be finitary operators for $1 \leq i \leq n$ where $n \in \mathbb{N}$. Then $\phi_n \phi_{n-1}...\phi_2\phi_1$ is finitary.
\end{corollary}
\begin{proof} Let $\phi_i$ be finitary operators for $1 \leq i \leq n$ where $n \in \mathbb{N}$. If $y \leq_L z$ then $\phi _2\phi_1(y) \leq_L \phi _2\phi_1(z)$ by reiterating Property C3. This would make
$ \bigvee_{k \leq_c x} (\phi_2\phi _1(k)) \leq_L \phi _2\phi_1(x)$.
Take an $x\in L$. Let $\displaystyle a \leq_c \phi_2\phi_1(x) $.
From Lemma~\ref{a_lee_mu} and the fact that $\phi_1$ is finitary closure operator, we have that \\
$ a~\leq~\phi_2~(l)$ for some $l \leq_c \phi_1(x)$. We do the same for $l$ and find $l \leq \phi_1 (m)$ where
$m \leq_c x$. We then have $ a \leq \phi_2(l) \leq \phi_2\phi_1(m) \leq \bigvee_{k \leq_c x} \phi_2\phi_1 (k)$. Thus, $\phi_2\phi_1$ is finitary by Lemma~\ref{a_lee_mu}.
By induction, assuming that $\phi _{n-1}\phi _{n-2}...\phi_1$ is finitary, the same argument shows that $\phi _n\phi _{n-1}...\phi_1$ is finitary.
\end{proof}
\begin{lemma}\label{finitary_arb_mu_L}
Let $L$ be an algebraic lattice. For a family $(\phi_i | i \in I)$ where $\phi_i \in a.c.o.(L)$, let
$$
\mu(x) := \bigvee_{j \in I^k, k \geq 0} \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x).
$$
Then \\
(1) for $x\in L$ and $a \in C(L)$, $a \leq \mu(x)$ if and only if $a \leq \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x)$ for some $k \geq 0$ and $j \in I^k$; \\
(2) $\mu$ is finitary.
\end{lemma}
\begin{proof}
Let $x \in L$ and $a \in C(L)$. For $a \leq \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x)$, by the definition of joins this would make $a \leq \mu (x) $. Conversely, let $a \leq_c \mu(x) = \bigvee_{j \in I^k, k \geq 0} \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x)$. Because $a$ is compact, $a$ is less than the join of a finite subset of $\left\{ \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x) \mid j \in I^k \textnormal{ and } k \geq 0 \right\}$. This set of composites is directed, since concatenating two composites dominates both of them; hence there is a single $\phi_{i_1} \phi_{i_2} ...\phi_{i_k} (x)$, for some $i \in I^k$ and $k\geq0$, with $a \leq \phi_{i_1} \phi_{i_2} ...\phi_{i_k} (x)$.
We know from Corollary \ref{corollary_finitary2_L} that $\phi_{j_1} \phi_{j_2} ...\phi_{j_k}$ is finitary for any $j \in I^k$ and $k \geq 0$. Let $x \in L$ and $a \in C(L)$ with $a\leq \mu(x)$.
Since $\phi_{j_1} \phi_{j_2} ...\phi_{j_k}$ is finitary there exists $l \leq_c x$ such that $a \leq \phi_{j_1} \phi_{j_2} ...\phi_{j_k}(l) \leq \mu (l)$.
Since $l$ is compact,
$ a \leq \bigvee_{d \leq_c x} \mu(d)$. We can do this for any $a \leq_c \mu(x)$. From this we have
$$
\mu(x) = \left( \bigvee_{a \leq_c \mu(x)}a \right) \leq \bigvee_{d \leq_c x}\mu(d).
$$
Since $\mu$ is isotone we have the other inclusion from Remark~\ref{geq_alg}.
Thus we have that $\mu$ is finitary.
\end{proof}
\begin{proposition}\label{join_arb_alg_L}
Let $L$ be an algebraic lattice. For a family $( \phi_i | i\in I)$ where $\phi_i \in a.c.o.(L)$, the join in $c.o.(L)$ is
$$
\left( \bigvee_{i \in I} \phi_i \right) (x)=\bigvee_{j \in I^k, k \geq 0} \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x)
$$
and therefore it is also the join in $a.c.o.(L)$.
\end{proposition}
\begin{proof}
The right hand side is what we have called $\mu(x)$; let us continue to do so. Since $\phi_{i} \leq \left( \bigvee_{i \in I} \phi_i \right)$, we have $\mu(x) \leq \left( \bigvee_{i \in I} \phi_i\right)(x)$ for all $x \in L$.
We will assume for the rest of the proof that we are choosing $x, y \in L$.
\\
C1: $x \leq \phi_i (x) \leq \mu(x)$ for any $i \in I$. Thus, $\mu$ has the extensive property for closure operators.
\\
C3: We have already shown that $\mu$ is finitary and from Lemma~\ref{finite_c3}, Property C3 holds for $\mu$.
\\
C2: Having property C1 for $\mu$ we know $\mu(x) \leq \mu(\mu(x))$. Let $a \leq_c \mu(\mu(x))$, then by Lemma~\ref{a_lee_mu}
there exists $\displaystyle l \leq_c \mu(x)$ such that $\displaystyle a \leq \mu(l)$. Now $l$ is compact, so by Lemma~\ref{finitary_arb_mu_L}, $l \leq \phi_{j_1} \phi_{j_2} ...\phi_{j_k}(x)$ for some $j \in I^k$ and $k \geq 0$. This in turn gives us
$$
a \leq \mu(l)\leq \mu \left( \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x) \right) = \mu(x).
$$
Since for any compact element $a \leq_c \mu(\mu(x))$ we have $a \leq \mu(x)$, it follows that $\mu(\mu(x)) \leq \mu(x)$. Thus Property C2 holds for $\mu$.
Having all three properties of a closure operator, $\mu \in a.c.o.(L)$. We have for each $i \in I$, $\phi_i \leq \mu$, and $\mu (x) \leq \left( \bigvee_{i \in I} \phi_i \right)(x)$, making $ \mu = \left( \bigvee_{i \in I} \phi_i \right) $.
\end{proof}
\begin{proposition}
Let $L$ be an algebraic lattice. Then $a.c.o.(L)$ is a sublattice of $c.o.(L)$.
\end{proposition}
We have now seen that the set $a.c.o.(L)$ is a lattice, and from Proposition~\ref{join_arb_alg_L} that it has arbitrary joins. We shall see from the following proposition that although it is not a complete sublattice of $c.o.(L)$, $a.c.o.(L)$ is a complete lattice.
\begin{proposition} \label{def_arb_meet_L}
Let $L$ be an algebraic lattice. For a family $(\phi_i | i \in I)$, where $\phi _i \in a.c.o.(L)$ and $x \in L$,
$$
\left( \bigwedge_{i\in I}^a \phi _i\right) (x) = \bigvee _{k \leq_c x}^L \left( \bigwedge _{i \in I} \phi _i (k) \right).
$$
\end{proposition}
\begin{proof}
For ease of notation let $\tau (x) = \bigvee _{k \leq_c x} \left( \bigwedge _{i \in I} \phi _i (k) \right)$ for all $x \in L$. We will show $\tau$ is an algebraic closure operator, that is, that Properties F, C1, C2, and C3 hold for $\tau$. \\
F: Consider what $\tau(k)$ would be if $k \in C(L)$:
$$
\tau(k) = \bigvee _{l \leq_c k}^L \left( \bigwedge_{i \in I} \phi _i (l)\right) = \bigwedge_{i \in I} \phi _i (k) \hspace{.15in} \textnormal{ because } k \leq_c k.
$$
Let $x \in L$ then $\tau(x) = \bigvee _{k \leq_c x} \left( \bigwedge_{i \in I} \phi _i (k) \right) = \bigvee _{k \leq_c x} \left( \tau(k) \right),$ thus $\tau$ is finitary. \\
C1: Let $x \in L$. Since $\tau$ is finitary and $L$ is algebraic, for $k \leq_c x$ we have\\
$ k\leq \bigwedge_{i\in I} \phi_i (k) = \tau(k) \leq \tau(x)$. Thus we have $x = \bigvee_{k \leq_c x} k \leq \tau(x)$.\\
C3: Since we already know $\tau$ is finitary we get Property C3 by Lemma~\ref{finite_c3}.
\\
C2: Let $k \leq_c \tau(\tau(x))$. By Lemma~\ref{a_lee_mu} there is an $a_k \leq_c \tau(x)$ such that $\displaystyle k \leq \tau(a_k)$.
Similarly, there is a $d_k \leq_c x $ such that $ a_k \leq \tau(d_k) = \bigwedge_{i\in I} \phi_i(d_k)$.
Which is to say that $a_k \leq \phi_i(d_k)$ for all $i \in I$.
We once again employ Property~C3 on the $\phi_i$'s and also Property~C2 to find $\phi_i(a_k) \leq \phi_i(\phi_i(d_k)) = \phi_i(d_k)$ for all $i\in I$. Hence, for any $k \leq_c \tau(\tau(x))$,
$$
k \leq \tau(a_k) = \bigwedge_{i\in I} \phi _i (a_k) \leq \bigwedge_{i\in I} \phi_i (d_k) = \tau(d_k) \leq \tau (x).
$$
Thus, $\tau \tau (x) \leq \tau(x)$ making $\tau\tau(x) = \tau(x)$.
Now to show that $\tau$ is the greatest lower bound for the $\phi_i$'s, let $\rho \in a.c.o.(L)$ such that $\rho$ is a lower bound for each of the $\phi_i$'s.
Then $\rho (x) \leq \phi _i(x)$ for all $i\in I$. This would also make $ \rho ( a) \leq \phi _i(a)$ for all $i \in I$ and $a \leq_c x$, and hence $ \rho(a) \leq \bigwedge_{i\in I} \phi_i(a) = \tau (a)$. We know that $\rho$ is finitary, hence
$$
\rho(x)= \bigvee_{a \leq_c x} \rho(a) \leq \bigvee_{a \leq_c x} \tau (a) = \tau(x).
$$
This makes $\tau$ the greatest lower bound.
\end{proof}
\begin{theorem}
Let $L$ be an algebraic lattice. Then $a.c.o.(L)$ is a complete lattice. For a family $(\phi_i| i\in I)$ with $\phi_i \in a.c.o.(L)$ and $x \in L$,
$$
\left( \bigwedge_{i\in I} \phi _i\right) (x) = \bigvee _{k \leq_c x} \left( \bigwedge _{i \in I} \phi _i (k) \right) \hspace{.05in} \textnormal{and} \hspace{.05in} \left( \bigvee_{i \in I}\phi_i \right) (x) = \bigvee_{j \in I^k, k \geq 0} \phi_{j_1} \phi_{j_2} ...\phi_{j_k} (x).
$$
\end{theorem}
\section{Properties of the lattice of algebraic closure operators}\label{proposition_acoL}
In this section we will look at properties of the lattice $a.c.o.(L)$, find a set of elements that are compact, and show that the elements of this set build the whole lattice. To do this we will make use of the following from Birkhoff \cite[Ch. VIII Sec. 4 $\&$ 5] {latticeB}.
For a directed set $D$, that is, a poset such that any two elements have an upper bound in the set, and a compact element $k$ in an algebraic lattice, $k \leq \bigvee D$ if and only if $k \leq d$ for some $d \in D$.
A complete lattice $L$ is said to be meet continuous when for any directed set $D \subseteq L$, and for any $a \in L$ we have the following, $a \wedge \left( \bigvee_{d\in D} d \right) = \bigvee_{d \in D} (a \wedge d)$. Any complete algebraic lattice is meet-continuous.
An element $a$ in a lattice $L$ is join-inaccessible when for a directed set $D \subseteq L$, $a = \bigvee_{d \in D} d$ implies $a = d$ for some $d \in D$. In a complete meet-continuous lattice $L$, an element $c$ is compact if and only if $c$ is join-inaccessible.
We would like to apply this last piece about the relationship of compact elements and join-inaccessible elements to help us distinguish some compact elements of $a.c.o.(L)$. To do this we will first prove $a.c.o.(L)$ is a meet continuous lattice, thus we will begin by looking at how directed sets behave in $a.c.o.(L)$.
\begin{lemma}\label{dir_join_L}
Let $L$ be a complete algebraic lattice, let the family $( \sigma_\psi | \psi \in \Psi)$ be a directed set in
$a.c.o.(L)$, then $ \left( \bigvee_{\psi \in \Psi} \sigma_\psi \right)(x) = \bigvee_{\psi \in \Psi} \left( \sigma_\psi (x) \right)$.
\end{lemma}
\begin{proof}
Consider $\sigma_n \sigma_{n-1}...\sigma_1$ where $\sigma_i \in ( \sigma_\psi| \psi \in \Psi)$ for all $1 \leq i \leq n$, and let $\sigma_{d}$ be an upper bound of $\sigma_1, \dots, \sigma_n$ in the directed set.
This makes
$\sigma_n \sigma_{n-1}...\sigma_1(x)~\leq~(\sigma_{d})^n(x)~=~\sigma_{d}(x)$
for all $x~\in~L$. Thus for any finite composite from the directed set we find
$\sigma_n \sigma_{n-1}...\sigma_1(x)~\leq~\bigvee_{\psi \in \Psi}\left(\sigma_\psi (x)\right)$.
This puts
$$
\left( \bigvee_{\psi \in \Psi } \sigma_\psi \right) (x) =\bigvee_{ j \in \Psi^k, k\geq 1} \left( \sigma_{j_1} ... \sigma_{j_k} (x) \right) \leq \bigvee_{\psi \in \Psi} \left( \sigma_\psi (x) \right).
$$
Thus we have $\bigvee_{\psi \in \Psi} \left( \sigma_\psi (x) \right) = \left( \bigvee_{\psi \in \Psi} \sigma_\psi \right) (x) \hspace{.15in} \forall x \in L$.
\end{proof}
\begin{lemma}\label{meet_cont_L}
Let $L$ be a complete algebraic lattice. Then $a.c.o.(L)$ is a meet continuous lattice.
\end{lemma}
\begin{proof}
Let $ ( \sigma_\psi| \psi \in \Psi)$ be directed set in $a.c.o.(L)$, $\phi \in a.c.o.(L)$, and $x \in L$.
Now we consider the following:
$$
\left( \phi \wedge \left(\bigvee_{\psi \in \Psi} \sigma_\psi \right) \right) (x) = \phi (x) \wedge \left(\bigvee_{\psi\in \Psi} \sigma_\psi \right)(x)= \phi (x) \wedge \left( \bigvee_{\psi\in \Psi} \sigma_\psi (x) \right).
$$
Let $\textit{X} = \left\{ \sigma_\psi(x) | \psi \in \Psi \right\} \subseteq L$ and consider $\sigma_1 (x), \sigma_2(x) \in \textit{X}$; this would put $\sigma_1, \sigma_2$ in $ (\sigma_\psi | \psi \in \Psi )$, the directed set.
So there must be an element $\sigma_{3}\in (\sigma_\psi | \psi \in \Psi )$ such that $\sigma_1, \sigma_2 \leq \sigma_{3}$, but this would make $\sigma_1(x), \sigma_2(x) \leq \sigma_{3}(x)$. Thus $\textit{X}$ a directed set in $L$.
$$
\phi (x) \wedge \left( \bigvee_{\psi\in \Psi} \sigma_\psi (x) \right) = \bigvee_{\psi \in \Psi} \left( \phi(x) \wedge \sigma_\psi (x) \right) = \bigvee_{\psi \in \Psi} \left( (\phi \wedge \sigma_\psi) (x) \right).
$$
Let $\Phi =\left\{\phi \wedge \sigma_\psi | \psi \in \Psi \right\}$. Let $\phi \wedge \sigma_1, \phi \wedge \sigma_2 \in \Phi$; then there is a $\sigma_{3} \in (\sigma_\psi | \psi \in \Psi )$ such that
$ \sigma_1, \sigma_2 \leq \sigma_3$. This would then make
$\phi \wedge \sigma_1, \phi \wedge \sigma_2 \leq \phi \wedge \sigma_{3}$, which is in $\Phi$. We then have that $\Phi$ is directed.
Thus, $\left( \bigvee_{\psi \in \Psi} (\phi \wedge \sigma_\psi)\right) (x) =\bigvee_{\psi \in \Psi} \left( (\phi \wedge \sigma_\psi) (x) \right)$, and
$$
\left( \phi \wedge \left(\bigvee_{\psi \in \Psi}\sigma_\psi \right) \right) (x) = \left( \bigvee_{\psi \in \Psi} (\phi \wedge \sigma_\psi) \right) (x).
$$
This makes $a.c.o.(L)$ a meet-continuous lattice.
\end{proof}
We will show $a.c.o.(L)$ is algebraic by defining a set of compact elements in $a.c.o.(L)$ that will be the building blocks for our lattice. The proof of this first lemma will be left to the reader.
\begin{lemma}\label{phi_closure_L}
Let $a, b, v \in L$. Then for $x \in L$ let
\begin{displaymath}\label{phivab_L}
\phi^v_{a,b}(x) := \left\{ \begin{array}{cl} \nonumber
x \vee v \vee b , & \textrm{if } a \, \leq x \lor v \\
x \vee v, & otherwise. \\
\end{array}\right.
\end{displaymath}
Then $\phi^v_{a,b}$ is a closure operator for $L$.
\end{lemma}
\begin{lemma}\label{phi_alg_L}
Let $\phi^v_{a,b}$ be defined as in Lemma~\ref{phi_closure_L}. If $a \in C(L)$ then $\phi^v_{a,b}$ is algebraic.
\end{lemma}
\begin{proof}
Let $x \in L$. If $ a \not\leq (x \vee v) $ then $a \not\leq (k \vee v)$ for all $k \leq_c x $. Thus
$$
\phi^v_{a,b}(x) = x \vee v = \left(\bigvee_{k \leq_c x} k\right) \vee v =\bigvee_{k \leq_c x } ( k \vee v ) = \bigvee_{k \leq_c x} \phi^v_{a,b}(k).
$$
If $ a \leq (x \vee v) = \left(\bigvee_{k \leq_c x} k\right) \vee v $ then because $a$ is compact $ a \leq \left(\bigvee_{k \in F} k \right) \vee v$ where $F$ is a finite subset of the compact elements less than $x$.
Let $k^* = \bigvee_{k \in F} k$. Then $k^*$ is a compact element with $k^* \leq_c x$ and $ a \leq k^* \vee v$, so $ \phi^v_{a,b} (k^*) = k^* \vee v \vee b $. This makes
$$
\phi^v_{a,b} (x) = x \vee v \vee b = \left(\bigvee_{k \leq_c x} k\right) \vee v \vee b = (k^* \vee v \vee b) \vee \left( \bigvee_{k \leq_c x} k \vee v \right) = \bigvee_{k \leq_c x} \phi^v_{a,b} (k).
$$
Thus, $\phi^v_{a,b}$ is algebraic.
\end{proof}
\begin{lemma}\label{phi_compact_L}
Let $\phi^v_{a,b}$ be defined as in Lemma~\ref{phi_closure_L} and let $v, a, b \in C(L)$. Then $\phi^v_{a,b}$ is compact in $a.c.o.(L)$.
\end{lemma}
\begin{proof}
To show that $\phi^v_{a,b}$ is a compact element, we will show that it is join-inaccessible. Let the family $(\sigma_\psi | \psi\in \Psi)$ be a directed set in $a.c.o.(L)$ and where $\phi^v_{a,b}(x)~=~\left(\bigvee_{\psi \in \Psi}\sigma_\psi \right)(x)$ for all $x\in L$. Since $\phi^v_{a,b}$ is finitary we need only consider the compact elements. Let $c \in C(L)$.\\
Case 1 $(a \leq c \vee v)$: First we will look at when $c = a$, $ \phi^v_{a,b} (a) =\left(\bigvee_{\psi \in \Psi } \sigma_\psi \right) (a)= \bigvee_{\psi \in \Psi} (\sigma_\psi (a))$. Because $\phi^v_{a,b}(a) = a\vee v \vee b$ is compact in $L$ we can reduce the join in $L$ to get $ \phi^v_{a,b} (a) = \sigma_a (a)$ for some $\sigma_a \in (\sigma_\psi | \psi \in \Psi )$. Now looking at $c \geq a$,
$$
\sigma_a (c) \leq \left( \bigvee_{\psi \in \Psi} \sigma_\psi \right) (c) = \phi^v_{a,b}(c) = c \vee v \vee b = c \vee a \vee v \vee b = \displaystyle c \vee \sigma_a (a) \leq \sigma_a (c).
$$
Thus for $c \geq a$, $\phi^v_{a,b} (c) = \sigma_a (c)$. \\
Case 2 $(c \vee v \not\geq a)$: Similarly to case one we will look at when $c=0$, the least element of $L$.
$$
0 \vee v = \phi^v_{a,b}(0)=\left( \bigvee_{\psi \in \Psi} \sigma_\psi \right) (0) = \bigvee_{\psi \in \Psi} (\sigma_\psi (0)) = \sigma_0 (0)
$$
for some $\sigma_0 \in (\sigma_\psi | \psi \in \Psi )$. Then for $a \not\leq c\vee v$
$$
\sigma_0 (c) \leq \phi^v_{a,b}(c)= c \vee v = c \vee \phi^v_{a,b} (0) = c \vee \sigma_0 (0) \leq \sigma_0(c).
$$
This makes $ \phi^v_{a,b}(c) = \sigma_0 (c)$ for $c \vee v \not\geq a$.
For any $c \in C(L)$, $\displaystyle \phi^v_{a,b}(c) = \left( \bigvee_{\psi \in \Psi} \sigma_\psi \right) (c) = \left(\sigma_0 \vee \sigma_a \right) (c) = \sigma_b (c)$ for some $\sigma_b \in (\sigma_\psi | \psi \in \Psi )$, since the set is directed. This would make $\phi^v_{a,b}$ join-inaccessible in $a.c.o.(L)$, which is to say $\phi^v_{a,b} \in C(a.c.o.(L))$.
\end{proof}
\begin{lemma}\label{join_compact_L}
Let $\phi^v_{a,b}$ be defined as in Lemma~\ref{phi_closure_L}. Let $a,b \in C(L)$; here $v$ does not have to be compact. Then $\phi^v_{a,b}= \bigvee_{k \leq_c v} \phi^k_{a,b}$.
\end{lemma}
\begin{proof}
First we need to make an observation: if $k \leq_c v$ then $\phi^k_{a,b} \leq \phi^v_{a,b}$, so that $ \left(\bigvee_{k \leq_c v} \phi^k_{a,b}\right)(x) \leq \phi^v_{a,b}(x)$ for all $ x \in L $.
Since $\phi^v_{a,b}$ and the $\phi^k_{a,b}$ are algebraic, showing $\phi^v_{a,b}(l) \leq \left(\bigvee_{k \leq_c v} \phi^k_{a,b}\right)(l)$ for all $l \in C(L)$ will suffice.
Let $l \in C(L)$ such that $ a \leq l \vee v = l \vee \left( \bigvee_{k \leq_c v} k \right)$.
Since $a$ is compact this join can be reduced to a finite join. $ a \leq l \vee \left(\bigvee_{k \in F_a} k \right)$.
Then for $k_a = \bigvee_{k \in F_a} k$, $k_a$ is compact and we have
$$
\phi^v_{a,b}(l) = l \vee v \vee b = l \vee k_a \vee b \vee \left(\bigvee_{k \leq_c v} k \right) \le
\phi_{a,b}^{k_a} (l) \vee \left(\bigvee_{k \leq_c v} \phi^k_{a,b}(l) \right) \leq \bigvee_{k \leq_c v} \phi^k_{a,b}(l).
$$
For $a \not\leq l \vee v$,
$$
\phi^v_{a,b}(l) = l \vee v = l \vee \left( \bigvee_{k \leq_c v} k \right) =\bigvee_{k \leq_c v}\left(\phi^k_{a,b}(l)\right)= \left( \bigvee_{k \leq_c v} \phi^k_{a,b} \right) (l).
$$
We thus have in either case $\phi^v_{a,b}(l) \leq \bigvee_{k \leq_c v}\phi^k_{a,b}(l)$ for all $l \in C(L)$; hence $\phi^v_{a,b} = \bigvee_{k \leq_c v} \phi^k_{a,b}$.
\end{proof}
\begin{lemma}\label{all_join_compact_L}
Let $\sigma \in a.c.o.(L)$. Then
$$
\sigma = \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b} \ \ \textnormal{ where } v=\sigma(0).
$$
\end{lemma}
\begin{proof}
First of all, to ease between joins in $L$ and joins in $a.c.o.(L)$ we need to note that $C(L)$ is a directed set, and for any $x \in L$,
$\displaystyle \left\{l \in C(L) | l \leq_c x \right\}$ is also directed. Thus by Lemma~\ref{dir_join_L} we have
$$
\left( \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b} \right) (x) = \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} ( \phi^v_{a,b} (x) ).
$$
Let $x \in L$, $a \in C(L)$, and $b \leq_c \sigma(a)$. If $\phi^v_{a,b}(x) = x \vee v$, then since $ x \leq \sigma(x)$ and $ v = \sigma(0) \leq \sigma(x)$
we have $\displaystyle \phi^v_{a,b}(x)= x \vee v = x \vee \sigma(0) \leq \sigma(x)$. If $a \leq x \vee v$, which is to say $\phi^v_{a,b}(x) = x \vee v \vee b$, then we have $x \vee v \leq \sigma(x)$ and
$a, b \leq \sigma(a)$. These inequalities give us
$a \leq \sigma(x \vee v) \leq \sigma(\sigma(x)) = \sigma(x)$ which, by closure operator properties,
yields $\sigma(a) \leq \sigma(x)$. We then have $a \vee b \leq \sigma(a) \leq \sigma (x)$ so $a \vee b \vee x \vee v =x \vee v \vee b \leq \sigma (x)$. Thus, if
$\phi^v_{a,b}(x) = x \vee v$ or $ \phi^v_{a,b}(x) = x \vee v \vee b$ then $\phi^v_{a,b}(x) \leq \sigma (x)$. Since $a$ was chosen arbitrarily this would make
$$
\left( \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b}\right) (x) = \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} ( \phi^v_{a,b} (x) ) \leq \sigma(x).
$$
For the other inequality we will use the fact that these closure operators are algebraic and look at compact elements of $L$. Let $ l \in C(L)$ and $k \leq_c \sigma(l)$. Then
$$
k \leq \phi_{l,k}^v (l) \leq \bigvee_{b \leq_c \sigma(l)} \phi_{l,b}^v (x) \leq \left( \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b} \right) (l).
$$
This is true for all $k \leq_c \sigma(l)$, thus
$$
\sigma(l) = \bigvee_{k \leq_c \sigma(l)} k \leq \left( \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b} \right) (l).
$$
We have now shown both inequalities and thus $\displaystyle \sigma = \bigvee_{a \in C(L)} \bigvee_{b \leq_c \sigma(a)} \phi^{v}_{a,b}$.
\end{proof}
\begin{theorem}
The lattice $a.c.o.(L)$ is algebraic.
\end{theorem}
\begin{proof}
From Lemmas \ref{join_compact_L} and \ref{all_join_compact_L} we see that each $\sigma \in a.c.o.(L)$ can be written as a join of elements that are themselves joins of compact elements (Lemma~\ref{phi_compact_L}). Thus $a.c.o.(L)$ is algebraic.
\end{proof}
The characterization of galaxies according to their morphologies dates back to the very earliest studies of the extragalactic universe. Hubble's classification scheme \citep{Hubble1926EXTRA-GALACTICNEBULAE} and the revisions that ensued over the subsequent decades \citep[e.g.][]{DeVaucouleurs1959ClassificationGalaxies, Sersic1963InfluenceGalaxy, 1998gmc..book.....V} all recognized morphology as a fundamental property of galactic populations. From these early studies, it was already clear that galaxies are broadly organized into two populations, those characterized by spheroidal shapes, and those with a prominent disk component. Additional features, such as bars, rings and spiral arm number, have further added to the richness of morphological classifications over the years \citep[e.g.][]{Athanassoula2009RingsMorphology, Nair2010ASURVEY, Athanassoula2013BarProperties,Willett2013GalaxySurvey}.
The importance of morphology in galaxy evolution is supported by an abundance of observational and theoretical evidence. For example, a galaxy's internal structure is observed to strongly correlate with its global star formation rate \citep[e.g.][]{Wuyts2011GALAXY0.1, Bluck2014BulgeSurvey, Morselli2017BulgesActivity, Bluck2019WhatStructure, Cano-Diaz2019SDSS-IVSequences, Cook2020XGASS:Sequence, Sanchez2020SpatiallyGalaxies, Leslie2020TheZ5}, as well as star formation efficiency \citep{Saintonge2012THEGALAXIES,Leroy2013MOLECULARGALAXIES,Davis2014TheGalaxies,Colombo2018TheSequence, Dey2019OverviewSurveys, Ellison2021TheGalaxies}. The role of morphology in regulating star formation is also reproduced in simulations \citep[e.g.][]{Martig2009MORPHOLOGICALRED,Gensior2020HeartSpheroids}. Other properties found to depend on morphology include gas fraction \citep[e.g.][]{Saintonge2017XCOLDStudies} and metallicity \citep[e.g.][]{Ellison2008CluesSize}. Measuring and classifying galaxy morphologies are therefore a fundamental component of any modern galaxy survey.
Whereas early morphological measurements relied on expert visual classifications, often done by individual professional astronomers, this approach has become untenable in the era of large galaxy surveys. The deluge of galaxy images has therefore been tackled in two principal ways. The first is through crowd-sourcing, whereby the power of the human brain continues to be tapped, through the contributions of citizen scientists \citep{Darg2010GalaxyMorphologies, Lintott2011GalaxyGalaxies, Casteels2014GalaxyInformation, Simmons2017GalaxyCANDELS, Willett2017GalaxyImaging}. Recently, artificial intelligence is replacing humans and machine learning algorithms are increasingly being applied to the challenge of large imaging datasets, either for general morphological classification \citep{Huertas-Company2015ALEARNING, DominguezSanchez2019TransferAnother, Cheng2020OptimizingImaging, 2021arXiv210208414W} or the identification of particular galaxy types/features \citep{Bottrell2019DeepRealism, Pearson2019IdentifyingLearning, 2020ApJ...895..115F,Bickley2021NoTitle}. An alternative automated approach, which has been in use for several decades, is to compute some metric of the galaxy's light distribution, a technique which is readily applicable to large datasets. Such quantitative morphology metrics (usually) yield a continuous distribution, as opposed to placing galaxies in distinct classes.
There are numerous quantitative morphology metrics that have been developed over the years. One of the oldest methods is the definition of a characteristic radial profile of light in a given waveband \citep{Sersic1963InfluenceGalaxy}. Modern fitting applications of the Sérsic profile often decompose the galaxy into two (or more) components, in recognition that bulges and disks, which commonly co-exist in a given galaxy, are characterized by distinct indices \citep[e.g.][]{Simard2011ASurvey, Lackner2012AstrophysicallyGalaxies, Mendel2014ASURVEY, 2019MNRAS.486..390B,Meert2015ASystematics}. Other metrics have been developed for specific applications, such as the identification of recent merger activity \citep[e.g.][]{Lotz2008GalaxyMergers,Pawlik2016ShapeStages}. One of the most commonly used quantitative morphology systems is the `CAS' approach \citep{Conselice2003THEHISTORIES}, which computes concentration, asymmetry and smoothness \citep{Conselice2000TheGalaxies,Conselice2003A3,Lotz2004AClassification}. The non-parametric approach of CAS is particularly useful for identifying galaxy mergers, whose disturbed structures often do not conform to pre-defined parametric descriptions. Together, CAS and other non-parametric indices capture information that can be used to broadly distinguish early and late type galaxies, as well as possible mergers \citep{Conselice2003A3,Lotz2004AClassification,Lotz2008GalaxyMergers, 2009MNRAS.394.1956C}.
All of these quantitative morphology metrics and classifications were initially envisioned to work on flux maps originating from galactic starlight. Although some work has extended the application of traditional non-parametric indices (such as asymmetry) to the distribution of galactic gas mass \citep[e.g.][]{Holwerda2011QuantifiedGalaxies,Giese2016Non-parametricLopsidedness,Reynolds2020HGalaxies}, and stellar mass from Hubble photometry \citep{Lanyon-Foster2012TheZ1}, there has been little application to date on maps of other galactic properties, particularly the variety of properties provided by resolved spectroscopy. The widespread applicability of non-parametric morphology indicators is particularly pertinent in the era of large integral field unit (IFU) surveys wherein maps of a myriad of properties are available, i.e. the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) Survey \citep{Bundy2015OverviewObservatory,Law2015ObservingSurvey}, the Sydney-Australian-Astronomical-Observatory Multi-object Integral-Field Spectrograph (SAMI) Survey \citep{Allen2015TheRelease}, and the Calar Alto Legacy Integral Field Area (CALIFA) Survey \citep{Sanchez2012CALIFASurvey}. Indeed, the technical challenge of effectively capturing the complexity of the spatial properties for large numbers of galaxies often leads to the use of averaged profiles, which loses much of the valuable information encoded in the IFU data \citep[e.g.][]{Ellison2018StarMaNGA,Thorp2019SpatiallyMaNGA}. There is a clear need to explore the application of traditional techniques, and potentially develop new ones, that are capable of capturing in an effective, statistical way, the high order structure in IFU data product maps \citep[e.g.][]{Bloom2017TheFormation}.
Most importantly, the biases in non-parametric morphology measurements created by different resolutions and signal-to-noise ratio values, have not been investigated for IFU data product maps. These biases have been investigated in detail for photometry. \cite{Conselice2000TheGalaxies} demonstrated that adding simulated random noise to a galaxy image will systematically increase the measured asymmetry, and that the change in asymmetry is inversely proportional to the signal-to-noise ratio. On the same galaxy sample they also investigate the effects of degraded resolution on asymmetry, by simulating their real galaxy images as if they are observed at a larger distance, and find that asymmetry artificially decreases with distance. Degraded resolution can also alter the concentration measurement, as \cite{Bershady2000StructuralSample} demonstrated by artificially degrading their spatial sampling. The most commonly used concentration and asymmetry measurements have been constructed to mitigate these effects, and reasonable cuts in signal-to-noise and resolution can be made to make accurate asymmetry and concentration measurements within acceptable error \citep{Bershady2000StructuralSample, Conselice2000TheGalaxies, Conselice2003A3}. However, the same investigation has not been completed to see how these measurements and the necessary signal-to-noise/resolution limitations would work beyond photometric data, such as the IFU data products.
In the following work we use cosmological hydrodynamical simulations to explore the applicability of the most commonly used non-parametric indices (asymmetry and concentration) on the kind of products available in IFU surveys. We focus our experiment on simulated maps of stellar mass, although the lessons learned from this test case can be generalized to many IFU data products.
The paper is organized as follows. In Section \ref{sec:methods} we describe the simulated galaxy data used in our analysis (Sec. \ref{sec:TNG}), as well as reviewing the definitions of concentration and asymmetry used in this paper (Sec. \ref{sec:CAS}). In Section \ref{sec:analysis} we investigate how well the intrinsic concentration and asymmetry indices are recovered from the simulated galaxies once noise and point spread function (PSF) are included. We discuss the implications of our results in Section \ref{sec:discussion} and summarize in Section \ref{sec:summary}.
\section{Data \& Methods}
\label{sec:methods}
\subsection{Simulated galaxy images}
\label{sec:TNG}
Our goal is to assess the robustness of non-parametric morphology indicators on data products other than the standard application to starlight (e.g., stellar mass maps, SFR maps, metallicity maps). Although there is no mathematical reason that such indicators could not be applied to any 2-dimensional distribution, the limitations of realistic data may impose practical impediments. For this reason, it has become a common approach to add observational realism to simulated data, in order to fairly compare derived properties \citep{Bottrell2017GalaxiesMass,Bottrell2017GalaxiesBiases.,Huertas-Company2019TheLearning,Bottrell2019DeepRealism,Zanisi2020TheView,2020ApJ...895..115F}. Our approach is therefore to use simulations of galaxies, for which we can measure `true' values of a given morphology metric, before adding noise (e.g. in the form of measurement uncertainty) and the effects of degrading resolution, and comparing to the idealized measurements.
Specifically, in this work we make use of the IllustrisTNG (hereafter TNG) simulation \citep{Weinberger2017SimulatingFeedback, Pillepich2018SimulatingModel}, as well as its predecessor, Illustris \citep{2014Natur.509..177V,Genel2014IntroducingTime,Sijacki2015TheTime}, in particular we use TNG100-1 and Illustris-1 which have comparable volumes and resolutions. The motivation for using two different simulations is to assess whether our results are sensitive to the details of the physics model used, or whether they are generalizable to different simulations or observational datasets. The use of Illustris and TNG is ideal in this regard, since they both represent a suite of magneto-hydrodynamic (just hydrodynamic, in the case of Illustris) cosmological simulations \citep{Marinacci2018FirstFields,Naiman2018FirstEuropium,Nelson2018FirstBimodality,Pillepich2018FirstGalaxies,Springel2018FirstClustering,Nelson2019TheRelease} run with the moving-mesh code AREPO \citep{Springel2010EMesh,Pakmor2011MagnetohydrodynamicsGrid,Pakmor2016ImprovingAREPO}.
Both simulations have runs with similar volumes and resolutions, but which employ different physical models, such as the implementation of AGN feedback \citep{Weinberger2017SimulatingFeedback}. The AGN feedback model used in TNG allows for more efficient quenching of high mass galaxies, leading to dissipationless processes that alter the morphology of quenched galaxies \citep{Rodriguez-Gomez2017TheMorphology}. Additionally, the reparameterization of galactic winds in TNG \citep{Pillepich2018SimulatingModel} results in more accurate sizes of low-mass galaxies than in Illustris. These two changes to the physical model have resulted in more realistic morphologies of TNG galaxies, as opposed to Illustris, when compared to observational data sets (e.g. \citealt{Bottrell2017GalaxiesBiases.} compared with \citealt{Rodriguez-Gomez2019TheObservations}). Thus, for the work presented here, we use TNG100-1 as our fiducial simulation, the highest resolution run of the $(110.7 \textrm{Mpc})^3$ volume, with a baryonic mass resolution of order $\sim10^6 \textrm{M}_{\odot}$. We then use Illustris-1 (hereafter Illustris), which has similar volume and resolution to TNG100-1, as a comparison sample to test the universality of the results determined from TNG.
One thousand galaxies are randomly selected each from Illustris and TNG, all of which are drawn from the final $z=0$ snapshots and have stellar masses $\textrm{M}_{\star}/\textrm{M}_{\odot}\geq10^9$. Any galaxies with $\textrm{M}_{\star}<10^9\textrm{M}_{\odot}$ are not reliably resolved. Given our random selection the $\textrm{M}_{\star}$ distribution is biased towards low mass galaxies, but as we will demonstrate in Section \ref{sec:analysis} we recover a realistic range of asymmetry and concentration values \citep{Rodriguez-Gomez2019TheObservations}. Maps of the stellar mass distribution are generated from the simulation particle data, with a gravitational softening length for stars of 0.7 kpc at z=0 \citep{Nelson2018FirstBimodality}. This idealized stellar mass map allows us to measure non-parametric morphologies without having to consider sensitivities to the mass-to-light ratio that would lead to uncertainties for an observed galaxy. Each simulated galaxy is projected along a random axis, and we select a FOV ten times the stellar half mass radius of the galaxy ($R_{\rm half}$ from here forward), with 512 pixels on each side to achieve high spatial resolution for every galaxy (compared to the resolution degradation we will explore in Section \ref{sec:psf}).
\subsubsection{Adding Observational Effects}
Having generated the idealized stellar mass maps, observational realism is added in two steps. First, we consider the effect of degrading the resolution of the stellar mass map, which will reduce the contrast of asymmetry structures and alter the concentration measurement \citep{Conselice2000TheGalaxies,Bershady2000StructuralSample}. We achieve this by convolving the stellar mass map with a Gaussian PSF that has a full width at half maximum (FWHM) ranging between 0.002"-7" (with the goal of ranging from a negligible PSF to some of the largest PSFs in current observational surveys). We apply an arcsecond PSF value by assuming each galaxy is at the average distance of a MaNGA survey galaxy, $z=0.037$ (the applicable distance for comparing this analysis to IFU data), and assuming a cosmology in which $\mathrm{H}_0=$ 70 km $\mathrm{s}^{-1}$ $\mathrm{Mpc}^{-1}$, $\Omega_{\mathrm{m}}=$0.3, and $\Omega_{\Lambda}=$0.7.
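As a minimal sketch of this step (assuming a square map with a known pixel scale; the function and variable names here are illustrative, and \texttt{astropy} supplies the Gaussian kernel and FFT-based convolution):
\begin{verbatim}
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve_fft

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def degrade_resolution(mass_map, fwhm_arcsec, pix_arcsec):
    # Convert the PSF FWHM (arcsec) to a Gaussian sigma in pixels.
    sigma_pix = fwhm_arcsec * FWHM_TO_SIGMA / pix_arcsec
    kernel = Gaussian2DKernel(x_stddev=sigma_pix)
    return convolve_fft(mass_map, kernel)
\end{verbatim}
The resolution of the resulting image, in the sense defined below, is then simply $R_{\rm half}$ in arcseconds divided by the chosen FWHM.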
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/Example_220582.png}
\centering
\caption{The steps through which we apply realistic noise and resolution to a simulated galaxy. Top-left: The stellar mass surface density of the simulated galaxy, from which intrinsic asymmetry and concentration is measured. Top-middle: A sample PSF applied to this galaxy, though values can range from 0.002"-7". Changing the PSF allows us to change the resolution of our simulated galaxies at a fixed distance. We chose to highlight this galaxy with a small angular size compared to the applied PSF, resulting in a small resolution value (smaller than has been examined in previous works), to highlight how significantly degrading the resolution changes the asymmetry and concentration. Top-right: Random noise generated from a Gaussian distribution, to be applied to the galaxy image. The desired (input) $S/N$ is used to generate the width of the Gaussian distribution so that the measured $S/N$ is of a similar (though not exactly the same) value. Bottom-left: The stellar mass surface density image of the galaxy with added noise. White pixels are where $\Sigma_{\star}$ is so small that adding negative noise results in a non-measurement. Bottom-middle: The image of the galaxy convolved with the PSF from the top-middle panel. Bottom-right: The image of the galaxy convolved with the PSF and with noise added, providing an image of the galaxy more similar to what would be seen with observations.}
\label{fig:Ex_Noise_PSF}
\end{figure*}
The PSF is only a component of the more fundamental resolution of the image, which itself is the relationship between the convolving PSF size and the apparent size of the galaxy. The set physical size of the galaxy, along with our choice of a constant distance, means that the apparent angular size of the galaxy is fixed and one need only vary the PSF to vary the resolution. Resolution ("Res") is defined in this analysis to be the ratio of the stellar half mass radius ($R_{\mathrm{half}}$) in arcseconds and the PSF FWHM in arcseconds (Res=$R_{\mathrm{half}}$/FWHM). By using $R_{\mathrm{half}}$ there is no need to fit the galaxy structure to measure radius, though we note that any approximation of galaxy size with respect to PSF would suffice. Figure \ref{fig:Ex_Noise_PSF} summarizes the different levels of realism applied to an example TNG galaxy stellar mass map (top left panel). The top middle panel shows an example PSF similar to that in the MaNGA survey (2.48"), but given the small angular size of the galaxy ($R_{\mathrm{half}}$=1.69") the resolution of the image is actually very low.
Second, noise is added to the mass map. As we explain in the next sub-section, the traditional asymmetry index involves the subtraction of a background sky term, in order to account for observational effects that contribute to asymmetry, but which is not associated with the galaxy itself. This is akin to subtracting the sky from a photometric measurement. As we will show in the next sub-section, the subtraction of the background sky term has a significant effect on the traditional asymmetry index, due to its mathematical definition. Although we are working with stellar mass maps, rather than light, noise would still be present in an observed stellar mass map in the form of measurement and post-processing uncertainties. We therefore create a random `noise' map to add to the simulated galaxy mass map, to approximate an observational scenario where every pixel has some hypothetical uncertainty on the mass measurement. From this noise map we can measure a signal-to-noise to quantify the significance of the contribution from uncertainty, with the goal of creating a variety of noise maps to span a large range of possible signal-to-noise values.
To generate a random noise map representing the hypothetical uncertainties on each pixel for the simulated stellar mass map, we first select a desired signal-to-noise ratio ranging from 0 to 1000. We then measure the median signal of the galaxy within a $1R_{\rm half}$ aperture (where $R_{\rm half}$ is the stellar half-mass radius). We approximate the median absolute deviation (MAD) within that aperture to be the median signal within that aperture divided by the desired signal-to-noise ratio. Given we set the noise map to be a normal distribution, we can convert the MAD to a standard deviation by multiplying by the known scale factor $k=1/(\Phi^{-1}(3/4))=1.4826$ (where $\Phi^{-1}$ is the inverse of the cumulative distribution, and $3/4$ implies $\pm$MAD covers half the standard normal cumulative distribution function). Using this standard deviation we construct a normal distribution centred on zero; for each pixel we select a random value from this distribution to set the uncertainty of that pixel. This method ensures that the resulting signal-to-noise (the median value of the mass map within $1R_{\rm half}$ divided by the median value of the noise map within $1R_{\rm half}$) is approximately (though not exactly) equal to the desired signal-to-noise used to construct the noise map.
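The procedure can be summarized in a short sketch (a simplified version using a circular aperture about the image centre; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def make_noise_map(mass_map, r_half_pix, target_sn, seed=None):
    rng = np.random.default_rng(seed)
    ny, nx = mass_map.shape
    y, x = np.indices((ny, nx))
    aperture = np.hypot(x - nx / 2, y - ny / 2) <= r_half_pix

    signal = np.median(mass_map[aperture])  # median signal in 1 R_half
    mad = signal / target_sn                # desired median abs. deviation
    sigma = 1.4826 * mad                    # MAD -> Gaussian sigma

    noise = rng.normal(0.0, sigma, size=mass_map.shape)
    measured_sn = signal / np.median(np.abs(noise[aperture]))
    return noise, measured_sn
\end{verbatim}
As noted above, the measured signal-to-noise returned this way is close to, but not exactly, the requested value.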
An example noise field is shown in the top right panel of Fig. \ref{fig:Ex_Noise_PSF}. The lower panels of Figure \ref{fig:Ex_Noise_PSF} show the resulting mass maps when either the noise (left panel), the PSF (middle panel) or both (right panel) are included.
\subsection{Nonparametric morphology indicators}
\label{sec:CAS}
Each of the 1000 TNG galaxies is assessed using two non-parametric indices: concentration and asymmetry, both of which are computed using the \texttt{statmorph} package \citep{Rodriguez-Gomez2019TheObservations}\footnote{https://statmorph.readthedocs.io/en/latest/overview.html}. We refer the reader to the original papers for details of the development of these metrics \citep{Schade1995Canada-FranceGalaxies,Abraham1996GalaxyField,Bershady2000StructuralSample,Conselice2000TheGalaxies,Conselice2003A3}, but review the basic details here.
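For reference, a minimal \texttt{statmorph} call on a toy image looks as follows; the segmentation map here is a crude threshold rather than a proper source extraction, and the attribute names follow the \texttt{statmorph} documentation (a \texttt{flag} of 0 indicates a reliable measurement):
\begin{verbatim}
import numpy as np
import statmorph

ny = nx = 128
y, x = np.indices((ny, nx))
r = np.hypot(x - nx / 2, y - ny / 2)
mass_map = np.exp(-r / 10.0)        # toy exponential "galaxy"

segmap = (mass_map > 1e-3).astype(np.int32)
morph = statmorph.source_morphology(mass_map, segmap, gain=1.0)[0]
print(morph.concentration, morph.asymmetry, morph.flag)
\end{verbatim}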
The concentration parameter (C) has been used for over half a century \citep[see][]{Morgan1959AIL}. Although there is a plethora of possible definitions of concentration, the most commonly used in the assessment of galaxy light distributions is that of \cite{Bershady2000StructuralSample} and \cite{Conselice2003THEHISTORIES}. These works define concentration, as:
\begin{equation}
C = 5 \log_{10} (r_{80}/r_{20})
\end{equation}
\noindent where $r_{80}$ and $r_{20}$ are the radii within which 80\% and 20\% of the galaxy's total flux (for this work, total stellar mass) is contained, respectively. The inner and outer radii are chosen specifically to mitigate the degradation in concentration that comes from low resolution (see \cite{Bershady2000StructuralSample} for details). In the work presented here, we compute the intrinsic concentration ($C_{\rm int}$) derived directly from the idealized simulation image. Once we degrade the resolution of the image by convolving with a PSF, we refer to the measured concentration as $C_{\rm Res}$; comparing this value to $C_{\rm int}$ informs us of the impact of resolution on the true concentration. Finally, we define $C_{\rm obs}$ as the measured concentration once noise has been added to the image. Again, comparison of $C_{\rm obs}$ to $C_{\rm int}$ will quantify how much the intrinsic concentration of the simulated galaxy has been affected by realistic observational features.
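On an idealized, noise-free map the measurement reduces to a curve of growth; a minimal sketch is given below. For simplicity it normalizes by the total mass in the cutout, whereas \texttt{statmorph} measures the total flux within $1.5 r_{\rm petro}$, so the two will differ slightly in detail:
\begin{verbatim}
import numpy as np

def concentration(mass_map, cx, cy):
    y, x = np.indices(mass_map.shape)
    r = np.hypot(x - cx, y - cy).ravel()
    order = np.argsort(r)
    cumul = np.cumsum(mass_map.ravel()[order])
    cumul /= cumul[-1]                      # cumulative mass fraction
    r20 = np.interp(0.20, cumul, r[order])  # radius enclosing 20%
    r80 = np.interp(0.80, cumul, r[order])  # radius enclosing 80%
    return 5.0 * np.log10(r80 / r20)
\end{verbatim}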
Identifying the galaxy's centre is critical to the calculation of concentration, as well as many other non-parametric indices, including the second index of interest in this work, asymmetry. Therefore concentration and asymmetry are often computed in conjunction, in an iterative approach that finds the centre which yields the minimum asymmetry value, then uses that centre to measure concentration as well. As the name implies, the asymmetry parameter quantifies how much of an image's light is not symmetric about an axis of rotation, calculated as the difference between an image and its 180\textdegree{} rotation \citep{Conselice2000TheGalaxies}.
The intrinsic asymmetry ($A_{\rm int}$) is defined as:
\begin{equation}
A_{\rm int} \equiv \frac{\Sigma_{i,j} \mid I_{ij} - I^{180}_{ij}\mid} {\Sigma_{i,j}\mid I_{ij}\mid}
\label{eqn:A_int}
\end{equation}
\noindent where $I_{i,j}$ is the flux contained within an individual pixel of the galaxy image, and $I^{180}_{i,j}$ is the pixel at that same $i,j$ location once the image is rotated 180 degrees. These values are computed within 1.5 times the Petrosian radius ($r_{\rm petro}$). For an idealized image, such as one generated from a simulation, $A_{\rm int}$ is a perfect representation of the galaxy's asymmetry (with no background noise).
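The core of this measurement is a single rotate-and-subtract operation, sketched below (a simplified version that rotates about the array centre; \texttt{statmorph} additionally minimises over the rotation centre and restricts the sums to $1.5\,r_{\rm petro}$):
\begin{verbatim}
import numpy as np

def asymmetry(image):
    """Sketch of the intrinsic asymmetry definition above."""
    rotated = np.rot90(image, 2)   # 180-degree rotation
    return np.abs(image - rotated).sum() / np.abs(image).sum()
\end{verbatim}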
However, in real images there are observational artefacts that can contribute to the asymmetry that are not intrinsic to the galaxy, such as sky noise and background gradients introduced by nearby bright sources. For either a real galaxy observation, or a simulated galaxy that has had noise added to it, the measured asymmetry is therefore a combination of the galaxy signal and the background noise:
\begin{equation}
A_{\rm obs} = \frac{\Sigma_{i,j} \mid (I+\mathrm{noise})_{ij} - (I+\mathrm{noise})^{180}_{ij}\mid} {\Sigma_{i,j}\mid (I+\mathrm{noise})_{ij}\mid}
\label{eqn:A_obs}
\end{equation}
Therefore, in order to recover the intrinsic asymmetry, the noise asymmetry ($A_{\rm noise}$) needs to be quantified and removed from the observed asymmetry. This is analogous to the subtraction of background sky in the measurement to obtain galaxy photometry. In observations, $A_{\rm noise}$ is computed from a region offset from the galaxy; in simulations it can alternatively be computed from the noise (background) field generated in addition to the idealized image, i.e.:
\begin{equation}
A_{\rm noise} = \frac{\Sigma_{i,j} \mid \mathrm{noise}_{ij} - \mathrm{noise}^{180}_{ij}\mid} {\Sigma_{i,j}\mid \mathrm{noise}_{ij}\mid}
\label{eqn:A_sky}
\end{equation}
The premise above is that by subtracting the contribution to the measured asymmetry ($A_{\rm obs}$) from the background noise ($A_{\rm noise}$, a term that captures all background sources of asymmetry that we wish to remove), we can approximate the intrinsic asymmetry. That is, we determine a corrected, noise-subtracted asymmetry $A_{\rm noisesub}$ as:
\begin{equation}
A_{\rm noisesub} \equiv A_{\rm obs} - A_{\rm noise}
\end{equation}
\noindent with the assumption that $A_{\rm noisesub} \sim A_{\rm int}$.
However, due to the modulus present in the equations for $A_{\rm obs}$, $A_{\rm noise}$, and $A_{\rm int}$, it is not mathematically correct to assume that $A_{\rm int}$=$A_{\rm noisesub}$. Given the rule of modular subtraction ($\mid A\mid - \mid B\mid \leq \mid A-B\mid$), $A_{\rm noisesub}$ can only serve as a lower limit to the true value of $A_{\rm int}$. Indeed, due to the presence of the modulus in Eqn \ref{eqn:A_sky}, even a random noise field will have a non-zero asymmetry associated with it. The underestimation of asymmetry through this noise correction method has been investigated previously in photometric studies, which recommend that the asymmetry measurement only be made where $S/N>100$, since below this threshold noise dominates the asymmetry \citep[see][]{Conselice2000TheGalaxies}. Whereas this regime is readily achievable in photometry, such a high $S/N$ is unlikely to be reached in maps of galactic properties, and previous works on the asymmetry of galactic gas and dust have struggled with this as well \citep{Bendo2007VariationsSequence,Giese2016Non-parametricLopsidedness,Reynolds2020HGalaxies}. Although we are working with stellar mass maps, which do not suffer from a sky contribution, noise is still present in observed stellar mass maps in the form of uncertainties in the determination of the stellar mass itself, and these uncertainties are expected to far exceed the equivalent of a $S/N$ $\sim$ 100 criterion \citep[e.g.][]{Mendel2014ASURVEY, Sanchez2016Pipe3DFIT3D,Sanchez2016Pipe3DDataproducts}.
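This bias is easy to demonstrate numerically: a perfectly symmetric toy galaxy acquires a strictly positive measured asymmetry as soon as noise is added (the toy model and noise level below are purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.indices((129, 129))
galaxy = np.exp(-np.hypot(xx - 64, yy - 64) / 10.0)  # exactly symmetric
noisy = galaxy + rng.normal(0.0, 0.05 * galaxy.max(), galaxy.shape)
A = np.abs(noisy - np.rot90(noisy, 2)).sum() / np.abs(noisy).sum()
# A > 0 despite the underlying galaxy being perfectly symmetric
\end{verbatim}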
Figure \ref{fig:Ex_Noise_PSF} shows how asymmetry and concentration can be affected by resolution and noise, highlighting the impact of noise subtraction on the recovered asymmetry. In the example galaxy shown in Figure \ref{fig:Ex_Noise_PSF} the intrinsic concentration is measured to be $C_{\rm int}$=3.761 (top left panel). After degrading the resolution with Res=0.74 (the stellar half mass radius is smaller than the PSF that convolves the image), the concentration is significantly diminished to $C_{\rm Res}$=2.749 (lower middle panel). Conversely, if just noise is added to the image (no degraded resolution), the measured concentration is essentially unaffected, with a difference of less than 0.1\% from the intrinsic value. The combined effect of resolution and noise yields an observed concentration of $C_{\rm obs}$=2.75, i.e. a net decrease from the true value by $\sim$27 per cent. As we will show for the full galaxy sample in the next section, observational changes in concentration can be traced solely to the value of resolution.
As for the impact of resolution and noise on asymmetry, the example galaxy in Figure \ref{fig:Ex_Noise_PSF} starts with an intrinsic asymmetry $A_{\rm int}$=0.094 (top left panel). The particular noise field generated for this example has an asymmetry (introduced by the use of the modulus in Equation \ref{eqn:A_sky}) of $A_{\rm noise}$=0.039. Since the measured asymmetry in the noisy image for this example galaxy is $A_{\rm obs}$=0.112 (lower right panel), subtracting the noise asymmetry yields a corrected value of $A_{\rm noisesub}=0.073$. The noise-subtracted asymmetry is therefore, in this case, under-estimating the true asymmetry by $\sim$20 per cent. We next turn to examine the effects of a broad range of resolution and signal-to-noise values. In the following section, we compare the measured, corrected and intrinsic morphologies of our full simulated galaxy sample, and investigate whether further corrections could be made to improve them.
\section{Analysis}
\label{sec:analysis}
In this section we assess the concentration and asymmetry of our sample of 1000 galaxies in the TNG simulation (a comparison with the Illustris simulation is given in Section \ref{sec:compare}). We assess separately the impacts of noise (which in the case of observed mass maps is driven by uncertainty on the measurements) and resolution, quantifying how well the intrinsic parameters can be recovered once these compounding observational effects are included. Finally, we investigate whether improvements can be made to the traditional definitions of concentration and asymmetry that will increase the fidelity of these metrics in marginal observational data.
\subsection{Noise}
\label{sec:noise}
Figure \ref{fig:Int_Noise} demonstrates how adding random noise to the stellar mass map changes the observed asymmetry and concentration in the 1000 galaxies taken from the TNG simulation. The noise contribution has little impact on the observed concentration (left panel). This result is expected, since a randomly constructed noise field will contribute uniformly to the inner and outer apertures from which concentration is calculated. The only exception to this rule is galaxies with the lowest signal-to-noise ($S/N<5$), where random clumping of pixels with high noise can shift the concentration measurement (hence the scatter is also unbiased). Thus, when measuring the concentration of an image, one generally need not worry about any intrinsic bias created by the noise.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/Noise.pdf}
\centering
\caption{The observed concentration (left) and asymmetry (right) compared to intrinsic values (which have no random noise contribution). From the left panel it is clear that the addition of noise has no effect on concentration measurements (save catastrophic cases at the lowest signal-to-noise). Asymmetry, on the other hand, is always overestimated once noise is added, though the degree of that overestimation scales inversely with $S/N$.}
\label{fig:Int_Noise}
\end{figure*}
In contrast, adding noise to the image always leads to an over-estimate of the observed asymmetry (Equation \ref{eqn:A_obs}) compared with its intrinsic value (right panel of Figure \ref{fig:Int_Noise}), though the change is greatest for galaxies with low signal-to-noise, when the noise dominates the image \citep[see][]{Conselice2000TheGalaxies,Lotz2004AClassification,Giese2016Non-parametricLopsidedness}. For $S/N>100$ the difference rarely exceeds a 50$\%$ increase, but for $S/N<100$ the asymmetry can increase anywhere between a factor of 2 to 10. To reiterate: because of the modulus within the asymmetry measurement, even a random noise map will have a net-positive value.
The noise subtraction method has traditionally been used to account for the non-negligible addition of noise; however, as discussed in Section \ref{sec:CAS}, it only works well in the $S/N>100$ regime (when noise does not dominate the asymmetry). Figure \ref{fig:Noise_Corr} illustrates how subtracting the noise contribution to asymmetry decreases the scatter between the measured and intrinsic asymmetry. $A_{\rm noisesub}$ values for galaxies with $S/N>100$ (the $S/N$ cut suggested by \citealt{Conselice2000TheGalaxies}) are within $\pm 0.05$ of their intrinsic value (represented by the red dashed lines). However, galaxies with $S/N<100$ now have a preferentially underestimated asymmetry (creating a similar problem, although in the opposite direction, to the overestimation that occurs with no $A_{\rm noise}$ correction).
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figures/A_approximations_scatter_1_hist.pdf}
\centering
\caption{The standard noise subtraction method requires subtracting all of $A_{\rm noise}$ from $A_{\rm obs}$ to approximate the intrinsic asymmetry. The black dashed line represents the line of equality, with red dashed lines representing a 0.05 buffer from this line. Points are colour-coded by $S/N$, save for galaxies with $S/N<10$, which are grey. By correcting for the noise contribution to asymmetry, the scatter between the measured and intrinsic asymmetry is lessened by a factor of 10, and $\sim$5\% more galaxies have a good asymmetry measurement (within 0.05 of $A_{\rm int}$). However, the median offset using this correction is of a similar magnitude to doing no noise correction (see Figure \ref{fig:Int_Noise}), just in the opposite direction. The galaxies with asymmetries far below the $A_{\rm int}\pm0.05$ band are those with low $S/N$; in particular the galaxies with $S/N<10$ have the greatest difference in asymmetry from noise.}
\label{fig:Noise_Corr}
\end{figure}
The presence of a bias in the asymmetry correction method is obviously problematic; despite being introduced as an improvement in the estimate of asymmetry of intermediate signal-to-noise galaxies (where background noise can have a non-negligible contribution), it actually introduces a new offset in the low $S/N$ regime. However, given that $A_{\rm obs}$ is an overestimation and $A_{\rm obs} - A_{\rm noise}$ is an underestimation of the true asymmetry, one could postulate that a noise correction that more accurately reproduces $A_{\rm int}$ lies somewhere in between. We propose that subtracting a fraction of $A_{\rm noise}$, rather than the entire value, could mitigate the over-correction. By subtracting several different fractions of $A_{\rm noise}$ from $A_{\rm obs}$ we can determine what fraction of the noise needs to be corrected to reproduce $A_{\rm int}$.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/A_approximations_division_smoothbin_fraction.pdf}
\centering
\caption{Left: The mean difference between the intrinsic asymmetry and corrected asymmetry for increasing signal-to-noise. Each curve represents subtracting a different fraction of the noise asymmetry, where red has no subtraction, and purple subtracts the entire $A_{\rm noise}$ value. The width of each line is the standard error on the mean. See Figure \ref{fig:TNG_A_corr} for a comparison of these plots to our fit correction. Right: The standard deviation of the difference between intrinsic and corrected asymmetry in each signal-to-noise bin, for each correction method. From this we can see that to recover the intrinsic asymmetry at $S/N<100$, subtracting a fraction of $A_{\rm noise}$ does better than subtracting the total noise contribution to asymmetry. Subtracting $A_{\rm noise}/1.2$ recovers the most asymmetry values (within $A_{\rm int}\pm0.05$) and has the smallest scatter in the difference at all $S/N$ values.}
\label{fig:divide_by}
\end{figure*}
Figure \ref{fig:divide_by} exhibits how the difference between the intrinsic and noise corrected asymmetry varies with $S/N$ for multiple fractions of $A_{\rm noise}$ correction. Each coloured curve in Figure \ref{fig:divide_by} represents a different fractional correction of $A_{\rm noise}$, with purple representing a full subtraction of $A_{\rm noise}$, and red representing no subtraction. Both subtracting the sky asymmetry and doing no subtraction only achieve reasonable accuracy (within $A_{\rm int}\pm0.05$) for a signal-to-noise greater than 100, when the curves begin to approach zero difference between the true and observed asymmetry. However, varying the fractional value of $A_{\rm noise}$ improves the recovery of the intrinsic asymmetry to lower signal-to-noise values. The best option between doing nothing and subtracting the total $A_{\rm noise}$ is to subtract $A_{\rm noise}/1.2$, which results in the greatest fraction of galaxies within 0.05 of $A_{\rm int}$ (excluding galaxies with $S/N<10$ to ensure the answers are not skewed by the asymptotic nature of each curve). Correcting for $A_{\rm noise}/1.2$ also results in the smallest scatter in the difference for all $S/N$ regimes. Figure \ref{fig:Noise_Corr_frac} directly compares the measured and intrinsic asymmetry (as was done in Figure \ref{fig:Noise_Corr} for the traditional noise correction). Though the scatter in the difference between the two is only marginally better than for the regular $A_{\rm noise}$ correction, the median offset has diminished and over 20\% more galaxies have reliable asymmetry measurements (within 0.05 of the intrinsic asymmetry). Therefore, when considering low $S/N$ data, using $A_{\rm obs} - A_{\rm noise}/1.2$ is the best way to approximate $A_{\rm int}$, and we will continue to investigate this as a candidate asymmetry measurement for the rest of the analysis (along with correcting for $A_{\rm noise}$, and doing no correction).
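The scan over fractional corrections can be summarised in a few lines (schematic only; \texttt{A\_obs}, \texttt{A\_noise} and \texttt{A\_int} are assumed to be arrays over the galaxy sample):
\begin{verbatim}
import numpy as np

def n_good(f, A_obs, A_noise, A_int, tol=0.05):
    """Count galaxies recovered to within tol when subtracting A_noise/f."""
    return np.sum(np.abs((A_obs - A_noise / f) - A_int) < tol)

fractions = np.linspace(1.0, 2.0, 11)
# counts = [n_good(f, A_obs, A_noise, A_int) for f in fractions]
# best_f = fractions[np.argmax(counts)]   # ~1.2 for our sample
\end{verbatim}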
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figures/A_approximations_scatter_2_hist.pdf}
\centering
\caption{The asymmetry corrected for $A_{\rm noise}$/1.2 versus the intrinsic asymmetry. Points are colour-coded by $S/N$, except for galaxies with $S/N<10$ which are grey. The scatter between the two is smaller than doing no noise correction, similar to the improvement from subtracting $A_{\rm noise}$ from the observed asymmetry (see Figure \ref{fig:Noise_Corr}). However, the median offset is significantly less, and over 20\% more of the simulated galaxies are within an accurate asymmetry measurement (within $A_{\rm int}\pm0.05$). In fact, almost all galaxies with $S/N>10$ are within 0.05 of $A_{\rm int}$, a much more forgiving $S/N$ cut than the $S/N>100$ cut necessary for the traditional $A_{\rm noise}$ correction.}
\label{fig:Noise_Corr_frac}
\end{figure}
\subsubsection{Fit to Recover Intrinsic Asymmetry for all $S/N$}
The two methods to correct for noise asymmetry described so far are approximations which work within a signal-to-noise limited system. However, if there exists some mathematical relationship between the intrinsic asymmetry, the observed asymmetry, and the signal-to-noise, one should be able to fit a relationship between two of those parameters to almost perfectly approximate the third. In order to explore this possibility, we assume a relationship between the intrinsic asymmetry, the observed asymmetry, and the signal-to-noise of the following form:
\begin{equation}
A_{\rm int} = (F_1(S/N)+1)A_{\rm obs} +F_2(S/N)A_{\rm obs}^2+ F_3(S/N)A_{\rm obs}^3
\label{Eqn:A_int_blank}
\end{equation}
\noindent where $F_3$, $F_2$, and $F_1$ are each unique functions of signal-to-noise of the following form:
\begin{equation}
F(S/N) = X_1(S/N)^{-1}+X_2(S/N)^{-2}+X_3(S/N)^{-3}
\label{Eqn:Polynomial_SN}
\end{equation}
\noindent Equation \ref{Eqn:A_int_blank} is designed so that as $S/N$ goes to infinity, $A_{\rm int}=A_{\rm obs}$. As we established in Figure \ref{fig:Int_Noise}, at the highest $S/N$ the observed asymmetry is almost exactly equal to the intrinsic asymmetry, as the contribution from uncertainty is practically negligible. Thus we wish to replicate a relationship between $A_{\rm int}$, $A_{\rm obs}$, and $S/N$ that aligns with the fact that as the uncertainty goes to zero (and $S/N$ goes to infinity), $A_{\rm int}$ should equal $A_{\rm obs}$.
We elect to only fit data with $S/N<500$. At $S/N>500$, the difference between the observed and intrinsic asymmetry never exceeds 1\% of $A_{\rm int}$, and it is therefore safe to assume that $A_{\rm int} \approx A_{\rm obs}$. By excluding high $S/N$ data from the fit we can focus on best characterizing the asymptotic change of asymmetry with $S/N$. We fit Equation \ref{Eqn:A_int_blank} to the $A_{\rm obs}$ and $S/N$ of the 788 TNG galaxies with $S/N<500$, using the python package LM-Fit\footnote{https://dx.doi.org/10.5281/zenodo.11813}, which minimises the root-mean-square difference between the intrinsic asymmetry and the observed asymmetry. The following fit is found to minimise the difference between $A_{\rm int}$ and $A_{\rm obs}$:
\begin{equation}
\begin{aligned}
& \textbf{S/N<500: } A_{\rm int} = (- (15.36\pm0.58)(S/N)^{-1}+ (78.85\pm3.38)(S/N)^{-2} \\ & -(86.89\pm4.41)(S/N)^{-3} +1)A_{\rm obs} + ((10.69\pm1.39)(S/N)^{-1} \\ & - (99.59\pm6.37)(S/N)^{-2} + (125.44\pm7.63)(S/N)^{-3} )A_{\rm obs}^2 \\ & + (-(0.91\pm0.83)(S/N)^{-1}+ (31.98\pm3.19)(S/N)^{-2} \\ & -(45.82\pm3.43)(S/N)^{-3})A_{\rm obs}^3\\
& \textbf{S/N>500: } A_{\rm int} = A_{\rm obs} \\
\end{aligned}
\label{eqn:A_noise_fit}
\end{equation}
The coefficients in Equation \ref{eqn:A_noise_fit} are specific to how we measure the signal-to-noise ratio, in particular using $R_{\rm half}$ to set the aperture from which we make that measurement. In practice one could repeat the fitting process laid out thus far using a different aperture radius (as would be necessary for different kinds of observations). This would significantly change the coefficients present in the final relationship between $A_{\rm int}$, $A_{\rm obs}$, and $S/N$. But the conceptualization of this fit, as well as its relative accuracy, would remain unchanged.
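For reference, the fitting set-up can be sketched with \texttt{LM-Fit} as follows (schematic; the parameter names and starting values are illustrative, and \texttt{A\_obs}, \texttt{snr}, \texttt{A\_int} are assumed sample arrays):
\begin{verbatim}
import numpy as np
from lmfit import Parameters, minimize

def model(params, A_obs, snr):
    """Evaluate the assumed polynomial form in A_obs and 1/(S/N)."""
    p = params.valuesdict()
    F = [sum(p['x%d%d' % (k, i)] * snr**(-float(i)) for i in (1, 2, 3))
         for k in (1, 2, 3)]
    return (F[0] + 1.0) * A_obs + F[1] * A_obs**2 + F[2] * A_obs**3

def residual(params, A_obs, snr, A_int):
    return model(params, A_obs, snr) - A_int  # least-squares by default

params = Parameters()
for k in (1, 2, 3):
    for i in (1, 2, 3):
        params.add('x%d%d' % (k, i), value=0.0)  # illustrative start
# result = minimize(residual, params, args=(A_obs, snr, A_int))
\end{verbatim}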
We evaluate the goodness of this fit by comparing the intrinsic asymmetry to that predicted by Equation \ref{eqn:A_noise_fit} in Figure \ref{fig:Fit_scat} (see Section \ref{sec:compare} for tests on independent samples). Both the median offset between the observed asymmetry and intrinsic asymmetry, and the scatter of this offset, are reduced by a factor of 10 after correcting asymmetry based on this fit. The majority of the scatter here comes from the lowest $S/N$ ($<10$) galaxies, where the accuracy of the fit is lowest. However, low $S/N$ galaxies can have either over- or under-estimated asymmetry measurements (unlike the bias in one direction or the other when correcting based on $A_{\rm noise}$). The difficulty with fitting the low signal-to-noise data likely stems from the asymptotic relationship between the intrinsic and observed asymmetry at low signal-to-noise, where a small change in the fit can lead to a drastically incorrect prediction. Of the predicted asymmetries, including all $S/N$ values, 81.5\% are within 0.05 of $A_{\rm int}$ (what we would consider a ``good'' asymmetry measurement). Thus the fit correction is a significant improvement on the 54.1\% and 60.7\% for doing no correction and subtracting $A_{\rm noise}$, respectively. Subtracting $A_{\rm noise}/1.2$ is still the best asymmetry prediction by a margin, recovering 87.8\% of asymmetries within 0.05 of $A_{\rm int}$. The fit correction nonetheless has its own benefits over the $A_{\rm noise}/1.2$ correction, in particular when simultaneously correcting for resolution, as investigated in Section \ref{sec:Res_A}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figures/Apredict_hexbin_allgalaxies_contoursfill.pdf}
\centering
\caption{Density contours for the predicted asymmetry based on the fit relationship from Equation \ref{eqn:A_noise_fit} versus the intrinsic asymmetry, the dashed red line representing where the two would be equal to each other. Applying a correction for the random noise based on a fit function significantly decreases the scatter of the difference between the predicted and intrinsic asymmetry, while also removing any bias in the predicted asymmetry. The scatter in the fit is dominated entirely by low $S/N$ galaxies.}
\label{fig:Fit_scat}
\end{figure}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/A_approximations_SN_goodnessoffit_compare_sim_LMFit_smooth_SNbound_zoom_JustTNG.pdf}
\centering
\caption{Left: The mean difference between the intrinsic and observed asymmetry for increasing signal-to-noise bins. The red line represents performing no correction for the sky asymmetry, the purple line represents correcting for $A_{\rm noise}$, the green line for correcting $A_{\rm noise}/1.2$, and the orange line for applying the fit correction from Equation \ref{eqn:A_noise_fit}. The width of each line is the standard error on the mean. Right: The standard deviation of the difference in asymmetry for the four asymmetry correction methods. All methods that perform some kind of correction for noise contribution to asymmetry result in lower standard deviations than doing no noise subtraction. One should note that the standard deviation of the difference for the LM Fit method is comparable to the two noise subtraction methods except at low signal to noise. Here the scatter of the LM Fit method is slightly larger, though still relatively small and a significant improvement from doing no noise correction.}
\label{fig:TNG_A_corr}
\end{figure*}
To determine the best method for measuring asymmetry we can compare the results from Equation \ref{eqn:A_noise_fit} to the traditional noise subtraction, the noise subtraction divided by 1.2, and doing no correction for noise asymmetry. Figure \ref{fig:TNG_A_corr} examines how well each of these methods reproduces the intrinsic asymmetry in different signal-to-noise regimes. The traditional method of subtracting the noise asymmetry (the purple line) does poorly at low signal-to-noise, as expected based on the earlier discussion in this work. By excluding galaxies with $S/N<100$, we can guarantee this bias in asymmetry is no more than 0.02, which given the range of possible asymmetry values (0--1) is relatively small. Even for low asymmetry galaxies, where the bias has a greater fractional effect, the bias is relatively constant above $S/N=100$, so one could still compare asymmetries within a sample of galaxies with this $S/N$ cut. The signal-to-noise cut must be made, though; otherwise the asymmetry of faint objects will appear systematically higher than that of brighter, nearby objects.
However, few galaxy properties (like those derived from IFU observations) can achieve a high enough signal-to-noise to recover the majority of intrinsic asymmetry values with these correction methods. Though the signal-to-noise of the flux emission lines measured from spectra are excellent by design, other uncertainties that go into computing properties like stellar mass surface density can raise the statistical uncertainty of a pixel to 0.1 dex or more. If one of our TNG stellar mass maps had an error of 0.1 dex on every pixel, the signal-to-noise of that stellar mass map (based on our calculation method) would be approximately 4. Thus, if we want to examine the asymmetry of spectral data products, the $A_{\rm noise}/1.2$ method is the better traditional noise correction compared to the $A_{\rm noisesub}$ method. The bias reaches less than 0.025 difference for $S/N$ as low as 50, and no bias exists at $S/N>100$. The asymmetry fit method, on the other hand, does better than all other asymmetry approximations at low $S/N$, with a difference between $A_{\rm int}$ and $A_{\rm obs}$ below 0.02 even at signal-to-noise $<10$, making it the ideal asymmetry measurement for data products with large uncertainties. This method has the added benefit of not requiring a measurement of $A_{\rm noise}$, so long as a global $S/N$ value can be approximated accurately. The relationship between observed asymmetry, signal-to-noise, and intrinsic asymmetry is fit to this simulation for a particular subset of galaxies; in Section \ref{sec:compare} we evaluate how well this fit can be applied to other datasets by comparing to Illustris.
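The quoted figure can be reproduced with simple error propagation, under the assumption that the small-error limit applies: a log-space uncertainty of $\sigma_{\rm dex}$ corresponds to a fractional uncertainty of $\ln(10)\,\sigma_{\rm dex}$, so a per-pixel $S/N \approx 1/(\ln 10 \times 0.1) \approx 4.3$, broadly consistent with the value of $\sim$4 quoted above.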
\subsection{Spatial Resolution}
\label{sec:psf}
The resolution of an image can alter both the asymmetry and concentration measurements (e.g. Figure \ref{fig:Ex_Noise_PSF}). Low resolution can smooth out asymmetric structures that would otherwise lead to a larger asymmetry measurement. Degrading the resolution of an image will also spread out the light distribution (or stellar mass distribution in this case), resulting in more light in the outer aperture compared to the inner aperture than in the original image. Thus, degrading the resolution will lead to a systematically lower concentration as well. Figure \ref{fig:PSF_change} shows both of these effects for our sample of TNG galaxies. In the left panel the asymmetry is systematically under-estimated as the resolution decreases, and the right panel shows a similar under-estimation of concentration with decreasing resolution. The effect of varying resolution when comparing asymmetry and concentration values has been discussed in detail in previous works \citep{Bershady2000StructuralSample, Conselice2000TheGalaxies,Lotz2004AClassification,Bendo2007VariationsSequence,Giese2016Non-parametricLopsidedness}.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/PSF_AC_change_Res.pdf}
\centering
\caption{A demonstration of how degrading resolution changes the asymmetry (left) and the concentration (right) compared to their intrinsic values (where resolution is functionally infinite). Degrading the resolution systematically decreases both asymmetry and concentration, and the magnitude of this difference increases as the resolution decreases. Note that the points with the lowest resolution are furthest from the black dashed line representing $A_{\rm Res}$=$A_{\rm int}$. Given that the bias in the asymmetry/concentration measurement is clearly dependent on the intrinsic asymmetry/concentration and resolution value, just as with $S/N$ we should be able to correct for resolution effects and recover the intrinsic index.}
\label{fig:PSF_change}
\end{figure*}
\subsubsection{Resolution Corrected Asymmetry}
\label{sec:Res_A}
Just as we corrected the asymmetry for noise based on a relationship between $S/N$ and $A_{\rm obs}$ (Equation \ref{Eqn:A_int_blank}), we can assume a similar relationship exists between $A_{\rm Res}$ and the resolution of the image that can recover $A_{\rm int}$. Though we define our resolution as the stellar half mass radius over the PSF FWHM (both in arcseconds), any approximation of the apparent galaxy size with respect to the PSF size would suffice here. Using the Res parameter instead of the PSF FWHM alone guarantees the correction will work for a variety of galaxy sizes and distances, rather than just working for a range of observational PSFs. Given that resolution and $S/N$ have a similar effect on asymmetry, we assume a similar relationship between $A_{\rm int}$, $A_{\rm Res}$, and Res exists, of the following form:
\begin{equation}
\begin{aligned}
& A_{\rm int} = (F_1(\rm Res)+1)A_{\rm Res}+F_2(\rm Res)A_{\rm Res}^2 +F_3(\rm Res)A_{\rm Res}^3 \\
\end{aligned}
\label{eqn:A_fit_PSF_blank}
\end{equation}
\noindent where $A_{\rm Res}$ is the observed asymmetry with degraded resolution (note that $A_{\rm Res}$ has no noise contribution) and $F_3$, $F_2$, and $F_1$ are each unique functions of this resolution parameter ``Res'' of the following form:
\begin{equation}
\begin{aligned}
& F(\rm Res) = X_1(\rm Res)^{-1}+X_2(\rm Res)^{-2}+X_3(\rm Res)^{-3}
\end{aligned}
\label{Eqn:Polynomial_FWHM}
\end{equation}
\noindent As with Equation \ref{Eqn:A_int_blank}, we design the relationship such that $A_{\rm int}=A_{\rm Res}$ as Res goes to infinity. As was completed with signal-to-noise in Section \ref{sec:noise}, we fit $A_{\rm Res}$ and Res to Equation \ref{eqn:A_fit_PSF_blank} using LM-Fit, resulting in the following relationship:
\begin{equation}
\begin{aligned}
& A_{\rm int} = ((3.626\pm0.108)\rm Res^{-1}+(0.0289\pm0.0714)\rm Res^{-2} \\ & -(0.00680\pm0.00833)\rm Res^{-3} +1)A_{\mathrm{Res}} + (-(15.53\pm1.46)\rm Res^{-1} \\ & -(21.17\pm2.32)\rm Res^{-2}+(0.036\pm0.454)\rm Res^{-3})A_{\mathrm{Res}}^2 \\ & +((10.34\pm1.68)\rm Res^{-1}+(77.4\pm11.5)\rm Res^{-2} \\ & +(35.4\pm14.7)\rm Res^{-3})A_{\mathrm{Res}}^3 \\
\end{aligned}
\label{eqn:A_fit_PSF}
\end{equation}
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figures/Asymmetry_FWHM_Fit_contourfill.pdf}
\centering
\caption{Density contours of the asymmetry predicted by Equation \ref{eqn:A_fit_PSF}, plotted against the intrinsic asymmetry, with the red dashed line representing where the two would be equal. Note the median difference between these two values, even for the smallest asymmetry measurements, is only a few per cent of the measured value. Thus the intrinsic asymmetry is recovered (within reason) for the majority of the galaxies in our sample.}
\label{fig:A_PSF_corr_fit}
\end{figure}
Figure \ref{fig:A_PSF_corr_fit} confirms that the asymmetry predicted by Equation \ref{eqn:A_fit_PSF} ($A_{\mathrm{predict}}$) accurately replicates $A_{\rm int}$ for the majority of the sample, with a median difference that equates to only a few per cent of the measured asymmetry (for even the lowest asymmetry galaxies). Equation \ref{eqn:A_fit_PSF} could be used on its own to correct very high $S/N$ observations, but in practice resolution will need to be corrected for in conjunction with noise.
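Evaluating Equation \ref{eqn:A_fit_PSF} is straightforward; a sketch using the central values of the quoted coefficients (uncertainties dropped, and valid only for noise-free maps) might look like:
\begin{verbatim}
def correct_asymmetry_resolution(a_res, res):
    """Best-fit resolution correction for asymmetry (central values)."""
    f1 = 3.626 / res + 0.0289 / res**2 - 0.00680 / res**3
    f2 = -15.53 / res - 21.17 / res**2 + 0.036 / res**3
    f3 = 10.34 / res + 77.4 / res**2 + 35.4 / res**3
    return (f1 + 1.0) * a_res + f2 * a_res**2 + f3 * a_res**3
\end{verbatim}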
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/A_donothing_SN_PSF_discretebins_Res.pdf}
\centering
\caption{A demonstration of how applying a PSF (and thus degrading the resolution) changes two of the asymmetry approximation methods from Figure \ref{fig:TNG_A_corr}. Left panel: How resolution changes the difference in asymmetry when no correction for the noise asymmetry is performed. The curve from Figure \ref{fig:TNG_A_corr} is replicated here, labelled ``No PSF''. The other three curves represent the relationship between $A_{\rm int}$-$A_{\rm corr}$ and $S/N$ for different resolution bins. Applying a PSF to the image always shifts the curve up, since $A_{\rm obs}$ will systematically decrease. This shift upward is greater for smaller resolutions. Right panel: The same investigation, but for the $A_{\rm noise}/1.2$ subtraction method. The curve from Figure \ref{fig:TNG_A_corr} is replicated here, labelled ``No PSF'', whereas the other three curves are separated into resolution bins. The same systematic upward shift is seen in both methods. Moreover, beyond $S/N=200$ the change in asymmetry is almost entirely due to the degraded resolution. The fact that both no noise subtraction and subtracting $A_{\rm noise}/1.2$ show similar changes from resolution beyond $S/N>200$ implies that above that $S/N$ we need only correct for resolution effects on asymmetry.}
\label{fig:A_PSF_bins}
\end{figure*}
Figure \ref{fig:A_PSF_bins} showcases how different resolution values change the relationship between $A_{\rm int}-A_{\rm corr}$ and $S/N$ for the no noise subtraction and $A_{\rm noise}/1.2$ subtraction correction methods. The original curve (with no PSF convolution) from Figure \ref{fig:TNG_A_corr} is replicated here for reference. Although the impact on asymmetry is seen most strongly at low signal-to-noise, at $S/N>100$ the change in asymmetry is dominated by the systematic decrease that comes from degrading resolution. The shift from decreasing resolution is also similar between the two correction methods, despite the $A_{\rm int}$-$A_{\rm corr}$ relationship differing between the two. The consistent upward shift of the curves above $S/N$=200 implies that there is some predictable change in asymmetry from the PSF that is independent of signal-to-noise, and hence Equation \ref{eqn:A_fit_PSF} could be used to correct the asymmetry index irrespective of the choice of noise correction method. For $S/N<200$, the effects of noise and resolution are more entangled (almost indistinguishable for $S/N<100$) and need to be corrected for simultaneously.
To address the disparity between resolution changes in the high and low $S/N$ regimes, we recommend a piecewise correction to asymmetry. For $S/N>200$ the asymmetry only needs to be corrected for resolution using Equation \ref{eqn:A_fit_PSF}, given that the change in asymmetry from noise is comparatively small. For $S/N<200$ we need to first correct for noise using Equation \ref{eqn:A_noise_fit}. The resulting $A_{\rm corr}$ still retains effects from resolution, so we use LM-Fit to determine the relationship between $A_{\rm corr}$, Res, and $A_{\rm int}$, assuming they follow the same form as Equation \ref{eqn:A_fit_PSF_blank}. If $A_{\rm corr}$ is the predicted asymmetry calculated from Equation \ref{eqn:A_noise_fit} using $A_{\rm Res + Noise}$ and $S/N$, the following equation describes a combined noise and resolution correction for asymmetry:
\begin{equation}
\begin{aligned}
& \textbf{S/N<200: } A_{\rm int} = ((1.930\pm0.118)\mathrm{Res}^{-1} -(0.027\pm0.060)\mathrm{Res}^{-2} \\ & -(0.012\pm0.007)\mathrm{Res}^{-3}+1)A_{\rm corr} + (-(15.209\pm1.260)\mathrm{Res}^{-1} \\ & -(3.000\pm0.922)\mathrm{Res}^{-2} + (0.307\pm0.127)\mathrm{Res}^{-3})A_{\rm corr}^2 \\ & + ((23.629\pm3.220)\mathrm{Res}^{-1} +(11.098\pm3.000)\mathrm{Res}^{-2} \\ & - (1.042\pm0.454)\mathrm{Res}^{-3})A_{\rm corr}^3\\
& \textbf{S/N>200: } A_{\rm int} = \mathrm{Equation } \ref{eqn:A_fit_PSF}
\end{aligned}
\label{eqn:A_piecewise}
\end{equation}
Figure \ref{fig:A_piecewise_correct} demonstrates how well Equation \ref{eqn:A_piecewise} recovers the intrinsic asymmetry. Compared to the isolated noise (see Figure \ref{fig:Fit_scat}) and resolution (see Figure \ref{fig:A_PSF_corr_fit}) corrections, the median offset when correcting for both simultaneously is larger by a factor of 10, with an increased scatter as well. A slightly worse fit is to be expected, considering resolution and noise change asymmetry in opposing ways. Because the effects of noise and resolution on asymmetry can cancel each other out, it is very difficult for a fit based on $A_{\mathrm{Res}+\mathrm{noise}}$ to determine the individual change from resolution or noise alone. The overall broadening of the distribution of $A_{\mathrm{predict}}$ vs $A_{\mathrm{int}}$ stems from the dependence of the resolution-driven decrease in asymmetry on the intrinsic asymmetry: the absolute change in asymmetry from degrading resolution cannot be greater than the initial asymmetry value. Thus when the fit correction does poorly on a small $A_{\mathrm{int}}$, resolution has a smaller effect than the noise contribution, which will drive an overestimation in $A_{\mathrm{predict}}$. When the fit does poorly on a large $A_{\mathrm{int}}$, the effects of resolution will dominate and lead to an underestimated $A_{\mathrm{predict}}$. Be that as it may, the broadening is driven entirely by galaxies with a combined low $S/N$ and Res value (roughly $S/N<20$ and Res$<1$). For better-quality measurements the combined fit performs well enough that, on average, the combined correction is sufficient for the work herein. Future analysis could be devoted to improving the joint correction, using other advanced fitting techniques such as machine learning algorithms to disentangle the two effects for even the smallest $S/N$ and Res values.
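Operationally, the piecewise recipe can be sketched as below (central coefficient values only; \texttt{correct\_asymmetry\_noise} stands in for an implementation of Equation \ref{eqn:A_noise_fit}, and \texttt{correct\_asymmetry\_resolution} for Equation \ref{eqn:A_fit_PSF}):
\begin{verbatim}
def correct_asymmetry(a_obs, snr, res):
    """Piecewise noise + resolution correction for asymmetry."""
    if snr > 200:
        return correct_asymmetry_resolution(a_obs, res)
    a_corr = correct_asymmetry_noise(a_obs, snr)  # noise correction first
    f1 = 1.930 / res - 0.027 / res**2 - 0.012 / res**3
    f2 = -15.209 / res - 3.000 / res**2 + 0.307 / res**3
    f3 = 23.629 / res + 11.098 / res**2 - 1.042 / res**3
    return (f1 + 1.0) * a_corr + f2 * a_corr**2 + f3 * a_corr**3
\end{verbatim}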
\begin{figure}
\includegraphics[width=0.87\columnwidth]{Figures/Asymmetry_FWHM_Fit_2versions_contourfill.pdf}
\centering
\caption{Density contours of the asymmetry predicted by Equation \ref{eqn:A_piecewise} to correct for resolution and noise, plotted against the intrinsic asymmetry. The red dashed line represents where the two values would be equal to each other. The median difference is only a small fraction of the intrinsic asymmetry, though it is larger than for the corrections for noise (Figure \ref{fig:Fit_scat}) and resolution (Figure \ref{fig:A_PSF_corr_fit}) alone. Correcting for noise and resolution simultaneously is inherently more difficult because the two artefacts change asymmetry in opposite directions, making it difficult to distinguish what fraction of the change in asymmetry stems from noise or resolution. Most of the galaxies that constitute the outer contours have both a small signal-to-noise and resolution, demonstrating a weakness of the combined fit correction for the worst quality asymmetries. However, a median offset of 0.0098 is still good enough to make decent asymmetry measurements for the majority of the sample.}
\label{fig:A_piecewise_correct}
\end{figure}
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/A_PSF_Fit_Correction_Just_TNG_Rescombined.pdf}
\centering
\caption{The difference between intrinsic asymmetry and the asymmetry measured for four different noise correction methods with no resolution effects (the same as in Figure \ref{fig:TNG_A_corr}), with resolution effects, and after resolution correction. Given that noise and resolution affect asymmetry in ways that can cancel out, it is difficult to disentangle the two observational effects. Thus the resolution corrected curve approximates the curve with no PSF, but not exactly (particularly at low $S/N$). The LM Fit method reproduces the intrinsic asymmetry at low $S/N$ most accurately of the four methods.}
\label{fig:A_PSF_corr_methods}
\end{figure*}
We demonstrate in Figure \ref{fig:A_PSF_corr_methods} how correcting for noise and PSF simultaneously with our LM-Fit equations is superior to correcting for them separately. Each panel represents a different noise correction method as presented in Section \ref{sec:noise}, with the original curves from Figure \ref{fig:TNG_A_corr}. When we degrade the resolution of the image and attempt the same correction (be it subtracting a fraction of the noise asymmetry or predicting asymmetry from Equation \ref{eqn:A_noise_fit}), the curves are shifted upwards due to the systematically lower asymmetry measurement once the resolution is degraded. For the ``noise subtraction'', ``$A_{\rm noise}/1.2$'', and ``no noise subtraction'' methods, we use Equation \ref{eqn:A_fit_PSF} to predict the intrinsic asymmetry from the noise-corrected asymmetry and the Res value. Doing this brings us closer to the original curve, but there is still some disparity, likely stemming from Equation \ref{eqn:A_fit_PSF} being constructed from asymmetries with no noise contribution and the slightly imperfect nature of these three noise correction methods. The curves are better recovered above $S/N=100$ than below, supporting the conclusion that noise must be correctly accounted for to use Equation \ref{eqn:A_fit_PSF} on more realistic images.
The "LM Fit" method, on the other hand, uses Equation \ref{eqn:A_piecewise} to correct for resolution and noise simultaneously. In this case, the Noise+PSF corrected curve almost completely overlaps the original noise correction curve with no resolution effects. Though Figure \ref{fig:A_PSF_corr_methods} demonstrates that a resolution correction fit like Equation \ref{eqn:A_fit_PSF} can be applied to noise correction methods that do not require a fit, the most accurate way to account for both noise and resolution is to correct them with a fit function simultaneously (Equation \ref{eqn:A_piecewise}).
\subsubsection{Resolution Corrected Concentration}
In Section \ref{sec:noise} we demonstrated that concentration is unaffected by noise. Figure \ref{fig:A_PSF_corr_TNG_app} shows that the fractional change in concentration as a function of resolution is unchanged for different signal-to-noise bins. That is, a given resolution value will have the same effect on concentration for both a high and a low signal-to-noise galaxy. This uniformity is expected considering how little adding random noise changes the concentration. Therefore a function can be fit to this data to correct any concentration underestimation from resolution, without also needing to account for noise (unlike asymmetry). We discuss the limitations of the LM-Fit method further in Section \ref{sec:compare}.
\begin{figure}
\includegraphics[width=0.8\columnwidth]{Figures/C_donothing_SN_PSF_discretebins_Res.pdf}
\centering
\caption{The mean fractional difference between intrinsic and observed concentration as a function of resolution for different $S/N$ bins. The three curves overlap each other, indicating that a single resolution correction can be applied to concentrations of varying $S/N$. This logically follows the results from Figure \ref{fig:Int_Noise}, which demonstrated that noise has a negligible effect on the concentration measurement.}
\label{fig:A_PSF_corr_TNG_app}
\end{figure}
Just as with the asymmetry correction for noise, we postulate that the relationship between $C_{\rm int}$, $C_{\rm Res}$, and Res=$R_{\rm half}$/FWHM can be described by a single function (Equation \ref{eqn:C_fit_blank}).
\begin{equation}
\begin{aligned}
& C_{\rm int} = (F_1(\rm Res)+1)C_{\rm Res} + F_2(\rm Res)C_{\rm Res}^2 + F_3(\rm Res)C_{\rm Res}^3
\end{aligned}
\label{eqn:C_fit_blank}
\end{equation}
\noindent where $F_3$, $F_2$, and $F_1$ are of the same form as Equation \ref{Eqn:Polynomial_FWHM}. Once again we have constructed the relationship to ensure that $C_{\rm int}=C_{\rm Res}$ as the resolution goes to infinity. Utilising LM-Fit, we find that the best equation to express this relationship is:
\begin{equation}
\begin{aligned}
& C_{\rm int} = (-(3.616\pm0.296)\rm Res^{-1} - (1.467\pm0.268)\rm Res^{-2} \\ & -(0.6106\pm0.0716)\rm Res^{-3} + 1)C_{\rm Res} + ((2.159\pm0.164)\rm Res^{-1} \\ & + (0.996\pm0.225)\rm Res^{-2} + (0.7367\pm0.0848)\rm Res^{-3})C_{\rm Res}^2 \\ & + (- (0.3090\pm0.0226)\rm Res^{-1} - (0.0825\pm0.0432)\rm Res^{-2} \\ & -(0.2197\pm0.0247)\rm Res^{-3})C_{\rm Res}^3 \\
\end{aligned}
\label{eqn:C_fit}
\end{equation}
\noindent This function predicts concentration values that are, on average, within 1$\%$ of the intrinsic concentration value (with a median difference of $-0.033$). Figure \ref{fig:C_PSF_fit} further demonstrates that the predicted and intrinsic concentration are tightly correlated.
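As with the asymmetry corrections, Equation \ref{eqn:C_fit} reduces to a short function (central coefficient values only, uncertainties dropped):
\begin{verbatim}
def correct_concentration_resolution(c_res, res):
    """Best-fit resolution correction for concentration (central values);
    no noise term is needed since C is insensitive to noise."""
    f1 = -3.616 / res - 1.467 / res**2 - 0.6106 / res**3
    f2 = 2.159 / res + 0.996 / res**2 + 0.7367 / res**3
    f3 = -0.3090 / res - 0.0825 / res**2 - 0.2197 / res**3
    return (f1 + 1.0) * c_res + f2 * c_res**2 + f3 * c_res**3
\end{verbatim}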
\begin{figure}
\includegraphics[width=0.9\columnwidth]{Figures/Concentration_26_notitle_FWHM_contoursfill.pdf}
\centering
\caption{Density contours for the concentration predicted by Equation \ref{eqn:C_fit} versus the intrinsic concentration, with the red dashed line representing $C_{\rm predict}$=$C_{\rm int}$. The small, unbiased scatter about the line of equality demonstrates how effective the resolution correction is for this simulation. The median difference in concentration is less than 1$\%$ of the average measured concentration.}
\label{fig:C_PSF_fit}
\end{figure}
Thus, both asymmetry and concentration can be reliably corrected for resolution as low as 0.05. Such a correction is crucial to allow comparison of non-parametric morphology measurements for different resolutions, particularly if one wants to compare the morphologies of different observational data sets. The accuracy and convenience of a resolution correction is crucial in particular for asymmetry, given the more complex choices required to correct asymmetry for noise. In the next section we evaluate the general applicability of the resolution and noise correction we have outlined thus far, by testing them on unseen TNG data and the Illustris simulation.
\section{Discussion}
\label{sec:discussion}
\subsection{Illustris Comparison}
\label{sec:compare}
In Section \ref{sec:analysis}, we demonstrated how by fitting the relationship between observed and measured asymmetry/concentration, it is possible to correct for most of the bias created by an observational effect such as noise/resolution. For asymmetry, this fitting method performs better than other correction methods we investigate for background noise. In particular, the fitting method allows for correction of asymmetry measurements down to very low $S/N$. However, before such a fitting method can be used in practice, we must check its universality, or whether its form is appropriate only for the sample of TNG galaxies used thus far.
We first check that Equations \ref{eqn:A_noise_fit}, \ref{eqn:A_fit_PSF}, \ref{eqn:A_piecewise}, and \ref{eqn:C_fit} are not overfit to the training sample by applying those corrections to a set of test TNG galaxies not included in the initial 1000 galaxy set from which the fits were determined. We select 1000 new stellar mass maps from TNG galaxies, using the same quality cuts described in Section \ref{sec:TNG}, but only selecting galaxies we have not yet used in the analysis. We select the 1000 galaxies randomly, and thus the global properties (such as total stellar mass and stellar half mass radius) of the test set are comparable to our initial TNG sample. We then add noise from the same range of input $S/N$ values, and convolve with a PSF to achieve the same range of resolution values (refer back to Section \ref{sec:CAS} for details). We then test how well Equations \ref{eqn:A_noise_fit}, \ref{eqn:A_fit_PSF}, \ref{eqn:A_piecewise}, and \ref{eqn:C_fit} recover the intrinsic asymmetry/concentration values for this new dataset.
The top row of Figure \ref{fig:Check_overfit_TNG} presents the quality of the predicted asymmetry for these new galaxies: correcting for noise using Equation \ref{eqn:A_noise_fit} (1st panel), for resolution using Equation \ref{eqn:A_fit_PSF} (2nd panel), and for resolution and noise using Equation \ref{eqn:A_piecewise} (3rd panel). There are 20 galaxies (2\% of the sample) for which the asymmetry is predicted to be unrealistically high when correcting for noise and resolution together ($A_{\rm predict}>1$); we remove these from the rest of the analysis so the statistics measured reflect the behaviour of the majority of the sample rather than a few outliers. The 4th panel shows how well Equation \ref{eqn:C_fit} corrects the concentration for resolution. The differences between the predicted and intrinsic values for this new sample are congruent with those seen in the training sample (compare to Figure \ref{fig:Fit_scat} (median offset $-0.001$), Figure \ref{fig:A_PSF_corr_fit} (median offset $-0.0032$), Figure \ref{fig:A_piecewise_correct} (median offset $-0.0098$), and Figure \ref{fig:C_PSF_fit} (median offset $-0.033$), respectively). Thus there is no evidence that our equations are overfit to our initial galaxy sample. With that confirmed, we can now investigate the robustness of the fit: i.e., are Equations \ref{eqn:A_noise_fit}, \ref{eqn:A_fit_PSF}, \ref{eqn:A_piecewise}, and \ref{eqn:C_fit} robust enough to be applied to other simulations, or ultimately, to observational data?
\begin{figure*}
\centering
\subfloat[Unseen TNG Galaxies]{\includegraphics[width=0.9\textwidth]{Figures/Test_overfit_TNG_PSF_noise_contourfill.pdf}}%
\qquad
\subfloat[Illustris Galaxies]{\includegraphics[width=0.9\textwidth]{Figures/Test_overfit_Illustris_PSF_noise_2versions_contourfill.pdf}}%
\caption{Top row: Density contours of the intrinsic morphologies compared to values predicted by the fits presented in this work for a set of previously unseen TNG galaxies, with the dashed green line representing where the predicted values equal the intrinsic value. The first panel shows the asymmetry predicted after noise correction, the 2nd shows asymmetry predicted after resolution correction, the 3rd shows asymmetry corrected for both resolution and noise, and the fourth shows concentration predicted after resolution correction (no noise correction for concentration is necessary). Bottom row: Density contours of the intrinsic morphologies compared to values predicted by the fits presented in this work for a set of Illustris galaxies, with the dashed green line representing where the predicted values equal the intrinsic value. The order of whether noise, resolution, or a combination of both is corrected for is the same as in the top row. For the combined noise and resolution correction (third column), galaxies with asymmetries predicted to be greater than 1 (far beyond the scope of $A_{\rm int}$) are removed from the analysis (20 in the unseen TNG sample and 4 in Illustris). For both datasets there is no clear bias in the predicted values, and the scatter on the predicted values reflects that seen in the training data (see Figures \ref{fig:Fit_scat}, \ref{fig:A_PSF_corr_fit}, \ref{fig:A_piecewise_correct} and \ref{fig:C_PSF_fit}). For comparison, the initial median offset for the noise correction of asymmetry is $-0.001$, for the resolution correction of asymmetry is $-0.0032$, for the resolution and noise corrected asymmetry is $-0.0098$, and for the resolution correction of concentration is $-0.033$. The combined noise and resolution correction does have a broader distribution than the other corrections, where galaxies with both a small $S/N$ and Res value result in poorer asymmetry predictions. But the same effect is seen in the original TNG sample (see Figure \ref{fig:A_piecewise_correct}). The replicability of the difference between the intrinsic and predicted indices with both Illustris and TNG implies that our corrections are viable for different galaxy samples.}
\label{fig:Check_overfit_TNG}
\end{figure*}
Assessing the accuracy of our fitting method is not possible on observational data, given that we cannot know the ``intrinsic'' asymmetry or concentration of an observed galaxy. However, we can use a different simulation suite as a test case, to see how well our suggested fitting corrections work on an entirely different set of data. For this, we choose to test our fit on a sample of 1000 galaxies from the Illustris-1 simulation \citep{2014Natur.509..177V,Genel2014IntroducingTime,Sijacki2015TheTime}. Though simulated using the same code as Illustris, TNG has an updated galaxy formation model which is known to affect the morphologies of the galaxies \citep{Rodriguez-Gomez2017TheMorphology,Rodriguez-Gomez2019TheObservations}. If the underlying physics in a simulation changed either the asymmetry or concentration correction fits, then it would be unlikely we could apply those same fits to other simulations or observations. More importantly, testing on a different dataset will help further verify the signal-to-noise regimes in which the asymmetry fit correction is dependable.
The middle and right panels of Figure \ref{fig:AC_TNG_Il} show the differing distributions of asymmetry and concentration for the TNG and Illustris sub-samples used in this work, along with the stellar half-mass radii of those galaxies (left panel). Though the asymmetry distributions are relatively similar, TNG has more highly concentrated galaxies than Illustris. \cite{Rodriguez-Gomez2019TheObservations} noted this difference in concentration between TNG and Illustris, attributing it to the more accurate sizes of low-mass galaxies (see left panel of Figure \ref{fig:AC_TNG_Il}) caused by the treatment of galactic winds, and to the enhanced quenching of high-mass galaxies from the new AGN and stellar feedback model. Given that our correction methods require input from both the observed concentration/asymmetry and the applied resolution/noise, the difference in distributions should not alter the viability of a comparison between the two samples.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/Intrinsic_Comparison.pdf}
\centering
\caption{Distributions of stellar half-mass radius (left), intrinsic asymmetry (middle) and intrinsic concentration (right) for the 1000 galaxies selected from TNG (yellow) and Illustris (pink). Note that Illustris galaxies are on average larger and less concentrated, but have similar asymmetries to those in TNG. TNG has more accurate sizes for low mass galaxies (reflected in the smaller median $R_{\mathrm{half}}$ in TNG compared to Illustris), which likely leads to higher concentration values in TNG than were observed in Illustris.}
\label{fig:AC_TNG_Il}
\end{figure*}
We then apply the same noise addition (and range of $S/N$ values) to Illustris as described in Section \ref{sec:methods}, such that we can replicate the curves displayed in the left panel of Figure \ref{fig:TNG_A_corr} with Illustris, and compare them to the same results acquired with TNG (see Figure \ref{fig:TNG_Ill_A_corr}). Doing no noise correction, subtracting $A_{\rm noise}$, and subtracting $A_{\rm noise}/1.2$ are shown to all work equally well for both Illustris and TNG in all signal-to-noise regimes. Not only that, but because the relationship between $A_{\rm int}-A_{\rm corr}$ and signal-to-noise is almost identical between TNG and Illustris for these three methods, we could use those relationships to predict the offset in asymmetry for any of these correction methods when used with data of a particular signal-to-noise. The bottom left panel of Figure \ref{fig:TNG_Ill_A_corr} demonstrates how using Equation \ref{eqn:A_noise_fit} to correct Illustris asymmetries for noise works just as well as it did on TNG, despite the relationship being fit to minimize $A_{\rm int}-A_{\rm corr}$ for TNG alone. The bottom left panel of Figure \ref{fig:Check_overfit_TNG} shows this more explicitly with the direct comparison of $A_{\rm predict}$ and $A_{\rm int}$, confirming that the median difference and scatter in the difference between the asymmetry predicted by Equation \ref{eqn:A_noise_fit} and the intrinsic Illustris asymmetry are comparable to those seen in TNG (see Figure \ref{fig:Fit_scat}).
\begin{figure*}
\includegraphics[width=0.9\textwidth]{Figures/A_approximations_sep_methods_LMFit_smooth_nobound.pdf}
\centering
\caption{The mean difference between the intrinsic and corrected asymmetry as a function of signal-to-noise for subtracting $A_{\rm noise}$ (top left), subtracting $A_{\rm noise}/1.2$ (top right), predicting asymmetry from Equation \ref{eqn:A_noise_fit} (bottom left), and doing no noise correction (bottom right) for both our TNG (the dark curve) and Illustris (the light curve) galaxy sets. All four correction methods match between the galaxy samples, implying that the relationship between the quality of correction and $S/N$ is the same no matter which simulation is examined.}
\label{fig:TNG_Ill_A_corr}
\end{figure*}
Next we test the effects of resolution on Illustris, and how well Equations \ref{eqn:A_fit_PSF}, \ref{eqn:A_piecewise}, and \ref{eqn:C_fit} correct for them. Given the overall larger size of Illustris galaxies (see the distribution of $R_{\rm half}$ in the left panel of Figure \ref{fig:AC_TNG_Il}), we choose PSF FWHMs such that the Res values of Illustris match those in TNG (rather than applying the same range of PSFs to Illustris and TNG). If we applied PSFs ranging from 0.002"-7" FWHM to Illustris, they would have less effect on the overall resolution of Illustris galaxies than they did on TNG, given their greater size. In such a scenario we would only be testing the PSF correction for comparatively ``good'' resolutions. Instead we use the Res values in TNG to compute a new range of PSF values to apply to Illustris (ranging from 0.002"-30" FWHM), which results in the same effective resolution of galaxy structure in the two simulations (note that this is distinct from the resolution of the simulation itself).
We can use Equations \ref{eqn:A_fit_PSF} and \ref{eqn:C_fit} to correct for the changes to asymmetry/concentration in Illustris galaxies with different resolutions. The bottom row of Figure \ref{fig:Check_overfit_TNG} demonstrates how well the predicted asymmetry (2nd panel) and concentration (4th panel) reflect the intrinsic values. The median difference and scatter in difference for both asymmetry and concentration are comparable to those for the predicted values in TNG (compare to Figures \ref{fig:A_PSF_corr_fit} and \ref{fig:C_PSF_fit}). The distribution of $C_{\rm predict}$ vs. $C_{\rm int}$ looks narrower than that in TNG, stemming from the narrower range and overall smaller $C_{\rm int}$ values in Illustris (see Figure \ref{fig:AC_TNG_Il}). We also demonstrate how the combined resolution and noise correction (Equation \ref{eqn:A_piecewise}) works on Illustris in the 3rd panel, where the median difference is similar to both that in the unseen TNG sample and the initial test sample (all are approximately off by 0.01, still well within a ``good'' asymmetry measurement).
The different physical models that shape TNG and Illustris galaxies do not alter the quality of the resolution and noise corrections we have described in this work. Thus, these corrections could feasibly be used on other datasets or even observations. The benefit of using simulations is that we implement any changes between the intrinsic and observed data ourselves, and thus understand them completely. The same care needs to be taken to understand the differences between an observational dataset and the simulation on which the fits are based. For example, we do not have to account for other observational effects that might add to (or detract from) the asymmetry beyond noise, such as neighbouring objects outside the field-of-view (when external flux leaks into the image of a galaxy). Our correction methods also depend on how $S/N$ and resolution are quantified; to use the fit coefficients provided in this work, $S/N$ and Res need to be measured in the same way as laid out in this analysis. Though the work herein stands as a proof of concept, we caution those wishing to use a fit correction to take into account the unique features of their galaxy sample, and to consider creating a fit on simulations meant to emulate that sample.
\subsection{Implications}
\label{sec:implications}
There are a few studies that have attempted to measure non-parametric morphologies on data other than broad band light images, and how to account for the complex uncertainties in that scenario. \cite{Lanyon-Foster2012TheZ1} characterize the morphology of stellar mass maps determined from Hubble Space Telescope imaging using CAS as well as other morphological measurements. The asymmetry of their stellar mass maps was on average larger than from the optical light image, which they attribute in part to random fluctuations in the mass-to-light ratio. The smallest asymmetries they measure for stellar mass maps are negative, implying that the background asymmetry they estimate from both regular background fluctuations and a generalized contribution from mass-to-light ratio fluctuations is larger than the asymmetry of the galaxy. Both of these findings are in agreement with our results of high uncertainties leading to falsely large asymmetries, and subtracting the total $A_{\rm noise}$ resulting in an over-corrected asymmetry.
\cite{Giese2016Non-parametricLopsidedness} is the only other paper to suggest a fitting method to correct asymmetry for noise, whilst measuring the asymmetries of extended HI gas disks from the Westerbork observations of neutral Hydrogen in Irregular and SPiral galaxies (WHISP) sample \citep{2001ASPC..240..451V}. To account for the generally low $S/N$ measurements of HI, \cite{Giese2016Non-parametricLopsidedness} use the Tilted Ring Fitting Code (TIRIFIC) model to approximate a correction in asymmetry based on the $S/N$. Using machine learning they are able to approximate the intrinsic asymmetry within 0.05 for most galaxies. Rather than creating a universal correction method, they fine-tune their correction to the WHISP sample to further their study. Indeed, the problem with creating a fit based on a simulated galaxy sample is the need to calibrate simulated galaxies to reflect the observations of interest. This is particularly relevant if one is interested in measuring the asymmetry of a galaxy property like surface mass density, or star-formation rate density. In contrast, we have investigated the universality of multiple different noise correction methods, and just as importantly, the signal-to-noise regimes for which those methods are viable. Using the work herein one can evaluate an approximate accuracy of asymmetry and concentration measurements for a dataset and decide if any correction for bias in the measurements is necessary (and the most appropriate approach to correct for that bias).
\begin{table*}
\captionof{table}{Summary of the validity of different asymmetry noise correction methods in various $S/N$ regimes. For each method (no noise correction (``No corr''), subtract $A_{\rm noise}$ from the observed asymmetry (``Noise Sub''), subtract $A_{\rm noise}/1.2$ from the observed values (``$A_{\rm noise}/1.2$''), and corrected based on Equation \ref{eqn:A_noise_fit} (``LM Fit'')) we provide the mean difference and scatter in difference between the intrinsic asymmetry and the measured asymmetry. From this one can approximate the bias in asymmetry measurement for data with differing $S/N$ regimes. The signal-to-noise must be measured as laid out in this analysis, using the median within an aperture of 1$R_{\rm half}$ (an alternative radius of comparable size to $R_{\rm half}$ could be used as well). \label{tab:A_noise_corr_TNG}}
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} ccccccccc}
S/N & No Corr Mean & No Corr $\sigma$ & Noise Sub Mean & Noise Sub $\sigma$ & $A_{\rm noise}/1.2$ Mean & $A_{\rm noise}/1.2$ $\sigma$ & LM Fit Mean & LM Fit $\sigma$\\
\hline
S/N<5 & -0.772 & 0.167 & 0.122 & 0.044 & -0.027 & 0.043 & 0.009 & 0.066 \\
5<S/N<10 & -0.428 & 0.137 & 0.103 & 0.043 & 0.011 & 0.039 & 0.0 & 0.07 \\
10<S/N<25 & -0.349 & 0.124 & 0.103 & 0.043 & 0.028 & 0.036 & -0.005 & 0.075 \\
25<S/N<50 & -0.135 & 0.112 & 0.06 & 0.034 & 0.028 & 0.02 & -0.01 & 0.043 \\
50<S/N<75 & -0.039 & 0.023 & 0.035 & 0.017 & 0.023 & 0.013 & -0.006 & 0.02 \\
75<S/N<100 & -0.024 & 0.013 & 0.03 & 0.015 & 0.021 & 0.011 & -0.001 & 0.013 \\
100<S/N<150 & -0.017 & 0.01 & 0.025 & 0.011 & 0.018 & 0.009 & 0.001 & 0.011 \\
150<S/N<200 & -0.01 & 0.005 & 0.019 & 0.006 & 0.014 & 0.005 & 0.002 & 0.007 \\
200<S/N<250 & -0.006 & 0.003 & 0.016 & 0.007 & 0.012 & 0.005 & 0.003 & 0.005 \\
250<S/N<300 & -0.004 & 0.003 & 0.013 & 0.006 & 0.01 & 0.004 & 0.002 & 0.003 \\
300<S/N<400 & -0.003 & 0.002 & 0.011 & 0.005 & 0.009 & 0.004 & 0.002 & 0.002 \\
400<S/N<600 & -0.002 & 0.001 & 0.007 & 0.003 & 0.006 & 0.003 & 0.0 & 0.002 \\
600<S/N<900 & -0.001 & 0.001 & 0.005 & 0.002 & 0.004 & 0.002 & -0.001 & 0.001 \\
900<S/N<1200 & -0.0 & 0.0 & 0.004 & 0.002 & 0.003 & 0.002 & -0.0 & 0.0 \\
\end{tabular*}
\end{table*}
The analysis within this work provides the opportunity to approximate both the bias in asymmetry, and the scatter in that bias, created by different amounts of noise in an image, so that future works can check the quality of asymmetry measurements for the dataset being used.
Table \ref{tab:A_noise_corr_TNG} provides a simplified version of the results from Figure \ref{fig:TNG_A_corr}, supplying both the mean offset and the standard deviation of that offset for the four noise-correction methods discussed in this work. Using Table \ref{tab:A_noise_corr_TNG}, one can check what the bias would be for a desired asymmetry correction method at the signal-to-noise of their particular set of observations, and determine which method would work best to both minimize the bias and avoid varying biases within the same sample. As the community measures asymmetry from an increasingly diverse set of observations and data products, we hope this table can serve as a cautionary step as we explore this new regime of non-parametric morphologies.
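As a concrete sketch of such a check (Python; the dictionary below transcribes only two rows of Table \ref{tab:A_noise_corr_TNG} for brevity, and the helper function is hypothetical rather than released code):
\begin{verbatim}
# Sketch: look up the expected asymmetry bias (mean, sigma) for a given S/N,
# using values transcribed from Table 'tab:A_noise_corr_TNG'. Only two bins
# and two methods are included here as an illustration.
BIAS_TABLE = {
    # (S/N_low, S/N_high): {method: (mean of A_int - A_corr, sigma)}
    (5, 10):    {"no_corr": (-0.428, 0.137), "div_1p2": (0.011, 0.039)},
    (100, 150): {"no_corr": (-0.017, 0.010), "div_1p2": (0.018, 0.009)},
}

def expected_bias(snr, method):
    for (lo, hi), row in BIAS_TABLE.items():
        if lo < snr < hi:
            return row[method]
    raise ValueError("S/N outside the transcribed bins")

print(expected_bias(7.0, "no_corr"))  # -> (-0.428, 0.137)
\end{verbatim}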
It is paramount that, when measuring asymmetry for a set of galaxies, one can quantify the variability of the offset created by the noise contribution. If one is working with high-quality photometry, or strong spectral lines, it is likely that the change in asymmetry from noise will be the same within that sample (no matter the noise-correction method chosen). However, comparisons of that data to lower-quality datasets become ill-advised, an issue which will only become more relevant as state-of-the-art non-parametric morphology measurements are compared to literature values. For low signal-to-noise datasets it is even unwise to compare asymmetries within the dataset, given that the bias created by a large noise/uncertainty can vary so drastically between asymmetry values. This is a particular issue for those measuring asymmetries in HI gas \citep[e.g][]{Holwerda2011QuantifiedGalaxies,Giese2016Non-parametricLopsidedness,Reynolds2020HGalaxies}. A recent work by \cite{Baes2020NonparametricWavelengths} explored the change in asymmetry over a range of wavelengths from the UV to the submm, as well as in subsequently derived data products such as stellar mass, dust mass, and star-formation rate. Though they saw trends reflective of understood physical models, without assessing the varying signal-to-noise levels of those observations it is not clear to what extent those trends are driven by noise effects.
\subsubsection{Example of effects of noise on merger asymmetries}
The changes in asymmetry from noise contribution could lead to observing trends that do not exist, but they could also result in obscuring relationships we expect to see. We can use our methods of adding noise to simulations, and correcting for that noise with different methods, to investigate the noise effects on an example science question: does galaxy asymmetry increase as the result of galaxy-galaxy interactions?
Observational studies have reliably found that photometric asymmetry increases for galaxy pairs as the separation between the interacting pairs decreases \citep{Hernandez-Toledo2005THEEVOLUTION,DePropris2007TheRate,Casteels2014GalaxyInformation}. Asymmetric structures resulting from an interaction are tied to the tidal forces disturbing the galaxy morphology, therefore the correlation between pair separation and asymmetry should be apparent in both optical light and stellar mass maps. This serves as a good test case for how the heightened uncertainty of a mass measurement would affect the prominence of this trend.
To test how noise will impact asymmetry measurements of TNG stellar mass maps, we collect a new sample of interacting TNG galaxies on which we can apply the four different asymmetry measurements we have discussed. We select a sample of TNG galaxies at z=0 with $10^9\leq \textrm{M}_{\star}/\textrm{M}_{\odot}$ which are undergoing a major merger, i.e. they have a companion with a mass ratio greater than 0.1 that is closer than 100 kpc in 3D separation. We select 300 interacting galaxies from TNG, with the goal of having approximately (though not always exactly) 30 galaxies in each 10 kpc bin for 3D separations ($r_{3D}$) ranging from 0-100 kpc.
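A schematic of this selection (Python/NumPy; the array names are hypothetical placeholders for the group-catalogue fields, not the TNG API):
\begin{verbatim}
# Sketch: select z=0 major-merger candidates and bin them by 3D separation.
# 'mstar', 'mass_ratio' and 'r3d_kpc' stand in for catalogue columns.
import numpy as np

def select_interacting(mstar, mass_ratio, r3d_kpc):
    """Indices of galaxies with M* >= 1e9 Msun and a major companion
    (mass ratio > 0.1) within 100 kpc in 3D."""
    keep = (mstar >= 1e9) & (mass_ratio > 0.1) & (r3d_kpc < 100.0)
    return np.where(keep)[0]

def bin_by_separation(r3d_kpc, n_per_bin=30, width=10.0):
    """Aim for ~30 galaxies per 10 kpc bin over 0-100 kpc."""
    edges = np.arange(0.0, 100.0 + width, width)
    return [np.where((r3d_kpc >= lo) & (r3d_kpc < hi))[0][:n_per_bin]
            for lo, hi in zip(edges[:-1], edges[1:])]
\end{verbatim}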
\begin{figure*}
\includegraphics[width=0.75\textwidth]{Figures/A_vs_r3D_median_smooth_5methods.png}
\centering
\caption{The asymmetry of interacting galaxies in TNG as a function of their 3D separation from the nearest companion ($r_{3D}$). We include both the intrinsic asymmetry measurement (navy, identical in all panels), the observed asymmetry measurement (red), the observed asymmetry corrected for $A_{\rm noise}$ (pink), the observed asymmetry corrected for $A_{\rm noise}/1.2$ (green), and the asymmetry predicted by Equation \ref{eqn:A_noise_fit} (yellow). The four panels represent four different signal-to-noise ranges we apply to the same set of TNG galaxies, demonstrating how the trend between asymmetry and $r_{3D}$ is obscured in observed asymmetry as signal-to-noise decreases. Ideally all curves would match the navy curve, but as the $S/N$ regime decreases the different asymmetry measurements struggle to capture the true underlying trend between $A_{\rm int}$ and $r_{3D}$.}
\label{fig:A_rp}
\end{figure*}
We then apply four different $S/N$ regimes to this data: $S/N<40$, $40<S/N<80$, $80<S/N<120$, and $S/N>120$, using the same methods described in Section \ref{sec:CAS}. Figure \ref{fig:A_rp} demonstrates how the relationship between asymmetry and pair separation varies between the intrinsic value (navy), the observed value (red), the observed corrected for $A_{\rm noise}$ (pink), the observed corrected for $A_{\rm noise}/1.2$ (green), and the asymmetry predicted from Equation \ref{eqn:A_noise_fit} (the LM-Fit method, yellow). If we only examine these changes for small amounts of added noise ($80<S/N<120$ and $S/N>120$), the observed asymmetry traces the intrinsic asymmetry's relationship to $r_{3D}$ almost perfectly. Once we enter the intermediate signal-to-noise regime ($40<S/N<80$), the trend has diminished for observed asymmetry. This is because the magnitude of $A_{\rm noise}$ depends on $A_{\rm obs}$. Thus, at large $r_{3D}$ values, where galaxies have lower asymmetries compared to those in a later stage of interaction, the noise will have a greater impact on the observed asymmetry. As a result, galaxies at large separations will have an overestimated asymmetry, whereas the asymmetry of close galaxy pairs will be relatively unchanged by noise contribution.
Given that the change in asymmetry from noise differs with the intrinsic asymmetry, for the three lower signal-to-noise regimes in Figure \ref{fig:A_rp} some noise correction needs to be completed to capture the true relationship between asymmetry and pair separation. For $S/N<120$ the best traditional noise correction to recover the asymmetry trend at high $r_{3D}$ is subtracting $A_{\rm noise}/1.2$ from $A_{\rm obs}$. One can also use the LM-Fit method to correct for noise and achieve similar recovery, and this works well in the intermediate $S/N$ regime. In the lowest signal-to-noise bin ($S/N<40$), the relationship between asymmetry and $r_{3D}$ has completely disappeared for the observed asymmetry. Correcting with $A_{\rm noise}/1.2$ results in a trend closer to the intrinsic asymmetry, but it is still not as drastic a change in asymmetry between $r_{3D}$ values as expected. The LM-Fit method, though better at replicating the intrinsic asymmetry than the traditional $A_{\rm noise}$ correction, also does not capture the complete trend between asymmetry and pair separation, due to the large scatter in the fit for $S/N<50$. The degradation of this relationship with increased noise is an important example of how, even within a dataset of relatively uniform signal-to-noise values, the effects of $A_{\rm noise}$ can still alter the final conclusions. Those endeavouring to use non-parametric morphologies in their analysis will need to consider what degree of variation (or lack thereof) in asymmetry will result from noise effects, and what kind of noise correction is required for their desired analysis.
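For reference, the four estimates compared in Figure \ref{fig:A_rp} can be written schematically as follows (Python; \texttt{a\_noise\_fit} is a placeholder for the fit of Equation \ref{eqn:A_noise_fit}, whose coefficients we do not repeat here):
\begin{verbatim}
# Sketch: the four asymmetry estimates compared in Figure 'fig:A_rp'.
# a_obs and a_noise are the observed and background asymmetry terms;
# a_noise_fit stands in for the correction fit of Equation 'A_noise_fit'.
def asymmetry_estimates(a_obs, a_noise, snr, a_noise_fit):
    return {
        "observed":  a_obs,                   # no noise correction
        "noise_sub": a_obs - a_noise,         # traditional correction
        "div_1p2":   a_obs - a_noise / 1.2,   # rescaled correction
        "lm_fit":    a_noise_fit(a_obs, snr), # fit-based prediction
    }
\end{verbatim}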
\section{Conclusions}
\label{sec:summary}
Using stellar mass maps from the IllustrisTNG100-1 and Illustris-1 simulations, we investigated the impact of observational noise and resolution, and the accuracy of different methods to recover the intrinsic asymmetry and concentration parameters. Our main conclusions are the following:
\begin{itemize}
\item The traditional asymmetry metric is systematically biased by the presence of measurement uncertainties: at $S/N<100$ the observed asymmetry can be overestimated anywhere from a factor of 2 to 10 (see Figure \ref{fig:Int_Noise}). The commonly used method of correcting for the noise asymmetry contribution, in turn, underestimates the asymmetry (see Figure \ref{fig:Noise_Corr}).
\item An alternative correction method, which subtracts $A_{\rm noise}/1.2$ from the observed asymmetry reduces this systematic bias, reaching less than 0.05 difference by $S/N\sim25$ (Figure \ref{fig:divide_by}).
\item One can fit the relationship between the observed asymmetry and the signal-to-noise to predict the intrinsic asymmetry (see Equation \ref{eqn:A_noise_fit}), removing the bias in asymmetry created by noise as low as $S/N\sim5$ (Figure \ref{fig:A_PSF_corr_fit}).
\item Degraded resolution also changes the asymmetry (and concentration) measurement from its intrinsic value. A fit relationship is determined between observed asymmetry or concentration and resolution (Res=$R_{\rm half}$/FWHM) to correct for this bias (see Equations \ref{eqn:A_fit_PSF} and \ref{eqn:C_fit}, respectively), which accurately replicates the intrinsic values (Figures \ref{fig:A_PSF_corr_fit} and \ref{fig:C_PSF_fit}). Real asymmetries will need to be corrected for both resolution and noise, and though Equation \ref{eqn:A_piecewise} does not correct asymmetry as well as when just resolution needs to be accounted for (compare Figure \ref{fig:A_piecewise_correct} to Figure \ref{fig:A_PSF_corr_fit}), it still brings the majority of asymmetry measurements within $A_{\rm int}\pm0.05$.
\item The noise and resolution correction work equally well on an unseen set of TNG galaxies (confirming the corrections are not overfit to their training data) and a set of Illustris galaxies (demonstrating these corrections can be applied to a simulation using different physical models). See Figure \ref{fig:Check_overfit_TNG} for details.
\item We list various examples to demonstrate how the choice of asymmetry noise correction can alter science results, including our own hypothetical investigation into the relationship between asymmetry and galaxy interactions. Table \ref{tab:A_noise_corr_TNG} provides an important summary of the biases created by different corrections at various $S/N$ values, to help future works ascertain the best method for their intended study.
\end{itemize}
\section*{Acknowledgements}
The authors thank David Patton and Scott Wilkinson for their helpful discussion on the work herein, as well as the anonymous referee for their helpful report. MDT acknowledges the receipt of a Mitacs Globalink research award. AFLB \& RM gratefully acknowledge ERC grant 695671 'Quench' and support from the UK Science and Technology Facilities Council (STFC). MHH acknowledges support from William and Caroline Herschel Postdoctoral Fellowship Fund.
The simulations of the IllustrisTNG project used in this work were undertaken with compute time awarded by the Gauss Centre for Supercomputing (GCS) under GCS Large-Scale Projects GCS-ILLU and GCS-DWAR on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS), as well as on the machines of the Max Planck Computing and Data Facility (MPCDF) in Garching, Germany.
\section*{Data Availability}
Simulation data from TNG100-1 and Illustris-1 used in this work is openly available on the IllustrisTNG website, at \url{tng-project.org/data}.
\bibliographystyle{mnras}
\label{sec:intro}
A \emph{Coxeter group} $W$ is a group generated by a finite set
$S$ subject only to relations $s^2=1$ for $s\in S$ and
$(st)^{m_{st}}=1$ for $s\neq t\in S$, where $m_{st}=m_{ts}\in \{2,3,\ldots,\infty\}$.
Here the convention is that $m_{st}=\infty$ means that we do
not impose a relation between $s$ and $t$. We say that $W$ is
\emph{$2$-dimensional} if for any triple of distinct elements $s,t,r\in S$,
the group $\langle s,t,r\rangle$ is infinite. In other words,
$\frac{1}{m_{st}}+\frac{1}{m_{sr}}+\frac{1}{m_{tr}}\leq 1$.
Consider an arbitrary group $G$ with a finite symmetric generating set
$S$. For $g\in G$, let $\ell(g)$ denote
the \emph{word length} of $g$, that is, the minimal number $n$
such that $g=s_1\cdots s_n$ with $s_i\in S$ for $i=1,\ldots,
n$. Let $S^*$ denote the set of all words over $S$.
If $v\in S^*$ is a word of length $n$, then by $v(i)$ we
denote the prefix of $v$ of length $i$ for $i=1,\ldots, n-1$,
and the word $v$ itself for $i\geq n$. For $1\leq i\leq j\leq
n$ by $v(i,j)$ we denote the subword of $v(j)$ obtained by
removing $v(i-1)$. For a word $v\in S^*$, by $\ell(v)$ we denote
the word length of the group element that $v$ represents.
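For a minimal illustration of this notation: if $v=srt$ is a word of length $3$, then $v(2)=sr$, $v(2,3)=rt$, and $v(i)=srt$ for every $i\geq 3$.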
We say that $G$ is
\emph{biautomatic} if there exists a regular language $\mathcal
L\subset S^*$ (see Section~\ref{sec:regularity} for the definition of regularity) and a constant $C>0$ satisfying the following conditions (see \cite[Lem~2.5.5]{E}).
\begin{enumerate}[(i)]
\item For each $g\in G$, there is a word in $\mathcal L$
representing $g$.
\item For each $s\in S$ and $g,g'\in G$ with $g'=gs,$ and each $v,v'\in \mathcal L$
representing $g,g'$, for
all $i\geq 1$ we have $\ell(v(i)^{-1}v'(i))\leq C$.
\item For each $s\in S$ and $g,g'\in G$ with $g'=sg,$ and each $v,v'\in \mathcal L$
representing $g,g'$, for
all $i\geq 1$ we have $\ell(v(i)^{-1}s^{-1}v'(i))\leq C$.
\end{enumerate}
Our paper concerns the two following well-known open questions (see e.g.\ \cite[\S6.6]{FHT}).
\begin{question}
\label{q:1}
Are Coxeter groups biautomatic?
\end{question}
\begin{question}
\label{q:2}
Are groups acting properly and cocompactly on $2$-dimensional $\mathrm{CAT}(0)$ spaces biautomatic?
\end{question}
All Coxeter groups are known to be automatic (i.e.\ having a regular language satisfying (i) and (ii)) by \cite{BH}.
Biautomaticity has been established only in special cases: \cite{E} (Euclidean and hyperbolic), \cite{NiRe2003} (right-angled),
\cite{Bahls2006} and \cite{CaMu2005} (no Euclidean reflection triangles), \cite{Cap2009} (relatively hyperbolic).
\medskip
Question~\ref{q:2} is wide open. The assumption of $2$-dimensionality is essential, since recently Leary--Minasyan \cite{LeMi} constructed a group acting properly and cocompactly on a $3$-dimensional $\mathrm{CAT}(0)$ space that is not biautomatic. Even in the case of $2$-dimensional buildings, except for the right-angled and hyperbolic cases, the answer was known
only in particular instances, e.g.\ for many (but not all) proper cocompact actions on Euclidean buildings by \cite{GeSh1}, \cite{GeSh2},
\cite{CaSh1995}, \cite{Nos2000}, and \cite{Sw}.
\smallskip
To define a convenient language, we need the following.
Let $W$ be an arbitrary Coxeter group. For $g\in W$, we denote
by $T(g)\subseteq S$ the set of $s\in S$ satisfying $\ell(gs)<\ell(g)$.
By \cite[Thm~2.16]{R}, the group $\langle T(g)\rangle$ is finite. By $w(g)$ we denote the
longest element in $\langle T(g) \rangle$ (which is unique by \cite[Thm~2.15(iii)]{R}, and consequently it is an involution). Let $\Pi(g)=gw(g)$. By
\cite[Thm~2.16]{R}, we have $\ell(\Pi(g))+\ell(w(g))=\ell(g)$.
We define the \emph{standard language} $\mathcal L\subset S^*$ for $W$ inductively
in the following way. Let $v\in S^*$ be a word of length $n$.
If $v$ represents the identity element of $W$, then $v\in \mathcal L$ if and only if $v$ is the empty word. Otherwise,
let $g\in W$ be the group element represented by $v$ and let $k=\ell(w(g))$. We declare $v\in \mathcal L$ if and
only if $v(n-k)\in \mathcal L$ and $v(n-k+1,n)$ represents $w(g)$. In particular, $v(n-k)$ represents $\Pi(g)$. It follows inductively that $n=\ell(g)$. Such a language is called \emph{geodesic}. Note that the standard language satisfies part~(i) of the definition of biautomaticity.
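For a minimal illustration, consider the dihedral group with $S=\{s,t\}$ and $m_{st}=3$. For $g=st$ we have $T(g)=\{t\}$, hence $w(g)=t$ and $\Pi(g)=s$, so the only word of $\mathcal L$ representing $g$ is $st$. For the longest element $g=sts=tst$ we have $T(g)=\{s,t\}$, hence $w(g)=g$ and $\Pi(g)=\mathrm{id}$, so both reduced words $sts$ and $tst$ belong to $\mathcal L$.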
The paths in $W$ formed by the words in the standard language generalise the normal cube paths for $\mathrm{CAT}(0)$ cube complexes \cite[\S3]{NR} used to prove biautomaticity for right-angled (or, more generally, cocompactly cubulated) Coxeter groups \cite{NiRe2003}. Our main result is the following.
\begin{thm}
\label{thm:main}
If $W$ is a $2$-dimensional Coxeter group, then
it is biautomatic with $\mathcal L$ the standard language.
\end{thm}
Since the standard language is geodesic and preserved by the automorphisms of~$W$ stabilising~$S$, by \cite[Thm~6.7]{Sw} we have the following immediate consequence.
\begin{cor}
\label{cor:buildings}
Let $G$ be a group acting properly and cocompactly on a building of type $W,$ where $W$ is a $2$-dimensional Coxeter group. Then $G$ is biautomatic.
\end{cor}
One element of our proof of Theorem~\ref{thm:main} is:
\begin{thm}
\label{lem:regular}
Let $W$ be a Coxeter group. Then its standard language is
regular.
\end{thm}
In other words, the regularity and part (i) of the definition of biautomaticity are
satisfied for any Coxeter group $W$. However, it is not so
with part (ii). The \emph{$\widetilde A_3$ Euclidean group} is the Coxeter
group with
$S=\{p,r,s,t\}, m_{pr}=m_{rs}=m_{st}=m_{tp}=3,
m_{ps}=m_{rt}=2$.
\begin{thm}
\label{thm:second}
If $W$ is the $\widetilde A_3$ Euclidean group, then its
standard language does not satisfy part (ii) in the definition
of biautomaticity.
\end{thm}
Note, however, that by \cite[Cor~4.2.4]{E}, all Euclidean groups, in particular~$\widetilde A_3$, are biautomatic (with a different language).
\medskip
\noindent \textbf{Organisation.} In Section~\ref{sec:prel} we review the basic properties of Coxeter groups. In Section~\ref{sec:regularity} we prove Theorem~\ref{lem:regular}. For $2$-dimensional $W$, we verify parts~(iii) and~(ii) of the definition of biautomaticity in Sections~\ref{sec:partiii} and~\ref{sec:main}. This completes the proof of Theorem~\ref{thm:main}. We finish with the proof of Theorem~\ref{thm:second} in Section~\ref{sec:dim3}.
\medskip
\noindent \textbf{Acknowledgement.} We thank the referee for many helpful suggestions.
\section{Preliminaries}
\label{sec:prel}
By $X^1$ we denote the \emph{Cayley graph} of $W$, that is, the graph with vertex set $X^0=W$ and with edges joining
each $g\in W$ with $gs$, for $s\in S$. We call such an edge an
\emph{$s$-edge}. We call $gs$ the
\emph{$s$-neighbour} of $g$.
For $r\in W$ a conjugate of an element
of $S$, the \emph{wall} $\mathcal W_r$ of $r$ is the fixed point set of $r$ in $X^1$. We call $r$ the \emph{reflection} in $\mathcal W_r$ (for fixed $\mathcal W_r$ such $r$ is unique). If a midpoint of an edge $e$
belongs to a wall $\mathcal W$, then we say that $\mathcal W$ is \emph{dual} to~$e$ (for fixed~$e$ such a wall is unique).
We say that $g\in W$ is \emph{adjacent} to a wall~$\mathcal W$, if $\mathcal W$ is dual to an edge incident to $g$.
Each wall $\mathcal W$ separates $X^1$ into two components, and
a geodesic edge-path in $X^1$ intersects $\mathcal W$ at most once \cite[Lem~2.5]{R}.
For $T\subseteq S$, each coset $g\langle T\rangle\subseteq X^0$ for $g\in W$
is a \emph{$T$-residue}. A geodesic edge-path in $X^1$ with endpoints in a residue $R$ has all its vertices in $R$ \cite[Lem~2.10]{R}. We say that a wall $\mathcal W$ \emph{intersects}
a residue $R$ if $\mathcal W$ separates some elements of $R$. Equivalently, $\mathcal W$ is dual to an edge
with both endpoints in $R$.
\begin{thm}[{\cite[Thm~2.9]{R}}]
\label{thm:proj}
Let $W$ be a Coxeter group. Any residue $R$ of~$X^0$ contains a unique element $h$ with minimal $\ell(h)$. Moreover, for any $g\in R$ we have $\ell(h)+\ell(h^{-1}g)=\ell(g)$.
\end{thm}
As introduced in Section~\ref{sec:intro}, for $g\in W$ we denote
by $T(g)\subseteq S$ the set of $s\in S$ satisfying $\ell(gs)<\ell(g)$.
Let $R$ be the $T(g)$-residue containing $g$.
By \cite[Thm~2.16]{R}, the group $\langle T(g)\rangle$ is finite and, for $w(g)$ the
longest element in $\langle T(g) \rangle$, the unique element $h\in R$ from Theorem~\ref{thm:proj} is $\Pi(g)=gw(g)$. In particular,
we have $\ell(\Pi(g))+\ell(w(g))=\ell(g)$. Note that if $W$ is $2$-dimensional, then for each $g\in W$ we have $|T(g)|=1$ or $2$. See Figure~\ref{f:f0} for an example where $S=\{s,t,r\}$ with $m_{st}=2, m_{sr}=m_{tr}=4$, and $g=strst$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{f0}
\end{center}
\caption{$T(g)=\{ s,t\}$ and $w(g)=st=ts$.}
\label{f:f0}
\end{figure}
For $g\in W$, let $\mathcal W(g)$ be the set of walls $\mathcal W$ in $X^1$ that separate $g$ from the identity element $\mathrm{id}\in W$ and such that there is no wall $\mathcal W'$ separating $g$ from $\mathcal W$.
\begin{rem}
\label{rem:WandR}
Let $g\in W$ and let $R$ be the $T(g)$-residue containing $g$. Since $R$ is finite, all the walls intersecting $R$ belong to $\mathcal W(g)$. However, there might be walls in $\mathcal W(g)$ that do not intersect $R$. See Figure~\ref{f:f0} for an example, where we indicated all three walls of $\mathcal W(g)$ for $g=strst$.
\end{rem}
By the following Parallel Wall Theorem, there exists a bound on the distance in~$X^1$ between $g$ and each of the walls of $\mathcal W(g)$.
\begin{thm}[{\cite[Thm~2.8]{BH}}]
\label{thm:parallel}
Let $W$ be a Coxeter group. There is a constant $Q=Q(W)$ such that for any $g\in W$ and a wall $\mathcal W$ at distance $> Q$ from $g$ in $X^1$, there is a wall $\mathcal W'$ separating $g$ from $\mathcal W$.
\end{thm}
By $X$ we denote the \emph{Cayley complex} of $W$. It is the
piecewise Euclidean $2$-complex with $1$-skeleton $X^1$, all
edges of length $1$, and a regular $2m_{st}$-gon spanned on
each $\{s,t\}$-residue with $m_{st}<\infty$. If $W$ is
$2$-dimensional, then $X$ is $\mathrm{CAT}(0)$, see \cite[\S II.5.4]{BHa} and the link condition in \cite[\S II.5.6]{BHa}. ($X$ coincides then with the `Davis complex' of $W$.)
Walls in $X^1$ extend to (convex) walls in $X,$ which still separate $X$.
We will consider the action of $W$ on $X^0=W$ by left multiplication. This induces obvious actions of $W$ on $X^1, X$ and the set of walls.
\section{Regularity}
\label{sec:regularity}
A \emph{finite state automaton over $S$} (FSA) is a finite directed graph $\Gamma$ with vertex set $V$, edge set $E\subseteq V\times V$, an edge labeling $\phi\colon E\to \mathcal P(S^*)$ (the power set of $S^*$), a distinguished set of \emph{start states} $S_0\subseteq V$, and a distinguished set of \emph{accept states} $F\subseteq V$. A word $v\in S^*$ is \emph{accepted by $\Gamma$} if there exists a decomposition $v=v_0\cdots v_m$ of $v$ into subwords and an edge-path $e_0\cdots e_m$ in $\Gamma$ such that $e_0$ has initial vertex in $S_0$, $e_m$ has terminal vertex in~$F$, and $v_i\in \phi(e_i)$ for each $i=0,\ldots,m$. A subset of $S^*$ is a \emph{regular language} if it is the set of accepted words for some FSA over $S$.
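Operationally, the acceptance condition can be phrased as the following sketch (Python; a brute-force nondeterministic search, assuming for simplicity that each label set $\phi(e)$ is finite, and allowing the empty edge-path at a start state that is also an accept state, a harmless simplification here):
\begin{verbatim}
# Sketch: does an FSA accept the word v (a string over S)?
# edges is a set of pairs (a, b); phi maps each edge to a finite set of
# label words; start_states and accept_states are subsets of the vertices.
def accepts(v, edges, phi, start_states, accept_states):
    frontier = {(a, 0) for a in start_states}  # (state, letters consumed)
    seen = set()
    while frontier:
        state, i = frontier.pop()
        if (state, i) in seen:
            continue
        seen.add((state, i))
        if i == len(v) and state in accept_states:
            return True   # the whole word has been read along an edge-path
        for (a, b) in edges:
            if a == state:
                for label in phi[(a, b)]:
                    if v.startswith(label, i):
                        frontier.add((b, i + len(label)))
    return False
\end{verbatim}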
The proof of the regularity of the standard language relies on Theorem~\ref{thm:parallel} and the following lemma.
\begin{lemma}
\label{lem:append}
Let $W$ be a Coxeter group. Let $g\in W$, let $T\subseteq S$ be such that
$\langle T \rangle$ is finite, and let $w$ be the longest
element in $\langle T \rangle$. Then $T(gw)=T$ if and only if
\begin{enumerate}[(i)]
\item
$T$ is disjoint from $T(g)$, and
\item
for each
$t\in S\setminus T$,
the wall dual to $(gw,gwt)$ does not lie in $\mathcal W(g)$.
\end{enumerate}
\end{lemma}
Note that for $g\in W$ and $s\in S$, the wall dual to $(g,gs)$ lies in $\mathcal W(g)$ if and only if it separates $g$ from $\mathrm{id}$. Consequently, condition~(i) could be written equivalently as: for each
$t\in T$, the wall dual to $(g,gt)$ does not lie in $\mathcal W(g)$.
\begin{proof}[Proof of Lemma~\ref{lem:append}]
Suppose first $T(gw)=T$. Then, for $R$ the $T$-residue containing $gw$, by the discussion after Theorem~\ref{thm:proj}, the unique element $h\in R$ with minimal $\ell(h)$ is $g$. Thus for each $t\in T$ we
have $\ell(gt)> \ell(g)$ and so condition~(i) holds.
Furthermore, for $t\in S\setminus T$, the wall~$\mathcal W$
dual to the edge $(gw,gwt)$ does not separate $gw$
from $\mathrm{id}$. Additionally, the wall $\mathcal W$ cannot separate $gw$ from $g$: if it did, then after conjugating by $(gw)^{-1}$, the reflection in $\mathcal W$ would be simultaneously the generator~$t$ and a word in the elements of $T$, contradicting $t\in S\setminus T$ by \cite[Lem~2.1(ii)]{R}. Thus $\mathcal W$ does not separate $g$ from $\mathrm{id}$, and so condition~(ii) holds.
Conversely, suppose that we have $T\subseteq S$
satisfying conditions~(i) and~(ii).
Then, by condition~(i), for $R$ the $T$-residue containing $g$, we have that the minimal word length element $h\in R$ from Theorem~\ref{thm:proj} coincides with $g$, and so the element of $R$ of maximal word length is $gw$. Consequently, we have $T(gw)\supseteq T$.
Suppose, for a contradiction, that there is
$t\in T(gw)\setminus T$. Then the wall $\mathcal W$
dual to the edge $(gw,gwt)$ separates $gw$ from $\mathrm{id}$. The same argument as in the previous paragraph implies that $\mathcal W$ does not separate $gw$ from~$g$, so it separates $g$ from $\mathrm{id}$. Furthermore, if a wall $\mathcal W'$ separated $\mathcal W$ from $g$, then $\mathcal W'$ would also have to separate $gw$ from $g$, contradicting $\ell(g)+\ell(w)=\ell(gw)$. Consequently, $\mathcal W\in \mathcal W(g)$, which contradicts condition~(ii).
\end{proof}
We now define an FSA $\Gamma$ over $S$ that will accept exactly the standard language.
\begin{defin}
\label{def:reg}
Let $Q$ be the constant from Theorem~\ref{thm:parallel}. For $g\in W$, let $\mathcal U_Q(g)$ be the set of walls in $X^1$ intersecting the closed ball in $X^1$ of radius $Q$ centred at~$g$. By Theorem~\ref{thm:parallel}, we have $\mathcal W(g)\subseteq \mathcal U_Q(g)$.
Consider the set $\hat V$ of pairs of the form $(g, \mathcal U)$, where $g\in W$, and $\mathcal U$ is a subset of $\mathcal U_Q(g)$.
We define an equivalence relation $\sim$ on $\hat V$ by $(g,\mathcal U)\sim (h, \mathcal U')$ if $\mathcal U'=hg^{-1}\mathcal U$. We take the vertices of our FSA $\Gamma$ to be $V=\hat V/\sim$. To lighten the notation, we denote the equivalence class of $(g,\mathcal U)$ by $[g,\mathcal U]$.
In any equivalence class of $\sim$, there is exactly one representative of the form $(\mathrm{id}, \mathcal U)$. Suppose that we have
$T\subseteq S$ such that $\langle T\rangle$ is finite. Let $w$ be the longest element of $\langle T\rangle$. If
\begin{enumerate}[(i)]
\item
for each $t\in T,$ the wall dual to $(\mathrm{id},t)$ lies outside $\mathcal U$, and
\item
for each $t\in S\setminus T,$ the wall dual to $(w,wt)$ lies outside $\mathcal U$,
\end{enumerate}
then we put an edge $e$ in $\Gamma$ from $[\mathrm{id}, \mathcal U]$ to $[w, \mathcal U']$, where
$\mathcal U'$ is defined as the set of walls in $\mathcal U_Q(w)$ that
\begin{enumerate}[(a)]
\item
lie in $\mathcal U$ or intersect the residue $\langle T\rangle$, and
\item
are not separated from $w$ by a wall satisfying~(a).
\end{enumerate}
We let the label~$\phi(e)$ be the set of all minimal length words representing~$w$.
We let all states be accept states of $\Gamma$ and let the set of start states $S_0$ contain only~$[\mathrm{id}, \emptyset]$.
\end{defin}
\begin{proof}[Proof of Theorem~\ref{lem:regular}]
Let $\Gamma$ be the FSA from Definition~\ref{def:reg}, and let $\mathcal L$ be the standard language.
We argue inductively on $j\geq 0$ that, among the words $v\in S^*$ of length $\leq j$,
\begin{itemize}
\item
$\Gamma$ accepts exactly the words in $\mathcal L$, and
\item
the accept state of each such word $v$ is $[g,\mathcal W(g)],$ where $v$ represents $g$.
\end{itemize}
This is true for $j=0$ by our choice of $S_0$. Now let $n>0$ and suppose that we have verified the inductive hypothesis for all $j<n$. Let $v$ be a word in $S^*$ of length $n$.
Suppose first that $v$ is a word in $\mathcal L$ representing $g\in W$. By the definition of~$\mathcal L$, for $k=\ell(w(g))$, we have $v(n-k)\in \mathcal L$. Moreover, $v(n-k+1,n)$ represents~$w(g)$. By the inductive hypothesis, $\Gamma$ accepts $v(n-k)$. Furthermore, $v(n-k)$ labels some edge-path in $\Gamma$ from $S_0$ to $[\Pi(g),\mathcal W(\Pi(g))]$.
Let $T=T(g)$ and $w=w(g)$. By Lemma~\ref{lem:append}, applied replacing $g$ with~$\Pi(g)$, we have that
\begin{enumerate}[(i)]
\item for each $t\in T$, the wall dual to $(\Pi(g),\Pi(g)t)$ does not lie in $\mathcal W(\Pi(g))$, and
\item for each $t\in S\setminus T,$ the wall dual to $(g,gt)$ does not lie in $\mathcal W(\Pi(g))$.
\end{enumerate}
Thus, translating by $\Pi(g)^{-1}$, we see that $\Gamma$ has an edge from $[\mathrm{id},\Pi(g)^{-1}\mathcal W(\Pi(g))]=[\Pi(g),\mathcal W(\Pi(g))]$ to $[w,\mathcal U']=[g,\Pi(g)\mathcal U']$, labelled by $v(n-k+1,n)$, and so $\Gamma$ accepts~$v$. Furthermore,
by conditions~(a) and~(b) in Definition~\ref{def:reg}, we have that $\Pi(g)\mathcal U'$ consists of walls of $\mathcal U_Q(g)$ that lie in $\mathcal W(\Pi(g))$ or intersect the residue $g\langle T\rangle$ and are not separated from $g$ by any other such wall. Since $\mathcal W(g)\subseteq \mathcal U_Q(g)$, this implies $\Pi(g)\mathcal U'=\mathcal W(g)$.
Conversely, let $v$ be accepted by $\Gamma$ and suppose that $v=v_0\cdots v_m$ as in the definition of an accepted word.
By the inductive hypothesis, the word $v_0\cdots v_{m-1}$ belongs to $\mathcal L$ and represents $g\in W$ such that $e_m$ starts at
$[g,\mathcal W(g)]=[\mathrm{id}, g^{-1}\mathcal W(g)]$. By the definition of the edges, $v_{m}$ represents the longest element~$w$ in some finite~$\langle T \rangle$, and $\mathcal U=g^{-1}\mathcal W(g)$ satisfies conditions~(i) and~(ii) in Definition~\ref{def:reg}.
Translating by $g$, we obtain that $g$ and $T$ satisfy conditions~(i) and~(ii) of Lemma~\ref{lem:append}. Consequently, we have $T=T(gw)$, and so $v$ belongs to~$\mathcal L$.
\end{proof}
\section{$g$ and $sg$}
\label{sec:partiii}
\begin{lemma}
\label{lem:part3}
Let $W$ be a $2$-dimensional Coxeter group. Then its standard language satisfies part (iii) of the definition of biautomaticity.
\end{lemma}
We will need the following.
\begin{sublemma}
\label{sub}
Let $W$ be a $2$-dimensional Coxeter group.
There is a constant $D=D(W)$ such that for any wall $\mathcal W$ adjacent to $\mathrm{id}$, any $f\in W$ adjacent to $\mathcal W$, and any vertices $h,h'\in W$ on geodesic edge-paths from $\mathrm{id}$ to $f$ satisfying $\ell(h)=\ell(h')$, we have $\ell(h^{-1}h')< D$.
\end{sublemma}
\begin{proof}
Let $Q=Q(W)$ be the constant from Theorem~\ref{thm:parallel}. Suppose
that $h,h'\in W$ lie on geodesic edge-paths~$\gamma,\gamma'$ from $\mathrm{id}$ to $f$ and satisfy $\ell(h)=\ell(h')$.
Note that each vertex $g\in W$ of $\gamma$ lies at distance $\leq Q$ from $\mathcal W$ in $X^1$,
since otherwise there would be a wall $\mathcal W'$ separating $g$
from~$\mathcal W$, and so $\mathcal W'$ would intersect $\gamma$
at least twice.
Since $W$ is $2$-dimensional, we have that $X$ is a $\mathrm{CAT}(0)$ space, with path-metric that we denote $|\cdot, \cdot|$,
and the extension of~$\mathcal W$ to $X$ (for which we keep the same notation) is a convex tree. Let $x,y\in \mathcal W$ be the midpoints of the edges dual to $\mathcal W$ incident to $\mathrm{id},f$, respectively. Let $N(\mathcal W)$ be the closed $Q$-neighbourhood of $\mathcal W$ in $X$, w.r.t.\ the $\mathrm{CAT}(0)$ metric. Note that $N(\mathcal W)$ is quasi-isometric to $\mathcal W$, so in particular $N(\mathcal W)$ is Gromov-hyperbolic (for definition, see e.g.\ \cite[III.H.1.1]{BHa}). Moreover, since $X$ and $X^1$ are quasi-isometric, we have that $\gamma\subset N(\mathcal W)$ is a $(\lambda,\epsilon)$-quasigeodesic (for definition, see \cite[I.8.22]{BHa}), where the constants $\lambda,\epsilon$ depend only on $W$. Consequently, by the stability of quasi-geodesics \cite[III.H.1.7]{BHa}, for a constant $C=C(W)$, there is a point $z$ on the geodesic from $x$ to $y$ with $|h,z|\leq C$. Analogously, there is a vertex $h''$ on $\gamma'$ with $|z,h''|\leq C$, and so $|h,h''|\leq 2C$.
Thus, since $X$ and $X^1$ are quasi-isometric, there is a constant $D=D(W)$ with $\ell(h^{-1}h'')< \frac{D}{2}$. By the triangle inequality in $X^1$, we have $|\ell(h)-\ell(h'')|< \frac{D}{2}$. Thus, by $\ell(h)=\ell(h')$, the distances on $\gamma'$ from $h''$ and $h'$ to $\mathrm{id}$ differ by less than $\frac{D}{2}$. Consequently,
we have $\ell(h''^{-1}h')< \frac{D}{2}$, and so $\ell(h^{-1}h')< D$, as desired.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem:part3}] Let $\mathcal L$ be the standard language. Let $D$ be the constant from Sublemma~\ref{sub}. Let $K$ be the maximal word length of the longest element of a finite
$\langle T\rangle$ over all $T\subseteq S$, and let $C=\max\{K,D\}$.
We prove part~(iii) of the definition of biautomaticity
inductively on $\ell(g)$, where we assume without loss of generality
$\ell(sg)>\ell(g)$. If $g=\mathrm{id}$, then there is nothing to prove.
Suppose now $g\neq \mathrm{id}$, and let $\mathcal W$ be the wall in $X^1$ dual to the $s$-edge incident to $\mathrm{id}$. Let $v,v'\in \mathcal L$ represent $g,sg,$ respectively.
Assume first that $g$ is not adjacent to $\mathcal W$. Let $\mathcal W'$ be a wall adjacent to $g$ separating $g$ from $\mathrm{id}$. Then $\mathcal W'$
also separates $g$ from~$s$. Consequently, $s\mathcal W'$
separates $sg$ from $\mathrm{id}$. Conversely, if a wall $\mathcal W'$ is adjacent to $sg$ and separates $sg$ from $\mathrm{id}$, then it also
separates $sg$ from $s$, and so $s\mathcal W'$ separates $g$ from $\mathrm{id}$. Consequently,
$T(sg)=T(g)$ and so $w(g)=w(sg)$, hence $\Pi(sg)=s\Pi(g)$. In
other words, for $k=\ell(w(g))$, the
words $v'(\ell(sg)-k)$ and $sv(\ell(g)-k)$ represent the same element $s\Pi(g)$ of $W$. Then
part (iii) of the definition of biautomaticity for $g$ follows inductively
from part~(iii) for $\Pi(g)$, for $i< \ell(sg)-k$, or from the definition of $K$, for $i\geq \ell(sg)-k$.
Secondly, assume that $g$ is adjacent to $\mathcal W$. Then $(g,sg)$ is an edge of
$X^1$. Let $f=sg$ and for $0\leq i\leq \ell(g)$ let $h,h'$ be the elements of $W$ represented by $sv(i)$ and $v'(i+1)$. Then, by the definition of $D$, we have $\ell(h^{-1}h')=\ell(v(i)^{-1}sv'(i+1))<D$, as desired.
\end{proof}
\section{$g$ and $gs$}
\label{sec:main}
For $g\in W$ and $k\geq 0$, we set $\Pi^k(g)=\overbrace{\Pi\circ
\cdots \circ\Pi}^{k}(g)$. The main result of this section is the following.
\begin{prop}
\label{prop:main}
Let $W$ be a $2$-dimensional Coxeter group.
Let $g,g'\in W$ be such that $g'\in g \langle s,t \rangle$ for
some $s,t\in S$ with $m_{st}<\infty$ (possibly $s=t$). Then there are $0\leq k,k'\leq 3$ with
$k+k'>0$, such that $\Pi^{k'}(g')\in \Pi^k(g)\langle p,r\rangle$
for some $p,r\in S$ with $m_{pr}<\infty$ (possibly $p=r$).
\end{prop}
We obtain the following consequence, which together with Theorem~\ref{lem:regular} and Lemma~\ref{lem:part3} completes the proof of Theorem~\ref{thm:main}.
\begin{cor}
\label{cor:fellow_travel}
Let $W$ be a $2$-dimensional Coxeter group. Then its standard language satisfies part (ii) of the definition of
biautomaticity.
\end{cor}
\begin{proof}
As before, let $K$ be the maximal word length of the longest element of a finite
$\langle T\rangle$ over all $T\subseteq S$. Assume without loss of generality
$\ell(gs)>\ell(g)$.
Let $0\leq i\leq \ell(g)$. By Proposition~\ref{prop:main}, there is $0\leq j\leq \ell(g)$ with $|j-i|\leq \frac{3K}{2}$ and $0\leq i'\leq \ell(g)+1$ such that $v(j)$ and $v'(i')$ represent elements of $W$ in a common finite residue. Consequently, we have $\ell\big(v(j)^{-1}v'(i')\big)\leq K,$ and so in particular $|j-i'|\leq K$. Therefore $\ell\big(v(i)^{-1}v'(i)\big)\leq |i-j|+\ell\big(v(j)^{-1}v'(i')\big)+|i'-i|\leq 5K$.
\end{proof}
In the proof of Proposition~\ref{prop:main} we will use the following \emph{truncated piecewise Euclidean structure}
on the barycentric subdivision $X'$ of the Cayley complex $X$ of~$W$. Consider the function $q\colon\mathbf{N}_{\geq 2}\to \mathbf{N}_{\geq 2}$, defined as
\[q(m)=\begin{cases}
m, & \text{for } m=2,3, \\
4, & \text{for } m=4,5, \\
6, & \text{for } m\geq 6.
\end{cases}
\]
Note that each triangle $\sigma$ of $X'$ is a triangle in the barycentric subdivision of a
regular $2m$-gon of $X$ spanned on an $\{s,t\}$-residue with $m_{st}=m<\infty$. Consequently, in the usual piecewise Euclidean structure, $\sigma$ has angles $\frac{\pi}{2m},\frac{\pi}{2},\big(1-\frac{1}{m}\big)\frac{\pi}{2}$. Moreover, the edge opposite to $\frac{\pi}{2m}$ is half of the edge of $X^1$, so it has length $\frac{1}{2}$. In the truncated piecewise Euclidean structure, we choose a different metric on $\sigma$, namely that of a triangle in the barycentric subdivision of
a regular $2q(m)$-gon. More precisely, the angles of $\sigma$ are $\frac{\pi}{2q(m)},\frac{\pi}{2},\big(1-\frac{1}{q(m)}\big)\frac{\pi}{2}$, while the length of the edge opposite to $\frac{\pi}{2q(m)}$ is still $\frac{1}{2}$.
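For instance, for $m=5$ the usual angles $\frac{\pi}{10},\frac{\pi}{2},\frac{2\pi}{5}$ of $\sigma$ are replaced in the truncated structure by $\frac{\pi}{8},\frac{\pi}{2},\frac{3\pi}{8}$, while the side opposite to the smallest angle keeps length $\frac{1}{2}$.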
In the following, let $v$ be a vertex of $X'$. The \emph{link} of $v$ in $X'$ is the metric graph whose vertices correspond to the edges of $X'$ incident to $v$. Vertices of the link corresponding to edges $e_1,e_2$ of $X'$ are connected by an edge of length $\theta$, if $e_1$ and~$e_2$ lie in a common triangle $\sigma$ of $X'$ and form angle $\theta$ in $\sigma$. A \emph{loop} in the link is a locally embedded closed edge-path.
\begin{lemma}
\label{lem:truncated}
The truncated piecewise Euclidean structure satisfies the \emph{link condition}, i.e.\ each loop in the link of a vertex $v$ of $X'$ has length $\geq 2\pi$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:truncated}] If $v$ is the barycentre on an edge of $X$, then its link is a simple bipartite graph all of whose edges have length $\frac{\pi}{2}$. Hence its loops have length $\geq 4\frac{\pi}{2}=2\pi$.
If $v$ is the barycentre of a polygon of $X$, then its link is a circle that had length $2\pi$ in the usual piecewise Euclidean structure.
The angles at the barycentre of a polygon in the truncated Euclidean structure are at least as large as the angles in the usual piecewise Euclidean structure, and consequently in the truncated Euclidean structure the link has length $\geq 2\pi$.
It remains to consider a vertex $v\in X^0$, and its link $L'$ in $X'$. For each triangle~$\sigma$ of $X'$ incident to $v$ there is exactly one other triangle $\tau$ of $X'$ incident to $v$ with common hypotenuse, and they lie in the same polygon of $X$. Let $L$ be the graph obtained from $L'$ by merging into one edge each pair of edges corresponding to such $\sigma$ and~$\tau$.
Note that $L$ is isometric to $L'$.
The graph $L$ has a vertex corresponding to each $s\in S$ and an edge of length $\big(1-\frac{1}{q(m_{st})}\big)\pi\geq \frac{\pi}{2}$ joining the vertices corresponding to $s,t$, for each $s,t\in S$ with $m_{st}<\infty$. In particular, all the loops in $L$ of combinatorial length $\geq 4$ have metric length $\geq 2\pi$. To obtain the same for loops in $L$ of combinatorial length $3$, we need to verify that for each triple of distinct $s,t,r\in S$, we have
\begin{equation*}
\frac{1}{q(m_{st})}+\frac{1}{q(m_{tr})}+\frac{1}{q(m_{sr})}\leq 1.\tag{$*$}
\end{equation*}
If $q(m_{st}),q(m_{tr}), q(m_{sr})\neq 2$, then ($*$) holds. If $q(m_{st}),q(m_{tr})\neq 2$ and $q(m_{sr})=2$, then $m_{st},m_{tr}\neq 2$ and $m_{sr}=2$. Since $W$ is $2$-dimensional, we have $m_{st},m_{tr}\geq 4$ or $m_{st}\geq 6$ or $m_{tr}\geq 6$. We then have, respectively, $q(m_{st}),q(m_{tr})\geq 4$ or $q(m_{st})\geq 6$ or $q(m_{tr})\geq 6$, and so ($*$) holds in this case as well. Finally,
if $q(m_{st})=q(m_{sr})=2$, then $m_{st}=m_{sr}=2$, contradicting the $2$-dimensionality of $W$.
\end{proof}
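The case analysis above can also be confirmed by an exhaustive machine check. Since $q(m)=6$ for all $m\geq 6$, replacing any entry $m\geq 7$ of a triple by $7$ changes neither the left-hand side of ($*$) nor the validity of the $2$-dimensionality constraint (as $\frac1a+\frac1b<1$ already forces $\frac1a+\frac1b+\frac17\leq\frac56+\frac17<1$), so triples with entries at most $7$ cover all cases. A short verification sketch in Python, using exact rational arithmetic:
\begin{verbatim}
# Sketch: verify (*) for all triples (m_st, m_tr, m_sr) allowed by
# 2-dimensionality.  Entries >= 7 give nothing new: q is constant (= 6)
# from m = 6 on, and 1/a + 1/b < 1 already forces 1/a + 1/b + 1/7 <= 1.
from fractions import Fraction
from itertools import product

def q(m):
    return {2: 2, 3: 3, 4: 4, 5: 4}.get(m, 6)

for triple in product(range(2, 8), repeat=3):
    if sum(Fraction(1, m) for m in triple) <= 1:       # 2-dimensionality
        assert sum(Fraction(1, q(m)) for m in triple) <= 1, triple
print("(*) holds for all admissible triples")
\end{verbatim}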
Below, for two edges $e_1,e_2$ incident to a vertex $v$ of $X'$, by their \emph{angle} at $v$ we mean the distance in the link of $v$ between the vertices that $e_1,e_2$ correspond to. Since $X'$ satisfies the link condition, this is the same as the Alexandrov angle if the latter is $<\pi$.
\begin{lemma}
\label{lem:angles}
Let $W$ be a $2$-dimensional Coxeter group.
Let $\gamma, \gamma'$ be geodesic edge-paths in $X^1$
with common endpoints. Suppose that there are walls $\mathcal W_i$ in $X$ with $i=1,2,3,$ such that
$\gamma$ intersects them in the opposite order to $\gamma'$, and that $\mathcal W_2$ is the middle one in
both of these orders. For $i=1,3,$ let $\theta_i$ be the angle in the truncated structure at $x_i=\mathcal W_2\cap\mathcal W_i$
formed by the segments in $\mathcal W_2, \mathcal W_i$ from $x_i$ to $\gamma\cap \mathcal W_2$ and $\gamma\cap\mathcal W_i$.
Then $\theta_1+\theta_3<\pi$.
\end{lemma}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{f1}
\end{center}
\caption{Lemma~\ref{lem:angles}}\label{f:f1}
\end{figure}
See Figure~\ref{f:f1} for an illustration. Note that in the definition of either $\theta_i$ we could replace $\gamma$ by $\gamma'$.
In the proof we will need the following terminology. A \emph{combinatorial $2$-complex} is a $2$-dimensional CW complex in which the attaching maps of $2$-cells are closed edge-paths. For example, the Cayley complex $X$ of a Coxeter group is a combinatorial 2-complex. A \emph{disc diagram} $D$ is a compact contractible combinatorial 2-complex with a fixed embedding in $\mathbf{R}^2$. Its \emph{boundary path} is the attaching map of
the cell at $\infty$. If $X$ is a combinatorial 2-complex, \emph{a disc diagram in $X$} is a cellular map $\varphi \colon D\to X$ that is \emph{combinatorial}, i.e.\ its restriction to each cell of $D$ is a homeomorphism onto a cell of $X$.
The \emph{boundary path} of a disc diagram $\varphi \colon D\to X$ is the composition of the boundary path of $D$ and $\varphi$. We say that $\varphi$ is \emph{reduced}, if it is locally injective on $D-D^0$.
\begin{proof}[Proof of Lemma~\ref{lem:angles}]
Let $\varphi \colon D\to X$ be a reduced disc diagram in $X$ with boundary
$\gamma^{-1}\gamma'$ (for the existence of $\varphi$, see for example \cite[\S V.1--2]{LS}).
Consider the piecewise Euclidean structure on the barycentric subdivision $D'$ of $D$ that is the pullback under $\varphi$ of the truncated Euclidean structure on $X'$. By Lemma~\ref{lem:truncated} and \cite[II.5.4]{BH}, the induced path-metric on $D$ is $\mathrm{CAT}(0)$. Furthermore, for each wall $\mathcal W$ in $X$, the preimage $\varphi^{-1}(\mathcal W)$ is a geodesic in $D$.
Thus $D$ contains a geodesic triangle formed by the segments of $\varphi^{-1}(\mathcal W_i)$ joining their three intersection points.
Its angles indicated in Figure~\ref{f:f1} equal $\theta_1,\theta_3$. Since the Alexandrov angles of that triangle do not exceed the angles of the comparison triangle in the Euclidean plane \cite[II.1.7(4)]{BHa}, we have $\theta_1+\theta_3<\pi$.
\end{proof}
\begin{cor}
\label{cor:step1}
Let $W$ be a $2$-dimensional Coxeter group.
Let $f\in W$ with $T(f)=\{s,t\}$, with $s\neq t$.
Let $h=\Pi(f)$ and let $R$ be the $\{s,t\}$-residue containing $f$ and $h$.
Let $g\in R$ and let $m$ be the distance in $X^1$ between $g$ and $h$. Suppose $T(g)=\{s,r\}$ with $r\neq s,t$.
Then:
\begin{enumerate}[(i)]
\item $m\leq 3$.
\item If $m=3$, then $m_{sr}=2$.
\item If $m_{st}=3$ and $m=2$, then $m_{sr}=2$.
\item If $m_{st}=4$, then $m\leq 2$.
\item If $m=m_{st}-1$, then $m_{st}\leq 3$, and for $m_{st}=3$ we have $m_{sr}=2$.
\end{enumerate}
\end{cor}
\begin{proof} Note that $T(g)=\{s,r\}$ implies in particular
$g\neq f,h$. Let $\gamma_0$ be the geodesic edge-path in $X^1$ from $f$ to $h$ not containing $g$. Let $\gamma_1$ be
the geodesic edge-path of length $m_{sr}$ with vertices in the $\{s,r\}$-residue containing $g$, starting at $g$ with the $r$-edge. Let $\gamma$ be any
geodesic edge-path from $f$ to $\mathrm{id}$ containing $\gamma_0$. Let $\gamma'$ be any geodesic edge-path from $f$ to $\mathrm{id}$ containing $\gamma_1$.
Let $\mathcal W_1$ be the first wall intersecting~$\gamma$. Let $\mathcal W_2$ be the wall dual to the $s$-edge incident to $g$.
Let $\mathcal W_3$ be the wall dual to the $r$-edge incident to $g$. See Figure~\ref{fig:corollary}. Note that $\mathcal W_3$ does not
intersect $R$ (since otherwise $\mathcal W_2$ and $\mathcal W_3$ would intersect twice in $X$) and, analogously, $\mathcal W_1$ does
not intersect the $\{s,r\}$-residue of $g$. Consequently, we are in the setup of Lemma~\ref{lem:angles} and we let $\theta_1,\theta_3$ be as in that lemma, so that $\theta_1+\theta_3<\pi$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{f2}
\end{center}
\caption{Corollary~\ref{cor:step1}}\label{fig:corollary}
\end{figure}
Observe that we have $\theta_1=(m-1)\frac{\pi}{q(m_{st})}$ and $\theta_3=(m_{sr}-1)\frac{\pi}{q(m_{sr})}$.
To prove part (i), assume $m\geq 4$. We then have $\theta_1\geq \frac{\pi}{2}$. Since also $\theta_3\geq \frac{\pi}{2}$, this
contradicts Lemma~\ref{lem:angles}.
For part (ii), if $m=3$ then we only have $\theta_1\geq \frac{\pi}{3}$. However, assuming $m_{sr}\geq 3$, we would have $\theta_3\geq \frac{2\pi}{3}$,
which also contradicts Lemma~\ref{lem:angles}.
For part (iii), if $m=2$ and $m_{st}=3$, then we have $\theta_1=
\frac{\pi}{3}$. Assuming $m_{sr}\geq 3$, we would have $\theta_3\geq \frac{2\pi}{3}$ as before,
which contradicts Lemma~\ref{lem:angles}.
To prove part (iv), if we had $m_{st}=4$ and $m\geq 3$, then $\theta_1\geq \frac{\pi}{2}$ and $\theta_3\geq
\frac{\pi}{2}$ would
also contradict Lemma~\ref{lem:angles}.
For part (v), assume $m=m_{st}-1$. The case $m_{st}\geq 5$ is excluded by part (i), and the case $m_{st}=4$ is excluded by part (iv). For $m_{st}=3$ we
have $m_{sr}=2$ by part (iii).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:main}]
If $s=t$, then without loss of generality $s\in T(g)$, and we
can take $k=1, k'=0$.
Assume now $s\neq t$. Let $R=g \langle s,t \rangle$, and let $f,h\in R$ be the elements
with maximal and minimal word length, respectively.
Let $m,m'$ be the distances in~$X^1$ between $h$ and
$g,g'$, respectively. We can assume $\Pi(g),\Pi(g')\notin R$. Then in
particular $m,m'\neq m_{st}$ and if $m\neq 0$ we have
$|T(g)|=2$ and $T(g)$ contains exactly one of $s,t$. Without loss of generality we suppose then
$T(g)=\{s,r\}$ for some $r\neq s,t$.
Note that from Corollary~\ref{cor:step1}(i) it follows that $m\leq 3$. Furthermore, by Corollary~\ref{cor:step1}(ii) if
$m=3$, then $m_{sr}=2$. An analogous statement holds for $m'$.
\smallskip
\noindent \textbf {Case 1: $m=3$, or $m=2$ and $m_{sr}\geq 3$.}
If $m=3$, then denoting by $\hat g$ the $s$-neighbour of $g$, we have $T(\hat g)=\{t,r\}$. Since $m_{sr}=2$, we have $m_{tr}\geq 3$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.57]{f3}
\end{center}
\caption{Proof of Proposition~\ref{prop:main}, Case 1.}\label{fig:step2}
\end{figure}
Applying Corollary~\ref{cor:step1}(v), with $f$ replaced by $\hat g$ and $g$ replaced by the $t$-neighbour of $\hat g$, gives $m_{tr}=3$, and so $m_{st}\geq 6$.
Consequently, in $X^1$ we have the configuration described in Figure~\ref{fig:step2}(a),
where for each edge $(q,\hat q)$ of $X^1$ the vertex $q$ is drawn higher than $\hat q$ if $\ell(q)>\ell(\hat q)$.
For each edge-path in $X^1$ labelled $sr,rs,$ or $trt$, with endpoints $q,\hat q$ satisfying $\ell(q)-\ell(\hat q)=2$ or $3$, respectively, there is another edge-path from $q$ to $\hat q$ labelled $rs,sr,$ or $rtr$, respectively. The word lengths of the consecutive vertices of such a path
are $\ell(q),\ell(q)-1,\ell(q)-2=\ell(\hat q),$ or $\ell(q),\ell(q)-1,\ell(q)-2,\ell(q)-3=\ell(\hat q),$ respectively.
Thus the configuration described in Figure~\ref{fig:step2}(a) extends to the configuration in Figure~\ref{fig:step2}(b). In particular, we have $m'\neq 3$,
since otherwise, for $r'\in S$ satisfying $T(g')=\{t,r'\}$, denoting by $\hat{g}'$ the $t$-neighbour of~$g'$, we have $T(\hat{g}')=\{s,r'\}$ and so $r'=r$. Since $m_{tr'}=2$, we have $3\leq m_{sr'}=m_{sr}$, which is a contradiction. Consequently, $m'\leq 2$.
Consider either of the two vertices labelled $u$ in Figure~\ref{fig:step2}(b). Note that $T(u)=\{t\}$,
since having $|T(u)|=2$ would force the $t$-neighbour $\hat u$ of $u$ to have $|T(\hat u)|\geq 3$.
This implies that $\Pi^3(g)$ lies on the lower $\{s,t\}$-residue $R'$ in Figure~\ref{fig:step2}(b).
Furthermore, note that $T(h)=\{r\}$, since having $T(h)=\{r,p\}$ for some $p\in S$
would force the $r$-neighbour $\hat h$ of $h$ to have $T(\hat
h)=\{t,p\}$, contradicting Corollary~\ref{cor:step1}(v) with $g$ replaced by $\hat
h$, and $f$ replaced by the $s$-neighbour of $\hat h$, since it would imply $m_{st}\leq 3$.
Consequently, in any of the cases $m'=0,1,2$, there is
$k'\leq 3$ with $\Pi^{k'}(g')\in R'$, as desired.
If $m=2$ and $m_{sr}\geq 3$, then the same proof goes through with
the following minor changes. Namely, $m_{sr}=3$ and $m_{tr}=2$ follow from Corollary~\ref{cor:step1}(v)
applied with $f$ replaced by $g$ and $g$ replaced by the $s$-neighbour of $g$. The remaining part of the
proof is the same, with $s$ and $t$ interchanged, except that it is $\Pi(g)$ instead of $\Pi^3(g)$ that lies in $R'$.
Namely, in $X^1$ we have the configuration described in Figure~\ref{fig:step2}(a), with the top square removed, $s$ and $t$ interchanged, and $\hat g$ replaced with $g$. Consequently, we have the configuration described in Figure~\ref{fig:step2}(b), with the same modifications. We obtain $k'\leq 3$ with $\Pi^{k'}(g')\in R'$ as before.
\smallskip
\noindent \textbf {Case 2: $m=2$ and $m_{sr}=2$.}
We have $m_{tr}\geq 3$ and the configuration from Figure~\ref{fig:step3} inside $X^1$. Note that if $m'=2$, then we can assume $T(g')=\{t\}$. Indeed, if $T(g')=\{t,p\}$, then we can assume $m_{tp}=2$, since otherwise interchanging $g,g'$ we can appeal to Case~1. Thus $p\neq r$, and so the $t$-neighbour $\hat {g}'$ of $g'$ has $|T(\hat {g}')|\geq 3$, which is a contradiction.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.57]{f4}
\end{center}
\caption{Proof of Proposition~\ref{prop:main}, Case 2.}\label{fig:step3}
\end{figure}
Consequently both $\Pi(g)$ and $\Pi^{k'}(g')$ for some $k'\leq 2$ lie in the $\{t,r\}$-residue $R'$ from Figure~\ref{fig:step3}. This completes Case~2.
\smallskip
Note that if, say, $m=1,m'=0$, then we can take $k=1,k'=0.$
Thus it remains to consider the case where $m'=m=1$.
\smallskip
\noindent \textbf {Case 3: $m'=m=1$, and $T(g')=\{t,r\}.$} In other words, the second element of $T(g')$ coincides with that of $T(g)$.
If one of $m_{sr},m_{tr}$, say $m_{sr}$, equals $2$, then we can take $k=1,k'=0$, and we are done. If $m_{sr}=m_{tr}=3$, then we can take $k=k'=1$. It remains to consider the case, where, say, $m_{sr}\geq 4, m_{tr}\geq 3$.
Let $\gamma_0,\gamma_0'$ be the geodesic edge-paths from $f$ to $g,g'$, respectively.
If $m_{tr}\geq 4$, then we apply Lemma~\ref{lem:angles} with any $\gamma$ starting with $\gamma_0\overbrace
{rsr\cdots}^{m_{sr}}$, and any $\gamma'$ starting with $\gamma'_0\overbrace
{rtr\cdots}^{m_{tr}}$. We take $\mathcal W_1,\mathcal W_2,\mathcal W_3$ to be the walls dual to $r$-edges incident to $g,h,g',$ respectively. Then $\theta_1,\theta_3\geq \frac{\pi}{2}$, which is a contradiction. Analogously, if $m_{sr}\geq 6$, then $\theta_1\geq \frac{2\pi}{3}$ and $\theta_3\geq \frac{\pi}{3}$, which is again a contradiction.
We can thus assume $m_{tr}= 3,$ and $m_{sr}=4$ or $5$. In particular, $m_{st}\geq 3$. We now apply Corollary~\ref{cor:step1}, with $f$ replaced by
the $r$-neighbour $u$ of $h$ and $g$ replaced by the $s$-neighbour $\hat u$ of $u$, see Figure~\ref{fig:case3}. Since $T(u)=\{s,t\}$ with $m_{st}\geq 3$ and $T(\hat u)=\{t,r\}$ with $m_{tr}= 3$, Corollary~\ref{cor:step1}(v) yields a contradiction.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.57]{f5}
\end{center}
\caption{Proof of Proposition~\ref{prop:main}, Case 3.}\label{fig:case3}
\end{figure}
\smallskip
\noindent \textbf {Case 4: $m'=m=1$, and $T(g')=\{t,p\}$ for some $p\neq r$.}
First note that $T(h)=\{r,p\}$ and so $m_{pr}<\infty$. If $m_{sr}=m_{tp}=2$, then we can take $k'=k=1$ and we are done. We now focus on the case $m_{sr}\geq 3$ and $m_{tp}\geq 3$.
By Corollary~\ref{cor:step1}(v), applied with $f$ replaced by~$g$ and $g$ replaced by $h$, we obtain $m_{sr}=3$ and $m_{rp}=2$. In particular, since $m_{tp}<\infty$, we have $m_{tr}\geq 3$.
Let $\hat h$ be the $p$-neighbour of~$h$. We then apply Lemma~\ref{lem:angles} to geodesic edge-paths $\gamma,\gamma'$ from $g$ to $\mathrm{id}$, where $\gamma$ starts with the edge-path of length $m_{sr}$ in the $\{s,r\}$-residue of $g$ starting with the $r$-edge, and $\gamma'$ starts with the $s$-edge, the $p$-edge, followed by the edge-path of length $m_{tr}$ in the $\{t,r\}$ residue of $\hat h$ starting with the $t$-edge. See Figure~\ref{fig:case4}. We consider the walls $\mathcal W_1, \mathcal W_2$ dual to the $r$-edges incident to $g,h$, respectively, and $\mathcal W_3$ dual to the $t$-edge incident to $\hat h$. We have $\theta_1= \frac{\pi}{3}, \theta_3\geq \frac{2\pi}{3}$, which is a contradiction.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{f6}
\end{center}
\caption{Proof of Proposition~\ref{prop:main}, Case 4, $m_{sr}\geq 3$ and $m_{tp}\geq 3$.}\label{fig:case4}
\end{figure}
It remains to consider the case where, say, $m_{sr}\geq 3$ and $m_{tp}=2$. Then again by Corollary~\ref{cor:step1}(v), applied with $f$ replaced by $g$ and $g$ replaced by $h$, we obtain $m_{sr}=3$ and $m_{rp}=2$. Let $u$ be the $r$-neighbour of~$h$. Then $\Pi(g)$ lies in the $\{p,s\}$-residue $R'$ of $u$. Let $\hat h=\Pi(g')$, and let $\hat u$ be the $p$-neighbour of $u$, see Figure~\ref{fig:lastcase}. We have $m_{sp}\geq 6$ and so by Corollary~\ref{cor:step1}(v), applied with $f$ replaced by $u$ and $g$ replaced by $\hat u$, we obtain $T(\hat u)=\{s\}$. We claim that $T(\hat h)=\{r\}$ and so $\Pi(\hat h)$ also lies in $R'$, finishing the proof. To justify the claim, suppose $T(\hat h)=\{r,q\}$ with $q\neq r$. If $m_{rq}\geq 3$, then
we consider the walls $\mathcal W_1, \mathcal W_2$ dual to the $r$-edges incident to $g,h$, respectively, and $\mathcal W_3$ dual to the $q$-edge incident to $\hat h$, which leads to a contradiction as in the previous paragraph. If $m_{rq}=2$, then $q\neq s$ and so $T(\hat u)=\{s,q\}$, which is a contradiction.
This justifies the claim and completes Case~4.
\end{proof}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{f7}
\end{center}
\caption{Proof of Proposition~\ref{prop:main}, Case 4, $m_{sr}\geq 3$ and $m_{tp}=2$.}\label{fig:lastcase}
\end{figure}
\section{$\widetilde A_3$ Euclidean group}
\label{sec:dim3}
In this section it will be convenient to view the Cayley graph $X^1$ of the $\widetilde A_3$ Coxeter group $W$ as the dual graph to its Coxeter complex, which is the following subdivision of $\mathbf{R}^3$. (The reader might find it convenient to relate this subdivision into tetrahedra with the standard subdivision of $\mathbf{R}^3$ into unit cubes.) Its vertices are triples of integers $(x,y,z)$ that are all odd or all even. Edges connect each vertex $(x,y,z)$ to vertices of the form $(x\pm 2,y,z),(x,y\pm 2,z),(x,y,z\pm 2),(x\pm 1,y\pm 1,z\pm 1)$, where the three signs can be chosen independently. See for example \cite[Thm~A]{Munro}, where this Coxeter complex is described as a subdivision of the hyperplane $\sigma$ in $\mathbf{R}^4$ defined by $x_1+x_2+x_3+x_4=0$, and the linear isomorphism with our subdivision of $\mathbf{R}^3$ is given by $(x,y,z)\mapsto (x+y+z,x-y-z, y-z-x, z-x-y)$. Furthermore, in Step~1 of the proof of \cite[Thm~A]{Munro}, we show that the tetrahedra of the Coxeter complex are obtained by subdividing $\sigma$ along a family of hyperplanes that, after identifying $\sigma$ with $\mathbf{R}^3,$ have equations $x\pm y=c, x\pm z=c,$ or $ y\pm z=c$, for $c$ even.
In particular, in the second paragraph of Step~1 of the proof of \cite[Thm~A]{Munro}, we describe explicitly one of the tetrahedra as, after identifying $\sigma$ with $\mathbf{R}^3$, spanned on the clique with vertices
$(-1,-1,-1),(-1,-1,1),(-2,0,0),(0,0,0)$. Using the action of $W$, this gives the following description of all the tetrahedra of our subdivision.
Namely, tetrahedra are spanned (up to permuting the coordinates) on cliques with vertices $(x,y,z-1),(x,y,z+1),(x+1,y-1,z),(x+1,y+1,z)$.
Each such tetrahedron has exactly two edges of length $2$, and the segment $e=((x,y,z),(x+1,y,z))$ joining their centres has length $1$. We can equivariantly embed $X^1$ into $\mathbf{R}^3$ by mapping each vertex into the centre of a tetrahedron, and mapping each edge affinely. Consequently, we can identify elements $g\in W$ with segments of the form $e_g=((x,y,z),(x+1,y,z))$, where $y+z$ is odd, up to permuting the coordinates. We identify $\mathrm{id}\in W$ with $e_{\mathrm{id}}=((0,0,1),(0,1,1))$. In particular, the point $O=(0,0,0)$ belongs to the identity tetrahedron. Note that for each $g\in W, s\in S$, the segments $e_g,e_{gs}$ are incident. Furthermore, walls in $X^1$ extend to subcomplexes of $\mathbf{R}^3$ isometric to Euclidean planes, and such a wall is adjacent to $g\in W$ if and only if it contains a face of the tetrahedron containing $e_g$.
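For instance, the tetrahedron from the previous paragraph, spanned on the clique with vertices $(-1,-1,-1),(-1,-1,1),(-2,0,0),(0,0,0)$, has its two edges of length $2$ joining the first two and the last two of these vertices; the centres of these edges are $(-1,-1,0)$ and $(-1,0,0)$, so the associated segment $((-1,-1,0),(-1,0,0))$ is indeed of the form above, up to permuting the coordinates.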
\begin{lemma}
\label{lem:stepin A3}
Let $|x_0|+1<y_0<z_0$. Let $g\in W$ be such that
\begin{enumerate}[(i)]
\item $e_g=((x_0,y_0,z_0),(x_0+1,y_0,z_0))$, or
\item $e_g=((x_0,y_0,z_0),(x_0,y_0,z_0+1))$.
\end{enumerate}
Then $\ell(w(g))$ equals, respectively,
\begin{enumerate}[(i)]
\item $3$, or
\item $2$.
\end{enumerate}
Furthermore, $e_{\Pi(g)}$ is equal to the translate of $e_g$ by, respectively,
\begin{enumerate}[(i)]
\item $(0,-1,-1)$, or
\item $(0,0,-1)$.
\end{enumerate}
\end{lemma}
\begin{proof} In case (i), suppose first that $x_0+y_0$ is even. Then $e_g$ lies in the tetrahedron with vertices $(x_0,y_0,z_0-1),(x_0,y_0,z_0+1),(x_0+1,y_0-1,z_0),(x_0+1,y_0+1,z_0)$. The walls adjacent to~$g$ are the hyperplanes containing the faces of this tetrahedron, which are $x+y=x_0+y_0, x-y=x_0-y_0, x+z=x_0+1+z_0, x-z=x_0+1-z_0$. Projecting $e_g,O,$ and these walls onto the $xy$ plane (Figure~\ref{fig:proj1}(a)), or the $xz$ plane (Figure~\ref{fig:proj1}(b)), we obtain that $e_g$ is separated from $O$ exactly by the first and fourth among these walls.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{fb}
\end{center}
\caption{Proof of Lemma~\ref{lem:stepin A3}, case (i), $x_0+y_0$ even.}\label{fig:proj1}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=1]{cube}
\end{center}
\caption{Proof of Lemma~\ref{lem:stepin A3}, case (i), $x_0+y_0$ even: the two walls.}\label{fig:cube}
\end{figure}
Consequently, $gT(g)g^{-1}$ consists of the reflections in the first and fourth of these walls. These reflections preserve the cube spanned by $e_g$ and its translates by $(0,-1,0),(0,0,-1),$ and $(0,-1,-1)$, see Figure~\ref{fig:cube}. The longest element (of length~$3$) in the group that these reflections generate maps $e_g$ to its translate by $(0,-1,-1)$.
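Explicitly, denoting these two reflections by $r_1,r_2$, we have
$$r_1\colon (x,y,z)\mapsto (x_0+y_0-y,\,x_0+y_0-x,\,z),\qquad r_2\colon (x,y,z)\mapsto (z+x_0+1-z_0,\,y,\,x-x_0-1+z_0),$$
and the longest element $w=r_1r_2r_1=r_2r_1r_2$ maps the endpoints $(x_0,y_0,z_0)$ and $(x_0+1,y_0,z_0)$ of $e_g$ to $(x_0,y_0-1,z_0-1)$ and $(x_0+1,y_0-1,z_0-1)$, respectively.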
Secondly, suppose that $x_0+y_0$ is odd. Then $e_g$ lies in the tetrahedron with vertices $(x_0,y_0-1,z_0),(x_0,y_0+1,z_0),(x_0+1,y_0,z_0-1),(x_0+1,y_0,z_0+1)$. Thus the walls adjacent to $g$ are $x+y=x_0+1+y_0, x-y=x_0+1-y_0, x+z=x_0+z_0, x-z=x_0-z_0$. Hence, as illustrated in Figure~\ref{fig:proj2}(a,b), $e_g$ is separated from $O$ exactly by the second and third among these walls.
Consequently, $gT(g)g^{-1}$ consists of the reflections in the second and third of these walls. The longest element (of length $3$) in the group they generate maps $e_g$ to its translate by $(0,-1,-1)$ as before.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{fc}
\end{center}
\caption{Proof of Lemma~\ref{lem:stepin A3}, case (i), $x_0+y_0$ odd.}\label{fig:proj2}
\end{figure}
In case (ii), suppose first that $y_0+z_0$ is odd. Then $e_g$ lies in the tetrahedron with vertices $(x_0,y_0-1,z_0),(x_0,y_0+1,z_0),(x_0-1,y_0,z_0+1),(x_0+1,y_0,z_0+1)$.
Thus the walls adjacent to $g$ are $x+z=x_0+z_0, x-z=x_0-z_0, y+z=y_0+z_0+1, y-z=y_0-z_0-1$.
Hence, as illustrated in Figure~\ref{fig:proj3}(a,b), $e_g$ is separated from $O$ exactly by the first and second among these walls.
Consequently, $gT(g)g^{-1}$ consists of the reflections in the first and second of these walls. These reflections commute and preserve the square spanned by $e_g$ and its translate by $(0,0,-1)$. The longest element in the group these reflections generate (i.e.\ their composition) maps $e_g$ to its translate by $(0,0,-1)$.
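Explicitly, the two reflections are $(x,y,z)\mapsto (x_0+z_0-z,\,y,\,x_0+z_0-x)$ and $(x,y,z)\mapsto (z+x_0-z_0,\,y,\,x-x_0+z_0)$, and their composition $(x,y,z)\mapsto (2x_0-x,\,y,\,2z_0-z)$ maps the endpoints $(x_0,y_0,z_0)$ and $(x_0,y_0,z_0+1)$ of $e_g$ to $(x_0,y_0,z_0)$ and $(x_0,y_0,z_0-1)$, respectively.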
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{fd}
\end{center}
\caption{Proof of Lemma~\ref{lem:stepin A3}, case (ii), $y_0+z_0$ odd.}\label{fig:proj3}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.63]{fe}
\end{center}
\caption{Proof of Lemma~\ref{lem:stepin A3}, case (ii), $y_0+z_0$ even.}\label{fig:proj4}
\end{figure}
Secondly, suppose that $y_0+z_0$ is even. Then $e_g$ lies in the tetrahedron with vertices $(x_0-1,y_0,z_0),(x_0+1,y_0,z_0),(x_0,y_0-1,z_0+1),(x_0,y_0+1,z_0+1)$.
Thus the walls adjacent to $g$ are $x+z=x_0+z_0+1, x-z=x_0-z_0-1, y+z=y_0+z_0, y-z=y_0-z_0$.
Hence, as illustrated in Figure~\ref{fig:proj4}(a,b), $e_g$ is separated from $O$ exactly by the third and
fourth among these walls.
Consequently, $gT(g)g^{-1}$ consists of the (commuting)
reflections in the third and fourth of these walls.
The longest element in the group they generate
maps~$e_g$ to its translate by $(0,0,-1)$ as before.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:second}] Let $\mathcal L$ be the standard language.
For each $C>0$ consider the following $g,g'\in W$ with incident segments
$$e_g=((x_0,y_0,z_0),(x_0+1,y_0,z_0)),\ e_{g'}=((x_0,y_0,z_0),(x_0,y_0,z_0+1))$$
with $x_0,z_0$ even and $y_0$ odd, satisfying
$|x_0|+C<y_0\leq z_0-C$. Suppose that
$g,g'$ are represented by $v,v'\in\mathcal L$ of length $N,N'$ (which differ by $1$).
By Lemma~\ref{lem:stepin A3}, for $n,n'\leq C$ we have that $v(N-3n)$ represents the element of $W$ corresponding to the segment
$e_g-n(0,1,1)$ and $v'(N'-2n')$ represents the element of $W$ corresponding to the segment
$e_{g'}-n'(0,0,1)$. In particular, for $i=3n=2n',$ we see that the
segments corresponding to $v(N-i)$ and $v'(N'-i)$ are
$((x_0,y_0-n,z_0-n),(x_0+1,y_0-n,z_0-n))$ and $
((x_0,y_0,z_0-\frac{3}{2}n),
(x_0,y_0,z_0+1-\frac{3}{2}n))$.
Thus they are at Euclidean distance $\geq n$ (the former lies in the plane $y=y_0-n$, while the latter lies in the plane $y=y_0$), so in particular
$\ell(v(N-i)^{-1}v'(N'-i))\geq n$. This shows that part
(ii) of the definition of biautomaticity does not hold for
$\mathcal L$.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{Bahls2006}{article}{
author={Bahls, Patrick},
title={Some new biautomatic Coxeter groups},
journal={J. Algebra},
volume={296},
date={2006},
number={2},
pages={339--347}}
\bib{BHa}{book}{
author={Bridson, Martin R.},
author={Haefliger, Andr\'e},
title={Metric spaces of non-positive curvature},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={319},
publisher={Springer-Verlag},
place={Berlin},
date={1999}}
\bib{BH}{article}{
author={Brink, Brigitte},
author={Howlett, Robert B.},
title={A finiteness property and an automatic structure for Coxeter
groups},
journal={Math. Ann.},
volume={296},
date={1993},
number={1},
pages={179--190}}
\bib{Cap2009}{article}{
author={Caprace, Pierre-Emmanuel},
title={Buildings with isolated subspaces and relatively hyperbolic
Coxeter groups},
journal={Innov. Incidence Geom.},
volume={10},
date={2009},
pages={15--31}}
\bib{CaMu2005}{article}{
author={Caprace, Pierre-Emmanuel},
author={M\"{u}hlherr, Bernhard},
title={Reflection triangles in Coxeter groups and biautomaticity},
journal={J. Group Theory},
volume={8},
date={2005},
number={4},
pages={467--489}}
\bib{CaSh1995}{article}{
author={Cartwright, Donald I.},
author={Shapiro, Michael},
title={Hyperbolic buildings, affine buildings, and automatic groups},
journal={Michigan Math. J.},
volume={42},
date={1995},
number={3},
pages={511--523}}
\bib{E}{book}{
author={Epstein, David B. A.},
author={Cannon, James W.},
author={Holt, Derek F.},
author={Levy, Silvio V. F.},
author={Paterson, Michael S.},
author={Thurston, William P.},
title={Word processing in groups},
publisher={Jones and Bartlett Publishers, Boston, MA},
date={1992}}
\bib{FHT}{article}{
author={Farb, Benson},
author={Hruska, Chris},
author={Thomas, Anne},
title={Problems on automorphism groups of nonpositively curved polyhedral
complexes and their lattices},
conference={
title={Geometry, rigidity, and group actions},
},
book={
series={Chicago Lectures in Math.},
publisher={Univ. Chicago Press, Chicago, IL},
},
date={2011},
pages={515--560}}
\bib{GeSh1}{article}{
author={Gersten, Stephen M.},
author={Short, Hamish},
title={Small cancellation theory and automatic groups},
journal={Invent. Math.},
volume={102},
date={1990},
number={2},
pages={305--334}}
\bib{GeSh2}{article}{
author={Gersten, Stephen M.},
author={Short, Hamish},
title={Small cancellation theory and automatic groups: Part II},
journal={Invent. Math.},
volume={105},
date={1991},
number={3},
pages={641--662}}
\bib{LeMi}{article}{
title={Commensurating {HNN}-extensions: non-positive curvature and biautomaticity},
author={Leary, Ian},
author={Minasyan, Ashot},
eprint={arXiv:1907.03515},
status={to appear},
journal={Geom. Topol.},
date={2020}}
\bib{LS}{book}{
author={Lyndon, Roger C.},
author={Schupp, Paul E.},
title={Combinatorial group theory},
note={Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 89},
publisher={Springer-Verlag, Berlin-New York},
date={1977},
pages={xiv+339}}
\bib{Munro}{article}{
author={Munro, Zachary},
title={Weak modularity and $\widetilde{A}_n$ buildings},
status={submitted},
date={2019},
eprint={arXiv:1906.10259}}
\bib{NR}{article}{
author={Niblo, Graham A.},
author={Reeves, Lawrence D.},
title={The geometry of cube complexes and the complexity of their
fundamental groups},
journal={Topology},
volume={37},
date={1998},
number={3},
pages={621--633}}
\bib{NiRe2003}{article}{
author={Niblo, Graham A.},
author={Reeves, Lawrence D.},
title={Coxeter groups act on ${\rm CAT}(0)$ cube complexes},
journal={J. Group Theory},
volume={6},
date={2003},
number={3},
pages={399--413}}
\bib{Nos2000}{article}{
author={Noskov, Gennady A.},
title={Combing Euclidean buildings},
journal={Geom. Topol.},
volume={4},
date={2000},
pages={85--116}}
\bib{R}{book}{
author={Ronan, Mark},
title={Lectures on buildings},
series={Perspectives in Mathematics},
volume={7},
publisher={Academic Press, Inc., Boston, MA},
date={1989},
pages={xiv+201}}
\bib{Sw}{article}{
author={\'{S}wi{\k{a}}tkowski, Jacek},
title={Regular path systems and (bi)automatic groups},
journal={Geom. Dedicata},
volume={118},
date={2006},
pages={23--48}}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
\setcounter{equation}{0}
After the realization that the effective Lagrangian of non--abelian
gauge theories is invariant with respect to Becchi--Rouet--Stora--Tyutin
(BRST) \cite{1} as well as
anti--BRST transformations \cite{1a}, it has been recognized that
this invariance can be used as a fundamental principle in the construction
of covariantly quantized gauge theories (for a modern introduction see
\cite{1b}). In particular, a superfield
formulation of quantized pure Yang--Mills theories by Bonora and Tonin
provides a convenient framework for describing the extended BRST symmetries
\cite{2}. In this framework the extended BRST symmetries are realized as
translations in a superspace along additional anticommuting coordinates
(for a more recent approach, we refer to \cite{3} and references therein).
A $Sp(2)$--covariant superfield description of Lagrangian quantization
of general gauge theories, which is applicable irrespective of
whether the theories are irreducible or reducible and whether
the gauge algebra is closed or open, has been given in Ref. \cite{4}.
A corresponding superfield formulation of the quantization procedure
in the Hamiltonian approach for theories with first--class constraints
has been given in Ref.~\cite{5}.
Recently, the $Sp(2)$--quantization of Batalin, Lavrov and Tyutin (BLT)
has been extended to a formalism which is based on the orthosymplectic
superalgebra $osp(1,2)$ \cite{6} and which can be applied to {\it massive}
gauge theories. This is achieved by incorporating into the extended BRST
transformations $m$--dependent terms in such a way that the $m$--extended
(anti)BRST symmetry of the quantum action $W_m$ is preserved.
In that approach $W_m$ is required to satisfy the
generalized quantum master equations of $m$--extended BRST symmetry
and, in addition, of $Sp(2)$ symmetry,
\begin{eqnarray}
\label{qme1}
\hbox{$\frac{1}{2}$} ( W_m, W_m )^a + V_m^a W_m
&=& i \hbar \Delta^a W_m
\quad\Longleftrightarrow\quad
\bar{\Delta}_m^a \exp\{(i/\hbar) W_m\}=0,
\\
\label{qme2}
\hbox{$\frac{1}{2}$} \{ W_m, W_m \}_\alpha + V_\alpha W_m
&=& i \hbar \Delta_\alpha W_m
\quad\Longleftrightarrow\quad
\bar{\Delta}_\alpha \exp\{(i/\hbar) W_m\}=0,
\end{eqnarray}
respectively, whose generating (second order) differential operators
\begin{eqnarray}
\label{Delta1}
\bar{\Delta}_m^a &\equiv& \Delta^a + (i/\hbar) V_m^a,
\qquad (a = 1,2),\\
\label{Delta2}
\bar{\Delta}_\alpha &\equiv& \Delta_\alpha + (i/\hbar) V_\alpha,
\qquad
(\alpha = 0, \pm 1) ,
\end{eqnarray}
form a superalgebra isomorphic to $osp(1,2)$ (the definitions of the
(anti)brackets and the operators $\bar{\Delta}_m^a$ and
$\bar{\Delta}_\alpha$ are given below).
The incorporation of mass terms into the action $W_m$ is necessary at least
intermediately in the renormalization scheme of Bogoliubov, Parasiuk, Hepp,
Zimmermann and Lowenstein (BPHZL) \cite{BPHZL}.
An essential ingredient to deal with
massless theories in that scheme consists in the introduction of a
regularizing mass $m = (s - 1) M$ for any massless field and performing
ultraviolet as well as infrared subtractions thereby avoiding
spurious infrared singularities in the limit $s \rightarrow 1$. By using
such an infrared regularization -- without violating the extended BRST
symmetries -- the $osp(1,2)$--superalgebra appears necessarily.
Moreover, the BPHZL renormalization scheme is probably the mathematically
best founded one for formulating the quantum master equations at the level
of algebraic renormalization theory and for properly computing
higher--loop anomalies \cite{7}. The reason is the following:
The only quantity that remains undefined
in the above--mentioned approaches of quantizing general gauge theories
is the right--hand side of the quantum master equations (that problem already
occurs in the Batalin--Vilkovisky (BV) field--antifield formalism). At the
classical level, the extended BRST invariance in the $osp(1,2)$--approach
is expressed by the classical master equations
$\hbox{$\frac{1}{2}$} ( S_m, S_m )^a + V_m^a S_m = 0$, where $S_m$ is the
lowest order approximation in $\hbar$ of $W_m$. On the quantum level,
formal manipulations modify the classical master equations into
Eq.~(\ref{qme1}).
When applied to the local functional $W_m$
the operation $\Delta^a W_m$ leads to the ill--defined expression $\delta(0)$.
Well--defined expressions for the regularized operators $\Delta^a$ are
proposed at one--loop level in \cite{8} within the context of Pauli--Villars
regularization and at higher order in \cite{9} for non--local regularization.
However, by means of the BPHZL renormalization scheme, which bypasses any
ultraviolet regularizations, the right--hand side of the quantum master
equations can be defined by using Zimmermann's normal products to any
order of perturbation theory \cite{7}.
The purpose of the present paper is to reveal the geometrical content of the
$osp(1,2)$--covariant Lagrangian quantization which amounts to understanding
the geometrical meaning of the $m$--dependent part of the extended BRST
transformations. For that reason the theory will be described in terms of
super(anti)fields. Our approach is based on the idea of considering
$osp(1,2)$ as a subsuperalgebra of the superalgebra $sl(1,2)$. The latter
algebra, being isomorphic to $osp(2,2)$, contains four bosonic generators
$V_\alpha$ and $V$, which form the Lie algebra $sl(2) \oplus u(1)$, and
four (nilpotent) fermionic generators $V_+^a$ and $V_-^a$.
The even part of $osp(1,2)$ is the algebra $sl(2)$
generating the special linear transformations, but due to its isomorphism
to the algebra $sp(2)$ we will speak about symplectic transformations.
The eigenvalues of the generators $V_\alpha$ for $\alpha = 0$ define
the ghost numbers, whereas the eigenvalues of the
generator $V$ define what in Ref.~\cite{10} was called the `new ghost
number'. The generators $V_+^a$ and $V_-^a$ have opposite new ghost numbers,
${\rm ngh}(V_\pm^a) = \pm 1$, respectively. But, introducing a mass
$m$, to which formally a new ghost number is attributed as well,
${\rm ngh}(m) = 1$, they can be combined into two fermionic
generators $V_m^a = V_+^a + \hbox{$\frac{1}{2}$} m^2 V_-^a$ of the
superalgebra $osp(1,2)$. For $m\neq 0$ these generators $V_m^a$
are neither nilpotent nor do they anticommute among themselves.
The key observation that allows for a geometric interpretation of the
superalgebra $sl(1,2)$ is due to Baulieu, Siegel and Zwiebach \cite{11},
who, in a quite different context of string theory, gave a description of $sl(1,2)$
as the algebra generating conformal transformations in a 2--dimensional
superspace. Hence, the generators $V_+^a, V_-^a,
V^{ab} = (\sigma^\alpha)^{ab} V_\alpha$ and $V$ of the superalgebra
$sl(1,2)$, with $(\sigma^\alpha)^{ab}$ generating the fundamental
representation of $sl(2)$, may be considered as generators of translations
$i P^a$, special conformal transformations $i K^a$, symplectic rotations
$i M^{ab}$ and dilatations $-i D$, respectively, in superspace.
This leads immediately to a `natural' geometric formulation of the $osp(1,2)$
quantization procedure:
In a superspace description the invariance of $W_m$ under $m$--extended
BRST transformations, generated by
$V_m^a = V_+^a + \hbox{$\frac{1}{2}$} m^2 V_-^a$, corresponds to translations
combined with $m$--dependent special conformal transformations, and
its invariance
under $Sp(2)$--transformations, generated by $V_\alpha$, corresponds to
symplectic rotations. Furthermore, solutions $S_m$ of the classical master
equations $\hbox{$\frac{1}{2}$} ( S_m, S_m )^a + V_m^a S_m = 0$ and
$\{ S_m, S_m \}_\alpha + V_\alpha S_m = 0$ with vanishing new ghost number,
${\rm ngh}(S_m) = 0$, correspond to solutions in the superspace being
invariant under dilatations, generated by $V$.
The paper is organized as follows. In Sect.~2 we briefly review some basic
definitions and properties of $L$--stage reducible gauge theories and we
introduce the corresponding configuration space of fields and antifields.
Furthermore, the (anti)commutation relations of the superalgebra $sl(1,2)$
are defined and an explicit realization in terms of linear differential
operators acting on the antifields is given.
In Sect.~3 the superalgebra $sl(1,2)$ is realized as the algebra of the
conformal group in superspace where the usual space--time is extended
by two extra anticommuting coordinates $\theta^a$. Moreover, we give
a superspace representation of the algebra $sl(1,2)$ acting linearly
on the super(anti)fields.
In Sect.~4 the $osp(1,2)$--covariant superfield quantization rules for
general gauge theories are formulated. Besides, it is shown that proper
solutions of the classical master equations can be constructed being
invariant under $osp(1,2) \oplus u(1)$, where the additional $u(1)$ symmetry
is related to the new ghost number conservation; however, this symmetry is
broken by choosing a gauge. Sect.~5 is devoted to studying the (in)dependence
of general Green's functions on the choice of the gauge. In the $osp(1,2)$
approach it is proven that mass terms generally destroy gauge independence;
however, this gauge dependence disappears in the limit $m = 0$.
In Sect.~6 we construct $osp(1,2) \oplus u(1)$ symmetric proper solutions
of the classical master equations. Moreover, the problem of how to determine
the transformations of the gauge fields and the full set of the necessary
(anti)ghost and auxiliary fields under the superalgebra $sl(1,2)$ is
solved both for irreducible and first--stage reducible theories with closed
algebra.
Throughout this paper we have used the condensed notation introduced by
DeWitt \cite{12} and conventions adopted in Ref.~\cite{6}; if not specified
otherwise, derivatives with respect to the superantifields
$\bar{\Phi}_A(\theta)$ and the superspace coordinates $\theta^a$ are the
(usual) left ones and those with respect to the superfields $\Phi^A(\theta)$
are {\it right} ones. Left derivatives with respect to $\Phi^A(\theta)$ and
right derivatives with respect to $\theta^a$ are
labelled by the subscript $L$ and $R$, respectively; for example,
$\delta_L/ \delta \Phi^A(\theta)$ ($\partial_R/ \partial \theta^a$)
denotes the left(right) derivative with respect to the superfields
$\Phi^A(\theta)$ (the superspace coordinates $\theta^a$).
\section{Realization of $sl(1,2)$ in terms of antifields}
\setcounter{equation}{0}
\noindent (A) {\it General gauge theories }
\\
Before going into the main subject of this section let us briefly introduce the
basic definitions of general gauge theories and the corresponding
configuration space of fields and antifields:
A set of gauge (as well as matter) fields $A^i$ with Grassmann parities
$\epsilon(A^i) = \epsilon_i$ will be considered whose classical action
$S_{\rm cl}(A)$ is invariant under the gauge transformations
\begin{equation}
\label{II1}
\delta A^i = R^i_{\alpha_0} \xi^{\alpha_0},
\qquad
\alpha_0 = 1, \ldots, n_0,
\qquad
S_{{\rm cl}, i} R^i_{\alpha_0} = 0;
\end{equation}
here, $\xi^{\alpha_0}$ are the parameters of these transformations and
$R^i_{\alpha_0}(A)$ are the gauge generators having Grassmann parity
$\epsilon(\xi^{\alpha_0}) = \epsilon_{\alpha_0}$ and
$\epsilon(R^i_{\alpha_0}) = \epsilon_i + \epsilon_{\alpha_0}$, respectively;
by definition $X_{, j} = \delta X/ \delta A^j$.
For {\it general gauge theories} the algebra of generators has the form
\cite{10}:
\begin{equation}
\label{II2}
R_{\alpha_0, j}^i R_{\beta_0}^j -
(-1)^{\epsilon_{\alpha_0} \epsilon_{\beta_0}}
R_{\beta_0, j}^i R_{\alpha_0}^j =
- R_{\gamma_0}^i F_{\alpha_0 \beta_0}^{\gamma_0} -
M^{ij}_{\alpha_0 \beta_0} S_{{\rm cl}, j},
\end{equation}
where $F_{\alpha_0 \beta_0}^{\gamma_0}(A)$ are the field--dependent
structure functions and the matrix $M^{ij}_{\alpha_0 \beta_0}(A)$ is graded
antisymmetric with respect to $(ij)$ and $(\alpha_0 \beta_0)$. The gauge
algebra is said to be {\it closed} if $M_{\alpha_0 \beta_0}^{ij} = 0$,
otherwise it is called {\it open}. Moreover, Eq. (\ref{II2}) defines a
Lie algebra if the algebra is closed and the
$F_{\alpha_0 \beta_0}^{\gamma_0}$ do not depend on $A^i$.
If the generators $R^i_{\alpha_0}$ are linearly {\it independent}, then
the theory is called {\it irreducible} \cite{13}. On the other hand,
if the generators $R^i_{\alpha_0}$ are not independent, i.e.,
if on--shell certain relations exist among them, then,
according to the following characterization, the theory under consideration
is called $L$--stage {\it reducible} \cite{14}: \\
There exists a chain of field--dependent on--shell zero--modes
$Z^{\alpha_s - 1}_{\alpha_s}(A)$,
\begin{alignat*}{2}
R^i_{\alpha_0} Z^{\alpha_0}_{\alpha_1} &=
S_{{\rm cl}, j} K^{ji}_{\alpha_1},
&\qquad
K^{ij}_{\alpha_1} &= - (-1)^{\epsilon_i \epsilon_j} K^{ji}_{\alpha_1},
\\
Z^{\alpha_{s - 2}}_{\alpha_{s - 1}} Z^{\alpha_{s - 1}}_{\alpha_s} &=
S_{{\rm cl}, j} K^{j \alpha_{s - 2}}_{\alpha_s},
&\qquad
\alpha_s &= 1, \ldots, n_s, ~ s = 2, \ldots, L,
\end{alignat*}
where the stage $L$ of reducibility is defined by the lowest value $s$
for which the matrix $Z^{\alpha_{L - 1}}_{\alpha_L}(A)$ is no longer
degenerate. The $Z^{\alpha_{s - 1}}_{\alpha_s}$ are the on--shell zero
modes for $Z^{\alpha_{s - 2}}_{\alpha_{s - 1}}$ with
$\epsilon(Z^{\alpha_{s - 1}}_{\alpha_s}) = \epsilon_{\alpha_{s - 1}} +
\epsilon_{\alpha_s}$.
In the following, if not stated otherwise, we assume $s$ to
take on the values $s = 0, \ldots, L$, thereby including also the case of
irreducible theories.
The whole space of fields $\phi^A$ and antifields $\bar\phi_A,
\phi^*_{Aa}, \eta_A$ together with their Grassmann parities
(modulo 2) is characterized by the following sets \cite{10,6}
\begin{alignat*}{2}
\phi^A &= ( A^i, B^{\alpha_s| a_1 \cdots a_s},
C^{\alpha_s| a_0 \cdots a_s}, s = 0, \ldots, L ),
&\qquad
\epsilon(\phi^A) &\equiv \epsilon_A =
( \epsilon_i, \epsilon_{\alpha_s} + s, \epsilon_{\alpha_s} + s + 1 )
\\
\bar{\phi}_A &= ( \bar{A}_i, \bar{B}_{\alpha_s| a_1 \cdots a_s},
\bar{C}_{\alpha_s| a_0 \cdots a_s}, s = 0, \ldots, L ),
&\qquad
\epsilon(\bar{\phi}_A) &= \epsilon_A,
\\
\phi^*_{A a} &= ( A^*_{i a}, B^*_{\alpha_s a| a_1 \cdots a_s},
C^*_{\alpha_s a| a_0 \cdots a_s}, s = 0, \ldots, L ),
&\qquad
\epsilon(\phi^*_{A a}) &= \epsilon_A + 1,
\\
\eta_A &= (D_i, E_{\alpha_s| a_1 \cdots a_s},
F_{\alpha_s| a_0 \cdots a_s}, s = 0, \ldots, L),
&\qquad
\epsilon(\eta_A) &= \epsilon_A,
\end{alignat*}
respectively.
Here, the pyramids of auxiliary fields $B^{\alpha_s| a_1 \cdots a_s}$ and
(anti)ghosts $C^{\alpha_s| a_0 \cdots a_s}$ are
$Sp(2)$--tensors of rank $s$ and $s + 1$, respectively,
being completely {\it symmetric} with
respect to the `internal' $Sp(2)$--indices $a_i=1,2,\;(i=0,1,\ldots,s)$;
similarly for the
antifields $\bar\phi_A, \phi^*_{Aa}$ and sources $\eta_A$.
The independent index $a=1,2$
which counts the two components of a $Sp(2)$--spinor will be called
`external'.
The totally symmetrized $Sp(2)$--tensors are irreducible and have maximal
$Sp(2)$--spin.
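For instance, for an irreducible theory ($L = 0$) these sets reduce to
\begin{equation*}
\phi^A = ( A^i, B^{\alpha_0}, C^{\alpha_0| a_0} ),
\qquad
\epsilon(\phi^A) = ( \epsilon_i, \epsilon_{\alpha_0}, \epsilon_{\alpha_0} + 1 ),
\end{equation*}
i.e., besides the gauge fields one has only the auxiliary fields $B^{\alpha_0}$
and the $Sp(2)$--doublet of (anti)ghosts $C^{\alpha_0| a_0}$; the antifields
$\bar{\phi}_A$, $\phi^*_{A a}$ and the sources $\eta_A$ reduce accordingly.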
Raising and lowering of $Sp(2)$--indices is obtained by the invariant tensor
\begin{equation*}
\epsilon^{ab} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
\qquad
\epsilon^{ac} \epsilon_{cb} = \delta^a_b.
\end{equation*}
\smallskip
\noindent (B) {\it The superalgebra $sl(1,2)$}
\\
The main goal of this Section is to determine the action of the
generators of the superalgebra
$sl(1,2)$ on the antifields $\bar{\phi}_A$, $\phi_{A a}^*$ and $\eta_A$.
Let us now introduce that algebra.
The even part of $sl(1,2) \sim sl(2,1)$ is the Lie algebra
$sl(2) \oplus u(1)$. We denote by $V_\alpha$, ($\alpha = 0, \pm$) the (real)
generators of $SL(2)$ and by $V$ the generator of $U(1)$. The odd part of
$sl(1,2)$ contains two (nilpotent) $SL(2)$--spinors, $V_\pm^a$, with spin
$\frac{1}{2}$ and Weyl weight $\alpha(V_\pm^a) = \pm 1$, respectively.
Spin and Weyl weight of $V_\pm^a$ are defined through their behaviour
under the action of the generators $V_\alpha$ and $V$, respectively.
\footnote{
Identifying $V\equiv iD$ with $D$ being the generator of dilatations
in superspace, as will be done in Sect.~3, the Weyl weight coincides
with the superspace scale dimension of the corresponding quantity.
Of course, the latter should not be confused with the scale dimension
of any quantity in ordinary space--time.}
The (anti)commutation relations of the superalgebra $sl(1,2)$ are
\cite{15}:
\begin{alignat}{3}
\label{II3}
[ V, V_\alpha ] &= 0,
&\qquad
[ V, V_+^a ] &= V_+^a,
&\qquad
[ V, V_-^a ] &= - V_-^a,
\nonumber
\\
[ V_\alpha, V_\beta ] &= \epsilon_{\alpha\beta}^{~~~\!\gamma} V_\gamma,
&\qquad
[ V_\alpha, V_+^a ] &= V_+^b (\sigma_\alpha)_b^{~a},
&\qquad
[ V_\alpha, V_-^a ] &= V_-^b (\sigma_\alpha)_b^{~a},
\\
\{ V_+^a, V_+^b \} &= 0,
&\qquad
\{ V_-^a, V_-^b \} &= 0,
&\qquad
\{ V_+^a, V_-^b \} &= - (\sigma^\alpha)^{ab} V_\alpha - \epsilon^{ab} V,
\nonumber
\end{alignat}
where the $Sp(2)$--indices are raised or lowered according to
\begin{gather*}
(\sigma_\alpha)^{ab} = \epsilon^{ac} (\sigma_\alpha)_c^{~b} =
(\sigma_\alpha)^a_{~c} \epsilon^{cb} =
\epsilon^{ac} (\sigma_\alpha)_{cd} \epsilon^{db},
\\
(\sigma_\alpha)_a^{~b} = - (\sigma_\alpha)^b_{~a},
\qquad~
(\sigma_\alpha)^{ab} = (\sigma_\alpha)^{ba}.
\end{gather*}
The matrices $\sigma_\alpha (\alpha = 0, \pm)$ generate the (real) Lie algebra
$sl(2)$ being isomorphic to $sp(2)$:
\begin{gather}
\label{II4}
(\sigma_\alpha)_a^{~c} (\sigma_\beta)_c^{~b} = g_{\alpha\beta} \delta_a^b +
\hbox{$\frac{1}{2}$} \epsilon_{\alpha\beta\gamma} (\sigma^\gamma)_a^{~b},
\qquad
(\sigma^\alpha)_a^{~b} = g^{\alpha\beta} (\sigma_\beta)_a^{~b},
\\
g^{\alpha\beta} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 2 \\ 0 & 2 & 0
\end{pmatrix},
\qquad
g^{\alpha\gamma} g_{\gamma\beta} = \delta^\alpha_\beta,
\nonumber
\end{gather}
where $\epsilon_{\alpha\beta\gamma}$ is the totally antisymmetric tensor,
$\epsilon_{0+-} = 1$. For the generators $\sigma_\alpha$ we may choose
the representation $(\sigma_0)_a^{~b} = \tau_3$ and
$(\sigma_\pm)_a^{~b} = - \hbox{$\frac{1}{2}$} (\tau_1 \pm i \tau_2)$, with
$\tau_\alpha$ ($\alpha = 1,2,3$) being the Pauli matrices.
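In this representation the matrices read explicitly
\begin{equation*}
(\sigma_0)_a^{~b} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
\qquad
(\sigma_+)_a^{~b} = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix},
\qquad
(\sigma_-)_a^{~b} = \begin{pmatrix} 0 & 0 \\ -1 & 0 \end{pmatrix},
\end{equation*}
and the relations (\ref{II4}) may be checked directly; e.g.,
$\sigma_0 \sigma_+ = \sigma_+ = \hbox{$\frac{1}{2}$} \epsilon_{0+-} \sigma^-$,
in accordance with $g_{0+} = 0$ and $\sigma^- = g^{-+} \sigma_+ = 2 \sigma_+$.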
Let us now rewrite the $sl(1,2)$--algebra in two equivalent forms,
both of which being of physical relevance in the following.
First, introducing another basis $V^{ab}$ of the $SL(2)$--generators, namely
\begin{equation}
\label{II5}
V^{ab} = (\sigma^\alpha)^{ab} V_\alpha,
\end{equation}
and making use of the equalities
\begin{equation*}
(\sigma^\alpha)^{ab} (\sigma_\alpha)_d^{~c} =
- \epsilon^{c \{a } \delta^{b\} }_d,
\qquad
\epsilon_{\alpha\beta}^{~~~\!\gamma}
(\sigma^\alpha)^{ab} (\sigma^\beta)^{cd} =
- \epsilon^{\{c \{a } (\sigma^\gamma)^{ b\} d\} },
\end{equation*}
where the curly brackets $\{ ~ \}$ indicate symmetrization of indices,
the (anti)commutation relations of $sl(1,2)$ read
\begin{alignat}{3}
[ V, V^{ab} ] &= 0,
&\qquad
[ V, V_+^a ] &= V_+^a,
&\qquad
[ V, V_-^a ] &= - V_-^a,
\nonumber
\\
\label{II6}
[ V^{ab}, V^{cd} ] &= - \epsilon^{\{c \{a } V^{ b\} d\} },
&\qquad
[ V^{ab}, V_+^c ] &= - \epsilon^{c \{a } V_+^{ b\} },
&\qquad
[ V^{ab}, V_-^c ] &= - \epsilon^{c \{a } V_-^{ b\} },
\\
\{ V_+^a, V_+^b \} &= 0,
&\qquad
\{ V_-^a, V_-^b \} &= 0,
&\qquad
\{ V_+^a, V_-^b \} &= - V^{ab} - \epsilon^{ab} V.
\nonumber
\end{alignat}
In that form the superalgebra $sl(1,2)$ may be given a geometric
interpretation as the algebra of the conformal group in a 2--dimensional
superspace having two anticommuting coordinates (see Sect.~3 below).
Secondly, we remark that within the field--antifield formalism not the entire
$sl(1,2)$--superalgebra will be of physical relevance,
since not all of its generators define symmetry operations of the
quantum action -- only certain combinations of them forming an
orthosymplectic superalgebra $osp(1,2)$ generate symmetries
(see Sect.~4 below). Therefore, with respect to this let us notice
the isomorphism between $sl(1,2)$ and $osp(2,2)$ by introducing
the following two combinations of $V_+^a$ and $V_-^a$,
\begin{equation*}
O_+^a \equiv V_+^a + \hbox{$\frac{1}{2}$} V_-^a,
\qquad
O_-^a \equiv V_+^a - \hbox{$\frac{1}{2}$} V_-^a.
\end{equation*}
Then for the (anti)commutation relations of the superalgebra $osp(2,2)$
we obtain
\begin{alignat*}{3}
[ V, V_\alpha ] &= 0,
&\qquad
[ V, O_+^a ] &= O_-^a,
&\qquad
[ V, O_-^a ] &= O_+^a,
\\
[ V_\alpha, V_\beta ] &= \epsilon_{\alpha\beta}^{~~~\!\gamma} V_\gamma,
&\qquad
[ V_\alpha, O_+^a ] &= O_+^b (\sigma_\alpha)_b^{~a},
&\qquad
[ V_\alpha, O_-^a ] &= O_-^b (\sigma_\alpha)_b^{~a},
\\
\{ O_+^a, O_+^b \} &= - (\sigma^\alpha)^{ab} V_\alpha,
&\qquad
\{ O_-^a, O_-^b \} &= (\sigma^\alpha)^{ab} V_\alpha,
&\qquad
\{ O_+^a, O_-^b \} &= - \epsilon^{ab} V.
\end{alignat*}
Here, $(V_\alpha, O_+^a)$ as well as $(V_\alpha, O_-^a)$
obey two different
$osp(1,2)$--superalgebras with $(V, O_-^a)$ as well as $(V, O_+^a)$
forming an
irreducible tensor of these algebras, respectively, both of them
transforming according to the same representation. Notice that both $O_+^a$ and $O_-^a$ are
neither nilpotent nor do they anticommute among themselves.
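For instance, from (\ref{II3}) one finds
\begin{equation*}
\{ O_+^a, O_+^b \} =
\hbox{$\frac{1}{2}$} \bigr( \{ V_+^a, V_-^b \} + \{ V_+^b, V_-^a \} \bigr) =
- (\sigma^\alpha)^{ab} V_\alpha,
\end{equation*}
the contributions proportional to $\epsilon^{ab} V$ cancelling owing to the
antisymmetry of $\epsilon^{ab}$; the remaining relations are verified in the
same manner.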
\smallskip
\noindent (C) {\it Representation of $sl(1,2)$ on the antifields}
\\
Now, let us give an explicit {\em linear} realization of the generators
of the superalgebra (\ref{II3}) by their action on the antifields
$\bar{\phi}_A$, $\phi_{A a}^*$ and the sources $\eta_A$
(a nonlinear realization on the fields $\phi^A$ will be given in Sect.~4):
\begin{alignat}{2}
V_+^a \bar{\phi}_A &= \epsilon^{ab} \phi^*_{A b},
&\qquad
V_-^a \bar{\phi}_A &= 0,
\nonumber
\\
\label{II7}
V_+^a \phi^*_{A b} &= - \delta^a_b \eta_A,
&\qquad
V_-^a \phi^*_{A b} &= \bar{\phi}_B \bigr(
(\sigma^\alpha)^a_{~b} (\sigma_\alpha)^B_{~~\!A} -
\delta^a_b {\bar\gamma}^B_A \bigr),
\\
V_+^a \eta_A &= 0,
&\qquad
V_-^a \eta_A &= \phi^*_{B b} \bigr(
(\sigma^\alpha)^{ab} (\sigma_\alpha)^B_{~~\!A} -
\epsilon^{ab} ({\bar\gamma}^B_A + 2 \delta^B_A) \bigr),
\nonumber
\\
\nonumber
\\
V_\alpha \bar{\phi}_A &= \bar{\phi}_B (\sigma_\alpha)^B_{~~\!A},
&\qquad
V \bar{\phi}_A &= \bar{\phi}_B {\bar\gamma}^B_A,
\nonumber
\\
\label{II8}
V_\alpha \phi_{A b}^* &= \phi_{B b}^* (\sigma_\alpha)^B_{~~\!A} +
\phi_{A a}^* (\sigma_\alpha)^a_{~b},
&\qquad
V \phi^*_{A b} &= \phi^*_{B b} ({\bar\gamma}^B_A + \delta^B_A),
\\
V_\alpha \eta_A &= \eta_B (\sigma_\alpha)^B_{~~\!A},
&\qquad
V \eta_A &= \eta_B ({\bar\gamma}^B_A + 2 \delta^B_A)
\nonumber
\end{alignat}
(for a componentwise notation see Appendix A). In Eqs.~(\ref{II7}),
(\ref{II8})
we introduced two kinds of matrices which deserve some explanation.
The matrices $(\sigma_\alpha)^B_{~~\!A}$ are generalized $\sigma$--matrices
acting only on internal $Sp(2)$--indices of the (anti)fields, for example,
\begin{equation*}
\bar{\phi}_B (\sigma_\alpha)^B_{~~\!A} = \Bigr( 0,
\sum_{r = 1}^s \bar{B}_{\alpha_s| a_1 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r},
\sum_{r = 0}^s \bar{C}_{\alpha_s| a_0 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r} \Bigr);
\end{equation*}
their general definition is given by
\begin{equation}
\label{II9}
(\sigma_\alpha)^B_{~~\!A} \equiv \begin{cases}
\delta^{\beta_s}_{\alpha_s} (s + 1) (\sigma_\alpha)^b_{~a}
S^{b_1 \cdots b_s a}_{a_1 \cdots a_s b}
& \text{for $A = \alpha_s|a_1 \cdots a_s, B = \beta_s|b_1 \cdots b_s$},
\\
\delta^{\beta_s}_{\alpha_s} (s + 2) (\sigma_\alpha)^b_{~a}
S^{b_0 \cdots b_s a}_{a_0 \cdots a_s b}
& \text{for $A = \alpha_s|a_0 \cdots a_s, B = \beta_s|b_0 \cdots b_s$},
\\
0 & \text{otherwise},
\end{cases}
\end{equation}
where the symmetrizer $S^{b_0 \cdots b_s a}_{a_0 \cdots a_s b}$ is defined as
\begin{equation*}
S^{b_0 \cdots b_s a}_{a_0 \cdots a_s b} \equiv
\frac{1}{(s + 2)!} \frac{\partial}{\partial X^{a_0}} \cdots
\frac{\partial}{\partial X^{a_s}} \frac{\partial}{\partial X^b}
X^a X^{b_s} \cdots X^{b_0},
\end{equation*}
$X^a$ being independent bosonic variables.
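In the lowest case, $s = 0$, this definition yields the usual symmetrizer on
two indices,
\begin{equation*}
S^{b_0 a}_{a_0 b} = \hbox{$\frac{1}{2}$} \bigr(
\delta^a_b \delta^{b_0}_{a_0} + \delta^a_{a_0} \delta^{b_0}_b \bigr).
\end{equation*}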
These operators, obeying $ S^{b_0 \cdots b_s a}_{c_0 \cdots c_s d}
S^{c_0 \cdots c_s d}_{a_0 \cdots a_s b} =
S^{b_0 \cdots b_s a}_{a_0 \cdots a_s b} $, possess the additional properties
\begin{align*}
S^{b_0 \cdots b_s a}_{a_0 \cdots a_s b} &= \frac{1}{s + 2} \Bigr(
\sum_{r = 0}^s \delta^{b_r}_{a_0}
S^{b_0 \cdots b_{r - 1} b_{r + 1} \cdots b_s a}_{a_1 \cdots a_s b} +
\frac{1}{s + 1} \sum_{r = 0}^s \delta^a_{a_0} \delta^{b_r}_b
S^{b_0 \cdots b_{r - 1} b_{r + 1} \cdots b_s}_{a_1 \cdots a_s} \Bigr),
\\
S^{b_0 \cdots b_s}_{a_0 \cdots a_s} &= \frac{1}{s + 1}
\sum_{r = 0}^s \delta^{b_r}_{a_0}
S^{b_0 \cdots b_{r - 1} b_{r + 1} \cdots b_s}_{a_1 \cdots a_s}.
\end{align*}
Furthermore, ${\bar\gamma}^B_A = \alpha(\bar{\phi}_A) \delta^B_A$ are
arbitrary diagonal matrices whose entries $\alpha(\bar{\phi}_A)$,
in general, may be any (real) numbers. By definition, cf.~Eq.~(\ref{II8}),
$\alpha(\bar{\phi}_A)$ is the (up to now arbitrary)
Weyl weight of the antifields $\bar\phi$. (This arbitrariness may be traced
back to the fact that these representations of $sl(1,2)$ are not completely
reducible, cf.~\cite{15}). Taking advantage of that freedom we may fix
$\alpha(\bar{\phi}_A)$ by relating it to the Weyl weight
$\alpha(\phi^A)$ of the fields $\phi^A$ -- which is uniquely determined
by means of the quantum master equations at the lowest order of $\hbar$
(see Sect. 4 and 6 below) -- according to
\begin{equation}
\label{IIQ}
{\bar\gamma}^B_A + \gamma^B_A + 2 \delta^B_A = 0,
\qquad
{\rm i.e.,}
\qquad
\alpha(\bar{\phi}_A) + \alpha(\phi^A) + 2 =0,
\end{equation}
where $\gamma^B_A = \alpha(\phi^A) \delta^B_A$ is the
analogous (diagonal) matrix in the $sl(1,2)$--representations of the fields
\footnote{
The requirement (\ref{IIQ})
ensures that (proper) solutions $S_m$ of the $m$--extended {\it classical}
master equations can be constructed having vanishing Weyl weight,
$\alpha(S_m) = 0$. Later on, we identify the Weyl weight of the (anti)fields
with the new ghost number introduced in Ref. \cite{10}.}.
These matrices $\gamma^B_A $ are given by
\begin{equation}
\label{II10}
\gamma^B_A \equiv \begin{cases}
\delta^{\beta_s}_{\alpha_s} (s + 2)
\delta^{b_1}_{a_1} \cdots \delta^{b_s}_{a_s}
& \text{for $A = \alpha_s|a_1 \cdots a_s, B = \beta_s|b_1 \cdots b_s$},
\\
\delta^{\beta_s}_{\alpha_s} (s + 1)
\delta^{b_0}_{a_0} \cdots \delta^{b_s}_{a_s}
& \text{for $A = \alpha_s|a_0 \cdots a_s, B = \beta_s|b_0 \cdots b_s$},
\\
0 & \text{otherwise}.
\end{cases}
\end{equation}
From their entries one may read off
the Weyl weight $\alpha(\phi^A)$ of the fields $\phi^A$, namely
\begin{equation}
\label{II11}
\alpha(\phi^A) = ( 0, s + 2, s + 1 ),
\end{equation}
and, through Eq.~(\ref{IIQ}), the Weyl weights of the
antifields $\bar{\phi}_A, \phi_{A a}^*$ and $\eta_A$,
\begin{equation}
\label{II12}
\alpha(\bar{\phi}_A) = - \alpha(\phi^A) - 2,
\qquad
\alpha(\phi_{A a}^*) = - \alpha(\phi^A) - 1,
\qquad
\alpha(\eta_A) = - \alpha(\phi^A).
\end{equation}
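For instance, $\alpha(A^i) = 0$ entails $\alpha(\bar{A}_i) = - 2$,
$\alpha(A^*_{i a}) = - 1$ and $\alpha(D_i) = 0$, whereas
$\alpha(C^{\alpha_s| a_0 \cdots a_s}) = s + 1$ entails
$\alpha(\bar{C}_{\alpha_s| a_0 \cdots a_s}) = - s - 3$,
$\alpha(C^*_{\alpha_s a| a_0 \cdots a_s}) = - s - 2$ and
$\alpha(F_{\alpha_s| a_0 \cdots a_s}) = - s - 1$.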
In order to prove that the transformations (\ref{II7}) and (\ref{II8}) obey
the $sl(1,2)$--superalgebra one needs the basic properties (\ref{II4}) of the
matrices $\sigma_\alpha$ and the following two equalities:
\begin{align*}
\epsilon^{ac} \delta^b_d + \epsilon^{bc} \delta^a_d &=
- (\sigma^\alpha)^{ab} (\sigma_\alpha)^c_{~d},
\\
(\sigma^\alpha)^{ab} \bigr(
(\sigma_\alpha)^c_{~e} \delta^d_f + \delta^c_e (\sigma_\alpha)^d_{~f} \bigr)
&=
(\sigma^\alpha)^{ab} \bigr(
(\sigma_\alpha)^d_{~e} \delta^c_f + \delta^d_e (\sigma_\alpha)^c_{~f} \bigr),
\end{align*}
which can be proven by means of the following relations:
\begin{equation*}
\epsilon^{ab} \delta^c_d +
\epsilon^{bc} \delta^a_d +
\epsilon^{ca} \delta^b_d = 0,
\qquad
\epsilon^{ab} ( \delta^c_e \delta^d_f - \delta^d_e \delta^c_f ) =
\epsilon^{cd} ( \delta^a_e \delta^b_f - \delta^b_e \delta^a_f ).
\end{equation*}
\section{Superspace representations of the algebra $sl(1,2)$}
\setcounter{equation}{0}
This Section is devoted to a geometric interpretation of
the superalgebra $sl(1,2)$ as given by Eqs. (\ref{II6}). This opens
the possibility of formulating the quantization of general gauge theories
in terms of super(anti)fields over a 2--dimensional superspace.
\medskip
\noindent (A) {\it Representations of $sl(1,2)$ in superspace}
\\
In Ref. \cite{11} it was pointed out that the generators of the (real)
algebra $osp(1,1|2) \sim sl(1,2)$ acquire a clear geometric meaning if
they are interpreted as generators of transformations in superspace.
This is obtained by redefining the generators of $sl(1,2)$ as follows:
\begin{equation}
\label{III13}
V_+^a \equiv - i P^a,
\qquad
V_-^a \equiv - i K^a,
\qquad
V^{ab} \equiv - i M^{ab},
\qquad
V \equiv i D.
\end{equation}
Then, the (anti)commutation relations resulting from (\ref{II6}) can be
interpreted as algebra of the conformal group in two {\em anticommuting}
dimensions with metric tensor $\epsilon^{ab}$:
\begin{alignat}{3}
\hspace{-.2cm}
[ D, M^{ab} ] &= 0,
&\quad
[ D, P^a ] &= - i P^a,
&\quad
[ D, K^a ] &= i K^a,
\nonumber
\\
\label{III14}
\hspace{-.2cm}
[ M^{ab}, M^{cd} ] &= - i \epsilon^{\{c \{a } M^{ b\} d\} },
&\quad
[ M^{ab}, P^c ] &= - i \epsilon^{c \{a } P^{ b\} },
&\quad
[ M^{ab}, K^c ] &= - i \epsilon^{c \{a } K^{ b\} },
\\
\hspace{-.2cm}
\{ P^a, P^b \} &= 0,
&\quad
\{ K^a, K^b \} &= 0,
&\quad
\{ P^a, K^b \} &= i ( \epsilon^{ab} D - M^{ab} ),
\nonumber
\end{alignat}
with $P^a$, $K^a$, $M^{ab}$ and $D$ being the generators of translations,
special conformal transformations, (symplectic) rotations and
dilatations, respectively. The superspace which we encounter here is obtained
by extending the usual spacetime to include two extra anticommuting
coordinates $\theta^a$. Raising and lowering of $Sp(2)$--indices are
defined by the rules $\theta^a = \epsilon^{ab} \theta_b$ and
$\theta_a = \epsilon_{ab} \theta^b$; the square of $\theta^a$ and the
derivative with respect to it are defined by
$\theta^2 \equiv \hbox{$\frac{1}{2}$} \epsilon_{ab} \theta^b \theta^a$ and
$\partial^2/ \partial \theta^2 \equiv \hbox{$\frac{1}{2}$} \epsilon^{ab}
\partial^2/ \partial \theta^b \partial \theta^a$.
The representation of the algebra (\ref{III14}) in that superspace is given by
\begin{align}
\label{III15}
P^a &= i \frac{\partial}{\partial \theta_a},
\\
\label{III16}
K^a &= 2 i \theta^2 \frac{\partial}{\partial \theta_a} -
\theta_b ( \Sigma^{ab} - i \epsilon^{ab} \Delta ),
\\
\label{III17}
M^{ab} &= - i \Bigr(
\theta^a \frac{\partial}{\partial \theta_b} +
\theta^b \frac{\partial}{\partial \theta_a} \Bigr) + \Sigma^{ab},
\\
\label{III18}
D &= i \theta_a \frac{\partial}{\partial \theta_a} - i \Delta,
\end{align}
where $\Sigma^{ab}$ and $\Delta$ constitute the basis of some
finite--dimensional representation of the algebra of the ``little group'',
i.e., the stabilizer subgroup of that conformal group,
\begin{equation*}
[ \Sigma^{ab}, \Sigma^{cd} ] = - \epsilon^{ \{c \{a } \Sigma^{ b\} d\} },
\qquad
[ \Delta, \Sigma^{ab} ] = 0.
\end{equation*}
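Note that $\theta_a \partial/ \partial \theta_a$ is the Euler operator
counting the degree in the anticommuting coordinates, whence
$[ \theta_b \partial/ \partial \theta_b, \partial/ \partial \theta_a ] =
- \partial/ \partial \theta_a$; inserting this into (\ref{III15}) and
(\ref{III18}) one immediately recovers, e.g., the relation
$[ D, P^a ] = - i P^a$ of (\ref{III14}).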
Obviously, the corresponding representation of the algebra (\ref{II6}) is
obtained by a change of the $SL(2)$--generators analogous to (\ref{II5}),
$\Sigma^{ab} = i (\sigma^\alpha)^{ab} \Sigma_\alpha$, with $\Sigma_\alpha$
being related to the matrix representation of the $V_\alpha$'s and
satisfying
\begin{equation}
\label{III19}
[ \Sigma_\alpha, \Sigma_\beta ] = \epsilon_{\alpha\beta}^{~~~\!\gamma}
\Sigma_\gamma,
\qquad
[ \Delta, \Sigma_\alpha ] = 0.
\end{equation}
The corresponding representation of the generators (\ref{III13}) in the
superspace reads
\begin{align}
\label{III15a}
V_+^a &= \frac{\partial}{\partial \theta_a},
\\
\label{III16a}
V_-^a &= 2 \theta^2 \frac{\partial}{\partial \theta_a} -
\theta_b \bigr( (\sigma^\alpha)^{ab} \Sigma_\alpha -
\epsilon^{ab} \Delta \bigr),
\\
\label{III17a}
V_\alpha &= \theta^a (\sigma_\alpha)^a_{~b}
\frac{\partial}{\partial \theta_b} + \Sigma_\alpha,
\\
\label{III18a}
V &= - \theta_a \frac{\partial}{\partial \theta_a} + \Delta.
\end{align}
\smallskip
\noindent (B) {\it Representation of $sl(1,2)$ on super(anti)fields}
\\
Now, having revealed the geometrical content of the generators of $sl(1,2)$
we are able to formulate the transformations (\ref{II7}) and (\ref{II8})
in superspace. Let $\Phi^A(\theta)$,
$\epsilon(\Phi^A(\theta)) \equiv \epsilon_A$, be a set of superfields with
the restriction $\Phi^A(\theta)|_{\theta = 0} = \phi^A$. It admits the
following general expansion in terms of component fields,
\begin{equation}
\label{III24}
\Phi^A(\theta) = \phi^A + \pi^{A a} \theta_a - \lambda^A \theta^2,
\qquad
\frac{\delta}{\delta \Phi^A(\theta)} = \frac{\delta}{\delta \phi^A} \theta^2 -
\theta^a \frac{\delta}{\delta \pi^{A a}} -
\frac{\delta}{\delta \lambda^A}
\end{equation}
(remember that, according to the general convention, derivatives
with respect to the fields are defined as acting from the {\em right}).
With each superfield $\Phi^A(\theta)$ a superantifield
$\bar{\Phi}_A(\theta)$ is associated having the {\em same} Grassmann parity,
$\epsilon(\bar{\Phi}_A(\theta)) = \epsilon_A$,
\begin{equation}
\label{III25}
\bar{\Phi}_A(\theta) = \bar{\phi}_A - \theta^a \phi^*_{A a} -
\theta^2 \eta_A,
\qquad
\frac{\delta}{\delta \bar{\Phi}_A(\theta)} =
\theta^2 \frac{\delta}{\delta \bar{\phi}_A} +
\frac{\delta}{\delta \phi^*_{A a}} \theta_a -
\frac{\delta}{\delta \eta_A}.
\end{equation}
According to (\ref{III24}) and (\ref{III25}) the derivatives satisfy
\begin{equation*}
\frac{\delta \Phi^A(\theta)}{\delta \Phi^B(\bar{\theta})} =
\frac{\delta \bar{\Phi}_B(\theta)}{\delta \bar{\Phi}_A(\bar{\theta})} =
\delta^A_B \delta^2(\theta - \bar{\theta}),
\qquad
\hbox{with}
\qquad
\delta^2(\theta - \bar{\theta}) \equiv (\theta - \bar{\theta})^2.
\end{equation*}
Then, with the help of $\bar{\Phi}_A(\theta)$ the $sl(1,2)$--transformations
(\ref{II7}) and (\ref{II8}) may be written in the following compact form:
\begin{align}
\label{III26}
V_+^a \bar{\Phi}_A(\theta) &=
\frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a},
\\
\label{III27}
V_-^a \bar{\Phi}_A(\theta) &=
2 \theta^2 \frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a} -
\theta_b \bigr(
(\sigma^\alpha)^{ab} \Sigma_\alpha - \epsilon^{ab} \Delta \bigr)
\bar{\Phi}_A(\theta),
\\
\label{III28}
V_\alpha \bar{\Phi}_A(\theta) &= - \Bigr\{
\theta_a (\sigma_\alpha)^a_{~b}
\frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_b} +
\Sigma_\alpha \bar{\Phi}_A(\theta) \Bigr\},
\\
\label{III29}
V \bar{\Phi}_A(\theta) &= - \Bigr\{
- \theta_a \frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a} +
\Delta \bar{\Phi}_A(\theta) \Bigr\}
\end{align}
with
\begin{equation}
\label{III30}
\Sigma_\alpha \bar{\Phi}_A(\theta) = - \bar{\Phi}_B(\theta)
(\sigma_\alpha)^B_{~~\!A},
\qquad
\Delta \bar{\Phi}_A(\theta) = - \bar{\Phi}_B(\theta)
{\bar\gamma}^B_{~~\!A}.
\end{equation}
Some care has to be taken in order to get the correct signs in these
equations. First, in order to ensure that the transformation laws
(\ref{III26})--(\ref{III29}) are compatible with the superalgebra (\ref{II6})
it is necessary to take into account an extra minus sign on the right--hand
side of (\ref{III28}) and (\ref{III29})
(cf. Eqs. (\ref{III17a}), (\ref{III18a})).
Since the matrices $\Sigma_\alpha$ generate an irreducible representation of
the symplectic group, by virtue of (\ref{III19}), $- \Delta$ must be a number
which, by definition, agrees with the Weyl weight of the superantifields
(observe $\alpha(\theta)=1$ in accordance with Eqs. (\ref{II12})).
Secondly, let us emphasize that the minus sign on the
right--hand side of the first relation (\ref{III30}) is crucial:
A further transformation in (\ref{III28}) does not act on the
numerical matrices $\Sigma_\alpha$ but directly on $\bar{\Phi}_A(\theta)$;
this reverses the factors on the right--hand side relative to those on the
left--hand side, and the minus sign is therefore necessary to retain the multiplication
law of the conformal group.
Collecting the results obtained so far, the representation of the generators
of $sl(1,2)$ by differential operators on the superspace reads
\begin{align}
\label{III31}
V_+^a &= \int d^2 \theta \,
\frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a}
\frac{\delta}{\delta \bar{\Phi}_A(\theta)},
\\
\label{III32}
V_-^a &= \int d^2 \theta \, \Bigr\{
2 \theta^2 \frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a} +
\theta_b \bar{\Phi}_B(\theta) \bigr(
(\sigma^\alpha)^{ab} (\sigma_\alpha)^B_{~~\!A} -
\epsilon^{ab} {\bar\gamma}^B_A \bigr) \Bigr\}
\frac{\delta}{\delta \bar{\Phi}_A(\theta)},
\\
\label{III33}
V_\alpha &= \int d^2 \theta \, \Bigr\{
- \theta_a (\sigma_\alpha)^a_{~b}
\frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_b} +
\bar{\Phi}_B(\theta) (\sigma_\alpha)^B_{~~\!A} \Bigr\}
\frac{\delta}{\delta \bar{\Phi}_A(\theta)},
\\
\label{III34}
V &= \int d^2 \theta \, \Bigr\{
\theta_a \frac{\partial \bar{\Phi}_A(\theta)}{\partial \theta_a} +
\bar{\Phi}_B(\theta) {\bar\gamma}^B_A \Bigr\}
\frac{\delta}{\delta \bar{\Phi}_A(\theta)},
\end{align}
where the integration over $\theta^a$ is given by
\begin{equation*}
\int d^2 \theta = 0,
\qquad
\int d^2 \theta \, \theta^a = 0,
\qquad
\int d^2 \theta \, \theta^a \theta^b = \epsilon^{ab}.
\end{equation*}
Making use of the expansions (\ref{III25}) for $\bar{\Phi}_A(\theta)$
and $\delta/ \delta \bar{\Phi}_A(\theta)$ and performing in
Eqs. (\ref{III31})--(\ref{III34}) the $\theta$--integration
it is easily verified that the resulting expressions for $V_\pm^a$,
$V_\alpha$ and $V$ generate exactly the transformations
(\ref{II7}) and (\ref{II8}) of the component fields of $\bar{\Phi}_A(\theta)$.
\smallskip
Furthermore, let us give also a superspace representation of $sl(1,2)$ in
terms of $\Phi^A(\theta)$. The corresponding generators $U_\pm^a$, $U_\alpha$
and $U$ being defined as {\it right} derivatives -- in contrast to
$V_\pm^a$, $V_\alpha$ and $V$, which are defined as {\it left} ones --
obey the following (anti)commutation relations (cf. Eqs. (\ref{II3}))
\begin{alignat}{3}
[ U, U_\alpha ] &= 0,
&\qquad
[ U, U_+^a ] &= - U_+^a,
&\qquad
[ U, U_-^a ] &= U_-^a,
\nonumber
\\
\label{III35}
[ U_\alpha, U_\beta ] &= - \epsilon_{\alpha\beta}^{~~~\!\gamma} U_\gamma,
&\qquad
[ U_\alpha, U_+^a ] &= - U_+^b (\sigma_\alpha)_b^{~a},
&\qquad
[ U_\alpha, U_-^a ] &= - U_-^b (\sigma_\alpha)_b^{~a},
\\
\{ U_+^a, U_+^b \} &= 0,
&\qquad
\{ U_-^a, U_-^b \} &= 0,
&\qquad
\{ U_+^a, U_-^b \} &= (\sigma^\alpha)^{ab} U_\alpha + \epsilon^{ab} U.
\nonumber
\end{alignat}
If we replace in Eqs. (\ref{III31})--(\ref{III34}) the superantifield
$\bar{\Phi}_A(\theta)$ by $\Phi^A(\theta)$, the left derivatives
$\delta_L/ \delta \bar{\Phi}_A(\theta)$ by the right derivatives
$\delta_R/ \delta \Phi^A(\theta)$, and reverse the order of all the factors,
then for the representations we are looking for we obtain
\begin{align}
\label{III36}
U_+^a &= \int d^2 \theta \, \frac{\delta}{\delta \Phi^A(\theta)}
\frac{\partial_R \Phi^A(\theta)}{\partial \theta_a},
\\
\label{III37}
U_-^a &= \int d^2 \theta \, \frac{\delta}{\delta \Phi^A(\theta)} \Bigr\{
2 \theta^2 \frac{\partial_R \Phi^A(\theta)}{\partial \theta_a} +
\bigr( (\sigma^\alpha)^{ab} (\sigma_\alpha)^A_{~~\!B} +
\epsilon^{ab} \gamma^A_B \bigr)
\Phi^B(\theta) \theta_b \Bigr\},
\\
\label{III38}
U_\alpha &= \int d^2 \theta \, \frac{\delta}{\delta \Phi^A(\theta)} \Bigr\{
- \frac{\partial_R \Phi^A(\theta)}{\partial \theta_b}
(\sigma_\alpha)_b^{~a} \theta_a +
(\sigma_\alpha)^A_{~~\!B} \Phi^B(\theta) \Bigr\},
\\
\label{III39}
U &= \int d^2 \theta \,
\frac{\delta}{\delta \Phi^A(\theta)} \Bigr\{
\frac{\partial_R \Phi^A(\theta)}{\partial \theta_a} \theta_a +
\gamma^A_B \Phi^B(\theta) \Bigr\}.
\end{align}
In addition, we have replaced ${\bar\gamma}^B_A$ by the (diagonal)
matrix $\gamma^B_A = \alpha(\phi^A) \delta^B_A$, whose entries
$\alpha(\phi^A)$ are given by Eq. (\ref{II11}).
Making use of the expansions (\ref{III24}) for $\Phi^A(\theta)$ and
$\delta/\delta \Phi^A(\theta)$ and integrating in
Eqs. (\ref{III36})--(\ref{III39}) over $\theta^a$ for the components of
$\Phi^A(\theta)$ one obtains the (linear) transformations
\begin{alignat}{2}
\phi^A U_+^a &= \pi^{A a},
&\quad
\phi^A U_-^a &= 0,
\nonumber
\\
\label{U1}
\pi^{A b} U_+^a &= - \epsilon^{ab} \lambda^A,
&\quad
\pi^{A b} U_-^a &= \bigr(
(\sigma^\alpha)^{ab} (\sigma_\alpha)^A_{~~\!B} +
\epsilon^{ab} \gamma^A_B \bigr) \phi^B,
\\
\lambda^A U_+^a &= 0,
&\quad
\lambda^A U_-^a &= \bigr(
(\sigma^\alpha)^a_{~b} (\sigma_\alpha)^A_{~~\!B} +
\delta^a_b ( \gamma^A_B + 2 \delta^A_B ) \bigr) \pi^{B b},
\nonumber
\\
\nonumber
\\
\phi^A U_\alpha &= (\sigma_\alpha)^A_{~~\!B} \phi^B,
&\quad
\phi^A U &= \gamma^A_B \phi^B,
\nonumber
\\
\label{U2}
\pi^{A a} U_\alpha &= (\sigma_\alpha)^A_{~~\!B} \pi^{B a} +
(\sigma_\alpha)^a_{~b} \pi^{A b},
&\quad
\pi^{A a} U &= ( \gamma^A_B + \delta^A_B ) \pi^{B a},
\\
\lambda^A U_\alpha &= (\sigma_\alpha)^A_{~~\!B} \lambda^B,
&\quad
\lambda^A U &= ( \gamma^A_B + 2 \delta^A_B ) \lambda^B,
\nonumber
\end{alignat}
which define the explicit realization of $sl(1,2)$ on the superfield,
analogous to Eqs. (\ref{II7}) and (\ref{II8}). By a straightforward
calculation it is verified that the transformations (\ref{U1}) and (\ref{U2})
indeed satisfy the $sl(1,2)$--superalgebra (\ref{III35}).
\section{Quantum master equations}
\setcounter{equation}{0}
The superspace representation of $sl(1,2)$ obtained in the previous section
enables one to attack the problem of superfield quantization of general
gauge theories.
A superfield version for the $Sp(2)$--covariant Lagrangian quantization
was proposed in Ref. \cite{4}. In that approach the quantum action
$W(\Phi^A(\theta), \bar{\Phi}_A(\theta))$ is required to be invariant under
the (anti)BRST transformations which, in superspace, are realized as
translations along the coordinates $\theta^a$.
In order to proceed further in the development of that formalism one may
attempt
to include also special conformal transformations, symplectic rotations and
dilatations by imposing additional symmetry requirements. Such an
extension is possible, but only for one of the two
$osp(1,2)$--subalgebras of $osp(2,2) \sim sl(1,2)$. Indeed, for a
superfield description of the $osp(1,2)$--covariant quantization procedure
introduced in Ref. \cite{6} one needs translations as well as
special conformal transformations and symplectic rotations. In that
approach the translations are combined with the special conformal
transformations by means of a mass parameter $m$ leading to $m$--dependent
(anti)BRST transformations. The invariance under symplectic transformations
ensures the ghost number conservation of the corresponding quantum
action $W_m(\Phi^A(\theta), \bar{\Phi}_A(\theta))$. In addition, the
dilatations may be used to ensure the new ghost number conservation of
$W_m(\Phi^A(\theta), \bar{\Phi}_A(\theta))$ at the lowest order of $\hbar$.
\smallskip
\noindent (A) {\it Sp(2)--covariant superfield quantization }
\\
To begin with, we shortly review the $Sp(2)$--covariant superfield
quantization \cite{4}. Let us introduce the antisymplectic differential
operators
\begin{equation}
\label{IV40}
\bar{\Delta}^a = \Delta^a + (i/\hbar) V^a,
\qquad
V^a \equiv V_+^a,
\end{equation}
with the translations $V_+^a$ given by Eq.~(\ref{III31}) and
the nilpotent (second--order) differential operators
$\Delta^a$ given by
\begin{equation}
\label{Delta}
\Delta^a = \int d^2 \theta
\frac{\partial^2 \delta_L}{\partial \theta^2 \delta \Phi^A(\theta)} \,
\theta^a \frac{\delta}{\delta \bar{\Phi}_A(\theta)} =
(-1)^{\epsilon_A} \frac{\delta_L}{ \delta \phi^A} \,
\frac{\delta}{ \delta \phi_{A a}^*}.
\end{equation}
Let us remark that this definition of $\Delta^a$, projecting out from
$\delta_L/ \delta \Phi^A(\theta)$ only the first component, agrees with the
initial definition in Ref. \cite{10} but differs from that in Ref. \cite{4}.
In our opinion the definition (\ref{Delta}) seems to be much better adapted
to the present aim than that of Ref. \cite{4} since a change of the definition
of $\Delta^a$, like in the triplectic quantization \cite{16}, requires also
a change of the definition of $V^a$ -- but then the geometric meaning of
$V^a$ would be lost. The operators $\bar{\Delta}^a$, $\Delta^a$ and $V^a$
possess the important properties of nilpotency
and (relative) anticommutativity,
\begin{equation*}
\{ \bar{\Delta}^a, \bar{\Delta}^b \} = 0
\quad
\Longleftrightarrow
\quad
\{ \Delta^a, \Delta^b \} = 0,
\quad
\{ V^a, V^b \} = 0,
\quad
\{ \Delta^a, V^b \} = 0.
\end{equation*}
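Indeed, expanding $\bar{\Delta}^a = \Delta^a + (i/\hbar) V^a$ one finds
\begin{equation*}
\{ \bar{\Delta}^a, \bar{\Delta}^b \} = \{ \Delta^a, \Delta^b \} +
(i/\hbar) \bigl( \{ \Delta^a, V^b \} + \{ V^a, \Delta^b \} \bigr) -
(1/\hbar^2) \{ V^a, V^b \},
\end{equation*}
and, since the three contributions enter with different powers of $\hbar$,
they have to vanish separately.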
The basic object of the superfield quantization is the quantum action
$W(\Phi^A(\theta), \bar{\Phi}_A(\theta))$, which is required to be a solution
of the quantum master equation
\begin{equation}
\label{IV41}
\bar{\Delta}^a\, {\rm exp}\{ (i/ \hbar) W \} = 0
\qquad
\Longleftrightarrow
\qquad
\hbox{$\frac{1}{2}$} ( W, W )^a + V^a W = i \hbar \Delta^a W,
\end{equation}
where the superantibrackets $( F,G )^a$ are defined by
\begin{equation}
\label{IV42}
( F,G )^a = (-1)^{\epsilon_A} \int d^2 \theta \, \Bigr\{
\frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \theta^a
\frac{\delta G}{\delta \bar{\Phi}_A(\theta)} -
(-1)^{(\epsilon(F) + 1) (\epsilon(G) + 1)} (F \leftrightarrow G) \Bigr\}.
\end{equation}
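For orientation let us indicate how the two forms of Eq. (\ref{IV41}) are
related: with the normalizations implied by Eqs. (\ref{Delta}) and
(\ref{IV42}) one has
\begin{equation*}
\bar{\Delta}^a\, {\rm exp}\{ (i/ \hbar) W \} = \Bigl(
(i/\hbar) \Delta^a W + (i/\hbar)^2 \bigl(
\hbox{$\frac{1}{2}$} ( W, W )^a + V^a W \bigr) \Bigr)\,
{\rm exp}\{ (i/ \hbar) W \},
\end{equation*}
so that multiplying by $- \hbar^2\, {\rm exp}\{ - (i/ \hbar) W \}$ yields the
right--hand form.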
The solution of (\ref{IV41}) is sought as a power series in Planck's
constant $\hbar$,
\begin{equation*}
W = S + \sum_{n = 1}^\infty \hbar^n W_{n}.
\end{equation*}
Furthermore, two requirements -- the nondegeneracy of $S$
and the correctness of the classical limit -- have to be imposed. The first
one is translated into the requirement that $S$ should be a
{\em proper} solution of the classical master equation, i.e.,
the Hessian of second derivatives of $S$
should be of maximal rank at the stationary points, and the second one
means that $S$ should satisfy the usual boundary condition, namely
that $S$ coincides with the classical action $S_{\rm cl}(A)$ if all the
antifields are put equal to zero.
To remove the gauge degeneracy of the action $S$, one introduces the operator
\begin{equation*}
\hat{U}(F) = {\rm exp}\{ (\hbar/ i) \hat{T}(F) \}
\quad
{\rm with}
\quad
\hat{T}(F) = \hbox{$\frac{1}{2}$} \epsilon_{ab}
\{ \bar{\Delta}^b, [ \bar{\Delta}^a, F ] \},
\end{equation*}
$F = F(\Phi^A(\theta))$ being an arbitrary bosonic gauge fixing
functional. Then, the gauge fixed quantum action
$W_{\rm ext}(\Phi^A(\theta), \bar{\Phi}_A(\theta))$, defined by
\begin{equation}
\label{IV43}
{\rm exp}\{ (i/ \hbar) W_{\rm ext} \} =
\hat{U}(F) \,{\rm exp}\{ (i/ \hbar) W \},
\end{equation}
is also a solution of the quantum master equations (\ref{IV41}).
\smallskip
\noindent (B) {\it osp(1,2)--covariant superfield quantization }
\\
Let us now give the superfield description of the $osp(1,2)$--covariant
quantization \cite{6}. In that approach the antisymplectic differential
operators (\ref{IV40}) are replaced by
\begin{equation}
\label{IV44}
\bar{\Delta}_m^a = \Delta^a + (i/\hbar) V_m^a,
\qquad
V_m^a \equiv V_+^a + \hbox{$\frac{1}{2}$} m^2 V_-^a,
\end{equation}
with the special conformal operators $V_-^a$ given by Eq. (\ref{III32}).
Here, the mass parameter $m$, having Weyl weight $\alpha(m) = 1$, is introduced
because $V_+^a$ and $V_-^a$ have different mass dimensions (and opposite
Weyl weight $\alpha(V_\pm^a) = \pm 1$). In addition, one introduces the
differential operators
\begin{equation}
\label{IV45}
\bar{\Delta}_\alpha = \Delta_\alpha + (i/\hbar) V_\alpha,
\end{equation}
with the symplectic rotations $V_\alpha$ given by Eq. (\ref{III33}) and
the (second--order) differential operators $\Delta_\alpha$ being defined by
\begin{equation*}
\Delta_\alpha = (-1)^{\epsilon_A + 1} \int d^2 \theta \,
\theta^2 (\sigma_\alpha)_B^{~~\!A} \frac{\partial^2 \delta_L}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\frac{\delta}{\delta \bar{\Phi}_B(\theta)} =
(-1)^{\epsilon_A} (\sigma_\alpha)_B^{~~\!A}
\frac{\delta_L}{ \delta \phi^A} \frac{\delta}{\delta \eta_B}.
\end{equation*}
As long as $m \neq 0$ the operators $\bar{\Delta}_m^a$ are neither nilpotent
nor do they anticommute among themselves; instead, together with the
operators $\bar{\Delta}_\alpha$ they generate a superalgebra isomorphic
to $osp(1,2)$:
\begin{alignat}{2}
[ V_\alpha, V_\beta ] &= \epsilon_{\alpha\beta}^{~~~\!\gamma} V_\gamma,
&\qquad\qquad
[ \bar{\Delta}_\alpha, \bar{\Delta}_\beta ] &= (i/\hbar)
\epsilon_{\alpha\beta}^{~~~\!\gamma} \bar{\Delta}_\gamma,
\nonumber
\\
\label{IV46}
[ V_\alpha, V_m^a ] &= V_m^b (\sigma_\alpha)_b^{~a},
&\qquad\qquad
[ \bar{\Delta}_\alpha, \bar{\Delta}_m^a ] &= (i/\hbar)
\bar{\Delta}_m^b (\sigma_\alpha)_b^{~a},
\\
\{ V_m^a, V_m^b \} &= - m^2 (\sigma^\alpha)^{ab} V_\alpha,
&\qquad\qquad
\{ \bar{\Delta}_m^a, \bar{\Delta}_m^b \} &= - (i/\hbar)
m^2 (\sigma^\alpha)^{ab} \bar{\Delta}_\alpha.
\nonumber
\end{alignat}
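It is instructive to decompose the last of these relations in powers of
$\hbar$: inserting the definitions (\ref{IV44}) and (\ref{IV45}) and
comparing coefficients one obtains
\begin{equation*}
\{ \Delta^a, \Delta^b \} = 0,
\qquad
\{ \Delta^a, V_m^b \} + \{ V_m^a, \Delta^b \} =
- m^2 (\sigma^\alpha)^{ab} \Delta_\alpha,
\qquad
\{ V_m^a, V_m^b \} = - m^2 (\sigma^\alpha)^{ab} V_\alpha,
\end{equation*}
i.e., for $m \neq 0$ the failure of nilpotency resides entirely in the cross
terms and in the algebra of the $V_m^a$.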
The $m$--{\it dependent} quantum action
$W_m(\Phi^A(\theta), \bar{\Phi}_A(\theta))$ is required to obey the
$m$--{\it extended} generalized quantum master equations
\begin{align}
\label{IV47}
\bar{\Delta}_m^a\, {\rm exp}\{ (i/ \hbar) W_m \} = 0
\qquad
&\Longleftrightarrow
\qquad
\hbox{$\frac{1}{2}$} ( W_m, W_m )^a + V_m^a W_m = i \hbar \Delta^a W_m
\\
\intertext{which ensure (anti)BRST invariance, and the generating equations
of $Sp(2)$--invariance:}
\label{IV48}
\bar{\Delta}_\alpha\, {\rm exp}\{ (i/ \hbar) W_m \} = 0
\qquad
&\Longleftrightarrow
\qquad
\hbox{$\frac{1}{2}$} \{ W_m, W_m \}_\alpha + V_\alpha W_m =
i \hbar \Delta_\alpha W_m,
\end{align}
where the curly superbrackets $\{ F,G \}_\alpha$ are defined by
\begin{equation}
\label{IV49}
\{ F,G \}_\alpha = - \int d^2 \theta \, \Bigr\{
\theta^2 \frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\frac{\delta G}{\delta \bar{\Phi}_B(\theta)} (\sigma_\alpha)_B^{~~\!A} +
(-1)^{\epsilon(F) \epsilon(G)} (F \leftrightarrow G) \Bigr\}.
\end{equation}
The gauge fixed quantum action
$W_{m, {\rm ext}}(\Phi^A(\theta), \bar{\Phi}_A(\theta))$ is introduced
according to
\begin{equation}
\label{IV50}
{\rm exp}\{ (i/ \hbar) W_{m, {\rm ext}} \} =
\hat{U}_m(F) \,{\rm exp}\{ (i/ \hbar) W_m \},
\end{equation}
where the operator $\hat{U}_m(F)$ has to be chosen as \cite{6}
\begin{equation*}
\hat{U}_m(F) = {\rm exp}\{(\hbar/ i) \hat{T}_m(F)\}
\quad
{\rm with}
\quad
\hat{T}_m(F) = \hbox{$\frac{1}{2}$} \epsilon_{ab}
\{ \bar{\Delta}_m^b, [ \bar{\Delta}_m^a, F ] \} + (i/ \hbar)^2 m^2 F,
\end{equation*}
$F = F(\Phi^A(\theta))$ being the gauge fixing functional. With these
definitions one establishes the following two relations:
\begin{equation*}
[ \bar{\Delta}_m^a, \hat{T}_m(F) ] = \hbox{$\frac{1}{2}$} (i/\hbar)
(\sigma^\alpha)^a_{~b} [ \bar{\Delta}_m^b, [ \bar{\Delta}_\alpha , F ] ]
\end{equation*}
\vspace*{-1cm}
\begin{equation*}
[ \bar{\Delta}_\alpha, \hat{T}_m(F) ] = \hbox{$\frac{1}{2}$}
\epsilon_{ab} \left\{ \bar{\Delta}_m^b,
[ \bar{\Delta}_m^a, [ \bar{\Delta}_\alpha, F ] ] \right\} +
(i/\hbar)^2 m^2 [ \bar{\Delta}_\alpha, F ].
\end{equation*}
Restricting $F(\Phi^A(\theta))$ to be a $Sp(2)$--scalar by imposing
the condition $[ \bar{\Delta}_\alpha, F ] W_m = 0$ it can be verified
(see Ref.~\cite{6}) that the commutators
$[ \bar{\Delta}_m^a, \hat{U}_m(F) ]$ and
$[ \bar{\Delta}_\alpha, \hat{U}_m(F) ]$, if applied on
${\rm exp}\{ (i/\hbar) W_m \}$, vanish on the subspace of {\em admissible}
actions $W_m$. These actions are determined by the condition
\begin{equation}
\label{IV51}
\int d^2 \theta \, \theta^2 \Bigr\{
\frac{\delta W_m}{\delta \bar{\Phi}_A(\theta)} + \Phi^A(\theta) \Bigr\} = 0
\qquad
\Longleftrightarrow
\qquad
\frac{\delta W_m}{\delta \eta_A} = \phi^A,
\end{equation}
i.e., $W_m$ depends only {\it linearly} on $\eta_A$. This condition ensures
that the gauge fixed quantum action $W_{m, {\rm ext}}$ also satisfies the
quantum master equations (\ref{IV47}) and (\ref{IV48}). Then, by virtue of
(\ref{IV51}), the restriction $[ \bar{\Delta}_\alpha, F ] W_m = 0$ becomes
\begin{equation*}
[ \bar{\Delta}_\alpha, F ] W_m = 0
\qquad
\Longrightarrow
\qquad
\int d^2 \theta \, \theta^2 \frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\Phi^B(\theta) (\sigma_\alpha)_B^{~~\!A} + V_\alpha F = 0,
\end{equation*}
which expresses the $Sp(2)$--invariance of $F$. Furthermore, the
quantum master equations (\ref{IV48}) simplify into
\begin{equation}
\label{IV52}
\bar{\Delta}_\alpha {\rm exp}\{ (i/\hbar) W_m \} = 0
\quad
\Longrightarrow
\quad
\int d^2 \theta \, \theta^2 \frac{\partial^2 \delta W_m}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\Phi^B(\theta) (\sigma_\alpha)_B^{~~\!A} + V_\alpha W_m = 0,
\end{equation}
since the $\sigma_\alpha$--matrices are traceless.
The equations (\ref{IV52}) for $\alpha = 0$ express the
ghost number conservation of the action $W_m$, ${\rm gh}(W_m) = 0$.
Thereby the ghost numbers of the fields and antifields are given by
\begin{align*}
{\rm gh}(\phi^A) &=
- \Bigr( 0, \sum_{r = 1}^s (-1)^{a_r}, \sum_{r = 0}^s (-1)^{a_r} \Bigr),
\qquad
\hbox{where}
\qquad
a_r = 1,2,
\\
{\rm gh}(\bar{\phi}_A) &= - {\rm gh}(\phi^A),
\qquad
{\rm gh}(\phi_{A a}^*) = - {\rm gh}(\phi^A) + (-1)^a,
\qquad
{\rm gh}(\eta_A) = - {\rm gh}(\phi^A).
\end{align*}
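As an illustration, for an irreducible theory ($s = 0$), with the field
content $(A^i, B^{\alpha_0}, C^{\alpha_0 a})$ considered below, these
formulae give
\begin{equation*}
{\rm gh}(A^i) = 0,
\qquad
{\rm gh}(B^{\alpha_0}) = 0,
\qquad
{\rm gh}(C^{\alpha_0 a}) = - (-1)^a,
\end{equation*}
i.e., ghost number $+ 1$ for the ghost ($a = 1$) and $- 1$ for the antighost
($a = 2$).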
\smallskip
\noindent (C) {\it New ghost number conservation }
\\
In Ref. \cite{10} also a so--called new ghost number was ascribed to all
fields and antifields of the solutions of the {\it classical} master
equations in the following way:
\begin{align*}
{\rm ngh}(\phi^A) &= ( 0, s + 2, s + 1 ),
\\
{\rm ngh}(\bar{\phi}_A) &= - {\rm ngh}(\phi^A) - 2,
\qquad
{\rm ngh}(\phi_{A a}^*) = - {\rm ngh}(\phi^A) - 1,
\qquad
{\rm ngh}(\eta_A) = - {\rm ngh}(\phi^A).
\end{align*}
According to these definitions we also have ${\rm ngh}(\theta^a) = -1$.
By comparison with Eqs. (\ref{II11}) and (\ref{II12}) it follows that
the new ghost number agrees with the Weyl weight of the fields and
antifields, i.e.,
\begin{alignat*}{3}
{\rm ngh}(\phi^A) &= \alpha(\phi^A),
&\qquad
{\rm ngh}(\pi^{Aa}) &= \alpha(\pi^{Aa}),
&\qquad
{\rm ngh}(\lambda^A) &= \alpha(\lambda^A),
\\
{\rm ngh}(\bar{\phi}_A) &= \alpha(\bar{\phi}_A),
&\qquad
{\rm ngh}(\phi_{A a}^*) &= \alpha(\phi_{A a}^*),
&\qquad
{\rm ngh}(\eta_A) &= \alpha(\eta_A).
\end{alignat*}
In order to clarify how in our approach both numbers are related
to each other let us introduce the following differential operator
\begin{equation}
\label{IV54}
\bar{\Delta}_m = \Delta + (i/\hbar) V_m,
\qquad
V_m \equiv V + m \frac{\partial}{\partial m},
\end{equation}
with the dilatations $V$ given by Eq. (\ref{III34}) and the (second--order)
differential operator $\Delta$ defined by
\begin{equation*}
\Delta = (-1)^{\epsilon_A + 1} \int d^2 \theta \,
\theta^2 \gamma_B^A \frac{\partial^2 \delta_L}
{\partial \theta^2\delta \Phi^A(\theta)}
\frac{\delta}{\delta \bar{\Phi}_B(\theta)} =
(-1)^{\epsilon_A} \gamma_B^A \frac{\delta_L}{ \delta \phi^A}
\frac{\delta}{\delta \eta_B}.
\end{equation*}
The new operator $\bar{\Delta}_m$ together with the generating operators
$\bar{\Delta}_m^a$ and $\bar{\Delta}_\alpha$ form an extension of the
$osp(1,2)$--superalgebra being isomorphic to $osp(1,2) \oplus u(1)$ where,
in addition to the (anti)commutation relations (\ref{IV46}), the following
relations hold true:
\begin{alignat}{2}
[ V_m, V_m ] &= 0,
&\qquad
[ \bar{\Delta}_m, \bar{\Delta}_m ] &= 0,
\nonumber
\\
\label{Delta_m}
[ V_m, V_\alpha ] &= 0,
&\qquad
[ \bar{\Delta}_m, \bar{\Delta}_\alpha ] &= 0,
\\
[ V_m, V_m^a ] &= V_m^a,
&\qquad
[ \bar{\Delta}_m, \bar{\Delta}_m^a ] &= (i/\hbar) \bar{\Delta}_m^a.
\nonumber
\end{alignat}
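The last of these relations may be checked directly: using
$[ V, V_\pm^a ] = \pm V_\pm^a$ (cf. Eqs. (\ref{II3})) together with
$m \partial_m (m^2) = 2 m^2$ one finds
\begin{equation*}
[ V_m, V_m^a ] = [ V, V_+^a ] + \hbox{$\frac{1}{2}$} m^2 [ V, V_-^a ] +
\Bigl[ m \frac{\partial}{\partial m},
\hbox{$\frac{1}{2}$} m^2 V_-^a \Bigr] =
V_+^a - \hbox{$\frac{1}{2}$} m^2 V_-^a + m^2 V_-^a = V_m^a.
\end{equation*}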
Let us assume now that solutions $W_m$ of the quantum master equations
(\ref{IV47}) and (\ref{IV48}) can be constructed which also satisfy the
following equation:
\begin{equation}
\label{IV55}
\bar{\Delta}_m\, {\rm exp}\{ (i/ \hbar) W_m \} = 0
\qquad
\Longleftrightarrow
\qquad
\hbox{$\frac{1}{2}$} \{ W_m, W_m \} + V_m W_m = i \hbar \Delta_m W_m
\end{equation}
with the following abbreviation
\begin{equation}
\label{IV56}
\{ F,G \} = - \int d^2 \theta \, \Bigr\{
\theta^2 \frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\frac{\delta G}{\delta \bar{\Phi}_B(\theta)} \gamma_B^A +
(-1)^{\epsilon(F) \epsilon(G)} (F \leftrightarrow G) \Bigr\}.
\end{equation}
Notice that $\{ F,G \}$ does {\em not} define a new superbracket since
$\gamma_B^A=\delta_B^A \alpha(\phi^A)$ is a diagonal matrix.
Taking into account the restriction (\ref{IV51}), the additional master
equation (\ref{IV55}) at the lowest order of $\hbar$ simplifies to
\begin{equation}
\label{IV57}
\int d^2 \theta \, \theta^2 \frac{\partial^2 \delta S_m}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\Phi^B(\theta) \gamma_B^A + V_m S_m = 0.
\end{equation}
Obviously, the matrix $\gamma^B_A$ is uniquely determined by solving the
quantum master equations (\ref{IV47}) and (\ref{IV48}) at the lowest order
of $\hbar$, together with Eq. (\ref{IV57}). The matrix ${\bar\gamma}^B_A$,
which enters $V_m$, is fixed by the requirement (\ref{IIQ}).
Equation (\ref{IV57}) expresses the conservation of the new ghost number of
$S_m$ in the case $m \neq 0$, i.e.,
${\rm ngh}(S_m) = 0$. Thereby, we have formally ascribed also a new ghost
number resp. Weyl weight to the mass parameter $m$, namely, according to the
definition of $V_m$, ${\rm ngh}(m) = 1$ resp. $\alpha(m) = 1$. This already
has been used in the definition of $V^a_m$, Eq. (\ref{IV44}).
Let us emphasize that the equation (\ref{IV55}) is quite formal since its
right--hand side, for the same reasons as explained in the Introduction, is
not well defined. Since, generally, the new ghost number is conserved only
in the classical limit, we restricted ourselves in (\ref{IV57}) to the lowest
order approximation. Expressing the new ghost number conservation to higher
orders -- which is, of course, only possible as long as the dilatation
invariance in superspace is not broken by radiative corrections -- requires
a sensible definition of the expression $\Delta_m W_m$ on the right--hand
side of Eq. (\ref{IV55}). In order to obtain a corresponding local quantum
operator equation, which is valid to all orders in perturbation theory, one
can use the method described in Ref. \cite{7}.
Independently, introducing a gauge breaks the new ghost number conservation
of the gauge--fixed quantum action (\ref{IV50}). Namely, because of
\begin{equation*}
[ \Delta_m, \hat{T}_m(F) ] = \hbox{$\frac{1}{2}$}
\epsilon_{ab} \{ \bar{\Delta}_m^b, [ \bar{\Delta}_m^a,
[ \Delta_m, F ] + 2 F ] \} + (i/\hbar)^2 m^2 ( [ \Delta_m, F ] + 2 F ),
\end{equation*}
the action (\ref{IV50}) is a solution of (\ref{IV57}) iff
\begin{equation*}
[ \Delta_m, F ] W_m = - 2 F
\qquad
\Longrightarrow
\qquad
\int d^2 \theta \, \theta^2 \frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\Phi^B(\theta) \gamma_B^A + V_m F = - 2 F,
\end{equation*}
where the second equation follows from the first one by taking into account
the condition (\ref{IV51}). On the other hand, the expression on the
left--hand side (modulo the sign of $F$) can never be negative, since $F$
depends only on $\Phi^A(\theta)$, which has positive Weyl weight,
\begin{equation*}
{\rm sgn}(F) \Bigr\{ \int d^2 \theta \, \theta^2 \frac{\partial^2 \delta F}
{\partial \theta^2 \delta \Phi^A(\theta)} \,
\Phi^B(\theta) \gamma_B^A + V_m F \Bigr\} \geq 0.
\end{equation*}
This proves that the new ghost number conservation is broken through
gauge fixing.
\section{Generating functionals and gauge (in)dependence}
\setcounter{equation}{0}
Next, we turn to the question of gauge (in)dependence of the generating
functionals of Green's functions \cite{10,6}.
\smallskip
\noindent (A) {\it Sp(2)--covariant approach }
\\
In discussing this question it is convenient to study first the symmetry
properties of the vacuum functional $Z(0)$ defined as
\begin{equation}
\label{V59}
Z(0) = \int d \Phi^A(\theta) \, d \bar{\Phi}_A(\theta) \,
\rho(\bar{\Phi}_A(\theta)) \exp\{ (i/ \hbar) ( W_{\rm ext} + S_X ) \}.
\end{equation}
Here, $\rho(\bar{\Phi}_A(\theta))$ is a density having the form of
a $\delta$--functional,
\begin{equation}
\label{V60}
\rho(\bar{\Phi}_A(\theta)) = \delta \bigg(
\int d^2 \theta \, \bar{\Phi}_A(\theta) \bigg),
\end{equation}
and $S_X$ is given by
\begin{equation}
\label{V61a}
S_X = \int d^2 \theta \, \bar{\Phi}_A(\theta) \Phi^A(\theta).
\end{equation}
The term $S_X$ can be cast into the (anti)BRST--invariant form
\begin{eqnarray*}
S_X = \hbox{$\frac{1}{2}$} \epsilon_{ab} \Bigr(
V^b ( V^a X - X U^a ) + ( V^a X - X U^a ) U^b \Bigr),
\qquad
X \equiv - \int d^2 \theta \, \theta^2 \bar{\Phi}_A(\theta) \Phi^A(\theta),
\end{eqnarray*}
with $V^a \equiv V_+^a$ and $U^a \equiv U_+^a$, whose action on
$\bar{\Phi}_A(\theta)$ and $\Phi^A(\theta)$ are defined in
Eqs. (\ref{III31}) and (\ref{III36}), respectively, satisfying
$\{ V^a, V^b \} = 0$ and $\{ U^a, U^b \} = 0$. Let us combine the action
of $V^a$ and $U^a$ on an arbitrary functional $Y$ according to
\begin{equation*}
L^a Y \equiv V^a Y - (-1)^{\epsilon(Y)} Y U^a,
\qquad
\{ L^a, L^b \} = 0,
\end{equation*}
then the operators $L^a$ are nilpotent and anticommuting.
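Indeed, using that the left action of $V^a$ (on the antifield dependence)
and the right action of $U^b$ (on the field dependence) commute,
$V^b ( Y U^a ) = ( V^b Y ) U^a$, a short computation gives
\begin{equation*}
\{ L^a, L^b \} Y = \{ V^a, V^b \} Y - Y \{ U^a, U^b \} = 0,
\end{equation*}
the mixed terms cancelling pairwise.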
Inserting the relation (\ref{IV43}) into expression (\ref{V59}) and
integrating by parts, one obtains
\begin{equation}
\label{V62}
Z(0) = \int d \Phi^A(\theta) \, d \bar{\Phi}_A(\theta)
\rho(\bar{\Phi}_A(\theta))
\exp\{ (i/ \hbar) ( W + S_X + S_F ) \}
\end{equation}
with the following expression for $S_F$:
\begin{equation}
\label{V63}
S_F = - \int d^2 \theta \Bigl\{
\frac{\delta F}{\delta \Phi^A(\theta)}
\frac{\partial^2 \Phi^A(\theta)}{\partial \theta^2} +
\hbox{$\frac{1}{2}$} \epsilon_{ab} \int d^2 \bar{\theta} \,
\frac{\partial \Phi^A(\theta)}{\partial \theta_a}
\frac{\delta^2 F}{\delta \Phi^A(\theta) \delta \Phi^B(\bar{\theta})}
\frac{\partial \Phi^B(\bar{\theta})}{\partial \bar{\theta}_b} \Bigr\}.
\end{equation}
This may also be cast into the (anti)BRST invariant form
\begin{equation*}
S_F = \hbox{$\frac{1}{2}$} \epsilon_{ab} F U^b U^a.
\end{equation*}
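The property $L^a S_F = 0$ is then manifest: since $S_F$ depends only on
$\Phi^A(\theta)$ one has $V^a S_F = 0$, while
\begin{equation*}
S_F U^c = \hbox{$\frac{1}{2}$} \epsilon_{ab} F U^b U^a U^c = 0,
\end{equation*}
because, by $\{ U^a, U^b \} = 0$, any product of three of the two operators
$U^1$, $U^2$ vanishes.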
Then, by virtue of $L^a S_X = 0$ and $L^a S_F = 0$, it can be checked that
the integrand of the vacuum functional (\ref{V62}) is invariant under the
following
global (anti)BRST transformations (thereby, one has to make use of
Eq. (\ref{IV41})):
\begin{equation}
\label{V64}
\delta \Phi^A(\theta) = \Phi^A(\theta) U^a \mu_a,
\qquad
\delta \bar{\Phi}_A(\theta) = \mu_a V^a \bar{\Phi}_A(\theta) +
\mu_a ( W, \bar{\Phi}_A(\theta) )^a,
\end{equation}
where $\mu_a$, $\epsilon(\mu_a) = 1$, is a $Sp(2)$--doublet of constant
anticommuting parameters. Here, we have taken into account that the
density $\rho(\bar{\Phi}_A(\theta)) = \delta(\eta_A)$ is invariant under
the transformations (\ref{V64}). These transformations realize the
(anti)BRST symmetry in the superfield approach to quantum gauge theory.
The invariance of $Z(0)$ under the transformations (\ref{V64}) permits one
to study the question whether $Z(0)$ is independent of the choice of the
gauge. Indeed, let us change the gauge--fixing functional
$F \rightarrow F + \delta F$. Then, the gauge--fixing term $S_F$ changes
according to
\begin{equation}
\label{V65}
S_F \rightarrow S_{F + \delta F} = S_F + S_{\delta F},
\qquad
S_{\delta F} = \hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U^b U^a.
\end{equation}
Now, we perform in the vacuum functional (\ref{V62})
the transformations (\ref{V64}) and choose the parameters $\mu_a$ as follows,
\begin{equation*}
\mu_a = - (i/\hbar) \hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U^b.
\end{equation*}
Thereby we induce the factor ${\rm exp}(\mu_a U^a)$ in the integration
measure. Combining its exponent with $S_F$ leads to
\begin{equation*}
S_F \rightarrow S_F + (\hbar/i) \mu_a U^a = S_F -
\hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U^b U^a = S_F - S_{\delta F}.
\end{equation*}
By comparison with (\ref{V65}) this proves that the vacuum functional
and, therefore, also the $S$--matrix are independent of the choice of the
gauge.
\smallskip
\noindent (B) {\it osp(1,2)--covariant approach }
\\
In this approach the vacuum functional $Z_m(0)$, which depends on
the additional mass parameter $m$, is defined as
\begin{equation}
\label{V66}
Z_m(0) = \int d \Phi^A(\theta) \, d \bar{\Phi}_A(\theta)
\rho(\bar{\Phi}_A(\theta)) \exp\{ (i/ \hbar) ( W_{m, {\rm ext}} +
S_{m, X} ) \},
\end{equation}
with
\begin{equation}
S_{m, X} = S_X + m^2 \int d^2 \theta \, \theta^2
\bar{\Phi}_A(\theta) \gamma^A_B \Phi^B(\theta),
\label{V61b}
\end{equation}
where $S_X$ again is given by Eq. (\ref{V61a}). The term $S_{m, X}$ can
be rewritten as
\begin{equation*}
S_{m, X} = \hbox{$\frac{1}{2}$} \epsilon_{ab} \Big(
V_m^b ( V_m^a X - X U_m^a ) + ( V_m^a X - X U_m^a ) U_m^b \Big) + m^2 X,
\qquad
V_\alpha X + X U_\alpha = 0,
\end{equation*}
with ($V_m^a \equiv V_+^a + \hbox{$\frac{1}{2}$} m^2 V_-^a, V_\alpha$) and
($U_m^a \equiv U_+^a + \hbox{$\frac{1}{2}$} m^2 U_-^a, U_\alpha$) obeying
the following $osp(1,2)$--superalgebras
\begin{alignat}{2}
[ V_\alpha, V_\beta ] &= \epsilon_{\alpha\beta}^{~~~\!\gamma} V_\gamma,
&\qquad\qquad
[ U_\alpha, U_\beta ] &= - \epsilon_{\alpha\beta}^{~~~\!\gamma} U_\gamma,
\nonumber
\\
\label{V67}
[ V_\alpha, V_m^a ] &= V_m^b (\sigma_\alpha)_b^{~a},
&\qquad\qquad
[ U_\alpha, U_m^a ] &= - U_m^b (\sigma_\alpha)_b^{~a},
\\
\{ V_m^a, V_m^b \} &= - m^2 (\sigma^\alpha)^{ab} V_\alpha,
&\qquad\qquad
\{ U_m^a, U_m^b \} &= m^2 (\sigma^\alpha)^{ab} U_\alpha,
\nonumber
\end{alignat}
respectively; the actions of ($V_\pm^a, V_\alpha$) and
($U_\pm^a, U_\alpha$) on $\bar{\Phi}_A(\theta)$ and $\Phi^A(\theta)$
are defined by Eqs. (\ref{III32})--(\ref{III34}) and
(\ref{III37})--(\ref{III39}), respectively.
Inserting the relation (\ref{IV50}) into the expression (\ref{V66})
and integrating by parts, one obtains
\begin{equation}
\label{V68}
Z_m(0) = \int d \Phi^A(\theta) \, d \bar{\Phi}_A(\theta) \,
\rho(\bar{\Phi}_A(\theta))
\exp\{ (i/ \hbar) ( W_m + S_{m, X} + S_{m, F} ) \},
\end{equation}
with
\begin{equation*}
S_{m, F} = S_F - \hbox{$\frac{1}{2}$} m^2
\int d^2 \theta \, \theta^2
\frac{\partial^2 \delta F}{\partial \theta^2 \delta \Phi^A(\theta)}
\gamma^A_B \Phi^B(\theta),
\end{equation*}
where $S_F$ is given by Eq.~(\ref{V63}). The gauge--fixing term $S_{m, F}$
can be rewritten as
\begin{equation*}
S_{m, F} = \hbox{$\frac{1}{2}$} \epsilon_{ab} F U_m^b U_m^a + m^2 F.
\end{equation*}
Let us now introduce the differential operators
\begin{equation*}
L_m^a Y \equiv V_m^a Y - (-1)^{\epsilon(Y)} Y U_m^a,
\qquad
L_\alpha Y \equiv V_\alpha Y + Y U_\alpha,
\end{equation*}
which, by virtue of the relations (\ref{V67}), satisfy the
$osp(1,2)$--superalgebra
\begin{equation*}
[ L_\alpha, L_\beta ] = \epsilon_{\alpha\beta}^{~~~\!\gamma} L_\gamma,
\qquad
[ L_\alpha, L_m^a ] = L_m^b (\sigma_\alpha)_b^{~a},
\qquad
\{ L_m^a, L_m^b \} = - m^2 (\sigma^\alpha)^{ab} L_\alpha.
\end{equation*}
By using this algebra, after tedious but straightforward computations,
one verifies the following relations:
\begin{equation*}
L_m^c ( \hbox{$\frac{1}{2}$} \epsilon_{ab} L_m^b L_m^a + m^2 ) =
\hbox{$\frac{1}{2}$} m^2 (\sigma^\alpha)^c_{~d} L_m^d L_\alpha,
\qquad
[ L_\alpha, \hbox{$\frac{1}{2}$} \epsilon_{ab} L_m^b L_m^a + m^2 ] = 0.
\end{equation*}
Therefore, one has $L_m^a S_{m, X} = 0$ and $L_\alpha S_{m, F} = 0$, since
$X$ and $F$ are $Sp(2)$--invariant. Because $W_m$ exhibits the same
$\eta$--dependence as $- S_{m, X}$, cf. Eqs. (\ref{IV51}), (\ref{V61a}) and
(\ref{V61b}), the sum $W_m + S_{m, X}$ is independent of $\eta_A$ and, hence,
the integration over $\bar\Phi_A$ with the density
$\rho(\bar{\Phi}_A(\theta)) = \delta(\eta_A)$ yields a constant factor which
is equal to one.
We assert now that the integrand in (\ref{V68}) is invariant under the
following global transformations (thereby, one has to make use of
Eqs. (\ref{IV47}) and (\ref{IV48}), respectively):
\begin{align}
\label{V69}
\delta \Phi^A(\theta) &= \Phi^A(\theta) U_m^a \mu_a,
\qquad
\delta \bar{\Phi}_A(\theta) = \mu_a V_m^a \bar{\Phi}_A(\theta) +
\mu_a ( W_m, \bar{\Phi}_A(\theta) )^a
\\
\label{V70}
\delta \Phi^A(\theta) &= \Phi^A(\theta) U_\alpha \mu^\alpha,
\qquad
\delta \bar{\Phi}_A(\theta) = \mu^\alpha V_\alpha \bar{\Phi}_A(\theta) +
\mu^\alpha \{ W_m, \bar{\Phi}_A(\theta) \}_\alpha,
\end{align}
where $\mu_a$, $\epsilon(\mu_a) = 1$, and $\mu^\alpha$,
$\epsilon(\mu^\alpha) = 0$, are constant anticommuting resp.~commuting
parameters. Notice that in the present case $\rho(\bar{\Phi}_A(\theta))$
is {\it not} invariant under the transformations (\ref{V69}).
The transformations (\ref{V69}) and (\ref{V70}) realize the $m$--extended
(anti)BRST-- and $Sp(2)$--symmetry, respectively.
Next, we study the question whether the mass dependent terms in
$Z_m(0)$ violate the independence of the choice of the gauge. Proceeding
as in the previous case, under a change of the gauge--fixing functional
$F \rightarrow F + \delta F$ the gauge--fixing term changes according to
\begin{equation}
\label{V71}
S_{m, F} \rightarrow S_{m, F + \delta F} = S_{m, F} + S_{m, \delta F},
\qquad
S_{m, \delta F} = \hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U_m^b U_m^a +
m^2 \delta F.
\end{equation}
Now, carrying out in (\ref{V68}) the transformations (\ref{V69}), we choose
\begin{equation*}
\mu_a = - (i/\hbar) \hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U_m^b,
\end{equation*}
which leads to
\begin{equation*}
S_{m, F} \rightarrow S_{m, F} + (\hbar/i) \mu_a U_m^a = S_{m, F} -
\hbox{$\frac{1}{2}$} \epsilon_{ab} (\delta F) U_m^b U_m^a =
S_{m, F} - S_{m, \delta F} + m^2 \delta F.
\end{equation*}
By comparison with (\ref{V71}) we observe that the mass term $m^2 F$ violates
the gauge independence of $Z_m(0)$. One may try to
compensate the undesired term $m^2 \delta F$ by means of an additional
change of variables using the transformations (\ref{V70}). But this change
should not destroy the form of the action arrived at in the previous stage.
However, such additional changes of variables lead to a Berezinian which
is equal to one because the $\sigma_\alpha$ are traceless. Thus, the unwanted
term can never be compensated.
\section{Irreducible and first--stage reducible massive theories
with closed algebra}
\setcounter{equation}{0}
In the preceding Sections we gave a general framework for quantizing
massive general gauge theories by introducing on the space of superfields
and superantifields a set of differential operators which obey the
superalgebra $sl(1,2)$. Thereby, we extended our previous work \cite{6}
on $osp(1,2)$--covariant quantization where we already considered the
case of irreducible and first--stage reducible gauge theories with
closed algebra. In order to illustrate our present approach let us
study how the construction of these theories is extended now. (Thereby
we also simplify some of our former calculations.)
\smallskip
\noindent{\it (A) Generic form of the dependence on the antifields}\\
Our aim here is to construct a proper
solution $S_m$ of the {\it classical} master equations
\begin{equation}
\label{VI72}
\hbox{$\frac{1}{2}$} ( S_m, S_m )^a + V_m^a S_m = 0,
\qquad
\hbox{$\frac{1}{2}$} \{ S_m, S_m \}_\alpha + V_\alpha S_m = 0,
\qquad
\hbox{$\frac{1}{2}$} \{ S_m, S_m \} + V_m S_m = 0,
\end{equation}
which are obtained from the quantum master equations (\ref{IV47}),
(\ref{IV48})
and (\ref{IV55}) at the lowest order approximation of $\hbar$.
Let us rewrite the brackets in Eqs. (\ref{VI72}) more explicitly
using their definitions, Eqs. (\ref{IV42}), (\ref{IV49}) and (\ref{IV56}),
\begin{equation}
\label{VI73}
\frac{\delta S_m}{\delta \phi^A}
\frac{\delta S_m}{\delta \phi_{A a}^*} + V_m^a S_m = 0,
\qquad
\frac{\delta S_m}{\delta \phi^A}
\frac{\delta S_m}{\delta \eta_B}
(\sigma_\alpha)_B^{~~\!A} + V_\alpha S_m = 0,
\qquad
\frac{\delta S_m}{\delta \phi^A}
\frac{\delta S_m}{\delta \eta_B}
\gamma^B_A + V_m S_m = 0,
\end{equation}
with $V_m^a \equiv V_+^a + \hbox{$\frac{1}{2}$} m^2 V_-^a$ and
$V_m \equiv V + m \partial/\partial m$, where the action of $V_\pm^a$,
$V_\alpha$ and $V$ on the antifields is given by
(see Eqs. (\ref{II7}) and (\ref{II8})):
\begin{align*}
V_+^a &= \epsilon^{ab} \phi_{A b}^* \frac{\delta}{\delta \bar{\phi}_A} -
\eta_A \frac{\delta}{\delta \phi_{A a}^*},
\\
V_-^a &= \bar{\phi}_B \bigr(
(\sigma^\alpha)^a_{~b} (\sigma_\alpha)^B_{~~\!A} -
\delta^a_b {\bar\gamma}^B_A \bigr)
\frac{\delta}{\delta \phi_{A b}^*} + \phi_{B b}^* \bigr(
(\sigma^\alpha)^{ab} (\sigma_\alpha)^B_{~~\!A} -
\epsilon^{ab} ({\bar\gamma}^B_A + 2 \delta^B_A) \bigr)
\frac{\delta}{\delta \eta_A},
\\
V_\alpha &= \bar{\phi}_B (\sigma_\alpha)^B_{~~\!A}
\frac{\delta}{\delta \bar{\phi}_A} +
\bigr( \phi_{B b}^* (\sigma_\alpha)^B_{~~\!A} +
\phi_{A a}^* (\sigma_\alpha)^a_{~b} \bigr)
\frac{\delta}{\delta \phi_{A b}^*} +
\eta_B (\sigma_\alpha)^B_{~~\!A}
\frac{\delta}{\delta \eta_A},
\\
V &= \bar{\phi}_B {\bar\gamma}^B_A
\frac{\delta}{\delta \bar{\phi}_A} +
\phi_{B b}^* ({\bar\gamma}^B_A + \delta^B_A)
\frac{\delta}{\delta \phi_{A b}^*} +
\eta_B ({\bar\gamma}^B_A + 2 \delta^B_A)
\frac{\delta}{\delta \eta_A}.
\end{align*}
The symmetry properties (\ref{VI73}) of $S_m$ may also be expressed by the
following equations:
\begin{equation}
\label{VI74}
\mathbf{s}_m^a S_m = 0,
\qquad
\mathbf{d}_\alpha S_m = 0,
\qquad
\mathbf{d}_m S_m = 0,
\end{equation}
with $\mathbf{s}_m^a \equiv \mathbf{s}_+^a +
\hbox{$\frac{1}{2}$} m^2 \mathbf{s}_-^a$ and $\mathbf{d}_m \equiv
\mathbf{d} + m \partial/\partial m$, where the operators
$\mathbf{s}_\pm^a$, $\mathbf{d}_\alpha$ and $\mathbf{d}$ are
required to fulfil the $sl(1,2)$--superalgebra:
\begin{alignat}{3}
[ \mathbf{d}, \mathbf{d}_\alpha ] &= 0,
&\qquad
[ \mathbf{d}, \mathbf{s}_+^a ] &= \mathbf{s}_+^a,
&\qquad
[ \mathbf{d}, \mathbf{s}_-^a ] &= - \mathbf{s}_-^a,
\nonumber
\\
\label{VI75}
[ \mathbf{d}_\alpha, \mathbf{d}_\beta ] &=
\epsilon_{\alpha\beta}^{~~~\!\gamma} \mathbf{d}_\gamma,
&\qquad
[ \mathbf{d}_\alpha, \mathbf{s}_+^a ] &=
\mathbf{s}_+^b (\sigma_\alpha)_b^{~a},
&\qquad
[ \mathbf{d}_\alpha, \mathbf{s}_-^a ] &=
\mathbf{s}_-^b (\sigma_\alpha)_b^{~a},
\\
\{ \mathbf{s}_+^a, \mathbf{s}_+^b \} &= 0,
&\qquad
\{ \mathbf{s}_-^a, \mathbf{s}_-^b \} &= 0,
&\qquad
\{ \mathbf{s}_+^a, \mathbf{s}_-^b \} &=
- (\sigma^\alpha)^{ab} \mathbf{d}_\alpha - \epsilon^{ab} \mathbf{d}.
\nonumber
\end{alignat}
Indeed, let us restrict our considerations to solutions $S_m$ being
{\it linear} with respect to the antifields.
Let us remark that proper solutions of the classical
master equations for theories with closed gauge algebra and vanishing
new ghost number depend only linearly on the antifields \cite{13}.
Such solutions can be written in the form \cite{6}
\begin{equation}
\label{VI76}
S_m = S_{\rm cl} + (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 ) X,
\end{equation}
where $X$ is assumed to be a $Sp(2)$--scalar (in fact the only one
we are able to build up that is linear in the antifields) and,
in accordance with the requirement (\ref{IIQ}),
to have Weyl weight $\alpha(X) = \alpha(\bar{\phi}_A) + \alpha(\phi^A) = - 2$,
\begin{equation}
\label{VI77}
X = \bar{\phi}_A \phi^A
\qquad
{\rm with}
\qquad
\mathbf{d}_\alpha X = 0,
\qquad
\mathbf{d}_m X = - 2 X.
\end{equation}
Then, by making use of the $osp(1,2) \oplus u(1)$--superalgebra of these
symmetry operators,
\begin{alignat*}{3}
[ \mathbf{d}_m, \mathbf{d}_m ] &= 0,
&\qquad
[ \mathbf{d}_m, \mathbf{d}_\alpha ] &= 0,
&\qquad
[ \mathbf{d}_m, \mathbf{s}_m^a ] &= \mathbf{s}_m^a,
\\
[ \mathbf{d}_\alpha, \mathbf{d}_\beta ] &=
\epsilon_{\alpha\beta}^{~~~\!\gamma} \mathbf{d}_\gamma,
&\qquad
[ \mathbf{d}_\alpha, \mathbf{s}_m^a ] &=
\mathbf{s}_m^b (\sigma_\alpha)_b^{~a},
&\qquad
\{ \mathbf{s}_m^a, \mathbf{s}_m^b \} &=
- m^2 (\sigma^\alpha)^{ab} \mathbf{d}_\alpha,
\end{alignat*}
one establishes the following relations:
\begin{align*}
\mathbf{s}_m^c (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 ) &=
\hbox{$\frac{1}{2}$} m^2 (\sigma^\alpha)^c_{~d} \mathbf{s}_m^d
\mathbf{d}_\alpha,
\\
\mathbf{d}_\alpha (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 ) &= (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 )
\mathbf{d}_\alpha,
\\
\mathbf{d}_m (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 ) &= (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 ) (
\mathbf{d}_m + 2 ).
\end{align*}
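For instance, the last of these relations follows immediately from
$[ \mathbf{d}_m, \mathbf{s}_m^a ] = \mathbf{s}_m^a$ and
$m \partial_m (m^2) = 2 m^2$:
\begin{equation*}
\mathbf{d}_m\, \mathbf{s}_m^b \mathbf{s}_m^a =
\mathbf{s}_m^b ( \mathbf{d}_m + 1 )\, \mathbf{s}_m^a =
\mathbf{s}_m^b \mathbf{s}_m^a ( \mathbf{d}_m + 2 ),
\qquad
\mathbf{d}_m\, m^2 = m^2 ( \mathbf{d}_m + 2 ).
\end{equation*}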
From these relations, by virtue of (\ref{VI77}), it follows that the ansatz
(\ref{VI76}) for $S_m$ really obeys the symmetry requirements (\ref{VI74}).
Thereby, it has to be taken into account that the classical action
$S_{\rm cl}(A)$ satisfies $\mathbf{s}_m^a S_{\rm cl}(A) = 0$ as well as
$\mathbf{d}_\alpha S_{\rm cl}(A) = 0$ and $\mathbf{d}_m S_{\rm cl}(A) = 0$.
In order
to convince ourselves that the equations (\ref{VI74}) can be cast into the
form (\ref{VI73}) let us decompose $\mathbf{s}_m^a$, $\mathbf{d}_\alpha$ and
$\mathbf{d}_m$ into a component acting on the fields and another one acting
on the antifields as follows:
\begin{equation}
\label{VI78}
\mathbf{s}_m^a = \left(\mathbf{s}_m^a \phi^A \right)
\frac{\delta_L}{\delta \phi^A} + V_m^a,
\qquad
\mathbf{d}_\alpha = \left(\mathbf{d}_\alpha \phi^A \right)
\frac{\delta_L}{\delta \phi^A} + V_\alpha,
\qquad
\mathbf{d}_m = \left(\mathbf{d}_m \phi^A \right)
\frac{\delta_L}{\delta \phi^A} + V_m.
\end{equation}
The assumptions (\ref{VI77}) are satisfied if the action of
$\mathbf{d}_\alpha$ and $\mathbf{d}_m$ on $\phi^A$ is defined as
\begin{equation*}
\mathbf{d}_\alpha \phi^A = \phi^B (\sigma_\alpha)_B^{~~\!A}
\qquad {\rm and} \qquad
\mathbf{d}_m \phi^A = \phi^B \gamma^A_B.
\end{equation*}
Then from (\ref{VI76}) one gets for $S_m$ the expression
\begin{equation*}
S_m = S_{\rm cl} +
(\eta_A + \hbox{$\frac{1}{2}$} m^2 {\bar\gamma}^B_A \bar{\phi}_B ) \phi^A -
(\mathbf{s}_m^a \phi^A) \phi_{A a}^* + \bar{\phi}_A (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2 )
\phi^A
\end{equation*}
with ${\bar\gamma}^B_A = - \gamma^B_A - 2 \delta^B_A$. Now, performing in
(\ref{VI74})
the replacements $\mathbf{s}_m^a \phi^A = - \delta_R S_m/\delta \phi_{A a}^*$,
$\mathbf{d}_\alpha \phi^A = \delta S_m/\delta \eta_B
(\sigma_\alpha)_B^{~~\!A}$ and
$\mathbf{d}_m \phi^A = \delta S_m/\delta \eta_B \gamma_B^A$ it is
easily seen that both symmetry requirements, Eqs. (\ref{VI73}) and
(\ref{VI74}),
are equivalent to each other. Thus, we are left with the exercise of
determining the action of the $sl(1,2)$--superalgebra (\ref{VI75}) on the
components of the fields $\phi^A$. Thereby, we restrict ourselves to the
cases of irreducible and first--stage reducible theories with closed
gauge algebra.
\smallskip
\noindent (B) {\it Explicit realization of sl(1,2)
on the fields: Irreducible gauge theories}
\\
For irreducible theories with a closed algebra, because of
$M_{\alpha_0 \beta_0}^{ij} = 0$, the algebra of the generators,
Eq. (\ref{II2}), reduces to
\begin{equation}
\label{VI79}
R^i_{\alpha_0, j} R^j_{\beta_0} -
R^i_{\beta_0, j} R^j_{\alpha_0} =
- R^i_{\gamma_0} F^{\gamma_0}_{\alpha_0 \beta_0},
\end{equation}
where for the sake of simplicity we assume throughout this and the succeeding
subsection that the $A^i$ are {\it bosonic} fields. This algebra defines
the structure tensors $F^{\gamma_0}_{\alpha_0 \beta_0}$. In general,
the restrictions imposed by the Jacobi identity lead to
additional equations with new structure
tensors. But in the simple case under consideration the Jacobi identity leads
only to the following relation among the tensors
$F^{\gamma_0}_{\alpha_0 \beta_0}$ and the generators $R^i_{\alpha_0}$:
\begin{equation}
\label{VI80}
F^{\delta_0}_{\eta_0 \alpha_0} F^{\eta_0}_{\beta_0 \gamma_0} -
R^i_{\alpha_0} F^{\delta_0}_{\beta_0 \gamma_0, i} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) = 0.
\end{equation}
In order to construct the proper solution $S_m = S_{\rm cl} + (
\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2) X$,
Eq. (\ref{VI76}), for $X$ one has to choose $X = \bar{A}_i A^i +
\bar{B}_{\alpha_0} B^{\alpha_0} + \bar{C}_{\alpha_0 a} C^{\alpha_0 a}$. The
$sl(1,2)$--transformations of the antifields $\bar{A}_i$, $\bar{B}_{\alpha_0}$
and $\bar{C}_{\alpha_0 a}$ have already been given (see Appendix A). The
corresponding {\em nonlinear} realization of the $sl(1,2)$ in terms of the
fields $A^i$, $B^{\alpha_0}$ and $C^{\alpha_0 a}$ reads
\noindent{(1)~translations:}
\begin{align}
\mathbf{s}_+^a A^i &= R^i_{\alpha_0} C^{\alpha_0 a},
\nonumber
\\
\mathbf{s}_+^a C^{\alpha_0 b} &= \epsilon^{ab} B^{\alpha_0} -
F^{\alpha_0}_{\beta_0 \gamma_0} C^{\beta_0 a} C^{\gamma_0 b},
\label{VI81}
\\
\mathbf{s}_+^a B^{\alpha_0} &= \hbox{$\frac{1}{2}$}
F^{\alpha_0}_{\beta_0 \gamma_0} B^{\beta_0} C^{\gamma_0 a} +
\hbox{$\frac{1}{12}$} \epsilon_{cd} (
F^{\alpha_0}_{\eta_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} +
2 R^i_{\beta_0} F^{\alpha_0}_{\gamma_0 \delta_0, i} )
C^{\gamma_0 a} C^{\delta_0 c} C^{\beta_0 d},
\nonumber
\\
\intertext{(2)~special conformal transformations:}
\mathbf{s}_-^a A^i &= 0,
\nonumber
\\
\mathbf{s}_-^a C^{\alpha_0 b} &= 0,
\label{VI82}
\\
\mathbf{s}_-^a B^{\alpha_0} &= - 2 C^{\alpha_0 a},
\nonumber
\\
\intertext{(3)~symplectic rotations:}
\mathbf{d}_\alpha A^i &= 0,
\nonumber
\\
\mathbf{d}_\alpha C^{\alpha_0 b} &= C^{\alpha_0 a} (\sigma_\alpha)_a^{~b},
\label{VI83}
\\
\mathbf{d}_\alpha B^{\alpha_0} &= 0,
\nonumber
\\
\intertext{and (4)~dilatations:}
\mathbf{d} A^i &= 0,
\nonumber
\\
\mathbf{d} C^{\alpha_0 b} &= C^{\alpha_0 b},
\label{VI84}
\\
\mathbf{d} B^{\alpha_0} &= 2 B^{\alpha_0}.
\nonumber
\end{align}
By making use of Eqs. (\ref{VI79}) and (\ref{VI80}) it is a simple exercise to
prove that the transformations (\ref{VI81})--(\ref{VI84}) actually
obey the $sl(1,2)$--superalgebra (\ref{VI75}).
Let us remark that the nonlinearity of the translations, Eqs.~(\ref{VI81}),
is due to the fact
that the components $\pi^{Aa}$ and $\lambda^A$ of the superfield
$\Phi^A(\theta)$ have been eliminated from the theory by integrating
them out in Eq. (\ref{V68}).
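As a simple illustration consider Yang--Mills theory: identifying
$A^i \rightarrow A_\mu^p(x)$ (with $p, q, r, \ldots$ denoting adjoint
indices), the generators with the covariant derivative,
$R^i_{\alpha_0} \rightarrow (D_\mu)^p_{~q}$, and the structure tensors with
the field--independent structure constants,
$F^{\gamma_0}_{\alpha_0 \beta_0} \rightarrow f^p_{~qr}$, the Jacobi identity
(\ref{VI80}) reduces to the ordinary Lie--algebra Jacobi identity and the
translations (\ref{VI81}) take the familiar form
\begin{align*}
\mathbf{s}_+^a A_\mu^p &= (D_\mu)^p_{~q} C^{q a},
\\
\mathbf{s}_+^a C^{p b} &= \epsilon^{ab} B^p - f^p_{~qr} C^{q a} C^{r b},
\\
\mathbf{s}_+^a B^p &= \hbox{$\frac{1}{2}$} f^p_{~qr} B^q C^{r a} +
\hbox{$\frac{1}{12}$} \epsilon_{cd}\, f^p_{~sq} f^s_{~rt}\,
C^{r a} C^{t c} C^{q d},
\end{align*}
with $B^p$ playing the role of the Nakanishi--Lautrup field.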
\smallskip
\noindent (C) {\it Explicit realization of sl(1,2) on the fields:
First--stage reducible gauge theories}
\\
Now let us consider first--stage reducible theories. In that case, due to
the condition of first--stage reducibility,
\begin{equation}
\label{VI85}
R^i_{\alpha_0} Z^{\alpha_0}_{\alpha_1} = 0,
\end{equation}
there are independent zero--modes $Z^{\alpha_0}_{\alpha_1}$ of the generators
$R^i_{\alpha_0}$. Their presence does not modify the gauge algebra
\begin{equation}
\label{VI86}
R^i_{\alpha_0, j} R^j_{\beta_0} -
R^i_{\beta_0, j} R^j_{\alpha_0} =
- R^i_{\gamma_0} F^{\gamma_0}_{\alpha_0 \beta_0},
\end{equation}
but it influences the solutions of the Jacobi identity,
which follows from the relation
\begin{equation}
\label{VI87}
R^j_{\delta_0} \bigr(
F^{\delta_0}_{\eta_0 \alpha_0} F^{\eta_0}_{\beta_0 \gamma_0} -
R^i_{\alpha_0} F^{\delta_0}_{\beta_0 \gamma_0, i} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr) = 0.
\end{equation}
In addition, new equations and structure tensors occur. One of
these gauge structure relations is the reducibility condition
(\ref{VI85}) itself. In order to derive the others we proceed as
follows:
First, let us cast the Jacobi identity (\ref{VI87}) into a more practical
form. Owing to (\ref{VI85}) the expression in parentheses must be
proportional to the zero--modes $Z^{\delta_0}_{\alpha_1}$,
\begin{equation}
\label{VI88}
F^{\delta_0}_{\eta_0 \alpha_0} F^{\eta_0}_{\beta_0 \gamma_0} -
R^i_{\alpha_0} F^{\delta_0}_{\beta_0 \gamma_0, i} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) =
3 Z^{\delta_0}_{\alpha_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0},
\end{equation}
where $H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0}(A)$ are new structure
tensors, totally antisymmetric with respect to the indices
$\alpha_0$, $\beta_0$, $\gamma_0$ and depending, in general, on the gauge
fields $A^i$.
Next, we derive an expression for the combination
$Z^{\alpha_0}_{\alpha_1, j} R^j_{\beta_0}$. Multiplying (\ref{VI86}) by
$Z^{\alpha_0}_{\alpha_1}$ and using the relation
$R^i_{\alpha_0, j} Z^{\alpha_0}_{\alpha_1} =
- R^i_{\alpha_0} Z^{\alpha_0}_{\alpha_1, j}$, which follows from (\ref{VI85}),
we obtain
\begin{equation*}
R^i_{\alpha_0} (
Z^{\alpha_0}_{\alpha_1, j} R^j_{\beta_0} +
F^{\alpha_0}_{\beta_0 \gamma_0} Z^{\gamma_0}_{\alpha_1} ) = 0.
\end{equation*}
Again, this may be solved by introducing additional structure tensors
$G^{\gamma_1}_{\beta_0 \alpha_1}(A)$,
\begin{equation}
\label{VI89}
Z^{\alpha_0}_{\alpha_1, j} R^j_{\beta_0} +
F^{\alpha_0}_{\beta_0 \gamma_0} Z^{\gamma_0}_{\alpha_1} =
- Z^{\alpha_0}_{\gamma_1} G^{\gamma_1}_{\beta_0 \alpha_1},
\end{equation}
thus defining a new structure equation for first--stage reducible
theories. Multiplying this equation by $Z^{\beta_0}_{\beta_1}$ and
taking into account (\ref{VI85}),
\begin{equation*}
F^{\alpha_0}_{\beta_0 \gamma_0}
Z^{\gamma_0}_{\alpha_1} Z^{\beta_0}_{\beta_1} =
- Z^{\alpha_0}_{\gamma_1} Z^{\beta_0}_{\beta_1}
G^{\gamma_1}_{\beta_0 \alpha_1},
\end{equation*}
we obtain the useful equality
\begin{equation}
\label{VI90}
Z^{\alpha_0}_{\beta_1} G^{\gamma_1}_{\alpha_0 \alpha_1} =
- Z^{\alpha_0}_{\alpha_1} G^{\gamma_1}_{\alpha_0 \beta_1}.
\end{equation}
Moreover, we are able to establish two further gauge structure relations
for the first--stage reducible case showing that
$H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0}$ and
$G^{\alpha_1}_{\alpha_0 \beta_1}$ are not independent of each other. The
first one reads
\begin{equation}
\label{VI91}
\bigr(
G^{\alpha_1}_{\beta_0 \gamma_1} G^{\gamma_1}_{\gamma_0 \beta_1} +
R^i_{\beta_0} G^{\alpha_1}_{\gamma_0 \beta_1, i} +
\hbox{antisym}(\beta_0 \leftrightarrow \gamma_0) \bigr) +
G^{\alpha_1}_{\alpha_0 \beta_1} F^{\alpha_0}_{\beta_0 \gamma_0} +
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} = 0.
\end{equation}
In order to verify this relation we multiply the Jacobi identity (\ref{VI88})
by $Z^{\alpha_0}_{\beta_1}$. By virtue of
$R^i_{\alpha_0} Z^{\alpha_0}_{\beta_1} = 0$ this yields
\begin{align*}
& ( F^{\delta_0}_{\eta_0 \alpha_0} Z^{\alpha_0}_{\beta_1} )
F^{\eta_0}_{\beta_0 \gamma_0} +
F^{\delta_0}_{\eta_0 \beta_0}
( F^{\eta_0}_{\gamma_0 \alpha_0} Z^{\alpha_0}_{\beta_1} ) -
F^{\delta_0}_{\eta_0 \gamma_0}
( F^{\eta_0}_{\beta_0 \alpha_0} Z^{\alpha_0}_{\beta_1} )
\\
& ~ - R^i_{\beta_0} (
F^{\delta_0}_{\gamma_0 \alpha_0, i} Z^{\alpha_0}_{\beta_1} ) +
R^i_{\gamma_0} (
F^{\delta_0}_{\beta_0 \alpha_0, i} Z^{\alpha_0}_{\beta_1} ) -
Z^{\delta_0}_{\alpha_1} (
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} ) = 0.
\end{align*}
After replacing all terms of the form
$F^{\delta_0}_{\eta_0 \alpha_0} Z^{\alpha_0}_{\beta_1}$ according to
(\ref{VI89}) this gives
\begin{align*}
& Z^{\delta_0}_{\beta_1, i}
( R^i_{\alpha_0} F^{\alpha_0}_{\beta_0 \gamma_0} ) +
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\alpha_0 \beta_1} F^{\alpha_0}_{\beta_0 \gamma_0} +
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )
\\
& ~ + \bigr\{ R^i_{\beta_0} (
F^{\delta_0}_{\gamma_0 \alpha_0, i} Z^{\alpha_0}_{\beta_1} -
F^{\delta_0}_{\alpha_0 \gamma_0} Z^{\alpha_0}_{\beta_1, i} ) -
( F^{\delta_0}_{\alpha_0 \gamma_0} Z^{\alpha_0}_{\alpha_1} )
G^{\alpha_1}_{\beta_0 \beta_1} +
\hbox{antisym}(\beta_0 \leftrightarrow \gamma_0) \bigr\} = 0,
\end{align*}
and, using the same relation once more,
\begin{align*}
& Z^{\delta_0}_{\beta_1, i}
( R^i_{\alpha_0} F^{\alpha_0}_{\beta_0 \gamma_0} ) +
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\alpha_0 \beta_1} F^{\alpha_0}_{\beta_0 \gamma_0} +
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )
\\
& ~ + \bigr\{ R^i_{\beta_0} \bigr(
( F^{\delta_0}_{\gamma_0 \alpha_0} Z^{\alpha_0}_{\beta_1} )_{,i} +
Z^{\delta_0}_{\alpha_1, i} G^{\alpha_1}_{\gamma_0 \beta_1} \bigr) +
Z^{\delta_0}_{\alpha_1}
G^{\alpha_1}_{\beta_0 \gamma_1} G^{\gamma_1}_{\gamma_0 \beta_1} +
\hbox{antisym}(\beta_0 \leftrightarrow \gamma_0) \bigr\} = 0.
\end{align*}
Here, the expression in the curly brackets can be rewritten as
\begin{align*}
& Z^{\delta_0}_{\beta_1, i}
( R^i_{\alpha_0} F^{\alpha_0}_{\beta_0 \gamma_0} ) +
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\alpha_0 \beta_1} F^{\alpha_0}_{\beta_0 \gamma_0} +
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )
\\
& ~ + \bigr\{ R^i_{\beta_0}
( F^{\delta_0}_{\gamma_0 \alpha_0} Z^{\alpha_0}_{\beta_1} +
Z^{\delta_0}_{\alpha_1} G^{\alpha_1}_{\gamma_0 \beta_1} )_{,i} +
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\gamma_1 \beta_0} G^{\gamma_1}_{\gamma_0 \beta_1} +
R^i_{\gamma_0} G^{\alpha_1}_{\beta_0 \beta_1, i} ) +
\hbox{antisym}(\beta_0 \leftrightarrow \gamma_0) \bigr\} = 0
\end{align*}
and furthermore, once again using relation (\ref{VI89}),
\begin{align}
& Z^{\delta_0}_{\beta_1, i}
( R^i_{\alpha_0} F^{\alpha_0}_{\beta_0 \gamma_0} ) +
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\alpha_0 \beta_1} F^{\alpha_0}_{\beta_0 \gamma_0} +
3 Z^{\alpha_0}_{\beta_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )
\nonumber\\
& ~ - \bigr\{ R^i_{\beta_0}
( Z^{\delta_0}_{\beta_1, j} R^j_{\gamma_0} )_{,i} -
Z^{\delta_0}_{\alpha_1} (
G^{\alpha_1}_{\beta_0 \gamma_1} G^{\gamma_1}_{\gamma_0 \beta_1} +
R^i_{\gamma_0} G^{\alpha_1}_{\beta_0 \beta_1, i} ) +
\hbox{antisym}(\beta_0 \leftrightarrow \gamma_0) \bigr\} = 0.
\end{align}
This equation, since the algebra (\ref{VI86}) is closed,
\begin{equation*}
Z^{\delta_0}_{\beta_1, i}
( R^i_{\alpha_0} F^{\alpha_0}_{\beta_0 \gamma_0} ) =
Z^{\delta_0}_{\beta_1, i}
( R^j_{\beta_0} R^i_{\gamma_0, j} - R^j_{\gamma_0} R^i_{\beta_0, j} ) =
R^i_{\beta_0} ( Z^{\delta_0}_{\beta_1, j} R^j_{\gamma_0} )_{,i} -
R^i_{\gamma_0} ( Z^{\delta_0}_{\beta_1, j} R^j_{\beta_0} )_{,i},
\end{equation*}
leads immediately to the gauge structure relation (\ref{VI91}).
The second gauge structure relation, which can also be derived by means of
the Jacobi identity, is given by
\begin{align}
\label{VI92}
\bigr(&
H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} -
H^{\alpha_1}_{\eta_0 \delta_0 \alpha_0} F^{\eta_0}_{\beta_0 \gamma_0} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr)
\nonumber
\\
& + \bigr\{
R^i_{\delta_0} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0, i} -
G^{\alpha_1}_{\delta_0 \beta_1} H^{\beta_1}_{\alpha_0 \beta_0 \gamma_0} +
{\rm antisym}\bigr(
\delta_0 \leftrightarrow (\alpha_0, \beta_0, \gamma_0) \bigr) \bigr\} = 0,
\end{align}
where the left--hand side is a totally antisymmetric expression with respect
to $(\alpha_0, \beta_0, \gamma_0, \delta_0)$.
In order to prove that this relation is satisfied we consider the following
identity:
\begin{align*}
\bigr\{&
\bigr( ( Z^{\lambda_0}_{\alpha_1} H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} )
F^{\eta_0}_{\gamma_0 \delta_0} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr)
\\
& + 2 R^i_{\delta_0}
( Z^{\lambda_0}_{\alpha_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )_{,i} +
2 F^{\lambda_0}_{\delta_0 \eta_0}
( Z^{\eta_0}_{\alpha_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} )
\bigr\} + {\rm antisym}\bigr(
\delta_0 \leftrightarrow (\alpha_0, \beta_0, \gamma_0) \bigr) \equiv 0,
\end{align*}
which can be verified by a direct calculation, replacing the terms
$Z^{\lambda_0}_{\alpha_1} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0}$ with the
help of the Jacobi identity (\ref{VI88}). Taking into account (\ref{VI89})
one obtains the equation
\begin{align*}
Z^{\lambda_0}_{\alpha_1} \bigr\{& \bigr(
H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr)
\\
& + 2 R^i_{\delta_0} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0, i} -
2 G^{\alpha_1}_{\beta_1 \delta_0} H^{\beta_1}_{\alpha_0 \beta_0 \gamma_0}
\bigr\} + {\rm antisym}\bigr(
\delta_0 \leftrightarrow (\alpha_0, \beta_0, \gamma_0) \bigr) = 0.
\end{align*}
After factoring out the zero--modes $Z^{\lambda_0}_{\alpha_1}$ and using
the identity
\begin{align*}
\bigr(&
H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr) +
{\rm antisym}\bigr(
\delta_0 \leftrightarrow (\alpha_0, \beta_0, \gamma_0) \bigr)
\\
& \equiv 2 \bigr(
H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} -
H^{\alpha_1}_{\eta_0 \delta_0 \alpha_0} F^{\eta_0}_{\beta_0 \gamma_0} +
\hbox{cyclic perm} (\alpha_0, \beta_0, \gamma_0) \bigr),
\end{align*}
this equation acquires the form (\ref{VI92}). The relations
(\ref{VI85})--(\ref{VI92}) are the key equations for deriving the
$sl(1,2)$--transformations of the fields in the first--stage reducible case.
In order to construct the proper solution $S_m = S_{\rm cl} +
(\hbox{$\frac{1}{2}$} \epsilon_{ab} \mathbf{s}_m^b \mathbf{s}_m^a + m^2) X$
in that case one has to choose
$X = {\bar A}_i A^i + {\bar B}_{\alpha_0} B^{\alpha_0} +
{\bar B}_{\alpha_1 a} B^{\alpha_1 a} + {\bar C}_{\alpha_0 a} C^{\alpha_0 a} +
{\bar C}_{\alpha_1 ab} C^{\alpha_1 ab}$. A realization of the
$sl(1,2)$--transformations of the antifields
${\bar A}_i$, ${\bar B}_{\alpha_0}$, ${\bar B}_{\alpha_1 a}$,
${\bar C}_{\alpha_0 a}$ and ${\bar C}_{\alpha_1 ab}$ have already been given
(see Appendix A).
The corresponding nonlinear realization of the
$sl(1,2)$ in terms of the fields $A^i$,
$B^{\alpha_0}$, $B^{\alpha_1 a}$, $C^{\alpha_0 a}$ and $C^{\alpha_1 ab}$
is the following:
\noindent{(1)~translations:}
\begin{align}
\mathbf{s}_+^a A^i &= R^i_{\alpha_0} C^{\alpha_0 a},
\nonumber
\\
\mathbf{s}_+^a C^{\alpha_0 b} &= Z^{\alpha_0}_{\alpha_1} C^{\alpha_1 ab} +
\epsilon^{ab} B^{\alpha_0} -
F^{\alpha_0}_{\beta_0 \gamma_0} C^{\beta_0 a} C^{\gamma_0 b},
\nonumber
\\
\mathbf{s}_+^a B^{\alpha_0} &= Z^{\alpha_0}_{\alpha_1} B^{\alpha_1 a} +
\hbox{$\frac{1}{2}$} F^{\alpha_0}_{\beta_0 \gamma_0} (
B^{\beta_0} C^{\gamma_0 a} -
\epsilon_{cd} Z^{\beta_0}_{\alpha_1} C^{\alpha_1 ac} C^{\gamma_0 d} )
\nonumber
\\
&\quad~ + \hbox{$\frac{1}{12}$} \epsilon_{cd} (
F^{\alpha_0}_{\eta_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} +
2 R^i_{\beta_0} F^{\alpha_0}_{\gamma_0 \delta_0, i} )
C^{\gamma_0 a} C^{\delta_0 c} C^{\beta_0 d},
\label{VI93}
\\
\mathbf{s}_+^a C^{\alpha_1 bc} &= - \epsilon^{ac} B^{\alpha_1 b} -
\epsilon^{ab} B^{\alpha_1 c} +
G^{\alpha_1}_{\alpha_0 \beta_1} C^{\alpha_0 a} C^{\beta_1 bc} -
\hbox{$\frac{1}{2}$} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0}
C^{\alpha_0 a} C^{\beta_0 b} C^{\gamma_0 c},
\nonumber
\\
\mathbf{s}_+^a B^{\alpha_1 b} &= G^{\alpha_1}_{\alpha_0 \beta_1} (
C^{\alpha_0 a} B^{\beta_1 b} -
\hbox{$\frac{1}{2}$} \epsilon_{cd}
Z^{\alpha_0}_{\gamma_1} C^{\gamma_1 ac} C^{\beta_1 bd} ) -
\hbox{$\frac{1}{2}$} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0}
B^{\alpha_0} C^{\beta_0 a} C^{\gamma_0 c}
\nonumber
\\
& \quad ~ + \hbox{$\frac{1}{4}$} \epsilon_{cd}
H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0} Z^{\alpha_0}_{\beta_1} (
3 C^{\beta_0 a} C^{\beta_1 bc} C^{\gamma_0 d} +
C^{\beta_0 b} C^{\beta_1 ac} C^{\gamma_0 d} )
\nonumber
\\
& \quad~ + \hbox{$\frac{1}{8}$} \epsilon_{cd} (
G^{\alpha_1}_{\delta_0 \beta_1} H^{\beta_1}_{\alpha_0 \beta_0 \gamma_0} -
R^i_{\delta_0} H^{\alpha_1}_{\alpha_0 \beta_0 \gamma_0, i} )
C^{\gamma_0 a} C^{\beta_0 b} C^{\alpha_0 c} C^{\delta_0 d}
\nonumber
\\
& \quad~ - \hbox{$\frac{1}{16}$} \epsilon_{cd}
H^{\alpha_1}_{\eta_0 \alpha_0 \beta_0} F^{\eta_0}_{\gamma_0 \delta_0} (
C^{\gamma_0 a} C^{\beta_0 b} + C^{\gamma_0 b} C^{\beta_0 a} )
C^{\alpha_0 c} C^{\delta_0 d},
\nonumber
\\
\intertext{(2)~special conformal transformations:}
\mathbf{s}_-^a A^i &= 0,
\nonumber
\\
\mathbf{s}_-^a C^{\alpha_0 b} &= 0,
\nonumber
\\
\mathbf{s}_-^a B^{\alpha_0} &= - 2 C^{\alpha_0 a},
\label{VI94}
\\
\mathbf{s}_-^a C^{\alpha_1 bc} &= 0,
\nonumber
\\
\mathbf{s}_-^a B^{\alpha_1 b} &= 2 C^{\alpha_1 ab},
\nonumber
\\
\intertext{(3)~symplectic rotations:}
\mathbf{d}_\alpha A^i &= 0,
\nonumber
\\
\mathbf{d}_\alpha C^{\alpha_0 b} &= C^{\alpha_0 a} (\sigma_\alpha)_a^{~b},
\nonumber
\\
\mathbf{d}_\alpha B^{\alpha_0} &= 0,
\label{VI95}
\\
\mathbf{d}_\alpha C^{\alpha_1 bc} &= C^{\alpha_1 ac} (\sigma_\alpha)_a^{~b} +
C^{\alpha_1 ba} (\sigma_\alpha)_a^{~c},
\nonumber
\\
\mathbf{d}_\alpha B^{\alpha_1 b} &= B^{\alpha_1 a} (\sigma_\alpha)_a^{~b},
\nonumber
\\
\intertext{and (4)~dilatations:}
\mathbf{d} A^i &= 0,
\nonumber
\\
\mathbf{d} C^{\alpha_0 b} &= C^{\alpha_0 b},
\nonumber
\\
\mathbf{d} B^{\alpha_0} &= 2 B^{\alpha_0},
\label{VI96}
\\
\mathbf{d} C^{\alpha_1 bc} &= 2 C^{\alpha_1 bc},
\nonumber
\\
\mathbf{d} B^{\alpha_1 b} &= 3 B^{\alpha_1 b}.
\nonumber
\end{align}
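Let us also note a quick consistency check on the dilatation weights
(\ref{VI96}): because of $[ \mathbf{d}, \mathbf{s}_+^a ] = \mathbf{s}_+^a$
each application of $\mathbf{s}_+^a$ raises the weight by one, and indeed
every term on the right--hand side of, e.g., $\mathbf{s}_+^a C^{\alpha_0 b}$
carries weight two,
\begin{equation*}
\mathbf{d} \bigl( Z^{\alpha_0}_{\alpha_1} C^{\alpha_1 ab} \bigr) =
2\, Z^{\alpha_0}_{\alpha_1} C^{\alpha_1 ab},
\qquad
\mathbf{d} \bigl( \epsilon^{ab} B^{\alpha_0} \bigr) =
2\, \epsilon^{ab} B^{\alpha_0},
\qquad
\mathbf{d} \bigl( F^{\alpha_0}_{\beta_0 \gamma_0}
C^{\beta_0 a} C^{\gamma_0 b} \bigr) =
2\, F^{\alpha_0}_{\beta_0 \gamma_0} C^{\beta_0 a} C^{\gamma_0 b},
\end{equation*}
since $Z^{\alpha_0}_{\alpha_1}(A)$ and $F^{\alpha_0}_{\beta_0 \gamma_0}(A)$
depend only on the weight--zero fields $A^i$.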
By making use of Eqs. (\ref{VI85})--(\ref{VI92}), after somewhat involved and
tedious algebraic manipulations it can be proven that the transformations
(\ref{VI93})--(\ref{VI96}) really obey the $sl(1,2)$--superalgebra
(\ref{VI75}). For some details of this work we refer to Ref.~\cite{6}
where similar calculations were performed for the $osp(1,2)$--superalgebra.
Continuing in the same way, analogous considerations can be made for
higher--stage reducible theories. But then more and more new gauge
structure tensors with increasing numbers of indices, as well as additional
gauge structure relations, appear, which makes a study of these theories
quite complicated.
\section{Concluding remarks}
In this paper we have revealed the geometrical content of the
$osp(1,2)$--covariant Lagrangian quantization of general massive gauge
theories. A natural geometric formulation of that quantization procedure
is obtained by considering $osp(1,2)$ as a subsuperalgebra of
$sl(1,2)$, viewed as the algebra of generators of conformal
transformations in two anticommuting dimensions. It is shown that proper
solutions of the classical master equations can be constructed being
invariant under $osp(1,2) \oplus u(1)$. The $m$--dependent extended BRST
symmetry is realized in superspace as translations combined with
$m$--dependent special conformal transformations. The $sl(2) \oplus u(1)$
symmetry is realized in superspace as symplectic rotations and dilatations,
respectively. By the choice of a gauge the $sl(2) \oplus u(1)$ symmetry
is broken down to $sl(2) \sim sp(2)$.
In principle, by formal manipulations it is also possible to construct proper
solutions of the corresponding quantum master equations. However, in doing
so a serious problem is to provide a sensible definition of the various
$\Delta$--operators of the quantum master equations,
which do not make sense when applied to local functionals.
In this paper we have not addressed such problems
and related questions, such as the use of explicit regularization and
renormalization schemes and the discussion of the role of anomalies.
\bigskip
\bigskip
\noindent
{\large\bf Acknowledgement}\\
The authors would like to thank P.M. Lavrov for valuable discussions
concerning various aspects of the superfield quantization of general
gauge theories.
\bigskip
\bigskip
\begin{appendix}
\section{Componentwise notation of the $sl(1,2)$ transformations
of the antifields}
In componentwise notation the linear transformations (\ref{II7}) generated by
$V_+^a$ and $V_-^a$ read as follows:
\begin{align*}
V_+^a \bar{A}_i &= \epsilon^{ab} A_{i b}^*,
\\
V_+^a A^*_{i b} &= - \delta^a_b D_i,
\\
V_+^a D_i &= 0,
\\
V_+^a \bar{B}_{\alpha_s|a_1 \cdots a_s} &= \epsilon^{ab}
B_{\alpha_s b|a_1 \cdots a_s}^*,
\\
V_+^a B_{\alpha_s b|a_1 \cdots a_s}^* &= - \delta^a_b
E_{\alpha_s|a_1 \cdots a_s},
\\
V_+^a E_{\alpha_s|a_1 \cdots a_s} &= 0,
\\
V_+^a \bar{C}_{\alpha_s|a_0 \cdots a_s} &= \epsilon^{ab}
C_{\alpha_s b|a_0 \cdots a_s}^*,
\\
V_+^a C_{\alpha_s b|a_0 \cdots a_s}^* &= - \delta^a_b
F_{\alpha_s|a_0 \cdots a_s},
\\
V_+^a F_{\alpha_s|a_0 \cdots a_s} &= 0
\\
\intertext{and}
V_-^a \bar{A}_i &= 0,
\\
V_-^a A_{i b}^* &= 2 \delta^a_b \bar{A}_i,
\\
V_-^a D_i &= 0,
\\
V_-^a \bar{B}_{\alpha_s|a_1 \cdots a_s} &= 0,
\\
V_-^a B_{\alpha_s b|a_1 \cdots a_s}^* &=
2 \delta^a_b \Bigl(
\bar{B}_{\alpha_s|a_1 \cdots a_s} +
\sum_{r = 1}^s \delta^a_{a_r}
\bar{B}_{\alpha_s|a_1 \cdots a_{r - 1} b a_{r + 1} \cdots a_s} \Bigr),
\\
V_-^a E_{\alpha_s|a_1 \cdots a_s} &= 2 \epsilon^{ab} \Bigl(
(s + 2) B_{\alpha_s b|a_1 \cdots a_s}^* -
\sum_{r = 1}^s B_{\alpha_s a_r|a_1 \cdots a_{r - 1} b a_{r + 1} \cdots a_s}^*
\Bigr),
\\
V_-^a \bar{C}_{\alpha_s|a_0 \cdots a_s} &= 0,
\\
V_-^a C_{\alpha_s b|a_0 \cdots a_s}^* &=
2 \delta^a_b \Bigl(
\bar{C}_{\alpha_s|a_0 \cdots a_s} +
\sum_{r = 0}^s \delta^a_{a_r}
\bar{C}_{\alpha_s|a_0 \cdots a_{r - 1} b a_{r + 1} \cdots a_s} \Bigr),
\\
V_-^a F_{\alpha_s|a_0 \cdots a_s} &= 2 \epsilon^{ab} \Bigl(
(s + 1) C_{\alpha_s b|a_0 \cdots a_s}^* -
\sum_{r = 0}^s C_{\alpha_s a_r|a_0 \cdots a_{r - 1} b
a_{r + 1} \cdots a_s}^* \Bigr),
\end{align*}
where the definitions (\ref{II9}) and (\ref{II10}) have to be taken into
account. For the transformations (\ref{II8}) generated by $V_\alpha$ and $V$
one gets:
\begin{align*}
V_\alpha \bar{A}_i &= 0,
\\
V_\alpha A_{i b}^* &= A_{i a}^* (\sigma_\alpha)^a_{~b},
\\
V_\alpha D_i &= 0,
\\
V_\alpha \bar{B}_{\alpha_s|a_1 \cdots a_s} &= \sum_{r = 1}^s
\bar{B}_{\alpha_s|a_1 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r},
\\
V_\alpha B_{\alpha_s b|a_1 \cdots a_s}^* &=
B_{\alpha_s a|a_1 \cdots a_s}^* (\sigma_\alpha)^a_{~b} + \sum_{r = 1}^s
B_{\alpha_s b|a_1 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}^*
(\sigma_\alpha)^a_{~a_r},
\\
V_\alpha E_{\alpha_s|a_1 \cdots a_s} &= \sum_{r = 1}^s
E_{\alpha_s|a_1 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r},
\\
V_\alpha \bar{C}_{\alpha_s|a_0 \cdots a_s} &= \sum_{r = 0}^s
\bar{C}_{\alpha_s|a_0 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r},
\\
V_\alpha C_{\alpha_s b|a_0 \cdots a_s}^* &=
C_{\alpha_s a|a_0 \cdots a_s}^* (\sigma_\alpha)^a_{~b} + \sum_{r = 0}^s
C_{\alpha_s b|a_0 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}^*
(\sigma_\alpha)^a_{~a_r},
\\
V_\alpha F_{\alpha_s|a_0 \cdots a_s} &= \sum_{r = 0}^s
F_{\alpha_s|a_0 \cdots a_{r - 1} a a_{r + 1} \cdots a_s}
(\sigma_\alpha)^a_{~a_r}
\\
\intertext{and}
V \bar{A}_i &= - 2 \bar{A}_i,
\\
V A_{i b}^* &= -3 A_{i b}^*,
\\
V D_i &= - 4 D_i,
\\
V \bar{B}_{\alpha_s|a_1 \cdots a_s} &=
- (s + 4) \bar{B}_{\alpha_s|a_1 \cdots a_s},
\\
V B_{\alpha_s b|a_1 \cdots a_s}^* &=
- (s + 5) B_{\alpha_s b|a_1 \cdots a_s}^*,
\\
V E_{\alpha_s|a_1 \cdots a_s} &=
- (s + 6) E_{\alpha_s|a_1 \cdots a_s},
\\
V \bar{C}_{\alpha_s|a_0 \cdots a_s} &=
- (s + 3) \bar{C}_{\alpha_s|a_0 \cdots a_s},
\\
V C_{\alpha_s b|a_0 \cdots a_s}^* &=
- (s + 4) C_{\alpha_s b|a_0 \cdots a_s}^*,
\\
V F_{\alpha_s|a_0 \cdots a_s} &=
- (s + 5) F_{\alpha_s|a_0 \cdots a_s}.
\end{align*}
By an explicit calculation it can be verified that the generators
$V_\pm^a$, $V_\alpha$ and $V$ obey the $sl(1,2)$--superalgebra (\ref{II3}).
\end{appendix}
\section{Introduction}
\label{sec:intro}
The study of the fundamental theory of strong interactions (Quantum
Chromodynamics, QCD) in the regimes of extreme densities and temperatures
is ongoing via the measurement of the properties of hot and dense
multi-parton systems produced in high-energy nuclear collisions (see, e.g.,
reviews~\cite{d'Enterria:2006su,Hwa:2010,Salgado:2009jp,Dremin:2010jx}).
The LHC heavy ion program, now underway, makes it possible to probe new
frontiers of high-temperature QCD, providing valuable information
on the dynamical behavior of quark-gluon matter (QGM), as predicted by
lattice calculations. A number of interesting LHC results from
PbPb runs at $\sqrt{s_{\rm NN}}=2.76$ TeV have been published by the ALICE,
ATLAS and CMS collaborations (see~\cite{Muller:2012zq} for an overview
of the results from the first year of heavy ion physics at the LHC).
One of the modern trends in heavy ion physics at high energies is the study of the Fourier harmonics
of the azimuthal particle distribution, which are a powerful probe of the bulk properties of
the created high-density matter. The distribution is typically described by a Fourier series of the form:
\begin{eqnarray}
\displaystyle
\label{eq:1}
& & E\frac{d^3N}{dp^3}=\frac{d^2N}{2\pi p_{\rm T}dp_{\rm T}d\eta} \times
\nonumber \\
& & \{
1+2\sum\limits_{n = 1}^\infty v_{\rm n}(p_{\rm T},\eta)
\cos{ \left[ n(\varphi -\Psi_{\rm n}) \right] }
\} ~,
\end{eqnarray}
where $\varphi$ is the particle azimuthal angle, $\Psi_{\rm n}$ is the $n$-th order reaction-plane angle,
and $v_{\rm n}$ are the Fourier coefficients. The second harmonic, $v_2$, referred to as
``elliptic flow'', is the most extensively studied one, because it directly relates the
anisotropic shape of the overlap of the colliding nuclei to the corresponding anisotropy of
the outgoing momentum distribution. The momentum and centrality dependencies of the elliptic
flow in PbPb collisions were measured at the
LHC~\cite{Aamodt:2010pa,ATLAS:2011yk,Chatrchyan:2012ta} in the first instance. Then, the
results of measurements of the higher azimuthal
harmonics~\cite{ALICE:2011ab,Aad:2012bu,Chatrchyan:2013kba} and the anisotropic flow of
identified particles~\cite{Abelev2012:di} were published. The higher order coefficients
$v_{\rm n}$ (n$>2$) are smaller than $v_2$. They also carry important information on the
dynamics of the medium created, and complement $v_2$ in providing a more complete picture of
its bulk properties. The two coefficients that have been closely studied are the quadrangular
(or hexadecapole) flow $v_4$~\cite{Kolb:2003zi,Kolb:2004gi} and triangular flow
$v_3$~\cite{Alver:2010gr}. Although the pentagonal and hexagonal flows $v_5$ and $v_6$ are
studied to a lesser extent, there exist some predictions from hydrodynamics on them
also~\cite{Alver:2010dn}.
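As a rough numerical illustration of Eq.~(\ref{eq:1}) (and not part of any of the models discussed below), the coefficients $v_{\rm n}$ can be estimated from a sample of azimuthal angles as $\langle\cos[n(\varphi-\Psi_{\rm n})]\rangle$. The following Python sketch uses purely hypothetical values of $v_2$, $v_3$ and of the corresponding plane angles:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_phi(v2, v3, psi2, psi3, n):
    # accept-reject sampling from
    # dN/dphi ~ 1 + 2 v2 cos[2(phi - psi2)] + 2 v3 cos[3(phi - psi3)]
    fmax = 1.0 + 2.0 * (abs(v2) + abs(v3))
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, n)
        f = (1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi2))
                 + 2.0 * v3 * np.cos(3.0 * (phi - psi3)))
        out = np.concatenate([out, phi[rng.uniform(0.0, fmax, n) < f]])
    return out[:n]

phi = sample_phi(v2=0.08, v3=0.03, psi2=0.0, psi3=0.7, n=200_000)
for nharm, psi in [(2, 0.0), (3, 0.7)]:
    print("v_%d ~ %.4f" % (nharm, np.mean(np.cos(nharm * (phi - psi)))))
\end{verbatim}
The printed estimates reproduce the input $v_2$ and $v_3$ within statistical errors.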
At relatively low transverse momenta, $p_{\rm T}<3\div4$ GeV/$c$, the azimuthal anisotropy
results from a pressure-driven anisotropic expansion of the created matter, with more particles
emitted in the direction of the largest pressure gradients~\cite{Ollitrault:1992bk}. At higher
$p_{\rm T}$, this anisotropy is understood to result from the path-length dependent energy loss
of partonic jets as they traverse the matter, with more jet particles emitted in the direction
of shortest path-length~\cite{Gyulassy:2000gk}.
In Ref.~\cite{Lokhtin:2012re} the LHC data on multiplicity, charged hadron spectra, elliptic
flow and femtoscopic correlations from PbPb collisions were analyzed in the framework of the
HYDJET++ model~\cite{Lokhtin:2008xi}. Taking into account both hard and soft components and
tuning the input parameters allows HYDJET++ to reproduce these data. Another
study \cite{Bravina:2013upa} with HYDJET++ was dedicated to the influence of jet production
mechanism on the ratio $v_4/v_2^2$ and its role in violation of the number-of-constituent-quark
(NCQ) scaling~\cite{Noferini:2012ps}, predicted within HYDJET++ in~\cite{Eyyubova:2009hh}.
In the current paper, tuned HYDJET++ is applied to analyze the LHC data on momentum and
centrality dependences of azimuthal anisotropy harmonics in PbPb collisions, and then to
illuminate the mechanisms of the generation of Fourier coefficients $v_2 \div v_6$. The
detailed study of hexagonal flow $v_6$ is also the subject of our recent
paper~\cite{Bravina:2013ora}.
Note that the LHC data on higher-order azimuthal aniso\-tropy harmonics ($v_2 \div v_4$) were
analyzed with a multiphase transport model (AMPT) in~\cite{Xu:2011jm}. It was shown that AMPT
describes LHC data on the anisotropic flow coefficients $v_{\rm n}$ (n=2$\div$4) for
semi-central PbPb collisions at $p_{\rm T} < 3$ GeV/$c$. It also reproduces reasonably well the
centrality dependence of the integral $v_{\rm n}$ for all but the most central collisions. Another
approach~\cite{Gale:2012rq} reproducing $v_{\rm n}$ data in ultrarelativistic heavy ion
collisions is the glasma flow with the subsequent relativistic viscous hydrodynamic evolution of
matter through the quark-gluon plasma and hadron gas phases (IP-Glasma+MUSIC model). This model
gives good agreement with the $p_{\rm T}$-dependence of $v_{\rm n}$ (n=2$\div$5) and the event-by-event
distributions of $v_2 \div v_4$ at RHIC and the LHC.
The study of generation of higher flow harmonics within the
HYDJET++ has several attractive features. Firstly, the presence of
elliptic and triangular flow permits us to examine the interference
of these harmonics and its contribution to all higher even and odd
components of the anisotropic flow. If necessary, the original
eccentricities of higher order can be easily incorporated in the
model for the fine tuning of the distributions. Secondly, the very rich
table of resonances, which includes about 360 meson and baryon species,
helps one to analyze all possible final-state interactions. Thirdly,
the interplay of ideal hydrodynamics with jets can unveil the role of
hard processes in the formation of anisotropic flow of secondary
hadrons. The basic features of the model are described in Sect.~\ref{sec:model}.
\section{HYDJET++ model}
\label{sec:model}
HYDJET++ (the successor of HYDJET~\cite{Lokhtin:2005px}) is a
model of relativistic heavy ion collisions which includes two
independent components: the soft, hydro-type state and the hard
state resulting from in-medium multi-parton fragmentation. The
details of the used physics model and simulation procedure can be
found in the HYDJET++ manual~\cite{Lokhtin:2008xi}. The main features
of the model are sketched below.
The soft component of an event in HYDJET++ is the ``thermal''
hadronic state generated on the chemical and thermal freeze-out
hypersurfaces obtained from the pa\-ra\-met\-ri\-za\-ti\-on of
relativistic hydrodynamics with preset freeze-out conditions (the
adapted event generator FAST MC~\cite{Amelin:2006qe,Amelin:2007ic}).
Hadron multiplicities are calculated using the effective thermal volume
approximation and a Poisson multiplicity distribution around the mean value,
which is assumed to be proportional to the number of participating nucleons
at a given impact parameter of an AA collision. To simulate the
elliptic flow effect, the hydro-inspired pa\-ra\-met\-ri\-za\-ti\-on is
implemented for the momentum and spatial anisotropy of a soft
hadron emission source~\cite{Lokhtin:2008xi,Wiedemann:1997cr}.
The model used for the hard component in HYDJET++ is based on the
PYQUEN partonic energy loss model~\cite{Lokhtin:2005px}. The approach
describing the multiple scattering of hard partons relies on accumulated
energy loss via gluon radiation, which is associated with each
parton scattering in the expanding quark-gluon fluid. It also includes
the interference effect in gluon emission with a finite formation
time using the modified radiation spectrum $dE/dx$ as a function
of the decreasing temperature $T$. The model takes into account
radiative and collisional energy loss of hard partons in
longitudinally expanding quark-gluon fluid, as well as the
realistic nuclear geometry. The simulation of single hard nucleon-nucleon
sub-collisions by PYQUEN is constructed as a modification of the jet event
obtained with the generator of hadron-hadron interactions
PYTHIA$\_$6.4~\cite{Sjostrand:2006za}. Note that the Pro-Q20 tune was used for
the present simulation. The number of PYQUEN jets is generated according
to the binomial distribution. The mean number of jets produced in an AA
event is calculated as the product of the number of binary NN
sub-collisions at a given impact parameter and the integral cross
section of the hard process in NN collisions with the minimum
transverse momentum transfer $p_{\rm T}^{\rm min}$ (the latter is an
input parameter of the model). In HYDJET++, partons
produced in (semi)hard processes with momentum transfer lower
than $p_{\rm T}^{\rm min}$ are considered to be ``thermalized''.
So, their hadronization products are included ``automatically'' in the
soft component of the event. In order to take into account the
effect of nuclear shadowing on parton distribution functions, we
use the impact parameter dependent
pa\-ra\-met\-ri\-za\-ti\-on~\cite{Tywoniuk:2007xy} obtained in the framework
of Glauber-Gribov theory.
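A minimal sketch of the jet-multiplicity sampling just described is given below; all numerical values are illustrative placeholders, and the per-sub-collision probability is assumed here to be the ratio of the hard to the inelastic NN cross sections (the normalisation is not spelled out above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N_bin      = 1500   # binary NN sub-collisions at the chosen impact parameter
sigma_hard = 1.5    # integral hard cross section for pT > pT_min, mb (placeholder)
sigma_in   = 64.0   # inelastic NN cross section, mb (placeholder)

p_jet  = sigma_hard / sigma_in        # assumed probability per sub-collision
n_jets = rng.binomial(N_bin, p_jet, size=10)
print(n_jets, "mean ~", N_bin * p_jet)
\end{verbatim}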
The model has a number of input parameters for the soft and hard components.
They are tuned by fitting experimental data on various
physical observables; see~\cite{Lokhtin:2012re} for details.
In order to simulate higher azimuthal anisotropy harmonics, the following modification has been
implemented in the model. HYDJET++ does not contain the fireball evolution from the initial
state to the freeze-out stage. Instead of applying computational relativistic
hydrodynamics, which is extremely time-consuming, HYDJET++ employs simple and frequently
used parametrizations of the freeze-out hypersurface~\cite{Lokhtin:2008xi}. Then, the anisotropic
elliptic shape of the initial overlap of the colliding nuclei results in a corresponding
anisotropy of the outgoing momentum distribution. To describe the second harmonic $v_2$ the
model utilizes coefficients $\delta(b)$ and $\epsilon(b)$ representing, respectively, the flow
and the coordinate anisotropy of the fireball at the freeze-out stage as functions of the
impact parameter $b$. These momentum and spatial anisotropy parameters $\delta(b)$ and
$\epsilon(b)$ can either be treated independently for each centrality, or can be related to
each other through the dependence on the initial ellipticity $\epsilon_0(b)=b/2R_A$, where
$R_A$ is the nucleus radius. The latter option allows us to describe the elliptic flow
coefficient $v_2$ for most centralities at the RHIC~\cite{Lokhtin:2008xi} and
LHC~\cite{Lokhtin:2012re} energies using only two centrality-independent parameters.
A non-elliptic shape of the initial overlap of the colliding nuclei, which can be characterized
by the initial triangular coefficient $\epsilon_{03}(b)$, results in the appearance of higher
Fourier harmonics in the outgoing momentum distribution. Our Monte-Carlo (MC) procedure allows
us to parametrize easily this anisotropy via the natural modulation of final freeze-out
hypersurface, namely
\begin{equation}
\label{Rbphi}
R(b,\phi)= R_{\rm f}(b)
\frac{\sqrt{1-\epsilon^2(b)}}{\sqrt{1+\epsilon(b)\cos2\phi}}[1+\epsilon_3(b)
\cos3(\phi+\Psi_3^{\rm RP})]~,
\end{equation}
where $\phi$ is the spatial azimuthal angle of the fluid element relative to
the direction of the impact parameter. $R(b,\phi)$ is the fireball transverse radius in
the given azimuthal direction $\phi$ with the scale $R_{\rm f}(b)$, which is a model
parameter. The phase $\Psi_3^{\rm RP}$ allows us to introduce the third harmonic
with its own reaction plane, randomly distributed with respect to the direction of
the impact parameter ($\Psi_2^{\rm RP}=0$). This new anisotropy parameter $\epsilon_3(b)$ can
again be treated independently for each centrality, or can be expressed through the initial
ellipticity $\epsilon_0(b)=b/2R_A$. Note that such a modulation does not affect the elliptic
flow coefficient $v_2$, which was fitted earlier with two parameters $\delta(b)$ and
$\epsilon(b)$~\cite{Lokhtin:2012re,Lokhtin:2008xi}. Figure~\ref{XY_HY} illustrates second and
third harmonics generation in HYDJET++ by representing particle densities in the transverse
plane. One should be aware that the triangular deformation shown here is very strong. The
actual deformations needed to describe the triangular flow at LHC energies are typically an order of
magnitude weaker.
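The modulation of Eq.~(\ref{Rbphi}) is straightforward to evaluate numerically; a short sketch (with purely illustrative parameter values) reads:
\begin{verbatim}
import numpy as np

def R(phi, Rf=10.0, eps=0.1, eps3=0.03, psi3=0.4):
    # fireball radius: elliptic denominator times triangular modulation,
    # as in the freeze-out hypersurface formula above
    ell = Rf * np.sqrt(1.0 - eps**2) / np.sqrt(1.0 + eps * np.cos(2.0 * phi))
    return ell * (1.0 + eps3 * np.cos(3.0 * (phi + psi3)))

phi = np.linspace(0.0, 2.0 * np.pi, 7)
print(np.round(R(phi), 3))
\end{verbatim}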
The modulation of the maximal transverse flow rapidity, first considered in Eq.~(28)
of Ref.~\cite{Lokhtin:2008xi} in the paramet\-ri\-za\-tion of the 4-velocity $u$,
\begin{eqnarray}
\label{v34}
\rho_{\rm u}^{\rm max}= \rho_{\rm u}^{\rm max}(b=0)[1+ \rho_{\rm 3u}(b) \cos3\phi +
\rho_{\rm 4u}(b) \cos4\phi]~,
\end{eqnarray}
also permits the introduction of higher azimuthal harmonics related, however, to the direction
of the impact parameter ($\Psi_2^{\rm RP}=0$) only. In this case we get the modulation of the
velocity profile in all freeze-out hypersurface, and can not ``rotate'' this modulation with
independent phase. The new anisotropy parameters, $\rho_{\rm 3u}(b)$ and $\rho_{\rm 4u}(b)$,
can again be treated independently for each centrality, or can be expressed through the initial
ellipticity $\epsilon_0(b)=b/2R_A$.
For the current simulations we have introduced the minimal modulation in HYDJET++ using just the simple
parameterizations $\epsilon_3(b)\propto \epsilon_0^{1/3}(b)$ and
$\rho_{\rm 4u}(b)\propto \epsilon_0(b)$, while $\rho_{\rm 3u}(b)$ is taken equal to zero.
The corresponding proportionality factors were selected from the best fit to the data on
$v_3(p_{\rm T})$ and $v_4(p_{\rm T})$.
Let us note that the azimuthal anisotropy parameters $\epsilon(b)$,
$\delta(b)$ and $\epsilon_3(b)$ are fixed at a given impact parameter $b$.
Therefore they do not provide dynamical event-by-event flow fluctuations,
and they specify $v_{\rm n}(b)$ accumulated over many events. The main source
of flow fluctuations in HYDJET++ is fluctuations of the particle momenta and
multiplicity. Recall that the momentum-coordinate correlations in
HYDJET++ for the soft component are governed by the collective velocities of the fluid
elements, and so the fluctuations in particle coordinates are reflected in
their momenta. The fluctuations become stronger when resonance decays and
(mini-)jet production are taken into account. The event distribution over the
collision impact parameter within each centrality class also increases such
fluctuations. In the current paper we restrict ourselves to the analysis of the
event-averaged $v_{\rm n}(p_{\rm T})$. The detailed study of event-by-event
flow fluctuations is the subject of our future investigation. A possible
further modification of HYDJET++ to match experimental data on flow
fluctuations would be a smearing of the parameters $\epsilon$, $\delta$ and
$\epsilon_3$ at a given $b$.
\section{Results}
\label{sec:results}
It was demonstrated in~\cite{Lokhtin:2012re} that the tuned HYDJET++ model can reproduce the LHC
data on centrality and pseudorapidity dependence of inclusive charged particle multiplicity,
$p_{\rm T}$-spectra and $\pi^\pm \pi^\pm$ correlation radii in central PbPb collisions, and
$p_{\rm T}$- and $\eta$-dependencies of the elliptic flow coefficient $v_2$ (up to $p_{\rm T}
\sim 5$ GeV/$c$ and 40\% centrality). However, a reasonable treatment of the higher and odd Fourier
harmonics of the particle azimuthal distribution, $v_{\rm n}$ ($n>2$), requires additional modifications of
the model, which do not affect azimuthally-integrated physical observables (see the previous
section). We have compared the results of HYDJET++ simulations with the LHC data on
$v_{\rm n}$ for inclusive as well as for identified charged hadrons.
\subsection{Anisotropy harmonics for inclusive charged hadrons}
\label{subsec:res1}
The standard way of measuring $v_{\rm n}$ corresponds to the inclusive
particle harmonics on the basis of Eq.~(\ref{eq:1}).
Then $v_{\rm n}$ is extracted using special methods, such as the event-plane
method $v_{\rm n}\{\rm EP\}$~\cite{Poskanzer:1998yz}, the $m$-particle cumulant
method $v_{\rm n}\{m\}$~\cite{Borghini:2001vi,Borghini:2001zr}, or the Lee-Yang zero
method $v_{\rm n}\{\rm LYZ\}$~\cite{Bhalerao:2003xf,Borghini:2004ke}.
In order to estimate the uncertainties related to the experimental definitions of flow
harmonics, HYDJET++ results for different methods of $v_{\rm n}$ extraction were compared with
their ``true'' values, known from the event generator and determined relative to
$\Psi_2^{\rm RP}$ for even and $\Psi_3^{\rm RP}$ for odd harmonics, respectively.
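To make the event-plane procedure concrete, the sketch below reconstructs the plane angle from the $Q$-vector of one sub-event and correlates the particles of the other sub-event with it. The resolution correction is omitted for brevity (with the large toy multiplicity used here it is close to unity), and the sampled $v_2$ is purely illustrative:
\begin{verbatim}
import numpy as np

def vn_EP(phi_A, phi_B, n):
    # event plane of order n from sub-event A (Q-vector angle)
    Qx, Qy = np.cos(n * phi_A).sum(), np.sin(n * phi_A).sum()
    psi_EP = np.arctan2(Qy, Qx) / n
    # correlate sub-event B with that plane (no resolution correction)
    return np.mean(np.cos(n * (phi_B - psi_EP)))

rng = np.random.default_rng(2)
v2_true = 0.1
phi = rng.uniform(0.0, 2.0 * np.pi, 400_000)
f = 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)      # dN/dphi up to a constant
phi = phi[rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < f]
half = phi.size // 2                             # stand-in for an eta-gap split
print("v2{EP} ~", round(vn_EP(phi[:half], phi[half:], 2), 4))
\end{verbatim}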
Figures~\ref{v2_ATLAS}-\ref{v6_CMS} show anisotropic flow coefficients
$v_{\rm n}$ as a function of the hadron transverse momentum $p_{\rm T}$.
Let us first discuss the results of the HYDJET++ simulations. They can be
separated into two groups: (i) results obtained with respect to the true
reaction plane straight from the generator, i.e., $v_{2,4,6}(\Psi_2^{\rm RP})$
and $v_{3,5}(\Psi_3^{\rm RP})$, and (ii) those obtained by using the
(sub)event plane method with rapidity gap $|\Delta \eta|>3$. The last method
provides us with $v_{\rm n}\{\rm EP\}$. The main systematic uncertainties for
the methods come from non-flow correlations and flow fluctuations. The latter
(as currently implemented in the model) barely affects the mean $v_{\rm n}$
values restored by the EP method, while the non-flow correlations can be
effectively suppressed by applying an $\eta$-gap in the $v_{\rm n}$ reconstruction.
This gives us a good reconstruction precision for elliptic $v_2$, triangular $v_3$,
and quadrangular $v_4$ flows up to $p_{\rm T} \sim 5$ GeV/$c$. At
higher transverse momenta some differences appear due to non-flow effects from
jets. However, Figs.~\ref{v5_ATLAS} and \ref{v5_CMS} show that pentagonal
flow $v_5$ determined from the model w.r.t. $\Psi_3^{\rm RP}$ and $v_5$
restored w.r.t. the event plane of the 5-th order, $\Psi_5^{\rm EP}$, differ
substantially. The reason is that although no intrinsic $\Psi^{\rm RP}_5$ is generated
in HYDJET++, pentagonal flow $v_5$ emerges here as a result of the
``interference'' between $v_2$ and $v_3$, each determined with respect
to its own reaction plane, $v_5 \propto v_2 (\Psi_2^{\rm RP}) \cdot
v_3 (\Psi_3^{\rm RP})$, in line with the conclusions of Ref.~\cite{Teaney:2012ke}.
Hexagonal flow $v_6$ is also very sensitive to the methods used due to the nonlinear interplay
of the elliptic and triangular flows generating $v_6$, see~\cite{Bravina:2013ora} for details.
The results of HYDJET++ for $v_6\{\rm EP\}$ are not shown in the plots because of too large
statistical errors.
Note that the experimental situation is even more complicated, and the dependence
of the measured $v_{\rm n}$ on the methods applied may be more crucial for all $n$ due to
apparently larger fluctuations in the data than in the model. For instance, it was
shown in~\cite{Heinz:2013bua} that event-by-event fluctuations in the initial
state may lead to characteristically different $p_{\rm T}$-dependencies for the
anisotropic flow coefficients extracted by different experimental methods.
It is also worth mentioning here that the hump-like structure of the
simulated $v_2(p_{\rm T})$ and $v_3(p_{\rm T})$ signals appears due to
the interplay of hydrodynamics and jets. At transverse momenta $p_{\rm T}
\geq 3$\,GeV/$c$ the spectrum of hadrons is dominated by jet particles
which carry very weak flow. Thus, the elliptic and triangular flows in
the model also drop at certain $p_{\rm T}$. Higher flow harmonics arise
in the model solely due to the presence of $v_2$ and $v_3$ and their
interference. Therefore, the transverse momentum distributions of these
harmonics inherit the characteristic hump-like shapes.
Now let us consider the ATLAS~\cite{Aad:2012bu} and
CMS~\cite{Chatrchyan:2012ta,Chatrchyan:2013kba} data plotted onto the model results in
Figs.~\ref{v2_ATLAS}-\ref{v6_CMS} for different centrality classes. The event plane
for $v_{\rm n}\{\rm EP\}$ was defined experimentally with respect to the $n$-th harmonic in all
cases, with the exception of the CMS data for $v_6\{\rm EP/\Psi_2\}$, which were measured
using the second harmonic. One can see that HYDJET++
reproduces experimentally measured $p_{\rm T}$-depen\-den\-ces of $v_2$, $v_3$ and
$v_4\{{\rm LYZ}\}$ up to $p_{\rm T} \sim 5$ GeV/$c$. The centrality dependence of $v_4$
measured by the event-plane and two-particle cumulant methods is significantly weaker than that
of $v_4$ measured by the Lee-Yang zero method, presumably due to a large non-flow contribution and
an increase of the flow fluctuations in more central events. Since the model is tuned to fit the
$p_{\rm T}-$dependencies of $v_{4}\{{\rm LYZ}\}$, it underestimates the quadrangular flow,
restored by the EP or two-particle cumulant methods, in (semi-)central collisions.
Recall that in ideal hydrodynamics (in the limit of small temperatures, large transverse
momenta and absence of flow fluctuations) $v_4\{\Psi_2\} / v_2^2 = 0.5$~\cite{Borghini:2005kd}.
The same trend is seen for $p_{\rm T}$-dependencies of the pentagonal flow.
For central and semi-central topologies up to $\sigma/\sigma_{\rm geo} \approx 20\%$
the $v_5\{{\rm EP}\}$ in the model underestimates the experimentally measured
$v_5\{{\rm EP}\}$, whereas for more peripheral collisions the
agreement between the model and the data is good. Unfortunately, there
are no data on pentagonal flow extracted by the LYZ method. As we have
seen, for $v_2,\ v_3$ and $v_4$ in central and semi-central collisions
the LYZ method provides noticeably weaker flow compared to that obtained
by the EP method. One may expect, therefore, that the pentagonal flow,
$v_5\{{\rm LYZ}\}$, almost free from non-flow contributions, should be closer to the
$v_5$ generated by HYDJET++. If future experimental data on $v_5$
persist in showing a stronger flow, this fact can be taken as an indication of the
possible presence of an additional pentagonal eccentricity $\epsilon_5(b)$
with a new phase $\Psi_5^{\rm RP}$ responsible for a genuine $v_5$. Both
parameters can be easily inserted in Eq.~(\ref{Rbphi}) for the modulation
of the final freeze-out hypersurface.
At last, $p_{\rm T}$-dependencies of the hexagonal flow in HYDJET++ are similar to that seen
in CMS data within the uncertainties related to methods used. However
$v_{6}(\Psi_2^{\rm RP})$ in the model visibly underestimates the ATLAS data on $v_6\{\rm EP\}$
for the most central events. The latter fact may be explained by a significant
$v_3$ contribution to $v_6\{\rm EP\}$ in central collisions, which is not present in
the $v_{6}(\Psi_2^{\rm RP})$ component:
$v_{6}(\Psi_3^{\rm RP}) \sim v_{6}(\Psi_2^{\rm RP}) < v_6\{\rm EP\}$.
On the other hand, the relative contribution to $v_6\{\rm EP\}$ coming from $v_2$
increases rapidly as the reaction becomes more peripheral~\cite{Bravina:2013ora}, and
starting from $20-30$\% centralities we already get $v_6\{{\rm EP}\} \sim v_{6}(\Psi_2^{\rm RP})
\gg v_{6}(\Psi_3^{\rm RP})$ with the approximate agreement between the model and the data.
Some additional checks have been done as well. In the presence of only
elliptic flow all odd higher harmonics are found to be essentially zero.
The quadrangular flow is zero, $v_4 = 0$, if the elliptic flow is absent.
The pentagonal flow disappears, $v_5 = 0$, in case of either $v_2 = 0$
or $v_3 = 0$. The hexagonal flow is zero, $v_6 = 0$, if both
elliptic and triangular flows are absent, $v_2 = 0$ and $v_3 = 0$.
\subsection{Anisotropy harmonics for identified charged hadrons}
\label{subsec:res2}
Finally, let us consider distributions for some hadronic species measured in PbPb collisions
at the LHC. Before addressing the azimuthal anisotropy harmonics of identified hadrons, a
comparison of HYDJET++ results with the ALICE data~\cite{Preghenella:2011np} on
$p_{\rm T}$-spectra of negatively charged pions, kaons and anti-protons in PbPb collisions
is displayed in Fig.~\ref{dndpt-pid}. One can see that HYDJET++ reproduces well the measured
transverse momentum spectra of identified hadrons within the whole range of accessible
$p_{\rm T}$.
Figure~\ref{v2v3-pid} presents the comparison of HYDJET++ results and the ALICE
data~\cite{Krzewicki:2011ee} for the elliptic and triangular flow of pions, kaons and
anti-protons at 10--20\% and 40--50\% centrality of PbPb collisions. The agreement between
the model and the data for kaons and anti-protons looks fair. For pions the
model slightly underestimates the data. The discrepancy is more pronounced for more central
collisions, indicating, perhaps, the presence of strong non-flow correlations in the data.
\section{Conclusion}
\label{sec:summ}
Azimuthal anisotropy harmonics of inclusive and identified charged hadrons in PbPb collisions
at $\sqrt{s}_{\rm NN}=2.76$ TeV have been analyzed in the framework of HYDJET++ model.
The effects of possible non-elliptic shape of the initial overlap of the
colliding nuclei are implemented in HYDJET++ by the modulation of the final freeze-out
hypersurface with an appropriately fitted triangular coefficient. This modulation is not
correlated with the direction of the impact parameter, and two independent ``strong''
lower azimuthal harmonics, $v_2$ and $v_3$, are obtained as a result. They are of
different physical origin, encoded partly in their different centrality dependences.
Interference between $v_2$ and $v_3$ generates as ``overtones'' both even and odd higher
azimuthal harmonics, $v_4$, $v_5$, $v_6$, etc.
This mechanism allows HYDJET++ to reproduce the LHC data on $p_{\rm T}$- and centrality
dependencies of the aniso\-tropic flow coefficients $v_n$ (n=2$\div$4) up to $p_{\rm T}
\sim 5$ GeV/$c$ and $40$\% centrality, and also the basic trends for pentagonal $v_5$ and
hexagonal $v_6$ flows. Some discrepancy between the model results and the data on the
pentagonal flow in central events requires further study of additional sources of the
non-flow correlations and flow fluctuations, which may be absent in the model. Although the introduction of intrinsic higher harmonics is also possible in HYDJET++, there is no clear
evidence in the data to do so at present. Obtained results show that higher harmonics of the
azimuthal flow get very significant contributions from the lower harmonics, $v_2$ and $v_3$.
This circumstance makes it difficult to consider the higher harmonics as independent
characteristics of the early phase of ultrarelativistic heavy ion collisions.
\begin{acknowledgement}
Discussions with A.V.~Belyaev and D.~d'Enterria are gratefully acknowledged.
We thank our colleagues from CMS, ALICE and ATLAS collaborations for
fruitful cooperation. This work was supported by the Russian Foundation for
Basic Research (grant 12-02-91505), Grants of the President of the Russian Federation
for the Support of Scientific Schools (No. 3920.2012.2 and No. 3042.2014.2), the Ministry
of Education and Science of the Russian Federation (agreement No. 8412), the Norwegian Research
Council (NFR), and the European Union and the Government of the Czech Republic under
the project ``Support for research teams on CTU'' (No. CZ.1.07/2.3.00/30.0034).
\end{acknowledgement}
\section{Introduction}
The pseudoscalar spectrum [$I^{G}(J^{PC})=1^{-}(0^{-+})$] contains three states below 2 GeV: $\pi(140)$; $\pi(1300)$; and $\pi(1800)$. The lightest has been much studied. However, a comprehensive understanding of QCD requires an approach that admits the simultaneous study of the heavier pseudoscalars and, indeed, other systems. An understanding of the hadron spectrum and its realisation within QCD is necessary in order to unravel the nature of the long-range force between light quarks.
The lightest pseudoscalar meson is both a bound state of $u$- and $d$- quarks and the Goldstone mode in QCD associated with the dynamical breaking of chiral symmetry. This is readily and clearly understood using the Dyson-Schwinger equations (DSEs) \cite{Maris:1997hd,Maris:1997tm}. The DSEs have proven a particularly useful device for studying the spectrum and properties of light-quark systems. Modern applications are reviewed in Refs.\ \cite{bastirev,reinhardrev,Maris:2003vk}.
Herein we present results obtained from studies of the inhomogeneous Bethe-Salpeter equations (BSEs) for the scalar and pseudoscalar vertices in QCD. This is a practical means of mapping out the domain of applicability of the leading order term in a systematic and symmetry preserving DSE truncation scheme \cite{munczek,bender,mandarvertex}. Furthermore, we explore the capacity of such studies to complement contemporary numerical simulations of lattice-regularised QCD.
\section{Bound states from spacelike data}
\label{sec:Numerical-Tools}
The BSE provides a Poincar\'e covariant tool with which to calculate the properties of bound states in quantum field theory. The inhomogeneous equation for a pseudoscalar quark-antiquark vertex is\footnote{For simplicity we work with two degenerate flavours of quarks. Hence, Pauli matrices are sufficient to represent the flavour structure. We employ a Euclidean metric with the conventions described, e.g., in Sec.~2.1 of Ref.\,\protect\cite{bastirev}.}
\begin{eqnarray}
\nonumber \lefteqn{\left[\Gamma_5^j(k;P)\right]_{tu}=
Z_{4}\gamma_{5}\,\frac{\tau^j}{2} }\\
&& +\int_{q}^{\Lambda}\left[\chi_5^j(q;P)\right]_{sr}K_{rs}^{tu}(q,k;P)\,,
\label{eq:DSE_inhomogeneous}
\end{eqnarray}
where $k$ is the relative and $P$ the total momentum of the constituents;
$r,\ldots,u$ represent colour, Dirac and flavour matrix indices;
\begin{equation}
\chi_5^j(q;P) = S(q_{+}) \Gamma_5^j(q;P) S(q_{-})\,,
\end{equation}
$q_{\pm}=q\pm P/2$; and $\int_{q}^{\Lambda}$ represents a Poincar\'{e}
invariant regularisation of the integral, with $\Lambda$ the regularisation
mass-scale \cite{Maris:1997hd,Maris:1997tm}. In Eq.~(\ref{eq:DSE_inhomogeneous}),
$K$ is the fully amputated and renormalised dressed-quark-antiquark scattering kernel and $S$ is the renormalised dressed-quark propagator. ($SSK$ is a renormalisation group invariant). The dressed-quark propagator has the form
\begin{eqnarray}
S(p)^{-1}
& =& \frac{1}{Z(p^2,\zeta^2)}\left[ i\gamma\cdot p + M(p^2)\right] ,
\label{sinvp}
\end{eqnarray}
and is obtained as the solution of QCD's gap equation:
\begin{equation}
S(p)^{-1}=Z_{2}\left(i\gamma\cdot p+m_{\mathrm{bm}}\right)+\Sigma(p)\,,\label{eq:quark_prop}
\end{equation}
\begin{equation}
\Sigma(p) = Z_{1} \int_{q}^{\Lambda} g^{2} D_{\mu\nu}(p-q) \frac{\lambda^{a}}{2} \gamma_{\mu} S(q) \Gamma_{\nu}^{a}(q;p)\,,
\end{equation}
augmented by the renormalisation condition
\begin{equation}
\label{renormS} \left.S(p)^{-1}\right|_{p^2=\zeta^2} = i\gamma\cdot p +
m(\zeta)\,,
\end{equation}
where $m(\zeta)$ is the running current-quark mass at the renormalisation point $\zeta$. These equations involve the quark-gluon-vertex, quark wave function and Lagrangian mass renormalisation constants, $Z_{1,2,4}(\zeta,\Lambda)$, each of which depends on the gauge parameter, the renormalisation point and the regularisation mass-scale.
The solution of Eq.\,(\ref{eq:DSE_inhomogeneous}) has the form
\begin{eqnarray}
\nonumber
\lefteqn{i \Gamma_{5 }^j(k;P) = \frac{\tau^j}{2} \gamma_5
\left[ i E_5(k;P) + \gamma\cdot P \, F_5(k;P) \right.} \\
\nonumber & &
\left.+ \, \gamma\cdot k \,k\cdot P\, G_5(k;P)+
\sigma_{\mu\nu}\,k_\mu P_\nu \,H_5(k;P) \right].\\
\label{genpvv}
\end{eqnarray}
This is the minimal and complete form required by Poincar\'e covariance. The homogeneous pseudoscalar BSE is obtained from Eq.\,(\ref{eq:DSE_inhomogeneous}) merely by omitting the \emph{driving term}; viz., $Z_{4}\gamma_{5}\frac{\tau^j}{2} $. The equation thus obtained defines an eigenvalue problem, with the bound state's mass-squared being the eigenvalue and its Bethe-Salpeter amplitude, the eigenvector. As such, the equation only has solutions at isolated timelike values of $P^2$. On the other hand, the solution of the inhomogeneous equation, Eq.\,(\ref{eq:DSE_inhomogeneous}), exists for all values of $P^2$, timelike and spacelike, with each bound state exhibited as a pole. This is illustrated by Fig.\,\ref{fig:DSE_Inverted}, wherein the solution is seen to evolve smoothly with $P^2$ and the pole associated with the pseudoscalar ground state is abundantly clear.
\begin{figure}[t]
\includegraphics[clip,width=0.45\textwidth]{bse_data.eps}\vspace*{-4ex}
\caption{\label{fig:DSE_Inverted} Pseudoscalar amplitude $E_5(0;P^2)$ obtained by solving the inhomogeneous BSE, Eq.\,(\ref{eq:DSE_inhomogeneous}), using the renormalisation-group-improved rainbow-ladder truncation described in \protect\cite{mariscairns}. The vertical dotted line indicates the position of the ground state $\pi$ mass pole. The inset shows $1/E_5(0;P^2)$.\vspace*{-4ex}}
\end{figure}
A numerical determination of the precise location of the first pole in $E_5(k^2=0;P^2)$ will generally be difficult. The task becomes harder if one seeks to obtain the positions of excited states in addition. It is for these reasons that the homogeneous equation is usually used. However, if one is employing a framework that can only provide information which is equivalent to the form of this vertex at spacelike momenta, then a scheme must be devised that will yield the pole positions.
One obvious alternative is to focus on $P_E(P^2)=1/E_5(k^2=0;P^2)$ and locate its zeros, and it is plain from Fig.\,\ref{fig:DSE_Inverted} that this approach can at least be successful for the ground state. It is important to determine whether this (or another) approach can also be used in practice to determine some properties of excited states when information is available only for spacelike $P^2$.
While the DSEs can be used to generate such information, and the analysis of that information is what brought us to this point, herein for this purpose we consider a simple model for an inhomogeneous vertex whose analytic structure is known precisely; namely,
\begin{equation}
V(P^2)=b + \sum_{i=1}^M\frac{a_{i}}{P^{2}+m_{i}^{2}}\,, \label{eq:simple_model}
\end{equation}
where: for each $i$, $m_{i}$ is the bound state's mass and $a_{i}$ is the residue of the bound state pole in the vertex, which is related to the state's decay constant; and $b$ is a constant that represents the perturbative background that is necessarily present in the ultraviolet. The particular parameter values we employ are listed in Table\,\ref{tab:Model_Param}. This \textit{Ansatz} provides a data sample that captures the essential qualitative features of true DSE solutions for colour-singlet three-point Schwinger functions, such as that depicted in Fig.\,\ref{fig:DSE_Inverted}.
\begin{table}[t]
\begin{center}
\caption{\label{tab:Model_Param} Parameters characterising our vertex \textit{Ansatz}, Eq.\,(\protect\ref{eq:simple_model}). They were chosen without prejudice, subject to the constraint in quantum field theory that residues of poles in a three-point Schwinger function must alternate in sign \cite{Holl:2004fr}, and ordered such that $m_i<m_{i+1}$. We use $b=0.78$. This is the calculated value of $Z_4(\zeta = 19\,{\rm GeV},\Lambda=200\,{\rm GeV})$ used to obtain the curves in Fig.\,\protect\ref{fig:DSE_Inverted}.}
\begin{tabular*}{0.45\textwidth}{
|c@{\extracolsep{0ptplus1fil}}|l@{\extracolsep{0ptplus1fil}}|l@{\extracolsep{0ptplus1fil}}|}\hline
$i$~ & Mass & Residue \\\hline
1& 0.14& ~4.23\\
2& 1.06& -5.6 \\
3&1.72& ~3.82\\
4&2.05 & -3.45 \\
5&2.2& ~2.8 \\\hline
\end{tabular*}\vspace*{-4ex}
\end{center}
\end{table}
To proceed, we employ a diagonal Pad\'e approximant of order $N$ to analyse the data sample generated by Eq.\,(\ref{eq:simple_model}); viz., we use
\begin{equation}
f_{N}(P^{2}) = \frac{c_{0}+c_{1}P^{2}+\ldots+c_{N}P^{2N}} {1+c_{N+1}P^{2}+\ldots+c_{2N}P^{2N}}
\label{eq:pade}
\end{equation}
as a means by which to fit $1/V(P^2)$. The known ultraviolet behaviour of the vertex requires that we use a diagonal approximant. NB.\ A real-world data sample will exhibit logarithmic evolution beyond our renormalisation point. No simple Pad\'e approximant can recover that. However, this is not a problem in practical applications because the approximant is never applied on that domain.
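A compact numerical realisation of this procedure is sketched below: a linearised least-squares fit of the $N=3$ diagonal approximant to $1/V(P^2)$ data generated from Eq.\,(\ref{eq:simple_model}) with the parameters of Table\,\ref{tab:Model_Param}, with bound-state masses read off from the real roots of the fitted numerator at $P^2=-m^2$. (The linearisation of the rational fit is a convenience choice here, not a unique prescription.)
\begin{verbatim}
import numpy as np

b        = 0.78
masses   = np.array([0.14, 1.06, 1.72, 2.05, 2.20])
residues = np.array([4.23, -5.60, 3.82, -3.45, 2.80])

def V(P2):
    return b + np.sum(residues / (P2[:, None] + masses**2), axis=1)

P2 = np.linspace(1e-4, 2.0, 400)        # spacelike domain (0, P2max]
y  = 1.0 / V(P2)

# f_3 = (c0 + c1 x + c2 x^2 + c3 x^3) / (1 + c4 x + c5 x^2 + c6 x^3);
# multiplying through by the denominator gives a linear system for c.
A = np.column_stack([np.ones_like(P2), P2, P2**2, P2**3,
                     -y * P2, -y * P2**2, -y * P2**3])
c = np.linalg.lstsq(A, y, rcond=None)[0]

roots = np.polynomial.Polynomial(c[:4]).roots()   # zeros of f_3 = poles of V
m2 = -roots[np.abs(roots.imag) < 1e-8].real       # P^2 = -m^2
print("fitted masses:", np.sort(np.sqrt(m2[m2 > 0])))
\end{verbatim}
On this domain the two lightest masses are typically recovered accurately, in line with the discussion that follows.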
In a confining theory it is likely that a colour-singlet three-point function exhibits a countable infinity of bound state poles. Therefore no finite order approximant can be expected to recover all the information contained in that function. Our vertex model exhibits $M$ bound state poles. We expect that an approximant of order $N<M$ will at most provide reliable information about the first $N-1$ bound states, with the position and residue associated with the $N^{\rm th}$ pole providing impure information that represents a mixture of the remaining $M-(N-1)$ signals and the continuum. We anticipate that this is the pattern of behaviour that will be observed with any rank-$N$ approximation to a true Schwinger function. To explore this aspect of the procedure we studied the $N$-dependence of the Pad\'e fit.
The domain of spacelike momenta for which information is available may also affect the reliability of bound state parameters extracted via the fitting procedure. We analysed this possibility by fitting Eq.\,(\ref{eq:pade}) to our \textit{Ansatz} data on a domain $(0,P_{\mathrm{max}}^{2}]$, and studying the $P_{\mathrm{max}}^{2}$-dependence of the fit parameters.
In all cases we found that a Pad\'e approximant fitted to $1/V(P^2)$ can accurately recover the pole residues and locations associated with the ground and first excited states. However, there is no cause for complacency.
\begin{figure}[t]
\includegraphics[clip,width=0.45\textwidth]{masses-3.eps}\vspace*{-4ex}
\caption{\label{fig:3pade_mass} Pole positions (mass values) obtained through a fit of Eq.\,(\protect\ref{eq:pade}) with $N=3$ to data for $1/V(P^2)$ generated from Eq.\,(\protect\ref{eq:simple_model}) with the parameters listed in Table \ref{tab:Model_Param}. The coordinate $P_{\mathrm{max}}^{2}$ is described in the text. Horizontal dotted lines indicate the three lightest masses in Table \ref{tab:Model_Param}. The ground state mass (solid line) obtained from the Pad\'e approximant lies exactly on top of the dotted line representing the true value.\vspace*{-4ex}}
\end{figure}
In Fig.\,\ref{fig:3pade_mass} we exhibit the $P_{\mathrm{max}}^{2}$-dependence of the mass-parameters determined via an $N=3$ Pad\'e approximant. Plateaux appear for three isolated zeros, which is the maximum number possible, and the masses these zeros define agree very well with the three lightest values in Table \ref{tab:Model_Param}. This appears to suggest that the procedure has performed better than we anticipated. However, that inference is seen to be false in Fig.\,\ref{fig:3pade_residue}, which depicts the $P_{\mathrm{max}}^{2}$-dependence of the pole residues. While the results for $a_{1,2}$ are correct, the result inferred from the plateau for $a_3$ is incorrect. It is important to appreciate that if we had not known the value of $a_3$ \textit{a priori}, then we would very likely have been misled by the appearance of a plateau and produced an erroneous \emph{prediction} from the fit to numerical data. Plainly, an $N=3$ approximant can at most provide reliable information for the first $M-N=2$ bound states.
We explored this further and applied an $N=4$ approximant to the same model. In this case we could still only extract reliable information for the first two poles. We subsequently biased the fit procedure by \emph{hard-wiring} in Eq.\,(\ref{eq:pade}) the residues and positions of the lightest poles. This did not help. The $N=4$ approximant could still not provide results that improved upon what we had already learnt with the $N=3$ approximant.
\begin{figure}[tb]
\includegraphics[clip,width=0.45\textwidth]{residues-3.eps}\vspace*{-4ex}
\caption{\label{fig:3pade_residue} Pole residues obtained through a fit of Eq.\,(\protect\ref{eq:pade}) with $N=3$ to data for $1/V(P^2)$ generated from Eq.\,(\protect\ref{eq:simple_model}) with the parameters in Table \ref{tab:Model_Param}. Horizontal dotted lines indicate the residues associated with the three lightest masses in Table \ref{tab:Model_Param}. The residue associated with the ground state (solid line) lies exactly atop the dotted line representing the true value. The residue for the second pole exhibits a plateau at the correct (negative) value. However, the plateau exhibited by the result for the residue of the third pole is wrong.\vspace*{-4ex}}
\end{figure}
The results described herein, and our continuing analysis and exploration of other methods, including those popular in contemporary simulations of lattice-regularised QCD \cite{latticecorrelator}, suggest that solely from spacelike data it is only ever possible to extract reliable information about bound states with masses which do not much exceed $1\,$GeV. While these results are preliminary, they nevertheless provide sound reasons for caution.
\begin{figure}[tb]
\includegraphics[clip,width=0.45\textwidth]{mass_0.33.eps}\vspace*{-4ex}
\caption{\label{fig:dse_masses}
Pole positions (mass values) obtained through a fit of Eq.\,(\protect\ref{eq:pade}) with $N=3$ to the DSE result for $1/E_5(k^2=0,P^2)$ depicted in Fig.\,\protect\ref{fig:DSE_Inverted}. Horizontal dotted lines indicate the masses obtained for the ground and first excited state via a direct solution of the homogeneous BSE \protect\cite{Holl:2004fr}.\vspace*{-4ex}}
\end{figure}
Following this background work on the viability \emph{in principle} of using spacelike data alone to extract bound state information, we applied the method to the true DSE-calculated pseudoscalar vertex that is depicted in Fig.\,\ref{fig:DSE_Inverted}. The decay constants and masses for the ground state pseudoscalar and first radial excitation were obtained from the homogeneous BSE in Ref.\,\cite{Holl:2004fr}. The comparison between these masses and those inferred from the Pad\'e approximant is presented in Fig.\,\ref{fig:dse_masses}. As suggested by our background work, with perfect (effectively noiseless) spacelike data at hand, reliable information on the first two states in this channel can be obtained.\footnote{We are currently exploring the impact of Gaussian noise in the data.} We do not present a plot of the residues but they are accurate. In particular, the ground state residue is positive and that of the first excited state is negative, and this is obtained without any bias in the fit.
\section{Sigma terms}
The $\sigma$-term for a state $O$ is given by
\begin{equation}
\label{sigmasystem}
\sigma_{O} = m(\zeta) \frac{\partial m_O}{\partial m(\zeta)} \,,
\end{equation}
where $m_O$ is the mass of the state, and it is a keen probe of the impact of explicit chiral symmetry breaking on a hadron's mass. The nucleon's $\sigma$-term, $\sigma_N$, has been estimated using: chiral effective theory, e.g.\ Refs.\,\cite{Gasser:1991ce,ulf}; lattice-QCD, e.g.\ Ref.\,\cite{Leinweber:2000sa,ulf2}; and QCD-based models, e.g.\ Ref.\,\cite{lyubovitsky}. Our recent interest in hadron $\sigma$-terms is motivated by their utility in using observational data to place constraints on the variation of nature's fundamental parameters \cite{uzan}. In Table \ref{sigmaterms} we reproduce results calculated in Ref.\,\cite{Flambaum:2005kc} along with new results described in the text. We note that
\begin{equation}
\frac{\delta m_O}{m_O} = \frac{\sigma_O}{m_O} \frac{\delta m(\zeta)}{m(\zeta)} \,,
\end{equation}
so that the dimensionless quantity tabulated measures the linear relative response of the mass of interest to a fractional change in the current-quark mass, which is a renormalisation group invariant.
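In practice Eq.\,(\ref{sigmasystem}) can be evaluated by a symmetric finite difference once $m_O$ is known as a function of $m(\zeta)$. The sketch below uses a hypothetical stand-in with GMOR-like behaviour, $m_\pi^2 \propto m(\zeta)$, in place of an actual DSE/BSE solution; by construction it reproduces $\sigma_\pi/m_\pi \simeq 0.5$:
\begin{verbatim}
def sigma_term(m_of, m, eps=1.0e-3):
    # sigma_O = m * d m_O / d m, symmetric finite difference
    return m * (m_of(m * (1 + eps)) - m_of(m * (1 - eps))) / (2 * m * eps)

m_pi = lambda m: (5.15 * m) ** 0.5   # toy: m_pi(0.0037 GeV) ~ 0.138 GeV
s = sigma_term(m_pi, 0.0037)
print(s, s / m_pi(0.0037))           # ~ 0.069 GeV and ~ 0.5
\end{verbatim}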
\begin{table}[t]
\begin{center}
\caption{\label{sigmaterms} Calculated $\sigma$-terms. Those for $\pi$, $N$ and $\Delta$ were reported in Ref.\,\protect\cite{Flambaum:2005kc}. The values for $\rho$ and $\omega$ listed herein were obtained via a direct analysis of the $m(\zeta)$-dependence of the vector meson mass obtained in a solution of the rainbow-ladder truncation of the quark DSE and homogeneous meson BSE. They improve the values reported in Ref.\,\protect\cite{Flambaum:2005kc}, which were obtained using a simple fit to $m_\rho(m(\zeta))$ provided in Ref.\,\cite{marisvienna}. All results are renormalisation-point-independent and were obtained with: $m_{u,d}(\zeta)= 3.7\,$MeV, $m_{s}(\zeta)= 82\,$MeV, $m_{c}(\zeta)= 0.97\,$GeV and $m_{b}(\zeta)= 4.1\,$GeV, where $\zeta=19\,$GeV. Perturbative evolution can be used to determine the associated renormalisation-point-independent current-quark-masses.}
\begin{tabular*}{0.45\textwidth}{c@{\extracolsep{0ptplus1fil}}
|c@{\extracolsep{0ptplus1fil}}
c@{\extracolsep{0ptplus1fil}}
c@{\extracolsep{0ptplus1fil}}
c@{\extracolsep{0ptplus1fil}}
}\hline
H & $\pi$ & $\pi_1$ & $\sigma$ & \\
$\rule{0em}{3.5ex}\displaystyle\frac{\sigma_H}{m_H}$
& 0.498 & 0.017 & 0.013 & \\\hline
H & $\rho$ & $\omega$ & $N$ & $\Delta$ \\
$\rule{0em}{3.5ex}\displaystyle\frac{\sigma_H}{m_H}$
& 0.021 & 0.034 & 0.064 & 0.041 \\\hline
$q$ & $u$,$d$ & $s$ & $c$ & $b$ \\
$\rule{0em}{3.5ex}\displaystyle\frac{\sigma_q}{M^E_q}$
& 0.023 & 0.230 & 0.637 & 0.851 \\\hline
\end{tabular*}\vspace*{-4ex}
\end{center}
\end{table}
\medskip
\hspace*{-\parindent}\underline{\textit{Radial Excitation of the Pion}}.\hspace*{0.5em}
We have a framework which enables the calculation of $\sigma_{\pi_1}$ using Eq.\,(\ref{sigmasystem}), where $\pi_1$ denotes the first pseudoscalar radial excitation. To be specific, we can straightforwardly obtain the current-quark-mass-dependence of the $\pi_1$ in the renormalisation-group-improved rainbow-ladder (RL) DSE truncation described in Ref.\,\cite{mariscairns}. In this truncation \mbox{$m^{\rm RL}_{\pi_1}=1.06\,$GeV \cite{Holl:2004fr}} and we find\footnote{We are currently unable to provide a reliable estimate of meson-loop corrections to the mass and $\sigma$-term of the $\pi_1$.}
\begin{equation}
\sigma_{\pi_{1}}^{\rm RL}=0.018\,\textrm{GeV}.
\end{equation}
This value is considerably smaller than that for the ground state pion: $\sigma_\pi = 0.069\,$GeV. However, that merely serves again to emphasise the particular character of the mass of QCD's Goldstone mode and its amplification by dynamical chiral symmetry breaking.
\medskip
\hspace*{-\parindent}\underline{\textit{Scalar Meson}}.\hspace*{0.5em}
We do not pretend to have a complete understanding of the lowest mass $0^{++}$ state in the hadron spectrum. In this channel the rainbow-ladder truncation may be unreliable because the cancellations between higher-order terms in the systematic DSE truncation, which are so effective and important in pseudoscalar and vector channels, do not occur \cite{cdrQC2}. This is entangled with the phenomenological difficulties encountered in understanding the scalar states below $1.4\,$GeV (see, e.g., Refs.\,\cite{MikeP}).
Nevertheless, the rainbow-ladder DSE truncation provides a light-quark scalar-meson solution. Combining the results from a number of sources, one finds \cite{bastirev} $m_\sigma^{\rm RL} = 0.64\pm 0.06\,$GeV. With the kernel described in Ref.~\cite{mariscairns}, which we have used throughout, one obtains
\begin{eqnarray}
\label{massRL}
m_{\sigma}^{\rm RL} & = & 0.675\,{\rm GeV}\,,\\
\label{sigmaRL}
2\, m_\sigma^{\rm RL} \sigma_{\sigma}^{{\rm RL}} & = & (0.184\,{\rm GeV})^2 \\
& \Rightarrow & \sigma_{\sigma}^{{\rm RL}} = 0.025\,{\rm GeV}. \label{sigmaRLR}
\end{eqnarray}
The scalar meson described by the rainbow-ladder truncation has a large coupling to two pions \cite{Maris:2000ig}. It is therefore important to consider the effect of a $\pi \pi$ loop correction to this state's mass and $\sigma$-term. Such meson loop corrections can be estimated \cite{Leinweber:2000sa,Flambaum:2005kc,Wright:2000gg} and have a modest quantitative impact ($\lsim 15$\%) on $\sigma_\rho$, $\sigma_N$ and $\sigma_\Delta$ and a larger effect ($\lsim 30$\%) on $\sigma_\omega$. Their effect in this case can be analysed in the same way.
We consider a single $\pi\pi$-loop self-energy
\begin{eqnarray}
\nonumber \lefteqn{
\Pi^{\pi\pi}_{\sigma}(m_\sigma^2)}\\
&& = \frac{-3g^2_{\sigma\pi\pi}}{16\pi^{2}} \int_{0}^{\infty}dk \frac{k^{2}\, u_{\Lambda_\sigma}(k)^2} {\omega(k)\left(\omega(k)^2-m_{\sigma}^{2}/4\right)}\,,
\label{sigmaloop}
\end{eqnarray}
where $\omega(k)=\sqrt{m_{\pi}^{2}+k^{2}}$. The self energy both corrects $m_\sigma^{\rm RL}$ and provides for the $\sigma\to \pi\pi$ width. In Eq.\,(\ref{sigmaloop})
\begin{equation}
\frac{g_{\sigma\pi\pi}}{m_\sigma^{\rm RL}}=5.51 \; \Rightarrow \;
\Im \sqrt{s}_{\rm T} = 0.300\,{\rm GeV}.
\end{equation}
This is a typical value for the imaginary part of the T-matrix pole \cite{pdg} and corresponds to
\begin{equation}
\Gamma_{\sigma\pi\pi} = 0.55\,{\rm GeV}
\,,
\end{equation}
which equates numerically to $0.92\,(2\,\Im \sqrt{s}_{\rm T})$.
The function $u_{\Lambda_\sigma}(k^2) = 1/(1+k^{2}/\Lambda_{\sigma}^{2})^{2}$ in Eq.\,(\ref{sigmaloop}) is a form factor, introduced to represent the nonpointlike nature of the $\pi$ and $\sigma$ and hence the $\sigma\pi\pi$ vertex. The analysis of Refs.\,\mbox{\cite{Bloch:1999yk,MarisFBS}} indicates that the intrinsic size of the $\sigma$ described herein is 84\% of that of the $\rho$. We therefore choose a value of the regularisation mass-scale $\Lambda_{\sigma}=\Lambda_{\rho\pi\pi}/0.84$. With $\Lambda_{\rho\pi\pi}=1.23\,$GeV at $m_\pi=0.14\,$GeV, as determined in Ref.\,\cite{Leinweber:2001ac}, $\Re\Pi_\sigma^{\pi\pi}((m^{\rm RL}_\sigma)^2) = - (0.395\,{\rm GeV})^2$. Hence, the $\pi\pi$-loop acts to reduce the mass of the rainbow-ladder $\sigma$-meson and, from the shifted pole position,
\begin{equation}
\label{sigma1loop}
m_\sigma^{{\rm RL}+\pi\pi} = 0.624\,{\rm GeV}.
\end{equation}
This is an $8$\% reduction cf.\ Eq.\,(\ref{massRL}). With all else kept fixed in Eq.\,(\ref{sigmaloop}), a variation of $\Lambda_\sigma$ by $\pm 20$\% alters the result in Eq.\,(\ref{sigma1loop}) by $\mp 6$\%. NB.\ From Ref.\,\cite{pdg}, one might consider $0.60 \pm 0.25\,$GeV as typifying the mass of the lightest scalar meson.
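For orientation, the real part of Eq.\,(\ref{sigmaloop}) can be evaluated as a principal-value integral, since the integrand has a pole at $\omega(k)=m_\sigma/2$. The sketch below uses the parameters quoted above; the upper cutoff and pole prescription are numerical choices made here, so agreement with $\Re\Pi_\sigma^{\pi\pi}=-(0.395\,{\rm GeV})^2$ should only be approximate:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m_pi, m_sig = 0.140, 0.675
g   = 5.51 * m_sig                      # g_{sigma pi pi}
Lam = 1.23 / 0.84                       # Lambda_sigma, GeV
u = lambda k: 1.0 / (1.0 + k**2 / Lam**2) ** 2
w = lambda k: np.sqrt(m_pi**2 + k**2)

k0 = np.sqrt(m_sig**2 / 4.0 - m_pi**2)  # pole: w(k0) = m_sig / 2
# 1/(w^2 - m_sig^2/4) = 1/((k - k0)(k + k0)); quad handles the PV factor
f = lambda k: k**2 * u(k) ** 2 / (w(k) * (k + k0))
pv, _ = quad(f, 0.0, 50.0, weight='cauchy', wvar=k0)
print("Re Pi =", -3.0 * g**2 / (16.0 * np.pi**2) * pv, "GeV^2")
\end{verbatim}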
We define the $\pi\pi$-loop correction to the scalar $\sigma$-term via
\begin{equation}
\left. m_{\pi}^{2}\frac{\partial}{\partial m_{\pi}^{2}}\, \Re\Pi_{\sigma}^{\pi\pi}\right|_{(m_{\pi}^{2})_{{\rm expt.}}}
= -\,(0.161\,{\rm GeV})^{2}\,,
\end{equation}
and combine this with Eq.\,(\ref{sigmaRL}) accordingly:
\begin{eqnarray}
\nonumber \lefteqn{m(\zeta) \frac{\partial\,\Re m_\sigma^2 }{\partial m(\zeta)}
=(0.090\,{\rm GeV})^2}\\
& &=: 2 \,m_\sigma^{\Re({\rm 1-loop})} \, \sigma_\sigma^{\Re({\rm 1-loop})}\\
& \Rightarrow & \sigma_\sigma^{\Re({\rm 1-loop})} = 0.0073 \,{\rm GeV}\,,
\label{sigmaresult}
\end{eqnarray}
since $m_\sigma^{\Re({\rm 1-loop})}=0.547\,$GeV. Equation (\ref{sigmaresult}) is a $71$\% reduction cf.\ Eq.\,(\ref{sigmaRLR}). The effect is large because the scalar-meson is very broad. The $\Pi_\rho^{\pi\pi}$ self-energy contributes much less to properties of the rainbow-ladder $\rho$-meson because the width/mass ratio is significantly smaller \cite{Flambaum:2005kc,pichowsky}.
It is noteworthy that a self-consistent solution of $s-(m_\sigma^{\rm RL})^2-\Pi_\sigma^{\pi\pi}(\Re s)=0$ gives a pole position
\begin{equation}
\label{polemass}
\surd s_\sigma = 0.578- i \, 0.311\,{\rm GeV}.
\end{equation}
The third iteration is the last to introduce a change $>1$\%, and all iterations beyond the first reduce the real part in Eq.\,(\ref{polemass}) by a total of $< 7$\%. The effects are greater than those of the nucleon's $\pi N$ self energy \cite{Flambaum:2005kc,NpiN}. These notes provide a gauge for the accuracy of the one-loop analysis.
\medskip
\hspace*{-\parindent}\underline{\textit{Constituent quarks}}.\hspace*{0.5em}
One measure of the importance of dynamical chiral symmetry breaking to the dressed-quark mass function is the magnitude, relative to the current-quark mass, of the Euclidean constituent-quark mass; viz.,
\begin{equation}
\label{CQM}
(M^E)^2 := s \quad \mbox{such that} \quad s = M(s)^2 .
\end{equation}
For the current-quark-masses in Table \ref{sigmaterms}
\begin{equation}
\begin{array}{c|cccc}
Q & u,d & s & c & b \\\hline
\rule{0em}{3ex} M^E_Q\,({\rm GeV}) & 0.42 & 0.56 & 1.57 & 4.68
\end{array}
\end{equation}
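For illustration, Eq.\,(\ref{CQM}) amounts to a one-dimensional root search once $M(s)$ is available. The sketch below uses a hypothetical infrared-enhanced mass function, tuned to give $M^E \approx 0.42\,$GeV, in place of an actual gap-equation solution:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def M(s, m=0.0037):
    # toy mass function (GeV): current mass plus a dynamically
    # generated component that melts away at large spacelike s
    return m + 0.5 / (1.0 + s / 0.8)

ME = np.sqrt(brentq(lambda s: s - M(s) ** 2, 1e-6, 10.0))
print("M^E =", round(ME, 3), "GeV")
\end{verbatim}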
A constituent-quark $\sigma$-term can subsequently be defined \cite{Flambaum:2005kc}
\begin{equation}
\label{sMEQ}
\sigma_Q := m(\zeta) \, \frac{\partial M^E}{\partial m(\zeta)}\,.
\end{equation}
It is a renormalisation-group-invariant that can be determined from solutions of the gap equation, Eq.\,(\ref{eq:quark_prop}). Our results are presented in Table \ref{sigmaterms}.
\begin{figure}[t]
\includegraphics[clip,width=0.45\textwidth]{quark_mass_ratio.eps}\vspace*{-4ex}
\caption{\label{fig:quark_ratio} Ratio $\sigma_Q/M^E_Q$ [Eqs.\,(\protect\ref{CQM}) \& (\protect\ref{sMEQ})].
It is a measure of the current-quark-mass-dependence of dynamical chiral symmetry breaking. The vertical dotted lines correspond to the $u=d$, $s$, $c$ and $b$ current-quark masses listed in Table \protect\ref{sigmaterms}.\vspace*{-4ex}}
\end{figure}
In Fig.\,\ref{fig:quark_ratio} we depict $\sigma_{Q}/M^E_Q$. It is a measure of the effect on the dressed-quark mass-function of explicit chiral symmetry breaking compared with the sum of the effects of explicit and dynamical chiral symmetry breaking. One anticipates that for light-quarks this ratio must vanish because the magnitude of their constituent-mass owes primarily to dynamical chiral symmetry breaking, while for heavy-quarks it should approach one. The figure confirms these expectations.
\section{Summary and Conclusion}
We presented arguments which indicate that using solely spacelike information about a Schwinger function, it may only be possible to extract properties of bound states with masses $\lsim 1.2\,$GeV. With perfect spacelike information it is possible to accurately reconstruct the pole contributions from states which satisfy this bound. However, such reconstruction methods provide no information beyond that which is already available in studies that employ physical (timelike) bound state momenta. We illustrated this via the Dyson-Schwinger equations.
In the context of numerical simulations of lattice-regularised QCD, we speculate that it may be possible to reach higher mass states by employing lattice data to constrain the infrared behaviour of DSE integral equation kernels and subsequently using the DSEs to provide information on Schwinger functions at timelike momenta.
The $\sigma$ term is one useful gauge of the impact of explicit chiral symmetry breaking on a hadron's mass. Such information is important for the interpretation of measurements that indicate a spatial and/or temporal variation in Nature's fundamental parameters. We calculated and reported $\sigma$-terms for a range of hadrons: the ground state pion; the pion's first radial excitation; a light scalar meson; the $\rho$ and $\omega$; the nucleon and $\Delta$; and the $u$, $s$, $c$ and $b$ constituent-quarks.
The analysis of the scalar channel is interesting. It is consistent with a picture of the lightest scalar as a bound state of a dressed-quark and \mbox{-antiquark} combined with a considerable two-pion component, which reduces the quark-core mass by $\lsim 15$\% but dramatically alters the current-quark-mass-dependence of the $0^{++}$ pole position.
The constituent-quark $\sigma$-terms provide a tool that is useful for assessing the flavour-dependence of dynamical chiral symmetry breaking.
\medskip
\hspace*{-\parindent}\textbf{Acknowledgments.}\hspace*{0.5em}
We avow discussions with P.\,O.~Bowman, V.\,V.~Flambaum, A.~Krass\-nigg, D.\,B.~Leinweber, P.\,C.~Tandy and A.\,G.~Williams.
This work was supported by:
Dept.\ of Energy, Office of Nucl.\ Phys., contract nos.\ DE-FG02-00ER41135 and W-31-109-ENG-38;
National Science Foundation grant no.\ PHY-0301190;
and the \textit{A.\,v.\ Humboldt-Stiftung} via a \textit{F.\,W.\ Bessel Forschungspreis}.
It benefited from the facilities of ANL's Computing Resource Center.
\section{Introduction}
\label{sec:orgef5e1fd}
\label{ch:chapter2_modelling_acoustics}
Fire changes the acoustic properties of a room by introducing time-dependent temperature and flow fields \cite{quintiere2006fundamentals}. Firefighters use acoustic alarms to locate and rescue downed firefighters on the fireground. This work aims to understand how sound propagation in a room changes when a fire is introduced into the room. \citet{Abbasi2020_Change,abbasi2020Sound} showed that the measured acoustic impulse response of a room is significantly changed by a fire: low-frequency modes increased in frequency, and higher-frequency modal structure was lost. We hypothesize that the dominant mechanism for the measured changes in impulse response is the time-varying temperature field, which leads to a time-varying sound speed field. To test this hypothesis, this work used numerical modeling, allowing the decoupling of some of these effects to isolate the dominant physical mechanism. Two types of sound propagation models (a ray model and a full-wave finite element model) were used with a sophisticated computational fluid dynamics (CFD) fire model to simulate the effect of the fire on acoustic propagation.
\section{Experimental Results}
\label{sec:org24021dc}
The numerical models developed in this article are compared with experimentally measured impulse responses previously shown in \citet{Abbasi2020_Change,abbasi2020Sound}. Experiments 1 and 3 (following the nomenclature introduced in \citet{abbasi2020Sound}) are modeled. \Cref{fig:exp1_schem} and \Cref{fig:exp3_schem} show the schematic diagrams for those two experiments. For experiment 1, the impulse/frequency response measured by microphone 2 is used for comparison. For experiment 3, the impulse/frequency response measured by the `left ear' microphone is used for comparison.
\begin{figure}[]
\centering
\includegraphics[keepaspectratio,width=1.0\linewidth,height=0.8\textheight]{Figure1.pdf}
\caption{\label{fig:exp1_schem}
(Color online) Top view of the burn compartment and equipment for experiment 1, described in \cite{Abbasi2020_Change,abbasi2020Sound}. Microphones were placed at height \(H\) = 0.56 m above the floor.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[keepaspectratio,width=1.0\linewidth,height=0.8\textheight]{Figure2.pdf}
\caption{\label{fig:exp3_schem}
(Color online) Top view of the burn compartment and equipment for experiment 3, described in \cite{abbasi2020Sound}. The glass manikin head is in the upper right corner of the figure, facing in the \ensuremath{X}-negative direction (toward the speaker).}
\end{figure}
\section{Fire Modeling}
\label{sec:orgfd5bc75}
\label{sec:fds_model}
A fire is a reaction between fuel and oxygen that releases heat and chemical byproducts. The experimental compartment fires of \citet{Abbasi2020_Change} were modeled using the open-source finite difference CFD model Fire Dynamics Simulator version 6 (FDS). The large-eddy turbulence model in FDS \cite{mcgrattan2013fire} was used to simulate the three-dimensional temperature field created by the fire. FDS was used to compute the spatial and temporal evolution of the temperature field (in two and three dimensions), which was used as input to the acoustic models. In \Cref{sec:comsol_scattering}, the model is used to simulate a three-dimensional temperature field created by a fire in an open environment. The fire is based on the one used in the experimental measurements described in \citet{Abbasi2020_Change}, without any compartment effects. In \Cref{sec:2d_fire_comsol} the experimental compartment fires shown in \citet{Abbasi2020_Change} are simulated. A two-dimensional slice of the temperature field was taken at each time step to capture the direct path between the acoustic source and receiver. The fire was modeled as a planar surface with a specified heat release rate (HRR). The walls, floor, and ceiling were modeled as 16-cm-thick gypsum.
\section{Finite Element Acoustic Model}
\label{sec:org5437aea}
\label{sec:comsol_model_overview}
The time-domain wave equation, and its frequency-domain analog, the Helmholtz equation, govern linear sound propagation. The Helmholtz equation can be solved numerically using the finite element method (FEM). To construct the model, the geometry of interest is discretized into small elements on which the discretized form of the Helmholtz equation is solved. For an accurate solution, the geometry must be discretized with maximum element size < \(\frac{ \lambda }{ 10 }\), where \(\lambda\) is the wavelength \cite[p. 554]{multiphysics_manual_4p3}. One consequence is that the frequencies of interest for the PASS problem (500~Hz to 5000~Hz) can require hundreds of millions of grid points, as shown in \Cref{fig:model_n_point_comsol}. Therefore, three-dimensional modeling of the compartment fire acoustics problem was found to be impractical. This work will limit the acoustic modeling to two-dimensional slices. It is important to note that our purpose in this modeling was to understand the acoustics of the room fire and to gain insight into the experimental results, not to provide a design or auralization tool. This is analogous to calculating transmission loss along a radial in underwater acoustics, though the authors acknowledge that out-of-plane effects will be significant in this geometry, while they can be negligible in some underwater acoustics problems. Because of the two-dimensional nature of this modeling, out-of-plane acoustic paths are not present in the model, and therefore comparisons between model and measurement may exhibit differences because of this approximation. It is useful nonetheless to test the degree to which this expedient approximation remains valid. Hence, in this work, we focus on trends in the results rather than a quantitative comparison. In the limiting case of a long narrow hallway, the model and data would be more directly comparable. Also, we limit the model to frequency-independent losses, to isolate path changes due to temperature variations as the fundamental difference between iso-velocity and compartment fire acoustic propagation. We believe that, despite these limitations, this model is a step forward in modeling compartment fire acoustics.
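The cell counts in \Cref{fig:model_n_point_comsol} follow directly from the $\lambda/10$ rule. The short estimate below (assuming an ambient sound speed of 343 m/s and a cubic room 3 m on a side; the actual COMSOL meshes differ in detail) reproduces the scaling:
\begin{verbatim}
# Rough element-count estimate behind the scaling in Fig. 3.
# Assumed: c = 343 m/s, cubic room of side L = 3 m, element size = lambda/10.
import numpy as np

c, L = 343.0, 3.0
for f in (500.0, 1000.0, 2000.0, 5000.0):
    h = (c / f) / 10.0                   # maximum element size
    n = int(np.ceil(L / h))              # elements per dimension
    print(f"{f:6.0f} Hz: 2-D ~ {n**2:.1e} cells, 3-D ~ {n**3:.1e} cells")
\end{verbatim}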
The commercial finite element package COMSOL Multiphysics (version 4.2) was used in this work. The finite element model was computed on the temperature distribution calculated by FDS. The flow field is ignored, thus isolating the effect of temperature variations. This assumption is valid since the velocities of the flows are insignificant compared to the lowest sound speed in the model.
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure3.jpg}
\caption{\label{fig:model_n_point_comsol}
(Color online) Number of discrete cells required as a function of frequency for finite element modeling in two and three dimensions for a \SI{3}{\meter\cubed} room.}
\end{figure}
\subsection{Three-dimensional finite element model: scattering from the bare flame.}
\label{sec:org113e205}
\label{sec:comsol_scattering}
The fire creates chaotic temperature and flow fields that could scatter sound. Previous work by \cite{abbasi2013development} showed that a flame scattered acoustic energy and thereby impacted the accuracy of an acoustic range finder operated through the flame. This section describes the results of a three-dimensional finite element acoustics model coupled with a three-dimensional finite-difference CFD fire model (COMSOL and FDS) to understand the effect of the flame on the sound propagating through the flame.
A COMSOL model was constructed to compute the acoustic pressure \(P(f,T)\) as a function of time \(T\) and frequency \(f\). A measurement of the change in acoustic level due to the fire is given by \(\Delta RL(f,T) = 10 \log_{10} (\frac{P(f,T)}{P(f,T=0)})^2\) which was computed in post-processing. \Cref{fig:3d_fire_scattering_1} shows a diagram of the domain. The domain consists of a \SI{2}{\meter}~x~\SI{2}{\meter}~x~\SI{2}{\meter} space. Coordinate positions for the receiver, source and fire are shown in \Cref{tab:comsol_poses}. The source is a \SI{0.1}{\meter} x \SI{0.1}{\meter} plane with a constant source amplitude of 1 Pa.
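In post-processing, \(\Delta RL(f,T)\) reduces to a single array operation; a minimal sketch (the array layout is assumed here, not taken from the COMSOL export format) is:
\begin{verbatim}
# Delta RL(f,T) from complex pressures; P has shape (n_times, n_freqs),
# with P[0] the no-fire baseline at T = 0.
import numpy as np

def delta_RL(P):
    return 10.0 * np.log10(np.abs(P / P[0]) ** 2)
\end{verbatim}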
\begin{table}[]
\caption{\label{tab:comsol_poses}
(Color online) Table of receiver, source, and fire positions for the three-dimensional COMSOL model.
\begin{tabular}{|c|c|c|c|}
\hline
& \(X\) (m) & \(Y\) (m) & \(Z\) (m)\\
\hline
Burner & 0.0 & 0 & 0\\
Source & \(-1.0\) & 0 & 1\\
Receiver R1 & \(-0.5\) & 0 & 1\\
Receiver R2 & 0.0 & 0 & 1\\
Receiver R3 & 0.5 & 0 & 1\\
Receiver R4 & 1.0 & 0 & 1\\
\hline
\end{tabular}
\end{table}
The simulated fire matches the properties of the burner used in the experimental measurements described in \cite{Abbasi2020_Change}. It has a square profile, \SI{0.3}{\meter} x \SI{0.3}{\meter}. COMSOL was run in the frequency-domain acoustics mode, sweeping from 200~Hz to 900~Hz with \(\delta f\)~=~1~Hz. The model was recomputed every 0.1~s for 2~s. The fire was modeled in FDS, and the computed three-dimensional temperature field was input to COMSOL at each computational time step. FDS was run with 3.125~cm grid resolution. The COMSOL mesh was recomputed every 100 Hz with maximum element size \(\frac{\lambda}{10}\), and \(c_0 =\) \SI{500}{\meter\per\second}. Plane-wave radiation conditions were applied in COMSOL to the boundaries of the geometry, and in FDS the boundaries were set to open. This was done to approximate a flame in a free field, with a source on one side and receivers in front of the source.
\Cref{fig:3d_fire_scattering_2} shows the temperature field at four example times over the course of the model run and \Cref{fig:COMSOL_results_plot_3d_fds} shows \(\Delta RL(f,T)\) as a function of time and frequency. At \ensuremath{T}~=~0~s no fire is present, and that is considered the baseline condition. As the fire develops, there is a change in the received acoustic pressure spectrum at all of the receivers. The receiver closest to the source is impacted the least, and low frequencies are affected less than high frequencies. The greatest \(\Delta RL(f,T)\) occurs when the flame impinges on the horizontal plane containing the receivers. \Cref{fig:COMSOL_results_plot_3d_fds_max_change_line} shows that the fire acts as a notch filter.
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure4.pdf}
\caption{\label{fig:3d_fire_scattering_1}
(Color online) Diagram of the three-dimensional sound propagation simulation discussed in \Cref{sec:comsol_scattering}. A fire is placed in a free field, computed in a cubic computational domain \SI{2}{\meter} per side, with four receivers placed in front of a source.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure5.jpg}
\caption{\label{fig:3d_fire_scattering_2}
(Color online) The temperature field created by the fire used for the three-dimensional fire/acoustic model discussed in \Cref{sec:comsol_scattering}. The field 0.1~s after the start of the simulation, at the time of ignition is shown in (a). The field is shown at subsequent times in (b), (c), and (d).}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure6.pdf}
\caption{\label{fig:COMSOL_results_plot_3d_fds}
(Color online) The change in acoustic pressure, \(\Delta RL(f,T)\), is shown at four different receiver positions in front of a source over time as a flame develops in the environment.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure7.pdf}
\caption{\label{fig:COMSOL_results_plot_3d_fds_max_change_line}
(Color online) The change in acoustic pressure, \(\Delta RL(f,T)\), is shown at four different receiver positions in front of a source over time. Each subfigure shows \(\Delta RL(f,T)\) for every time step over 2 s with \(\delta t\) = 0.1 s.
\end{figure}
\subsection{Propagation in a two-dimensional slice through a compartment fire}
\label{sec:org9e0d08b}
\label{sec:2d_fire_comsol}
The experimental room for compartment fire experiments described in \cite{Abbasi2020_Change} was modeled using FDS. The vertical two-dimensional slice chosen was 0.5 m from the burner (shown as the blue slice in the top left subfigure of \Cref{fig:fds_grid_plot}). The simulated thermal fields are shown in \Cref{fig:fds_grid_plot} at various times. The temperature increases and then stabilizes as the system reaches a steady state. The FDS model included a 413 kW fire, an open door in the hallway, and gypsum walls. The model is discretized using 128 x 128 x 32 grid points, resulting in 7.5~cm x 7.5~cm x 10~cm (\(X\), \(Y\), \(Z\)) grid resolution. The fire was ignited at \(T\)~=~20~s and allowed to run until \(T\)~=~120~s.
The finite element acoustics model used a two-dimensional geometry (\(X=2.1\)~m, \(Z = 5.1\)~m) with rigid boundary conditions. The COMSOL frequency domain module conducted a parametric sweep over frequency with \(\delta f\)~=~1~Hz. Re-meshing was a substantial run time expense; therefore, the mesh was regenerated every 500~Hz between 1~Hz and 2000~Hz and every 100~Hz between 2000~Hz and 4000~Hz. The highest frequency in the interval was used to compute the maximum element size. The acoustic source was placed in the lower-left corner (\(X\) = 0--0.1 m, \(Z\) = 0--0.1 m) of the vertical slice shown in \Cref{fig:fds_grid_plot}. The source was a 100 cm\(^2\) area with a constant 1 Pa source level. The point receiver was placed at (\(X = 3, Z = 0.5\)), modeling a crawling firefighter. The temperature slice was output from FDS every 1~s and input as the air temperature in the COMSOL model.
\Cref{fig:comsol_0_4k} shows the frequency response of the system computed using this model. Before ignition, the response is stationary. Modal peaks are visible and consistent. Introducing the fire into the compartment results in a time-varying unsteady frequency response. Low-frequency modes increase in frequency, and higher-frequency modal structure disappears. The model results show characteristics similar to the experimental results in \cite{Abbasi2020_Change}; low-frequency modes increase in frequency, high-frequency modes are less prominent, and the frequency response is highly time-varying after ignition. The dashed lines indicate modes whose frequencies were manually tracked over time.
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure8.pdf}
\caption{\label{fig:fds_grid_plot}
(Color online) Results of CFD fire model showing the temperature in a two-dimensional plane as a function of time. The location of the slice is shown in (a). The cyan area (lower left on the subfigures) marks the source and the cyan `x' marks the receiver position.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure9.pdf}
\caption{\label{fig:comsol_0_4k}
(Color online) Finite element modeled acoustic frequency response for a source/receiver pair placed in a two-dimensional slice through a compartment fire. Ignition is at \(T\) = 0 s. Subfigure (a) shows the full band from 1~Hz to 4000~Hz, and Subfigure (b) shows the band from 1~Hz to 500~Hz. The blue lines track the frequency of select modes.}
\end{figure}
\section{Two-Dimensional Ray Model in a Compartment Fire}
\label{sec:orgb7b0611}
\label{sec:ray_methods_overview}
Ray theory is derived from the wave equation and considers acoustic paths (or rays) that follow Snell's law \cite{blackstock2000fundamentals,jensen2011computational}. Because of its geometric nature, ray theory is computationally efficient compared to full-wave methods. Ray models used for room acoustics have traditionally been limited to iso-velocity environments, primarily because typical room acoustics applications do not include temperature/sound speed variations \cite{savioja2015overview}. A room with a fire has significant sound speed variation and therefore we cannot use traditional room acoustics software. Ray models used in underwater acoustics typically take sound speed variations into account and have been shown to provide an excellent comparison with measured data \cite{urick1983principles}. Therefore, we used an existing open-source underwater acoustics ray trace software, BELLHOP \cite{porter2007bellhop}. BELLHOP uses a predictor-corrector scheme to model ray paths. BELLHOP can output ray paths, eigenrays, and transmission loss at receiver locations. The version of BELLHOP used in this work is range-dependent and two-dimensional \cite{porter2011bellhop}. The BELLHOP code was modified (shown in \Cref{fig:bellhop_changes}) to add a constraint to ensure eigenrays always intersected with the receiver location within \textpm{}\SI{1}{\cm}. The model is limited to specular reflections only.
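The actual modification was made in the Fortran source (\Cref{fig:bellhop_changes}); the fragment below only restates the added acceptance criterion, in Python, for readability:
\begin{verbatim}
# Schematic restatement of the added eigenray constraint: a ray is accepted
# as an eigenray only if its endpoint lies within 1 cm of the receiver.
import numpy as np

def is_eigenray(ray_r, ray_z, rcv_r, rcv_z, tol=0.01):
    return np.hypot(ray_r - rcv_r, ray_z - rcv_z) <= tol
\end{verbatim}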
\begin{figure*}
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure10.jpg}
\caption{\label{fig:bellhop_changes}
(Color online) A line-by-line difference between the original BELLHOP source code (bellhop.f90) (in red) and the modifications made to that code (in green).
\end{figure*}
Ray paths and delay times at the receiver were computed by providing BELLHOP with a sound speed field from the FDS model described in \Cref{sec:fds_model} at each 1-second interval. The ray trace field was a rectangular region representing a vertical slice in the burn structure. A reflection loss of 0.4 dB was applied at all boundaries. The rectangular geometry and source/receiver position were adjusted to match the experiment being modeled.
Ten thousand rays were launched from an omnidirectional source, with launch angles equally spaced between \SI{0}{\degree} and \SI{360}{\degree}. For each ray trace, approximately 2000 arrivals are recorded. At each time step, the eigenray delays and amplitudes were computed. Let \(A_{n}\) and \(D_{n}\) be the amplitude and delay for arrival \(n\). The frequency response \(H(f)\) was computed using \Cref{eq:ray_freq_resp},
\begin{equation}
\label{eq:ray_freq_resp}
H(f) = \sum_{n}^{N} A_{n} e^{-i 2 \pi f D_{n}},
\end{equation}
where \(f\) is the frequency in~Hz. The amplitude of each arrival is frequency-independent. The frequency response is the coherent sum of all arrivals. By assuming frequency-independent amplitude, we isolate the sound speed perturbations as the dominant mechanism for any changes in the impulse and frequency response.
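Evaluating \Cref{eq:ray_freq_resp} from the BELLHOP arrival output is then a coherent sum; a minimal sketch (array names assumed) is:
\begin{verbatim}
# Frequency response from the arrival amplitudes A and delays D
# (length-N arrays, as output by the ray trace) via Eq. (ray_freq_resp).
import numpy as np

def freq_response(A, D, f):
    f = np.atleast_1d(f)                                  # shape (F,)
    phases = np.exp(-2j * np.pi * f[:, None] * D[None, :])
    return (A[None, :] * phases).sum(axis=1)              # H(f), shape (F,)
\end{verbatim}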
Experiment 1 and Experiment 3 were modeled. FDS used a mesh resolution of 0.10~m x 0.10~m x 0.09~m. The heat release rate (HRR) was set to 150~kW to match the experiments. The fire, acoustic source, and acoustic receiver were positioned based on \Cref{fig:exp1_schem} and \Cref{fig:exp3_schem}. Visualizations of the FDS models used for experiment 1 (\Cref{fig:FDS_model_schem_exp1}(a)) and experiment 3 (\Cref{fig:FDS_model_schem_exp3}(a)) are shown. The flame is in one corner of the room. The visualization shows the geometry of the compartment, the two-dimensional plane of interest (with temperature marked by color), the vertical computational grid, the \(X\), \(Y\), and \(Z\) axes, and the smoke exiting the compartment. Between the two experiments, the positions of the fire and the acoustic source/receiver change. \Cref{fig:FDS_model_schem_exp1}(b) and \Cref{fig:FDS_model_schem_exp3}(b) show source/receiver positions in the \ensuremath{X}-\(Z\) plane for experiments 1 and 3, respectively.
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure11.pdf}
\caption{\label{fig:FDS_model_schem_exp1}
(Color online) Visualization of experiment 1 modeled in FDS is shown in (a). The temperature at the \(Y\)~=~0.5 m plane is shown at \(T\)~=~264.0 s. Ray model schematic for experiment 1 is shown in (b), showing source and receiver positions.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure12.pdf}
\caption{\label{fig:FDS_model_schem_exp3}
(Color online) Visualization of experiment 3 modeled in FDS is shown in (a). The temperature at the \(Y\)~=~3.5 m plane is shown at \(T\)~=~120.3 s. Ray model schematic for experiment 3 is shown in (b), showing source and receiver positions.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure13.jpg}
\caption{\label{fig:ray_model_exp2_time_2}
(Color online) Ray paths modeling experiment 1 before ignition (a) and 20 s after ignition (b). This is a visualization of the ray paths, limited to three boundary interactions.
\end{figure}
\Cref{fig:ray_model_exp2_time_2} shows the instantaneous ray trace at two times (before ignition, and 20 s after ignition) modeling experiment 1. The fire changes the environment, resulting in rays launched at the same angle taking different paths. The additional floor interactions would result in reflection loss each time, increasing transmission loss over distance from the source. The change in ray paths could also change the perceived location of the PASS alarm.
\Cref{fig:ray_model_data_freq_resp_comparision_exp_2,fig:ray_model_data_freq_resp_comparision_exp_8} compare modeled and measured evolution of the frequency responses for experiments 1 and 3, respectively. The modeled frequency response for experiment 1 captures many of the features seen in the measured response: the increase in the frequency of modes, the loss of consistent modal structure from ignition to \(T\) = 120~s, and the grouping of certain modes above 2000~Hz. The model for experiment 3 also captures many features of the measured frequency response. While the models do not perfectly match the experimental results, very similar characteristics are seen. The models show differences between experiments 1 and 3 like those also present in the measured results. For example, in experiment 3 there is a complete loss of modal structure above 3500~Hz, which is not the case for experiment 1. The ray-traced models show this difference.
\Cref{fig:ray_model_data_eigen_ray_comparision_exp_2} and \Cref{fig:ray_model_data_eigen_ray_comparision_exp_8} compare the modeled eigenray delay times and the experimentally measured impulse responses for experiments 1 and 3, respectively. A distinction is made in this section between eigenray delays and the measured impulse responses. The eigenray delays are a perfect impulse response (i.e., the response to a discrete delta function, to the limit of 64-bit floating-point precision). In contrast, the measured impulse responses are the response to a bandwidth-limited, finite-duration pulse. This is an important distinction because the eigenray delays show much finer time resolution than the measured impulse responses. The eigenray delay time shows remarkably similar patterns to the measured impulse response, despite the model being two-dimensional and the experiment being three-dimensional. The earlier arrivals are the least impacted and more stable. Later arrivals have a larger change in delay time. In addition to the decreasing delay time after ignition, there is also a random spread in the times. The marker color in the eigenray plot indicates the elevation angle of the ray at the source (angle from the horizontal, positive is towards the ceiling). Observe that the shallow rays arrive at the receiver earlier (direct path and shallow bottom reflection paths) and are impacted the least after ignition. The early arrivals being least impacted by the fire is consistent with the measured data. Certain key features are captured very well. For example, both modeled and measured data show a crossing of arrival paths due to the fire, i.e., paths that arrived earlier in the iso-speed case arrive later than other paths after ignition.
The ray models were run with identical parameters, except for the geometry and fire configuration. The differences between model results are qualitatively like the differences between the experimental results. While the comparisons between model and data are not perfect, the model captures many of the features seen in the data. This lends validity to the model, and to the assertion that the temperature variation is a major cause of the frequency response change.
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure14.pdf}
\caption{\label{fig:ray_model_data_freq_resp_comparision_exp_2}
(Color online) Modeled (a) and measured (b) frequency response for experiment 1.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure15.jpg}
\caption{\label{fig:ray_model_data_freq_resp_comparision_exp_8}
(Color online) Modeled (a) and measured (b) frequency response for experiment 3.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure16.jpg}
\caption{\label{fig:ray_model_data_eigen_ray_comparision_exp_2}
(Color online) Modeled (a) and measured (b) impulse response for experiment 1. The modeled response shows the arrival delay time, color coded by the source angle.}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[angle=0,width=1.0\linewidth, keepaspectratio,,height=0.8\textheight]{Figure17.jpg}
\caption{\label{fig:ray_model_data_eigen_ray_comparision_exp_8}
(Color online) Modeled (a) and measured (b) impulse response for experiment 3. The modeled response shows the arrival delay time, color coded by the source angle.}
\end{figure}
\section{Conclusion}
\label{sec:org6abf2e4}
Sound propagation in a compartment fire was modeled using two acoustic modeling modalities, ray theory and full-wave finite element modeling. Both models relied on environmental inputs from a CFD fire model. Good agreement was found between the measured and modeled frequency and impulse responses. The results show that temperature variations due to the fire can account for many of the observed phenomena.
Future work into this problem should explore three-dimensional modeling with frequency-dependent losses and the effect of horizontal temperature gradients.
\section{Acknowledgement}
\label{sec:orgb853aa1}
The experimental measurements were funded by the U.S. Department of Homeland Security Assistance to Firefighters Grants Program. Analysis was self-funded by Dr. Abbasi. The authors thank Mudeer Habeeb, Kyle Ford, and Joelle Suits for their assistance with the experiments.
\section{Introduction}
Superradiance is an interesting phenomenon in black hole physics \cite{Manogue1988,Greiner1985,Cardoso2004,Brito:2015oca,Brito:2014wla}. When a charged bosonic wave is impinging upon a charged rotating black hole, the wave is amplified by the black hole if the wave frequency $\omega$ obeys
\begin{equation}\label{superRe}
\omega < n\Omega_H + e\Phi_H,
\end{equation}
where $e$ and $n$ are the charge and azimuthal number of the bosonic wave mode, $\Omega_H$ is the angular velocity of the black hole horizon and $\Phi_H$ is the electromagnetic potential of the black hole horizon. This superradiant scattering was studied a long time ago \cite{P1969,Ch1970,M1972,Ya1971,Bardeen1972,Bekenstein1973,Damour:1976kh}, and has broad applications in various areas of physics (for a comprehensive review, see \cite{Brito:2015oca}).
If there is a mirror mechanism that makes the amplified wave be scattered back and forth, it will lead to the superradiant instability of the background black hole geometry \cite{PTbomb,Cardoso:2004nk,Herdeiro:2013pia,Degollado:2013bha}. Superradiant (in)stability of various kinds of black holes has been studied extensively in the literature.
The superradiant (in)stability of rotating Kerr black holes under massive scalar perturbation has been studied in \cite{Huang:2019xbu,Strafuss:2004qc,Konoplya:2006br,Cardoso:2011xi,Dolan:2012yt,Hod:2012zza,Hod:2014pza,Aliev:2014aba,Hod:2016iri,Degollado:2018ypf}. Superradiant instability of a Kerr black hole that is perturbed by a massive vector field is also discussed in \cite{East:2017ovw,East:2017mrj}.
Rotating or charged black holes with asymptotically curved space are proved to be superradiantly unstable because the curved backgrounds provide natural mirror-like boundary conditions
\cite{Cardoso:2004hs,Cardoso:2013pza,Zhang:2014kna,Delice:2015zga,Aliev:2015wla,Wang:2015fgp,
Ferreira:2017tnc,Wang:2014eha,Bosch:2016vcp,Huang:2016zoz,Gonzalez:2017shu,Zhu:2014sya}.
Among the study of superradiant (in)stability of black holes, an interesting result is that the four-dimensional extremal and non-extremal Reissner-Nordstrom(RN) black holes have been proved to be superradiantly stable against charged massive scalar perturbation in the full parameter space of the black-hole-scalar-perturbation system\cite{Hod:2013eea,Huang:2015jza,Hod:2015hza,DiMenza:2014vpa}. The argument is that the two conditions for superradiant instability, (1) existence of a trapping potential well outside the black hole horizon and (2) superradiant amplification of the trapped modes, cannot be satisfied simultaneously \cite{Hod:2013eea,Hod:2015hza}.
In this paper, the study of superradiant stability of four-dimensional RN black holes will be generalized to higher dimensions. As a first step, we will analytically study the superradiant stability of five- and six-dimensional extremal RN black holes under charged massive scalar perturbation. In Section II, we give a general description of the model we are interested in. In Sections III and IV, we provide the proofs for the five- and six-dimensional extremal RN black hole cases, respectively. The last section is devoted to the conclusion and discussion.
\section{D-dimensional RN black holes and Klein-Gordon equation}
In this section we will give a description of the $D$-dimensional RN black hole and the Klein-Gordon equation for the charged massive scalar perturbation. The metric of the $D$-dimensional RN black hole \cite{Myers:1986un,Destounis:2019hca} is
\bea
ds^2=-f(r)dt^2+\frac{dr^2}{f(r)}+r^2d\O_{D-2}^2.
\eea
$d\O_{D-2}^2$ is the common line element of a $(D-2)$-dimensional unit sphere $ S^{D-2}$
\bea
d\O_{D-2}^2=d\th_{D-2}^2+\sum^{D-3}_{i=1} \prod_{j=i+1}^{D-2}\sin^2(\th_{j})d\th_i^2,
\eea
where the ranges of the angular coordinates are $\th_i\in [0,\pi](i=2,..,D-2), \th_1\in [0,2\pi]$.
$f(r)$ reads
\bea
f(r)=1-\frac{2m}{r^{D-3}}+\frac{q^2}{r^{2(D-3)}},
\eea
where the parameters $m$ and $q^2$ are related to the ADM mass $M$ and electric charge $Q$ of the RN black hole,
\bea
m=\frac{8\pi}{(D-2)Vol(S^{D-2})}M,~~ q= \frac{8\pi}{\sqrt{2(D-2)(D-3)}Vol(S^{D-2})}Q.
\eea
In the above equation, $Vol(S^{D-2})=2\pi^{\frac{D-1}{2}}/\Gamma(\frac{D-1}{2})$ is the volume of unit $(D-2)$-sphere. The inner and outer horizons of the RN black hole are
$
r_\pm=(m\pm\sqrt{m^2-q^2})^{1/(D-3)}.
$
For extremal RN black holes, the inner and outer horizons become one horizon
\bea
r_h=m^{1/(D-3)}.
\eea
The electromagnetic field outside the black hole horizon is described
by the following 1-form vector potential
\bea
A=-\sqrt{\frac{D-2}{2(D-3)}}\frac{q}{r^{D-3}} dt=-c_D\frac{q}{r^{D-3}} dt.
\eea
The equation of motion for a charged massive scalar perturbation in the RN black hole background is governed by
the following covariant Klein-Gordon equation
\bea
(D_\nu D^\nu-\mu^2)\phi=0,
\eea
where $D_\nu=\nabla_\nu-ie A_\nu$ is the covariant derivative and $\mu,~e$ are the mass and charge of the scalar field respectively. The solution with definite angular frequency for the above Klein-Gordon equation can be decomposed as
\bea
\phi(t,r,\th_i)=e^{-i\o t}R(r)
\Theta(\th_i).
\eea
The angular eigenfunctions $\Theta(\th_i)$ are $(D-2)$-dimensional scalar spherical harmonics and the eigenvalues are given by $-l(l+D-3), (l=0,1,2,\ldots)$\cite{Chodos:1983zi,Higuchi:1986wu,Rubin1984,Achour:2015zpa,Lindblom:2017maa}.
The radial equation is
\bea\label{eq-radial}
\Delta\frac{d}{dr}(\Delta\frac{d R}{dr})+U R=0,
\eea
where $\Delta=r^{D-2}f(r)$ and
\bea
U=(\o+e A_t)^2 r^{2(D-2)}-l(l+D-3) r^{D-4}\Delta-\mu^2 r^{D-2}\Delta.
\eea
Define the tortoise coordinate $y$ by $dy=\frac{r^{D-2}}{\Delta}dr$ and a new radial function $\tilde{R}=r^{\frac{D-2}{2}}R$, then
the radial equation \eqref{eq-radial} can be rewritten as
\bea\label{tor-eq}
\frac{d^2\tilde{R}}{dy^2}+\tilde{U} \tilde{R}=0,
\eea
where
\bea
\tilde{U}=\frac{U}{r^{2(D-2)}}-\frac{(D-2)f(r)[(D-4)f(r)+ 2 r f'(r)]}{4r^2}.
\eea
The asymptotic behaviors of $\tilde{U}$ at the spatial infinity and outer horizon are
\bea
\lim_{r\rightarrow +\infty}\tilde{U}= \o^2-\mu^2,~~
\lim_{r\rightarrow r_+} \tilde{U}= (\o-c_D\frac{e q}{r_+^{D-3}})^2=(\o-e\Phi_h)^2,
\eea
where $\Phi_h$ is the electric potential of the outer horizon of the black hole.
The physical boundary conditions that we need are ingoing wave at the horizon ($y\to -\infty$) and bound states (exponentially decaying modes) at spatial infinity ($ y\to +\infty $). Then the asymptotic solutions of the equation \eqref{tor-eq} are as follows
\bea
r\to +\infty (y\to +\infty ),~\tilde{R}\sim {{e}^{-\sqrt{{{\mu }^{2}}-{{\omega }^{2}}}{y}}};\\
r\to {{r}_{+}}(y\to -\infty ),~\tilde{R}\sim {{e}^{-i(\omega -e\Phi_{h}){y}}}.
\eea
The exponentially decaying modes (bound state condition) require the following inequality
\bea\label{boundstate}
\omega^2<\mu^2.
\eea
Next we define a new radial function $\psi=\Delta^{1/2} R$; then the radial equation \eqref{eq-radial} can be written as a Schr\"{o}dinger-like equation
\bea
\frac{d^2\psi}{dr^2}+(\o^2-V)\psi=0,
\eea
where $V$ is the effective potential.
In the extremal case, the explicit expression for the effective potential $V$ is
\bea
V=\o^2+\frac{B}{A},
\eea
where $A$ and $B$ are
\bea\label{V}
A&=&4r^{2}(r^{2D-6}-2 m r^{D-3}+m^2)^2=4r^2(r^{D-3}-m)^4,\\
B&=&4(\mu^2-\o^2)r^{4D-10}+(2l+D-2)(2l+D-4)r^{4D-12}-8(m\mu^2-c_D e m \o)r^{3D-7}\nn\\
&-&4m(2\l_l+(D-4)(D-2))r^{3D-9}+4m^2(\mu^2-c_D^2 e^2)r^{2D-4}\nn\\
&+&2m^2(2 \l_l+3(D-4)(D-2))r^{2D-6}-4m^3(D-4)(D-2)r^{D-3}\nn\\
&+&m^4(D-4)(D-2),
\eea
and $\l_l=l(l+D-3)$.
The asymptotic behaviors of this effective potential $V$ at the horizon and spatial infinity are
\bea
V\rightarrow -\infty ,~~~~~r\rightarrow r_h;\\
V\rightarrow \mu^2 ,~~~~~r\rightarrow +\infty.
\eea
At the spatial infinity, the asymptotic behavior of the derivative of the effective potential, $V'(r)$, is
\bea\label{asymp}
V'(r)\rightarrow \left\{
\begin{array}{ll}
\frac{-(D-2)(D-4)-4\l_l-8m(\mu^2+c_D e \o-2\o^2)}{2r^3}, & \hbox{$D=5$;} \\
\frac{-(D-2)(D-4)-4\l_l}{2r^3}, & \hbox{$ D\geqslant 6$.}
\end{array}
\right.
\eea
The superradiance condition in the extremal case is
\bea\label{sup-con-extr}
\o<e\Phi_h=e c_D\frac{q}{r_h^{D-3}}=c_D e=\sqrt{\frac{D-2}{2(D-3)}} e.
\eea
Together with the bound state condition \eqref{boundstate}, we can prove $V'(r)<0$ at spatial infinity when $D=5$. It is also obvious that $V'(r)<0$ at spatial infinity when $D\geqslant 6$.
This means that there is no potential well when $ r\to +\infty$ and there is at least one extreme for the
effective potential $V(r)$ outside the horizon.
In the following sections, we will prove that there is indeed only one extreme outside the event horizon $r_h$ for the effective potential in each of the $D=5,6$ extremal RN black hole cases, so no potential well exists outside the event horizon for the superradiant modes. Hence the $D=5,6$ extremal RN black holes are superradiantly stable under massive scalar perturbation.
It is worth noting that the key mathematical theorem we will use in the proof is \textit{Descartes' rule of signs}, which asserts that the number of positive roots of a polynomial equation with real coefficients is at most the number of sign changes in the sequence of the polynomial's coefficients.
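For concreteness, the sign-change count invoked by the rule can be computed as follows (a small utility written here for illustration, and used again in the numerical spot check below):
\begin{verbatim}
# Number of sign changes in a coefficient sequence; by Descartes' rule of
# signs this bounds the number of positive real roots of the polynomial.
def sign_changes(coeffs):
    nonzero = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)
\end{verbatim}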
\section{D=5 extremal RN black holes}
For a $D$=5 extremal RN black hole, the event horizon is located at $r^{(5)}_h=\sqrt{m}$.
The explicit expression of superradiance condition \eqref{sup-con-extr} is
\bea\label{sup-con-5D}
\o<\frac{\sqrt{3}}{2} e\approx 0.87 e.
\eea
In order to prove there is only one extreme outside the event horizon for the effective potential $V_5(r)$ in the D=5 case, we will consider the derivative of $V_5(r)$, i.e. $V_5'(r)$, and prove that there is only one real root for $V_5'(r)=0$ when $r>r^{(5)}_h$. The key result in the proof is summarized in Table I.
\begin{table}
\caption{Possible signs of $\{a_5,a_4,a_3,a_2,a_1,a_0\}$ in different intervals of $t$.}
\centering
\renewcommand{\multirowsetup}{\centering}
\begin{tabular}{|c|p{1.1cm}<{\centering}|p{1.1cm}<{\centering}|p{1.1cm}<{\centering}|p{1.1cm}<{\centering}
|p{1.1cm}<{\centering}|p{1.1cm}<{\centering}|}
\hline
$t$&$a_5$ & $a_4$ & $a_3$ & $a_2$ & $a_1$&$a_0$\\
\hline
(0.61,~0.87)&- & - & - & - & -& +\\
\hline
(0.41,~0.61)&- & - & - & - & +& +\\
\hline
\multirow{2}*{(0.25,~0.41)}&\multirow{2}*{-} & \multirow{2}*{-} & \multirow{2}*{-} & - & \multirow{2}*{+}& \multirow{2}*{+}\\
\cline{5-5}
& & & &+ & &\\
\hline
\multirow{3}*{(0.12,~0.25)}&\multirow{3}*{-} &\multirow{3}*{ -} & - & - & \multirow{3}*{+}& \multirow{3}*{+}\\
\cline{4-5}
& & &- &+ & &\\
\cline{4-5}
& & &+ &+ & &\\
\hline
\multirow{4}*{(0,~~~0.12)}&\multirow{4}*{-} & - & - & - & \multirow{4}*{+}&\multirow{4}*{+} \\
\cline{3-5}
& & -&- &+ & &\\
\cline{3-5}
& &- &+ &+ & &\\
\cline{3-5}
& & +&+ &+ & &\\
\hline
\end{tabular}\label{table}
\end{table}
From the general expression of the effective potential \eqref{V}, we can calculate that the denominator of $V_5'(r)$ is $2 r^3(r^2-m)^5$. The numerator of $V_5'(r)$ is
\bea\nn
n_5&=&3 m^5 - 15 m^4 r^2 + 2 m^3 r^4 (15 - 2l(l+2)) +
2 m^2 r^6 (-15 + 3 e^2 m - 4 m \mu^2 + 2l(l+2)) \\&+&
m r^8 (15 + 6 e^2 m + 16 m \mu^2 - 12 \sqrt{3} e m \o + 4l(l+2))\nn \\
&-& r^{10} (3 + 8 m \mu^2 + 4 m (\sqrt{3} e - 4 \o) \o + 4 l(l+2)).
\eea
As mentioned before, we want to consider the real roots of $V_5'(r)=0$ when $r>r_h^{(5)}$. It is equivalent to considering the real roots of $n_5(r)=0$ when $r>r_h^{(5)}$.
Now we make a change for the variable $r$ and let $z=r^2-m$, then the numerator of $V_5'(r)$ is rewritten as
\bea
n_5&=&z^5 (-3 - 4 m (2 \mu^2 + (\sqrt{3} e - 4 \o) \o) - 4 \l_l)+
2 m z^4 (3 e^2 m - 12 m \mu^2 - 16 \sqrt{3} e m \o + 40 m \o^2 -8 \l_l)\nn\\
&&+ 2 m^2 z^3 (15 e^2 m - 12 m \mu^2 - 44 \sqrt{3} e m \o + 80 m \o^2 -10 \l_l)\nn\\
&& + 2 m^3 z^2 (27 e^2 m - 56 \sqrt{3} e m \o -4 (m (\mu^2 - 20 \o^2) + \l_l))\nn\\
&&+ 2 m^5 z(21 e^2 - 34 \sqrt{3} e \o + 40 \o^2) +4 m^6 (3 e^2 - 4 \sqrt{3} e \o + 4 \o^2)\nn\\
&&=\sum_{i=0}^5 a_i z^i,
\eea
where
\bea
a_5&=&-3 -8m\mu^2- 4 m \o (\sqrt{3} e - 4 \o) - 4 \l_l,\nn\\
a_4&=&2 m (3 e^2 m - 12 m \mu^2 - 16 \sqrt{3} e m \o + 40 m \o^2 -8 \l_l),\nn\\
a_3&=&2 m^2 (15 e^2 m - 12 m \mu^2 - 44 \sqrt{3} e m \o + 80 m \o^2 -10 \l_l),\nn\\
a_2&=&2 m^3(27 e^2 m - 56 \sqrt{3} e m \o -4 (m (\mu^2 - 20 \o^2) + \l_l)),\nn\\
a_1&=& 2 m^5(21 e^2 - 34 \sqrt{3} e \o + 40 \o^2),\nn\\
a_0&=&4 m^6 (3 e^2 - 4 \sqrt{3} e \o + 4 \o^2)=4m^6(2\o-\sqrt{3}e)^2.
\eea
A real root of $n_5(r)=0 $ when $r>r_h^{(5)}$ corresponds to a positive root of $n_5(z)=0$ when $z>0$.
Next, we will prove that there is indeed only one positive real root of the equation $n_5(z)=0$ by analyzing the signs of the coefficients of $n_5(z)$.
It is obvious that $a_0>0$. Given the bound state condition and superradiance condition, $\o^2<\mu^2,\o<\frac{\sqrt{3}}{2} e$, it is easy to prove that $a_5<0$,
\bea
a_5&=&-3 -8m\mu^2- 4 m \o (\sqrt{3} e - 4 \o) - 4 \l_l\nn\\
&=&-3-8m(\mu^2-\o^2)-4m\o(\sqrt{3} e - 2 \o)-4\l_l<0.
\eea
The other coefficients can be rewritten as following
\bea
a_4&=&2 m (3 e^2 m - 12 m \mu^2 - 16 \sqrt{3} e m \o + 40 m \o^2 -8 \l_l)\nn\\
&=&2m(-8 \l_l+12m\o^2-12m\mu^2)+2m^2e^2(3- 16 \sqrt{3}t+28t^2),\\
a_3&=&2 m^2 (15 e^2 m - 12 m \mu^2 - 44 \sqrt{3} e m \o + 80 m \o^2 -10 \l_l)\nn\\
&=&2m^2(-10 \l_l+12m\o^2- 12 m \mu^2)+2m^3e^2(15- 44 \sqrt{3}t+68t^2),\\\label{a2}
a_2&=&2 m^3(27 e^2 m - 56 \sqrt{3} e m \o -4 (m (\mu^2 - 20 \o^2) + \l_l))\nn\\
&=&2 m^3(-4\l_l-4m\mu^2+4m\o^2)+2m^4e^2(27-56 \sqrt{3}t+76t^2),\\
a_1&=& 2 m^5(21 e^2 - 34 \sqrt{3} e \o + 40 \o^2)\nn\\
&=&2m^5e^2(21-34 \sqrt{3}t+40t^2),
\eea
where $t=\o/e$. For $a_1$, one can check $a_1<0$ is equivalent to
\bea\label{a11}
\frac{7\sqrt{3}}{20}<t<\frac{\sqrt{3}}{2},~or~0.61<t<0.87.
\eea
For $a_2$, the first term in \eqref{a2} is negative, so a sufficient condition for $a_2<0$ is that the second term in \eqref{a2} is also negative, i.e.
\bea\label{a22}
\frac{9\sqrt{3}}{38}<t<\frac{\sqrt{3}}{2},~or~0.41<t<0.87.
\eea
Using similar analysis, a sufficient condition for $a_3<0$ is
\bea\label{a33}
\frac{5\sqrt{3}}{34}<t<\frac{\sqrt{3}}{2},~or~0.25<t<0.87.
\eea
A sufficient condition for $a_4<0$ is
\bea\label{a44}
\frac{\sqrt{3}}{14}<t<\frac{\sqrt{3}}{2},~or~0.12<t<0.87.
\eea
Let us analyse the signs of the coefficients $\{a_5,a_4,a_3,a_2,a_1,a_0\}$ as the parameter $t$ varies from 0 to $\frac{\sqrt{3}}{2}$.
When $0.61<t<0.87$, according to the equations \eqref{a11} to \eqref{a44}, all $a_i (i=1,2,3,4)$ are negative, the signs of the six coefficients $\{a_5,a_4,a_3,a_2,a_1,a_0\}$ are $\{-----+\}$.
When $0.41<t<0.61$, $a_1$ is positive and all other coefficients are negative. The signs of the six coefficients are $\{----++\}$.
When $0.25<t<0.41$, the sign of $a_2$ is not fixed by the above analysis and the signs of the six coefficients may be $\{----++\}$ or $\{---+++\}$.
For further analysis, we consider following two differences,
\bea
\frac{a_2}{8m^3}-\frac{a_3}{20m^2}&=&\frac{ me^2}{20} (105- 192 \sqrt{3}t + 240 t^2+ 4 \mu^2/e^2),\\
\frac{a_3}{20m^2}-\frac{a_4}{16m}&=&\frac{3me^2}{40} (15- 32 \sqrt{3}t + 40 t^2 + 4 \mu^2/e^2 ).
\eea
One can check that
\bea\label{a2a3}
\frac{a_2}{8m^3}>\frac{a_3}{20m^2}
\eea
for $0<t<0.25$, and
\bea\label{a3a4}
\frac{a_3}{20m^2}>\frac{a_4}{16m}
\eea
for $0<t<0.12$.
This implies that if $a_3>0$, then $a_2>0$ for $0<t<0.25$, and that if $a_4>0$, then $a_2, a_3 >0$ for $0<t<0.12$.
When $0.12<t<0.25$, according to the equations \eqref{a11} to \eqref{a44}, $a_4<0$ and $a_1>0$. The signs of $a_2, a_3$ are not fixed. But given equation \eqref{a2a3}, the possible signs of $\{a_3, a_2\}$ are
$\{-,-\}, \{-,+\},\{+,+\}$. Then the signs of the six coefficients may be $\{----++\}$, $\{---+++\}$, or $\{--++++\}$.
When $0<t<0.12$, we can carry out a similar analysis to the above case and find that the signs of the six coefficients may be $\{----++\}$, $\{---+++\}$, $\{--++++\}$, or $\{-+++++\}$.
Based on the above analysis on the possible signs of coefficients $\{a_5,a_4,a_3,a_2,a_1,a_0\}$, we conclude that
the number of sign changes in the sequence of the six coefficients is always 1 for $0<t<\frac{\sqrt{3}}{2}$. According to
Descartes' rule of signs, the polynomial equation $n_5(z)=0$ has at most one positive real root, which means the effective potential has at most one extreme outside the horizon. And we already know that there is at least one extreme (maximum) for the effective potential outside the horizon based on the asymptotic analysis of $V_5(r)$. So there is only one maximum for the effective potential $V_5(r)$ outside the horizon and no potential well exists. The D=5 extremal RN black hole is superradiantly stable.
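This conclusion can also be spot-checked numerically (this is illustrative only, not part of the proof): evaluating $\{a_5,\ldots,a_0\}$ for sample parameters obeying the bound-state and superradiance conditions and applying the sign-change counter defined above always yields exactly one sign change, consistent with Table \ref{table}:
\begin{verbatim}
# Numerical spot check of the sign pattern of {a_5,...,a_0}; sample values
# satisfy omega < mu (bound state) and omega < sqrt(3)/2 e (superradiance).
from math import sqrt

m, e, mu, l = 1.0, 1.0, 0.9, 0
for t in (0.05, 0.2, 0.3, 0.5, 0.7):      # t = omega/e
    w, lam = t * e, l * (l + 2)
    a = [-3 - 8*m*mu**2 - 4*m*w*(sqrt(3)*e - 4*w) - 4*lam,
         2*m*(3*e**2*m - 12*m*mu**2 - 16*sqrt(3)*e*m*w + 40*m*w**2 - 8*lam),
         2*m**2*(15*e**2*m - 12*m*mu**2 - 44*sqrt(3)*e*m*w + 80*m*w**2 - 10*lam),
         2*m**3*(27*e**2*m - 56*sqrt(3)*e*m*w - 4*(m*(mu**2 - 20*w**2) + lam)),
         2*m**5*(21*e**2 - 34*sqrt(3)*e*w + 40*w**2),
         4*m**6*(3*e**2 - 4*sqrt(3)*e*w + 4*w**2)]
    print(t, sign_changes(a))             # prints 1 for every sample
\end{verbatim}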
\section{D=6 extremal RN black holes}
For a D=6 extremal RN black hole, the event horizon is $r^{(6)}_h=m^{1/3}$.
The explicit expression of superradiance condition \eqref{sup-con-extr} is
\bea\label{sup-con-6D}
\o<\sqrt{\frac{2}{3}}\, e.
\eea
The denominator of the derivative of effective potential $V_6'(r)$ is $2 r^3(r^3-m)^5$. The numerator of $V_6'(r)$ is
\bea\nn
n_6=-4(\l_l+2)r^{15}+4 (-3 m \mu^2 - \sqrt{6} e m \o + 6 m \o^2)r^{14}+40m r^{12}\\\nn
+4 (2 e^2 m^2 + 6 m^2 \mu^2 - 3 \sqrt{6} e m^2 \o) r^{11}+4 m^2(-20 + 3 \l_l)r^9\\
+4 m^3(2 e^2 - 3 \mu^2)r^8+8 m^3(10 -\l_l)r^6- 40 m^4 r^3+8 m^5.
\eea
One can check that when $r$ goes to infinity, the asymptotic behavior of $V_6'(r)$ is
\bea\label{6Asymp}
V_6'\rightarrow \frac{-2(\l_l+2)}{r^3}+{\cal O}( \frac{1}{r^4}),
\eea
which is negative. Thus, there is no trapping potential well near spatial infinity. This result is consistent with the general discussion above.
Now we change the radial variable from $r$ to $z=r-r^{(6)}_h=r-m^{1/3}$. The numerator of the derivative of the effective potential, $n_6$, is rewritten as
\bea\label{n6z}
&&n^{(6)}(z)=(-8 - 4 \l_l) z^{15}+4(-30 m^{1/3} - 3 m \mu^2 - \sqrt{6} e m \o + 6 m \o^2 -
15 m^{1/3} \l_l)z^{14}\nn\\
&+&4(-210 m^{2/3} - 42 m^{4/3} \mu^2 - 14 \sqrt{6} e m^{4/3} \o +
84 m^{4/3} \o^2 - 105 m^{2/3} \l_l)z^{13} \nn\\
&+&4(-900 m - 273 m^{5/3} \mu^2 - 91 \sqrt{6} e m^{5/3} \o +
546 m^{5/3} \o^2 - 455 m \l_l)z^{12}\nn\\
&+&4(-2610 m^{4/3} + 2 e^2 m^2 - 1086 m^2 \mu^2 - 367 \sqrt{6} e m^2 \o +
2184 m^2 \o^2 - 1365 m^{4/3} \l_l)z^{11}\nn\\
&+&4(-5346 m^{5/3} + 22 e^2 m^{7/3} - 2937 m^{7/3} \mu^2 -
1034 \sqrt{6} e m^{7/3} \o + 6006 m^{7/3} \o^2 - 3003 m^{5/3} \l_l)z^{10}\nn\\
&+&4(-7830 m^2 + 110 e^2 m^{8/3} - 5676 m^{8/3} \mu^2 -
2167 \sqrt{6} e m^{8/3} \o + 12012 m^{8/3} \o^2 - 5002 m^2 \l_l)z^{9}\nn\\
&+&4(-8100 m^{7/3} + 332 e^2 m^3 - 8022 m^3 \mu^2 - 3498 \sqrt{6} e m^3 \o +
18018 m^3 \o^2 - 6408 m^{7/3} \l_l)z^{8}\nn\\
&+&4(-5670 m^{8/3} + 676 e^2 m^{10/3} - 8340 m^{10/3} \mu^2 -
4422 \sqrt{6}e m^{10/3} \o + 20592 m^{10/3} \o^2 -
6327 m^{8/3} \l_l)z^{7}\nn\\
& +&4(-2430 m^3 + 980 e^2 m^{11/3} - 6321 m^{11/3} \mu^2 -
4389 \sqrt{6} e m^{11/3} \o + 18018 m^{11/3} \o^2 - 4755 m^3 \l_l)z^{6}\nn\\
&+&4(-486 m^{10/3} + 1036 e^2 m^4 - 3402 m^4 \mu^2 - 3388 \sqrt{6} e m^4 \o +
12012 m^4 \o^2 - 2637 m^{10/3} \l_l)z^{5}\nn\\
& +&4(800 e^2 m^{13/3} - 1233 m^{13/3} \mu^2 - 1991 \sqrt{6} e m^{13/3} \o +
6006 m^{13/3} \o^2 - 1017 m^{11/3} \l_l)z^{4}\nn\\
& +&4(442 e^2 m^{14/3} - 270 m^{14/3} \mu^2 - 859 \sqrt{6} e m^{14/3} \o +
2184 m^{14/3} \o^2 - 243 m^4 \l_l)z^{3}\nn\\
&+&4(166 e^2 m^5 - 27 m^5 \mu^2 - 256 \sqrt{6} e m^5 \o + 546 m^5 \o^2 -
27 m^{13/3} \l_l)z^{2}\nn\\
& +&4(38 e^2 m^{16/3} - 47 \sqrt{6} e m^{16/3} \o + 84 m^{16/3} \o^2)z+8 m^{17/3}(\sqrt{2}e-\sqrt{3}\o)^2\\
&\equiv& \sum_{i=0}^{15} b_i z^i.
\eea
In the following, we will prove that the polynomial equation $n_6(z)=0$ has at most one positive real root.
According to Descartes' rule of signs, we need to show that the number of sign changes in the sequence of the polynomial's sixteen coefficients, $\{b_{15}, b_{14}, ..., b_0\}$, is 1.
It is easy to see that
\bea\label{b0}
b_0=8 m^{17/3}(\sqrt{2}e-\sqrt{3}\o)^2>0,~~b_{15}=-8 - 4 \l_l<0.
\eea
Let's check the sign of $b_{14}$. We rewrite $b_{14}$ as following
\bea
b_{14}&=&4(-30 m^{1/3} - 3 m \mu^2 - \sqrt{6} e m \o + 6 m \o^2 -
15 m^{1/3} \l_l)\nn\\
&=&4[-30 m^{1/3} - 15 m^{1/3} \l_l+(- 3 m \mu^2+3m\o^2)+(3m\o^2 - \sqrt{6} e m \o)].
\eea
Taking into account the bound state and superradiance conditions, $\o<\mu,~\o<e\Phi_h=\sqrt{2/3}e$, the two terms in parentheses of the above equation are negative and then
\bea
b_{14}<0.
\eea
Similarly, we can easily prove
\bea
b_{13}<0,~~b_{12}<0.
\eea
However, it is not easy to determine the signs of $b_1, b_2, \ldots, b_{11}$ in the same way.
Let's study the relation between the signs of $b_1$ and $ b_2$. These two coefficients can be read out directly from \eqref{n6z},
\bea
b_1&=&4(38 e^2 m^{16/3} - 47 \sqrt{6} e m^{16/3} \o + 84 m^{16/3} \o^2),\\
b_2&=&4(166 e^2 m^5 - 27 m^5 \mu^2 - 256 \sqrt{6} e m^5 \o + 546 m^5 \o^2 -
27 m^{13/3} \l_l).
\eea
Define two new coefficients $b'_1, b'_2$,
\bea
b'_1=\frac{b_1}{4*84 m^{16/3}},~b'_2=\frac{b_2}{4*519 m^5}.
\eea
The difference between $b'_1$ and $ b'_2$ is
\bea
b'_1-b'_2=\frac{321 e^2}{2422}+\frac{256}{173} \sqrt{\frac{2}{3}} e \o-\frac{47 e \o}{14 \sqrt{6}}+\frac{9 \lambda_l }{173 m^{2/3}}+\frac{9 }{173}({\mu}^2- \o^2).
\eea
Given the bound state condition, $\o^2<\mu^2$, the last term in the above equation is positive.
Given the superradiance condition \eqref{sup-con-6D}, one can check the sum of the first three terms in the above equation is positive.
It is obvious that the $\l_l$ term in the above equation is also positive. So we have
\bea
b'_1-b'_2>0.
\eea
The possible signs of $\{b_2,b_1\}$ is $\{-,-\},\{-,+\},\{+,+\}$. The sign of $b_1$ is not smaller than the sign of $b_2$, i.e.
\bea\label{b1b2}
\text{sign}(b_2)\leqslant \text{sign}(b_1).
\eea
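The positivity of the sum of the first three terms can also be verified symbolically: as the following sympy fragment checks, these terms factor as a positive multiple of $(\sqrt{2/3}\,e-\omega)$, which is positive under the superradiance condition \eqref{sup-con-6D}:
\begin{verbatim}
# Symbolic check that the first three terms of b'_1 - b'_2 equal
# (321 sqrt(6)/4844) e (sqrt(2/3) e - omega) > 0  for  omega < sqrt(2/3) e.
import sympy as sp

e, w = sp.symbols('e omega', positive=True)
first3 = (sp.Rational(321, 2422)*e**2
          + sp.Rational(256, 173)*sp.sqrt(sp.Rational(2, 3))*e*w
          - sp.Rational(47, 14)*e*w/sp.sqrt(6))
factored = sp.Rational(321, 4844)*sp.sqrt(6)*e*(sp.sqrt(sp.Rational(2, 3))*e - w)
print(sp.simplify(first3 - factored))    # -> 0
\end{verbatim}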
Next, let's study the relation between the signs of $b_2$ and $ b_3$. The coefficient $b_3$ can be read out directly from \eqref{n6z},
\bea
b_3=4(442 e^2 m^{14/3} - 270 m^{14/3} \mu^2 - 859 \sqrt{6} e m^{14/3} \o +
2184 m^{14/3} \o^2 - 243 m^4 \l_l).
\eea
Define a new coefficient $b'_3$,
\bea
b'_3=\frac{b_3}{7656 m^{14/3}}.
\eea
The difference between $b'_2$ and $b'_3$ is
\bea
b'_2-b'_3=\frac{4907 e^2}{55187}-\frac{256}{173} \sqrt{\frac{2}{3}} e \o+\frac{859 e \o}{319 \sqrt{6}}+\frac{8271 \lambda_l }{110374 m^{2/3}}+\frac{4914}{55187}( {\mu}^2-\o^2).
\eea
Given the bound state condition, $\o^2<\mu^2$, the last term in the above equation is positive.
Given the superradiance condition \eqref{sup-con-6D}, one can check the sum of the first three terms in the above equation is positive.
It is obvious that the $\l_l$ term in the above equation is also positive. So we have
\bea
b'_2-b'_3>0.
\eea
The possible signs of $\{b_3,b_2\}$ is $\{-,-\},\{-,+\},\{+,+\}$. The sign of $b_2$ is not smaller than the sign of $b_3$, i.e.
\bea\label{b2b3}
\text{sign}(b_3)\leqslant \text{sign}(b_2).
\eea
In order to study the relation between the signs of $b_3$ and $b_4$, define a new coefficient
\bea
b'_4=\frac{b_4}{4*4473 m^{13/3}}.
\eea
The difference between $b'_3$ and $b'_4$ is
\bea
b'_3-b'_4=\frac{32137 e^2}{507529}+\frac{1991 \sqrt{\frac{2}{3}} e \o}{1591}-\frac{859 e \o}{319 \sqrt{6}}+\frac{87411 \lambda_l }{1015058 m^{2/3}}+\frac{59514 }{507529}(\mu^2-\o^2).
\eea
We can prove the above difference is positive with the same method as previous cases. So the possible signs of $\{b_4,b_3\}$ is $\{-,-\},\{-,+\},\{+,+\}$, i.e.
\bea\label{b3b4}
\text{sign}(b_4)\leqslant \text{sign}(b_3).
\eea
Similarly, define the following new coefficients
\bea
b'_5=\frac{b_5}{4*8610 m^4}, b'_6=\frac{b_6}{4*11697 m^{11/3}}, b'_7=\frac{b_7}{4*12252 m^{10/3}},\\ b'_8=\frac{b_8}{39984 m^3}, b'_9=\frac{b_9}{25344 m^{8/3}},
b'_{10}=\frac{b_{10}}{12276 m^{7/3}}, b'_{11}=\frac{b_{11}}{4392 m^2}.
\eea
Given the bound state condition, $\o^2<\mu^2$, and the superradiance condition \eqref{sup-con-6D}, we find that [see the Appendix]
\bea
b'_4>b'_5>b'_6>b'_7>b'_8>b'_9>b'_{10}>b'_{11}.
\eea
So we have
\bea\label{b11}
\text{sign}(b_{11})\leqslant \text{sign}(b_{10})\leqslant \text{sign}(b_9)\leqslant \text{sign}(b_8)
\leqslant \text{sign}(b_7) \leqslant \text{sign}(b_6)\leqslant \text{sign}(b_5)\leqslant \text{sign}(b_4).
\eea
With the results \eqref{b1b2}, \eqref{b2b3}, \eqref{b3b4},\eqref{b11}, the possible signs of $\{b_{11},...,b_1\}$ may be
all plus, $\{+,+,..,+,+\}$, all minus $\{-,-,..,-,-\}$, or $\{-,...,-,+,...,+\}$.
The signs of the sixteen coefficients of $n_6(z)$, $\{b_{15},b_{14},b_{13},...,b_{1},b_0\}$, are
$\{-,-,-,-,*,*,...,*,+\}$. The $*$ parts are the signs of $\{b_{11},...,b_1\}$. It is obvious that for all possible signs of $\{b_{11},...,b_1\}$, the number of the sign changes in the sequence of the sixteen coefficients is always 1.
So there is at most one extreme for the effective potential $V_6(r)$ outside the horizon and together with our
previous asymptotic analysis of $V_6(r)$, we conclude that there is only one maximum for the effective potential $V_6(r)$ outside the horizon and no potential well exists. The D=6 extremal RN black hole is superradiantly stable.
\section{Conclusion and discussion}
In this paper, the superradiant stability of D=5,6 extremal RN black holes under charged massive scalar perturbation is investigated. A new method is developed that relies mainly on Descartes' rule of signs for polynomial equations. In the $D$=5 case, based on the asymptotic analysis of the effective potential $V(r)$ \eqref{asymp}, we know there is at least one extreme for the effective potential outside the horizon. With the new method, we prove that the derivative of the effective potential has at most one root outside the horizon (see Table \ref{table}). There is only one maximum for the effective potential outside the horizon and no potential well exists, so the five-dimensional extremal RN black hole is superradiantly stable. In the $D$=6 case, the asymptotic analysis \eqref{6Asymp} shows that the effective potential has at least one extreme outside the horizon. According to the sign relations \eqref{b1b2}, \eqref{b2b3}, \eqref{b3b4}, \eqref{b11} and Descartes' rule of signs, we know there is at most one extreme for the effective potential outside the horizon. There is only one maximum for the effective potential outside the horizon and no potential well exists, so the six-dimensional extremal RN black hole is also superradiantly stable.
In the $D$=6 case, it is quite unexpected that there are such interesting sign relations \eqref{b1b2}, \eqref{b2b3}, \eqref{b3b4}, \eqref{b11} between the complicated coefficients in $n_6(z)$. This does not seem to be a coincidence. Motivated by this observation, we have also completed part of the proof for the $D$=7 case and found similar sign relations. We therefore conjecture that all $D$-dimensional ($D\geqslant 5$) extremal RN black holes are superradiantly stable under a charged massive scalar perturbation that is minimally coupled to the black hole. It will be interesting to give a general proof of this, although it is not an easy task.
\begin{acknowledgements}
This work is partially supported by Guangdong Major Project of Basic and Applied Basic Research (No. 2020B0301030008), Science and Technology Program of Guangzhou (No. 2019050001) and Natural Science Foundation of Guangdong Province (No. 2020A1515010388).
\end{acknowledgements}
\section{Introduction}\label{sec: Introduction}
The problem of controlling the output of a system so as to achieve asymptotic tracking of prescribed trajectories is one of the most fundamental problems in control theory. In the general context of finite-dimensional linear time-invariant (LTI) control systems, the problem of setpoint regulation control is very classical and has been widely investigated. One possible way to solve this problem is based on the augmentation of the state-space representation of the plant with an integral component of the tracking error and the use of the separation principle by exploiting separately a Luenberger observer (which allows the estimation of the state based on the measurement only) and a stabilizing full-state feedback (see, e.g., \cite{hespanha2018linear,antsaklis2006linear}). Even if this approach has reached a very high level of maturity for finite-dimensional control systems, its possible extension to infinite-dimensional systems, such as those considered in this paper, is still an open problem.
Infinite-dimensional systems emerge in many practical applications due to the occurrence of delays, reaction-diffusion dynamics, or even flexible behavior (see, e.g., \cite{meurer2012control,morris2020,krstic2008boundary} for introductory textbooks on dedicated control theory for infinite-dimensional systems). While many efficient control design methods have been reported for the stabilization of distributed parameter systems, very few have been extended to the problem of output regulation. The main reason is that the techniques developed for finite-dimensional LTI systems cannot be easily generalized to infinite-dimensional plants. For instance, the frequency domain approach has been generalized to the infinite-dimensional setting, but it requires dealing with an infinite number of poles, yielding an infinite-dimensional pole allocation problem (whereas finite-dimensional pole allocation is one rationale behind the frequency domain approach for the regulation of finite-dimensional LTI systems). The state-space approach is followed in this work.
We propose in this paper, for the first time, an output feedback control design procedure to achieve the setpoint regulation control of a reaction-diffusion system by means of a finite-dimensional observer coupled with a PI controller. The considered reaction-diffusion plant, which might be unstable, is modeled by a Sturm-Liouville operator as those classically introduced in the context of parabolic partial differential equations. Note that the case of PI regulation of this system by means of a state feedback was reported in~\cite{lhachemi2020pi} (see also~\cite{pohjolainen1982robust,xu1995robust,dos2008boundary,trinh2017design,terrand2018regulation,barreau2019practical,terrand2019adding,coron2019pi} for various approaches to PI control design for different types of PDEs). Here we go beyond by proposing an output feedback PI control strategy. Even if the proposed procedure also applies to bounded control inputs and bounded observations, we focus the presentation on boundary controls and boundary measurements. This is because these configurations are the most interesting in terms of practical applications. They are also the most challenging since they require dealing with unbounded control and observation operators (see, e.g., \cite{curtain2020introduction} for further explanations). Several cases of the input-to-output map are investigated. This includes in particular Dirichlet control inputs (which can easily be extended to Neumann control inputs, as discussed in the conclusion) along with Dirichlet and/or Neumann to-be-regulated outputs and measured outputs. We also show that our procedure can be used to regulate a system output that is distinct from the measured one. Therefore, our approach gives a complete framework to study all the associated input-to-output maps. The proposed control design strategy is based on the introduction of an adequate integral component for setpoint regulation purposes and an appropriate coupling of a state feedback with a finite-dimensional observer.
It is worth noting that the design of finite-dimensional observer-based controllers for distributed parameter plants is a challenging task due to the fact that the separation principle, which is classically used for finite-dimensional systems, does not apply to infinite-dimensional systems~\cite{curtain1982finite,balas1988finite,harkort2011finite}. A constructive method for solving this stabilization problem for a reaction-diffusion system was reported in~\cite{katz2020constructive} in the case where either the control or the observation operator is bounded. This approach was extended in~\cite{lhachemi2020finite} to the case where both the control and observation operators are unbounded, including both Dirichlet and Neumann settings. The present work, taking advantage of~\cite{lhachemi2020finite}, goes beyond the simple problem of closed-loop stabilization by embracing the issue of output setpoint regulation control. Our approach is based on Lyapunov direct methods. Let us emphasize that when Dirichlet/Neumann boundary conditions are considered for either the control input or the to-be-regulated output, the solutions need to be sufficiently regular. However, our conditions do not require more regularity than the one required by the existence results of classical solutions. Our main results take the form of sets of explicit sufficient conditions ensuring both the stability and the regulation control of the closed-loop plant. We show that these sufficient conditions are always feasible when selecting the order of the finite-dimensional observer large enough. Therefore, we show in a constructive manner that the setpoint regulation control of a reaction-diffusion plant can always be achieved by the coupling of an integral component, a finite-dimensional observer, and a state feedback.
The paper is organized by considering successively different input-output maps for the reaction-diffusion equation depending on the selected boundary measured output, the to-be-regulated output, and the control input. After recalling classical notations and properties of Sturm-Liouville operators in Section~\ref{sec: preliminaries}, the case of a Dirichlet observation and a Dirichlet control input is considered in Section~\ref{sec: Dirichlet measurement and regulation control}. Then the case of a Neumann measurement and a Dirichlet control input is considered in Section~\ref{sec: Neuman}. While the to-be-regulated output and the measured output coincide in the latter two sections, a crossed configuration is considered in Section~\ref{sec: crossed configuration}. The regulation problem is solved for a Dirichlet measured output, a Neumann to-be-regulated output, and a Dirichlet control input. This final result completes the picture and gives a full study of the different cases of the input-to-output map for the considered class of distributed parameter systems. Some numerical simulations are given in Section \ref{sec: num} for this final result. Section~\ref{sec: conclusion} collects some concluding remarks.
\section{Notation and properties}\label{sec: preliminaries}
Spaces $\mathbb{R}^n$ are endowed with the Euclidean norm denoted by $\Vert\cdot\Vert$. The associated induced norms of matrices are also denoted by $\Vert\cdot\Vert$. Given two vectors $X$ and $Y$, $ \mathrm{col} (X,Y)$ denotes the vector $[X^\top,Y^\top]^\top$. $L^2(0,1)$ stands for the space of square integrable functions on $(0,1)$ and is endowed with the inner product $\langle f , g \rangle = \int_0^1 f(x) g(x) \,\mathrm{d}x$ with associated norm denoted by $\Vert \cdot \Vert_{L^2}$. For an integer $m \geq 1$, the $m$-order Sobolev space is denoted by $H^m(0,1)$ and is endowed with its usual norm denoted by $\Vert \cdot \Vert_{H^m}$. For a symmetric matrix $P \in\mathbb{R}^{n \times n}$, $P \succeq 0$ (resp. $P \succ 0$) means that $P$ is positive semi-definite (resp. positive definite) while $\lambda_M(P)$ (resp. $\lambda_m(P)$) denotes its maximal (resp. minimal) eigenvalue.
Let $p \in \mathcal{C}^1([0,1])$ and $q \in \mathcal{C}^0([0,1])$ with $p > 0$ and $q \geq 0$. Let the Sturm-Liouville operator $\mathcal{A} : D(\mathcal{A}) \subset L^2(0,1) \rightarrow L^2(0,1)$ be defined by $\mathcal{A}f = - (pf')' + q f$ on the domain $D(\mathcal{A}) \subset L^2(0,1)$ given by either $D(\mathcal{A}) = \{ f \in H^2(0,1) \,:\, f(0)=f(1)=0 \}$ or $D(\mathcal{A}) = \{ f \in H^2(0,1) \,:\, f'(0)=f(1)=0 \}$. The eigenvalues $\lambda_n$, $n \geq 1$, of $\mathcal{A}$ are simple, nonnegative, and form an increasing sequence with $\lambda_n \rightarrow + \infty$ as $n \rightarrow + \infty$. Moreover, the associated unit eigenvectors $\phi_n \in L^2(0,1)$ form a Hilbert basis. We also have $D(\mathcal{A}) = \{ f \in L^2(0,1) \,:\, \sum_{n\geq 1} \vert \lambda_n \vert ^2 \vert \left< f , \phi_n \right> \vert^2 < + \infty \}$ and $\mathcal{A}f = \sum_{n \geq 1} \lambda_n \left< f , \phi_n \right> \phi_n$.
Let $p_*,p^*,q^* \in \mathbb{R}$ be such that $0 < p_* \leq p(x) \leq p^*$ and $0 \leq q(x) \leq q^*$ for all $x \in [0,1]$, then it holds~\cite{orlov2017general}:
\begin{equation}\label{eq: estimation lambda_n}
0 \leq \pi^2 (n-1)^2 p_* \leq \lambda_n \leq \pi^2 n^2 p^* + q^*
\end{equation}
for all $n \geq 1$. Assuming further that $p \in \mathcal{C}^2([0,1])$, we have for any $x \in \{0,1\}$ that $\phi_n (x) = O(1)$ and $\phi_n' (x) = O(\sqrt{\lambda_n})$ as $n \rightarrow + \infty$ \cite{orlov2017general}. Finally, one can check that, for all $f \in D(\mathcal{A})$,
\begin{align}
\sum_{n \geq 1} \lambda_n \left< f , \phi_n \right>^2
& = \left< \mathcal{A}f , f \right>
= \int_0^1 p (f')^2 + q f^2 \,\mathrm{d}x . \label{eq: inner product Af and f}
\end{align}
This implies that, for any $f \in D(\mathcal{A})$, the series expansion $f = \sum_{n \geq 1} \left< f , \phi_n \right> \phi_n$ holds in $H^1(0,1)$ norm. Then, using the definition of $\mathcal{A}$ and the fact that it is a Riesz-spectral operator, we obtain that the latter series expansion holds in $H^2(0,1)$ norm. Due to the continuous embedding $H^1(0,1) \subset L^{\infty}(0,1)$, we obtain that $f(x) = \sum_{n \geq 1} \left< f , \phi_n \right> \phi_n(x)$ and $f'(x) = \sum_{n \geq 1} \left< f , \phi_n \right> \phi_n'(x)$ for all $x \in [0,1]$.
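For orientation, these facts can be checked by hand in the constant-coefficient case. A small illustrative script (assuming $p \equiv 1$ and $q \equiv 0$ with $D(\mathcal{A}) = \{ f \in H^2(0,1) \,:\, f(0)=f(1)=0 \}$, for which $\lambda_n = (n\pi)^2$ and $\phi_n(x) = \sqrt{2}\sin(n\pi x)$):
\begin{verbatim}
import numpy as np

# Constant-coefficient case: lambda_n = (n*pi)^2, phi_n = sqrt(2)*sin(n*pi*x),
# so p_* = p^* = 1 and q^* = 0 in the eigenvalue bound above.
for n in range(1, 9):
    lam = (n * np.pi) ** 2
    assert np.pi ** 2 * (n - 1) ** 2 <= lam <= np.pi ** 2 * n ** 2
    dphi0 = np.sqrt(2.0) * n * np.pi        # phi_n'(0)
    print(n, dphi0 / np.sqrt(lam))          # constant: phi_n'(0) = O(sqrt(lambda_n))
\end{verbatim}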
\section{Dirichlet measurement and regulation control}\label{sec: Dirichlet measurement and regulation control}
We consider the reaction-diffusion system with Dirichlet boundary observation described for $t > 0$ and $x \in (0,1)$ by
\begin{subequations}\label{eq: dirichlet boundary measurement - RD system}
\begin{align}
z_t(t,x) & = \left( p(x) z_x(t,x) \right)_x + (q_c - q(x)) z(t,x) \label{eq: dirichlet boundary measurement - RD system - 1} \\
z_x(t,0) & = 0 , \quad z(t,1) = u(t) \label{eq: dirichlet boundary measurement - RD system - 2} \\
z(0,x) & = z_0(x) \label{eq: dirichlet boundary measurement - RD system - 3} \\
y(t) & = z(t,0) \label{eq: dirichlet boundary measurement - RD system - 4}
\end{align}
\end{subequations}
in the case $p \in \mathcal{C}^2([0,1])$. Here $q_c \in\mathbb{R}$ is a constant, $u(t) \in\mathbb{R}$ is the command input, $y(t) \in\mathbb{R}$ is the boundary measurement, $z_0 \in L^2(0,1)$ is the initial condition, and $z(t,\cdot) \in L^2(0,1)$ is the state. Our objective is to design a finite-dimensional observer and an estimated-state feedback achieving the setpoint regulation of $y(t)$ to some prescribed reference signal $r(t)$.
\subsection{Spectral reduction}
We introduce the change of variable
\begin{equation}\label{eq: change of variable}
w(t,x) = z(t,x) - x^2 u(t) .
\end{equation}
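A direct computation confirms that this lifting homogenizes the boundary conditions: since $w_x(t,x) = z_x(t,x) - 2x\, u(t)$, the boundary conditions (\ref{eq: dirichlet boundary measurement - RD system - 2}) give
\begin{equation*}
w_x(t,0) = z_x(t,0) = 0 , \qquad w(t,1) = z(t,1) - u(t) = 0 ,
\end{equation*}
while substituting (\ref{eq: change of variable}) into (\ref{eq: dirichlet boundary measurement - RD system - 1}) generates the source terms $a(x) u(t) + b(x) \dot{u}(t)$ appearing below.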
Then we have
\begin{subequations}\label{eq: homogeneous RD system}
\begin{align}
& w_t(t,x) = ( p(x) w_x(t,x) )_x + (q_c - q(x)) w(t,x) \label{eq: homogeneous RD system - 1} \\
& \phantom{w_t(t,x) =}\; + a(x) u(t) + b(x) \dot{u}(t) \nonumber \\
& w_x(t,0) = 0 , \quad w(t,1) = 0 \label{eq: homogeneous RD system - 2} \\
& w(0,x) = w_0(x) \label{eq: homogeneous RD system - 3} \\
& \tilde{y}(t) = w(t,0) \label{eq: homogeneous RD system - 4}
\end{align}
\end{subequations}
with $a,b \in L^2(0,1)$ defined by $a(x) = 2p(x) + 2xp'(x) + (q_c-q(x))x^2$ and $b(x) = -x^2$, respectively, $\tilde{y}(t) = y(t)$, and $w_0(x) = z_0(x) - x^2 u(0)$. Introducing the auxiliary command input $v(t) = \dot{u}(t)$, we infer that
\begin{subequations}\label{eq: homogeneous RD system - abstract form}
\begin{align}
\dot{u}(t) & = v(t) \label{eq: homogeneous RD system - abstract form - auxiliary input v} \\
\dfrac{\mathrm{d} w}{\mathrm{d} t}(t,\cdot) & = - \mathcal{A} w(t,\cdot) + q_c w(t,\cdot) + a u(t) + b v(t)
\end{align}
\end{subequations}
with $D(\mathcal{A}) = \{ f \in H^2(0,1) \,:\, f'(0)=f(1)=0 \}$. We introduce the coefficients of projection $w_n(t) = \left< w(t,\cdot) , \phi_n \right>$, $a_n = \left< a , \phi_n \right>$, and $b_n = \left< b , \phi_n \right>$. Considering classical solutions associated with any $z_0 \in H^2(0,1)$ and any $u(0) \in \mathbb{R}$ such that $z_0'(0)=0$ and $z_0(1) = u(0)$ (their existence for the upcoming closed-loop dynamics is an immediate consequence of \cite[Chap.~6, Thm.~1.7]{pazy2012semigroups}), we have $w(t,\cdot) \in D(\mathcal{A})$ for all $t \geq 0$ and we infer that
\begin{subequations}\label{eq: homogeneous RD system - spectral reduction}
\begin{align}
\dot{u}(t) & = v(t) \label{eq: homogeneous RD system - spectral reduction - 1} \\
\dot{w}_n(t) & = ( -\lambda_n + q_c ) w_n(t) + a_n u(t) + b_n v(t) \, , \quad n \geq 1 \label{eq: homogeneous RD system - spectral reduction - 2} \\
\tilde{y}(t) & = \sum_{i \geq 1} \phi_i(0) w_i(t) \label{eq: homogeneous RD system - spectral reduction - 3}
\end{align}
\end{subequations}
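Before turning to the control design, note that (\ref{eq: homogeneous RD system - spectral reduction}) is directly amenable to numerical simulation by truncation. A minimal sketch (assuming constant coefficients $p \equiv 1$, $q \equiv 0$, for which $\lambda_n = ((n-1/2)\pi)^2$ and $\phi_n(x) = \sqrt{2}\cos((n-1/2)\pi x)$, and an arbitrary open-loop test input):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, solve_ivp

q_c, N = 5.0, 20
k = lambda n: (n - 0.5) * np.pi
lam = np.array([k(n) ** 2 for n in range(1, N + 1)])
phi = lambda n, x: np.sqrt(2.0) * np.cos(k(n) * x)
a_n = np.array([quad(lambda x: (2.0 + q_c * x ** 2) * phi(n, x), 0, 1)[0]
                for n in range(1, N + 1)])
b_n = np.array([quad(lambda x: -x ** 2 * phi(n, x), 0, 1)[0]
                for n in range(1, N + 1)])

u, v = np.sin, np.cos      # test input with v = du/dt; u(0) = 0 matches w(0) = 0

def rhs(t, w):             # truncated modal ODEs
    return (-lam + q_c) * w + a_n * u(t) + b_n * v(t)

sol = solve_ivp(rhs, [0.0, 2.0], np.zeros(N), max_step=1e-2)
y_tilde = np.sqrt(2.0) * np.ones(N) @ sol.y    # phi_n(0) = sqrt(2)
\end{verbatim}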
\subsection{Control design}
Let $N_0 \geq 1$ and $\delta > 0$ be given such that $- \lambda_n + q_c < -\delta < 0$ for all $n \geq N_0 +1$. Let $N \geq N_0 + 1$ be arbitrary. Inspired by~\cite{katz2020constructive}, we design an observer to estimate the first $N$ modes of the plant while the state feedback is performed on the first $N_0$ modes. Specifically, introducing $W^{N_0}(t) = \begin{bmatrix} w_{1}(t) & \ldots & w_{N_0}(t) \end{bmatrix}^\top$, $A_0 = \mathrm{diag}(- \lambda_{1} + q_c , \ldots , - \lambda_{N_0} + q_c)$, $B_{0,a} = \begin{bmatrix} a_1 & \ldots & a_{N_0} \end{bmatrix}^\top$, and $B_{0,b} = \begin{bmatrix} b_1 & \ldots & b_{N_0} \end{bmatrix}^\top$, we have
\begin{equation}\label{eq: W^N0 dynamics - 1}
\dot{W}^{N_0}(t) = A_0 W^{N_0}(t) + B_{0,a} u(t) + B_{0,b} v(t) .
\end{equation}
Our objective is to introduce an integral component to achieve the setpoint regulation control of the system output $y(t)$. To do so, proceeding as in~\cite{lhachemi2020pi}, we consider first the case of the following classical integral component:
$\dot{z}_i(t) = y(t) - r(t) = \sum_{n \geq 1} \phi_n(0) w_n(t) - r(t)$. Then, introducing $\xi_p(t) = z_i(t) - \sum_{n \geq N_0 +1} \frac{\phi_n(0)}{-\lambda_n +q_c} w_n(t)$, we obtain that $\dot{\xi}_p(t)$
= \sum_{n = 1}^{N_0} \phi_n(0) w_n(t) + \alpha_0 u(t) + \beta_0 v(t) - r(t)$ with
\begin{align}\label{eq: def alpha_0 and beta_0}
\alpha_0 = - \sum_{n \geq N_0 + 1} \frac{a_n \phi_n(0)}{-\lambda_n + q_c} , \quad \beta_0 = - \sum_{n \geq N_0 + 1} \frac{b_n \phi_n(0)}{-\lambda_n + q_c} .
\end{align}
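For completeness, the latter expression of $\dot{\xi}_p$ is obtained by combining the definition of $\dot{z}_i$ with (\ref{eq: homogeneous RD system - spectral reduction - 2}):
\begin{align*}
\dot{\xi}_p(t) & = \sum_{n \geq 1} \phi_n(0) w_n(t) - r(t) - \sum_{n \geq N_0 +1} \frac{\phi_n(0)}{-\lambda_n +q_c} \dot{w}_n(t) \\
& = \sum_{n = 1}^{N_0} \phi_n(0) w_n(t) - \sum_{n \geq N_0 +1} \frac{\phi_n(0)}{-\lambda_n +q_c} \left( a_n u(t) + b_n v(t) \right) - r(t) ,
\end{align*}
which coincides with the announced identity in view of (\ref{eq: def alpha_0 and beta_0}).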
Since $w_n(t)$ are not measured, we replace them by their estimated version $\hat{w}_n(t)$ which will be described below. Hence, the employed integral component takes the form:
\begin{align}\label{eq: dirichlet boundary measurement - integral component}
\dot{\xi}(t)
& = \sum_{n = 1}^{N_0} \phi_n(0) \hat{w}_n(t) + \alpha_0 u(t) + \beta_0 v(t) - r(t) .
\end{align}
We now define for $1 \leq n \leq N$ the observer dynamics:
\begin{align}
\dot{\hat{w}}_n (t)
& = ( -\lambda_n + q_c ) \hat{w}_n(t) + a_n u(t) + b_n v(t) \nonumber \\
& \phantom{=}\; - l_n \left( \sum_{i=1}^N \phi_i(0) \hat{w}_i(t) - \alpha_1 u(t) - \tilde{y}(t) \right) \label{eq: dirichlet boundary measurement - observer dynamics - 1}
\end{align}
with
\begin{equation}\label{eq: dirichlet boundary measurement - def alpha_1}
\alpha_1 = \sum_{n \geq N +1} \dfrac{a_n \phi_n(0)}{-\lambda_n + q_c}
\end{equation}
and where $l_n \in\mathbb{R}$ are the observer gains. We impose $l_n = 0$ for $N_0+1 \leq n \leq N$ and the initial condition of the observer as $\hat{w}_n(0) = 0$ for all $1 \leq n \leq N$. We define for $1 \leq n \leq N$ the observation error as $e_n(t) = w_n(t) - \hat{w}_n(t)$. Hence we have
\begin{align}
& \dot{\hat{w}}_n (t) = ( -\lambda_n + q_c ) \hat{w}_n(t) + a_n u(t) + b_n v(t) + l_n \sum_{i=1}^{N_0} \phi_i(0) e_i(t) \nonumber \\
& + l_n \sum_{i=N_0+1}^{N} \dfrac{\phi_i(0)}{\sqrt{\lambda_i}} \tilde{e}_i(t) + l_n \alpha_1 u(t) + l_n \zeta(t) \label{eq: dirichlet boundary measurement - observer dynamics - 2}
\end{align}
with $\tilde{e}_n(t) = \sqrt{\lambda_n} e_n(t)$ and $\zeta(t) = \sum_{n \geq N+1} \phi_n(0) w_n(t)$. Hence, introducing $\hat{W}^{N_0}(t) = \begin{bmatrix} \hat{w}_{1}(t) & \ldots & \hat{w}_{N_0}(t) \end{bmatrix}^\top$, $E^{N_0}(t) = \begin{bmatrix} e_{1}(t) & \ldots & e_{N_0}(t) \end{bmatrix}^\top$, $\tilde{E}^{N-N_0}(t) = \begin{bmatrix} \tilde{e}_{N_0 +1} & \ldots & \tilde{e}_N \end{bmatrix}^\top$, $C_0 = \begin{bmatrix} \phi_1(0) & \ldots & \phi_{N_0}(0) \end{bmatrix}$ and $C_1 = \begin{bmatrix} \frac{\phi_{N_0 +1}(0)}{\sqrt{\lambda_{N_0 + 1}}} & \ldots & \frac{\phi_{N}(0)}{\sqrt{\lambda_{N}}} \end{bmatrix}$, and $L = \begin{bmatrix} l_1 & \ldots & l_{N_0} \end{bmatrix}^\top$, we obtain that
\begin{align}
\dot{\hat{W}}^{N_0}(t) & = A_0 \hat{W}^{N_0}(t) + B_{0,a} u(t) + B_{0,b} v(t) \label{eq: hat_W^N0 dynamics - 2} \\
& \phantom{=}\; + L C_0 E^{N_0}(t) + L C_1 \tilde{E}^{N-N_0}(t) + \alpha_1 L u(t) + L \zeta(t) \nonumber .
\end{align}
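Here (\ref{eq: dirichlet boundary measurement - observer dynamics - 2}), hence (\ref{eq: hat_W^N0 dynamics - 2}), follows from (\ref{eq: dirichlet boundary measurement - observer dynamics - 1}) by expanding the measurement (\ref{eq: homogeneous RD system - spectral reduction - 3}) as $\tilde{y}(t) = \sum_{i=1}^{N} \phi_i(0) w_i(t) + \zeta(t)$, so that the innovation term rewrites as
\begin{equation*}
\sum_{i=1}^N \phi_i(0) \hat{w}_i(t) - \alpha_1 u(t) - \tilde{y}(t) = - \sum_{i=1}^{N} \phi_i(0) e_i(t) - \alpha_1 u(t) - \zeta(t) .
\end{equation*}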
With
\begin{equation}\label{eq: def hat W^N_0_a(t)}
\hat{W}^{N_0}_a(t) = \mathrm{col} (u(t),\hat{W}^{N_0}(t),\xi(t)) ,
\end{equation}
$\tilde{L} = \mathrm{col}(0,L,0)$, and defining
\begin{equation}\label{eq: def matrices A1 B1 and Br}
A_1 = \begin{bmatrix} 0 & 0 & 0 \\ B_{0,a} & A_0 & 0 \\ \alpha_0 & C_0 & 0 \end{bmatrix} , \quad
B_{1} = \begin{bmatrix} 1 \\ B_{0,b} \\ \beta_0 \end{bmatrix} , \quad
B_{r} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} ,
\end{equation}
we deduce that
\begin{align}
\dot{\hat{W}}_a^{N_0}(t) & = A_1 \hat{W}_a^{N_0}(t) + B_{1} v(t) - B_{r} r(t) + \tilde{L} C_0 E^{N_0}(t) \nonumber \\
& \phantom{=}\; + \tilde{L} C_1 \tilde{E}^{N-N_0}(t) + \alpha_1 \tilde{L} u(t) + \tilde{L} \zeta(t) . \label{eq: hat_W_a^N0 dynamics - 1}
\end{align}
Setting the auxiliary command input as
\begin{equation}\label{eq: v - state feedback}
v(t) = K \hat{W}_a^{N_0}(t) ,
\end{equation}
and defining
\begin{equation}\label{eq: def Acl}
A_\mathrm{cl}(\alpha_1) = A_1 + B_1 K + \alpha_1 \tilde{L} \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} ,
\end{equation}
we obtain that
\begin{align}
\dot{\hat{W}}_a^{N_0}(t) & = A_\mathrm{cl}(\alpha_1) \hat{W}_a^{N_0}(t) - B_{r} r(t) \nonumber \\
& \phantom{=}\; + \tilde{L} C_0 E^{N_0}(t) + \tilde{L} C_1 \tilde{E}^{N-N_0}(t) + \tilde{L} \zeta(t) \label{eq: hat_W_a^N0 dynamics - 2}
\end{align}
and, from (\ref{eq: W^N0 dynamics - 1}) and (\ref{eq: hat_W^N0 dynamics - 2}),
\begin{align}
\dot{E}^{N_0}(t) & = ( A_0 - L C_0 ) E^{N_0}(t) - L C_1 \tilde{E}^{N-N_0}(t) \nonumber \\
& \phantom{=}\; - \alpha_1 L \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \hat{W}^{N_0}_{a} - L \zeta(t) . \label{eq: E^N0 dynamics - 2}
\end{align}
We now define $\hat{W}^{N-N_0}(t) = \begin{bmatrix} \hat{w}_{N_0 + 1}(t) & \ldots & \hat{w}_{N}(t) \end{bmatrix}^\top$, $A_2 = \mathrm{diag}(- \lambda_{N_0 + 1} + q_c , \ldots , - \lambda_{N} + q_c)$, $B_{2,a} = \begin{bmatrix} a_{N_0 + 1} & \ldots & a_{N} \end{bmatrix}^\top$, $B_{2,b} = \begin{bmatrix} b_{N_0 + 1} & \ldots & b_{N} \end{bmatrix}^\top$. We obtain from (\ref{eq: dirichlet boundary measurement - observer dynamics - 1}) with $l_n = 0$ for $N_0 +1 \leq n \leq N$ that
\begin{align}
& \dot{\hat{W}}^{N-N_0}(t)
= A_2 \hat{W}^{N-N_0}(t) + B_{2,a} u(t) + B_{2,b} v(t) \nonumber \\
& = A_2 \hat{W}^{N-N_0}(t) + \left( B_{2,b} K + \begin{bmatrix} B_{2,a} & 0 & 0 \end{bmatrix} \right) \hat{W}_a^{N_0}(t) \label{eq: hat_W^N-N0 dynamics}
\end{align}
and, using (\ref{eq: homogeneous RD system - spectral reduction - 2}) and (\ref{eq: dirichlet boundary measurement - observer dynamics - 1}),
\begin{equation}\label{eq: E^N-N0 dynamics}
\dot{\tilde{E}}^{N-N_0}(t) = A_2 \tilde{E}^{N-N_0}(t) .
\end{equation}
Putting now together (\ref{eq: hat_W_a^N0 dynamics - 2}-\ref{eq: E^N-N0 dynamics}) while introducing
\begin{equation}\label{eq: def X}
X(t) = \mathrm{col} \left( \hat{W}_a^{N_0}(t) , E^{N_0}(t) , \hat{W}^{N-N_0}(t) , \tilde{E}^{N-N_0}(t) \right) ,
\end{equation}
we obtain that
\begin{equation}\label{eq: dynamics closed-loop system - finite dimensional part}
\dot{X}(t) = F X(t) + \mathcal{L} \zeta(t) - \mathcal{L}_r r(t)
\end{equation}
where
\begin{equation*}
F = \begin{bmatrix}
A_\mathrm{cl}(\alpha_1) & \tilde{L} C_0 & 0 & \tilde{L} C_1 \\
-\alpha_1 L \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} & A_0 - L C_0 & 0 & -L C_1 \\
B_{2,b} K + \begin{bmatrix} B_{2,a} & 0 & 0 \end{bmatrix} & 0 & A_2 & 0 \\
0 & 0 & 0 & A_2
\end{bmatrix} ,
\end{equation*}
\begin{equation*}
\mathcal{L} = \mathrm{col} ( \tilde{L} , - L , 0 , 0 ) , \quad
\mathcal{L}_r = \mathrm{col} ( B_r , 0 , 0 , 0 ) .
\end{equation*}
Defining $E = \begin{bmatrix} 1 & 0 & \ldots & 0\end{bmatrix}$ and $\tilde{K} = \begin{bmatrix} K & 0 & 0 & 0\end{bmatrix}$, we obtain from (\ref{eq: def hat W^N_0_a(t)}), (\ref{eq: v - state feedback}), and (\ref{eq: def X}) that
\begin{equation}\label{eq: u and v in function of X}
u(t) = E X(t) , \quad v(t) = \tilde{K} X(t)
\end{equation}
and we can introduce
\begin{equation}\label{eq: matrix G}
G = \Vert a \Vert_{L^2}^2 E^\top E + \Vert b \Vert_{L^2}^2 \tilde{K}^\top \tilde{K} \preceq g I
\end{equation}
with $g = \Vert a \Vert_{L^2}^2 + \Vert b \Vert_{L^2}^2 \Vert K \Vert^2$ a constant independent of $N$.
\begin{lemma}\label{eq: Dirichlet measurement - Kalman condion}
$(A_1,B_1)$ is controllable and $(A_0,C_0)$ is observable.
\end{lemma}
\begin{proof}
From~\cite[Lem.~2]{lhachemi2020pi}, $(A_1,B_1)$ is controllable if and only if
\begin{equation}\label{eq: kalman condition previous work}
\left(\begin{bmatrix} 0 & 0 \\ B_{0,a} & A_0 \end{bmatrix} , \begin{bmatrix} 1 \\ B_{0,b} \end{bmatrix} \right)
\end{equation}
satisfies the Kalman condition and the matrix
\begin{equation*}
T = \begin{bmatrix} 0 & 0 & 1 \\ B_{0,a} & A_0 & B_{0,b} \\ \alpha_0 & C_0 & \beta_0 \end{bmatrix}
\end{equation*}
is invertible. The former condition was assessed in~\cite{lhachemi2020finite}. Hence we focus on the latter one. Let $\begin{bmatrix} u_e & w_{1,e} & \ldots & w_{N_0,e} & v_e \end{bmatrix}^\top \in\mathrm{ker}(T)$. We obtain that
\begin{subequations}\label{eq: check Kalman condition}
\begin{align}
v_e & = 0 , \label{eq: check Kalman condition - 1} \\
a_n u_e + (-\lambda_n +q_c) w_{n,e} & = 0 , \quad 1 \leq n \leq N_0 , \label{eq: check Kalman condition - 2} \\
\alpha_0 u_e + \sum_{n=1}^{N_0} \phi_n(0) w_{n,e} & = 0 . \label{eq: check Kalman condition - 3}
\end{align}
\end{subequations}
Defining for $n \geq N_0 +1$ the quantity $w_{n,e} = -\frac{a_n}{-\lambda_n+q_c}u_e$, we have $(-\lambda_n+q_c)w_{n,e} + a_n u_e = 0$ for all $n \geq 1$. Hence $(w_{n,e})_{n \geq 1} , (\lambda_n w_{n,e})_{n \geq 1} \in l^2(\mathbb{N})$ ensuring that $w_e \triangleq \sum_{n \geq 1} w_{n,e} \phi_n \in D(\mathcal{A})$ and $\mathcal{A} w_e = \sum_{n \geq 1} \lambda_n w_{n,e} \phi_n \in L^2(0,1)$. This shows that $- \mathcal{A} w_e + q_c w_e + a u_e = 0$. Moreover, from (\ref{eq: check Kalman condition - 3}) and using (\ref{eq: def alpha_0 and beta_0}), we infer that $w_e(0) = 0$. From the last two identities, we have that $(p w_e')' + (q_c - q) w_e + a u_e = 0$, $w_e(0) = w_e'(0) = 0$, and $w_e(1) = 0$. Introducing the change of variable $z_e(x) = w_e(x) + x^2 u_e$, we deduce that $(p z_e')' + (q_c - q) z_e = 0$, $z_e(0) = z_e'(0) = 0$, and $z_e(1) = u_e$. By Cauchy uniqueness, we infer that $z_e = 0$ hence $u_e = z_e(1) = 0$. Thus we have $w_e = z_e - x^2 u_e = 0$ hence $w_{n,e} = 0$ for all $n \geq 1$. We deduce that $\mathrm{ker}(T)=\{0\}$. Overall, we have shown that $(A_1,B_1)$ is controllable. Finally, the pair $(A_0,C_0)$ is observable because 1) $A_0$ is diagonal with simple eigenvalues, 2) by Cauchy uniqueness, $\phi_n(0) \neq 0$ for all $n \geq 1$.
\end{proof}
Hence we can select in the sequel $K \in\mathbb{R}^{1 \times (N_0 +2)}$ and $L \in\mathbb{R}^{N_0}$ such that $A_1 + B_1 K$ and $A_0 - L C_0$ are Hurwitz.
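For illustration, such gains can be computed numerically by pole placement. A minimal sketch (the numerical values are hypothetical: constant coefficients $p \equiv 1$, $q \equiv 0$, $q_c = 12$, $N_0 = 2$, $\delta = 1$ are assumed, and the tail series defining $\alpha_0,\beta_0$ are truncated at a few hundred modes; note that \texttt{place\_poles} returns the gain for $A - BK$, whence the sign flip):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.signal import place_poles

q_c, N0 = 12.0, 2
k = lambda n: (n - 0.5) * np.pi        # lambda_n = k(n)^2, phi_n(0) = sqrt(2)
phi = lambda n, x: np.sqrt(2.0) * np.cos(k(n) * x)
coef = lambda f, n: quad(lambda x: f(x) * phi(n, x), 0.0, 1.0)[0]
a_f = lambda x: 2.0 + q_c * x ** 2     # a(x) = 2p + 2xp' + (q_c - q)x^2
b_f = lambda x: -x ** 2

A0 = np.diag([-k(n) ** 2 + q_c for n in range(1, N0 + 1)])
B0a = np.array([coef(a_f, n) for n in range(1, N0 + 1)])
B0b = np.array([coef(b_f, n) for n in range(1, N0 + 1)])
C0 = np.full(N0, np.sqrt(2.0))         # phi_n(0)
tail = range(N0 + 1, 300)              # crude truncation of the tail series
alpha0 = -sum(coef(a_f, n) * np.sqrt(2.0) / (-k(n) ** 2 + q_c) for n in tail)
beta0 = -sum(coef(b_f, n) * np.sqrt(2.0) / (-k(n) ** 2 + q_c) for n in tail)

A1 = np.zeros((N0 + 2, N0 + 2))
A1[1:N0 + 1, 0], A1[1:N0 + 1, 1:N0 + 1] = B0a, A0
A1[N0 + 1, 0], A1[N0 + 1, 1:N0 + 1] = alpha0, C0
B1 = np.r_[1.0, B0b, beta0].reshape(-1, 1)

K = -place_poles(A1, B1, [-2.0, -3.0, -4.0, -5.0]).gain_matrix
L = place_poles(A0.T, C0.reshape(-1, 1), [-6.0, -7.0]).gain_matrix.T
print(np.linalg.eigvals(A1 + B1 @ K))              # strictly left of -delta
print(np.linalg.eigvals(A0 - L @ C0.reshape(1, -1)))
\end{verbatim}
Any other synthesis guaranteeing the required decay rate $\delta$ (e.g., LQR) can be used in place of pole placement.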
\subsection{Equilibrium condition and dynamics of deviations}
We aim at characterizing the equilibrium condition of the closed-loop system composed of the reaction-diffusion system (\ref{eq: dirichlet boundary measurement - RD system}), the auxiliary command input dynamics (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}), the integral action (\ref{eq: dirichlet boundary measurement - integral component}), the observer dynamics (\ref{eq: dirichlet boundary measurement - observer dynamics - 1}), and the state-feedback (\ref{eq: v - state feedback}). To do so let $r(t) = r_e \in\mathbb{R}$ be arbitrary. We must solve the system of equations:
\begin{subequations}\label{eq: dirichlet boundary measurement - equilibirum condition}
\begin{align}
0 & = (-\lambda_n+q_c) w_{n,e} + a_n u_e + b_n v_e , \quad n \geq 1 , \label{eq: dirichlet boundary measurement - equilibirum condition - 1} \\
0 & = v_e = K \hat{W}^{N_0}_{a,e} , \label{eq: dirichlet boundary measurement - equilibirum condition - 2} \\
0 & = \sum_{n=1}^{N_0} \phi_n(0) \hat{w}_{n,e} + \alpha_0 u_e + \beta_0 v_e - r_e , \label{eq: dirichlet boundary measurement - equilibirum condition - 3} \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e \label{eq: dirichlet boundary measurement - equilibirum condition - 4} \\
& \phantom{=}\; - l_n \left\{ \sum_{i=1}^{N} \phi_i(0) \hat{w}_{i,e} - \alpha_1 u_e - \tilde{y}_e \right\} , \quad 1 \leq n \leq N_0 , \nonumber \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e , \quad N_0 + 1 \leq n \leq N , \label{eq: dirichlet boundary measurement - equilibirum condition - 5} \\
\tilde{y}_e & = \sum_{n \geq 1} \phi_n(0) w_{n,e} . \label{eq: dirichlet boundary measurement - equilibirum condition - 6}
\end{align}
\end{subequations}
We first note from (\ref{eq: dirichlet boundary measurement - equilibirum condition - 2}) that $v_e = 0$. Then, from (\ref{eq: dirichlet boundary measurement - equilibirum condition - 1}) we have $w_{n,e} = -\frac{a_n}{-\lambda_n +q_c}u_e$ for all $n \geq N_0 +1$. In particular, from (\ref{eq: dirichlet boundary measurement - equilibirum condition - 5}), we have $\hat{w}_{n,e} = w_{n,e} = -\frac{a_n}{-\lambda_n +q_c}u_e$ for all $N_0 + 1 \leq n \leq N$. Defining $e_{n,e} = w_{n,e} - \hat{w}_{n,e}$ and $\zeta_e = \sum_{n \geq N+1} \phi_n(0) w_{n,e}$, we obtain that $e_{n,e} = 0$ for all $N_0 + 1 \leq n \leq N$. Hence, from (\ref{eq: dirichlet boundary measurement - equilibirum condition - 4}), we infer that $0 = (-\lambda_n + q_c) \hat{w}_{n,e} + a_n u_e + l_n \sum_{i = 1}^{N_0} \phi_i(0) e_{i,e} + l_n \alpha_1 u_e + l_n \zeta_e$ for all $1 \leq n \leq N_0$. Combining this latter identity with (\ref{eq: dirichlet boundary measurement - equilibirum condition - 1}), we obtain that $(A_0-LC_0)E^{N_0}_e - L \alpha_1 u_e - L \zeta_e = 0$. Invoking (\ref{eq: dirichlet boundary measurement - def alpha_1}), we note that $\alpha_1 u_e = - \sum_{n \geq N+1} \phi_n(0) w_{n,e} = - \zeta_e$, implying that $(A_0-LC_0)E^{N_0}_e = 0$. Since $A_0-LC_0$ is Hurwitz, we infer that $e_{n,e} = 0$ for all $1 \leq n \leq N_0$. In particular, $\hat{w}_{n,e} = w_{n,e}$ for all $1 \leq n \leq N$. From (\ref{eq: dirichlet boundary measurement - equilibirum condition - 2}-\ref{eq: dirichlet boundary measurement - equilibirum condition - 4}) we deduce that $0 = A_\mathrm{cl}(\alpha_1) \hat{W}_{a,e}^{N_0} - B_r r_e + \tilde{L}\zeta_e$. Recalling that $\zeta_e = - \alpha_1 u_e$ and $A_\mathrm{cl}(\alpha_1)$ is defined by (\ref{eq: def Acl}), we obtain that $(A_1+B_1 K) \hat{W}_{a,e}^{N_0} = B_r r_e$. Since $A_1+B_1 K$ is Hurwitz, we infer that $\hat{W}_{a,e}^{N_0} = \begin{bmatrix} u_e & \hat{w}_{1,e} & \ldots & \hat{w}_{N_0,e} & \xi_e \end{bmatrix}^\top = (A_1+B_1 K)^{-1} B_r r_e$. This is in particular compatible with (\ref{eq: dirichlet boundary measurement - equilibirum condition - 2}) since, based on (\ref{eq: def matrices A1 B1 and Br}), we indeed obtain that $K \hat{W}^{N_0}_{a,e} = 0$. We note that $(w_{n,e})_{n \geq 1} , (\lambda_n w_{n,e})_{n \geq 1} \in l^2(\mathbb{N})$ ensuring that $w_e \triangleq \sum_{n \geq 1} w_{n,e} \phi_n \in D(\mathcal{A})$ and $\mathcal{A} w_e = \sum_{n \geq 1} \lambda_n w_{n,e} \phi_n \in L^2(0,1)$. Using (\ref{eq: dirichlet boundary measurement - equilibirum condition - 1}), we obtain that $- \mathcal{A} w_e + q_c w_e + a u_e + b v_e = 0$. Introducing the change of variable $z_e = w_e + x^2 u_e$, $z_e$ is a static solution of (\ref{eq: dirichlet boundary measurement - RD system - 1}-\ref{eq: dirichlet boundary measurement - RD system - 2}) associated with the constant control input $u(t) = u_e$. Denoting by $y_e \triangleq z_e(0) = w_e(0) = \tilde{y}_e$, we infer from (\ref{eq: dirichlet boundary measurement - equilibirum condition - 3}) while invoking (\ref{eq: def alpha_0 and beta_0}) that
\begin{align*}
r_e & = \sum_{n=1}^{N_0} \phi_n(0) \hat{w}_{n,e} + \alpha_0 u_e
= \sum_{n \geq 1} \phi_n(0) w_{n,e}
= y_e .
\end{align*}
Hence, for an arbitrarily given constant reference signal $r(t) = r_e \in\mathbb{R}$, the equilibrium condition of the closed-loop system is unique, fully characterized by $r_e$, and is such that $y_e = r_e$.
We can now introduce the dynamics of deviation of the different quantities w.r.t the equilibrium condition characterized by $r_e \in\mathbb{R}$. In particular:
\begin{subequations}\label{eq: dirichlet boundary measurement - dynamics of deviations}
\begin{align}
& \Delta w(t,x) = \Delta z(t,x) - x^2 \Delta u(t) , \label{eq: dirichlet boundary measurement - dynamics of deviations - 1} \\
& \Delta \dot{X}(t) = F \Delta X(t) + \mathcal{L} \Delta \zeta(t) - \mathcal{L}_r \Delta r(t) , \label{eq: dirichlet boundary measurement - dynamics of deviations - 2} \\
& \Delta\zeta(t) = \sum_{n \geq N+1} \phi_n(0) \Delta w_n(t) , \label{eq: dirichlet boundary measurement - dynamics of deviations - 3} \\
& \Delta \dot{w}_n(t) = (-\lambda_n + q_c) \Delta w_n(t) + a_n \Delta u(t) + b_n \Delta v(t) , \label{eq: dirichlet boundary measurement - dynamics of deviations - 4} \\
& \Delta v(t) = K \Delta \hat{W}_a^{N_0}(t) , \label{eq: dirichlet boundary measurement - dynamics of deviations - 5} \\
& \Delta\tilde{y}(t) = \Delta y(t) = \sum_{n \geq 1} \phi_n(0) \Delta w_n(t) \label{eq: dirichlet boundary measurement - dynamics of deviations - 6}
\end{align}
\end{subequations}
with $\Delta w_n(t) = \left< \Delta w(t,\cdot) , \phi_n \right>$.
\subsection{Stability analysis and regulation assessment}
We define the constant $M_{1,\phi} = \sum_{n \geq 2} \frac{\phi_n(0)^2}{\lambda_n}$, which is finite when $p \in \mathcal{C}^2([0,1])$ because we recall that $\phi_n(0) = O(1)$ as $n \rightarrow + \infty$ and (\ref{eq: estimation lambda_n}) hold.
\begin{theorem}\label{thm: Case of a Dirichlet boundary measurement - stab}
Let $p \in \mathcal{C}^2([0,1])$ with $p > 0$, $q \in \mathcal{C}^0([0,1])$ with $q \geq 0$, and $q_c \in \mathbb{R}$. Consider the reaction-diffusion system described by (\ref{eq: dirichlet boundary measurement - RD system}). Let $N_0 \geq 1$ and $\delta > 0$ be given such that $- \lambda_n + q_c < -\delta < 0$ for all $n \geq N_0 +1$. Let $K \in\mathbb{R}^{1 \times (N_0 +2)}$ and $L \in\mathbb{R}^{N_0}$ be such that $A_1 + B_1 K$ and $A_0 - L C_0$ are Hurwitz with eigenvalues that have a real part strictly less than $-\delta < 0$. Assume that there exist $N \geq N_0 + 1$, $P \succ 0$, and $\alpha,\beta,\gamma > 0$ such that
\begin{align}
\Theta & = \begin{bmatrix} F^\top P + P F + 2 \delta P + \alpha \gamma G & P \mathcal{L} \\ \mathcal{L}^\top P^\top & -\beta \end{bmatrix} \prec 0 , \label{eq: Theta involving gamma} \\
\Gamma_n & = -\lambda_n + q_c + \delta + \frac{\lambda_n}{\alpha} + \frac{\beta M_{1,\phi}}{2\gamma} \leq 0 \nonumber
\end{align}
for all $n \geq N+1$. Then, for any $\eta \in [0,1)$, there exists $M > 0$ such that, for any $z_0 \in H^2(0,1)$ and any $u(0) \in \mathbb{R}$ such that $z_0'(0)=0$ and $z_0(1) = u(0)$, the classical solution of the closed-loop system composed of the plant (\ref{eq: dirichlet boundary measurement - RD system}), the integral actions (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}) and (\ref{eq: dirichlet boundary measurement - integral component}), the observer dynamics (\ref{eq: dirichlet boundary measurement - observer dynamics - 1}) with null initial condition, and the state feedback (\ref{eq: v - state feedback}) satisfies
\begin{align}
& \Delta u(t)^2 + \Delta \xi(t)^2 + \sum_{n=1}^{N} \Delta \hat{w}_n(t)^2 + \Vert \Delta z(t) \Vert_{H^1}^2 \nonumber \\
& \qquad\leq M e^{-2 \delta t} ( \Delta u(0)^2 + \Delta \xi(0)^2 + \Vert \Delta z_0 \Vert_{H^1}^2 ) \nonumber \\
& \phantom{\qquad\leq}\; + M \sup_{\tau\in[0,t]} e^{-2\eta\delta(t-\tau)} \Delta r(\tau) ^2 \label{eq: dirichlet boundary measurement - stab result}
\end{align}
for all $t \geq 0$. Moreover, the above constraints are always feasible for $N$ large enough.
\end{theorem}
\begin{proof}
Let $P \succ 0$ and $\gamma > 0$ and consider the Lyapunov function candidate defined by
\begin{equation}\label{eq: Lyap function for H1 stab}
V(\Delta X,\Delta w) = \Delta X^\top P \Delta X + \gamma \sum_{n \geq N+1} \lambda_n \left< \Delta w , \phi_n \right>^2 .
\end{equation}
with $\Delta X\in\mathbb{R}^{2N+2}$ and $\Delta w \in D(\mathcal{A})$. Proceeding exactly as in~\cite{lhachemi2020finite} but taking into account the extra contribution of the reference signal appearing in (\ref{eq: dynamics closed-loop system - finite dimensional part}), we obtain for $t \geq 0$ that
\begin{align}
& \dot{V}(t) + 2 \delta V(t)
\leq \begin{bmatrix} \Delta X(t) \\ \Delta \zeta(t) \end{bmatrix}^\top \Theta \begin{bmatrix} \Delta X(t) \\ \Delta \zeta(t) \end{bmatrix} \nonumber \\
& - 2 \Delta X(t)^\top P \mathcal{L}_r \Delta r(t)
+ 2 \gamma \sum_{n \geq N+1} \lambda_n \Gamma_n \Delta w_n(t)^2 \label{eq: Lyap time derivative}
\end{align}
with $\alpha,\beta>0$ arbitrary and where, with a slight abuse of notation, $\dot{V}(t)$ denotes the time derivative of $V(\Delta X(t),\Delta w(t))$ along the system trajectories (\ref{eq: dirichlet boundary measurement - dynamics of deviations}). From (\ref{eq: Theta involving gamma}), there exists $\epsilon > 0$ such that $\Theta \preceq - \epsilon I$. Hence the assumptions imply that $\dot{V}(t) + 2 \delta V(t) \leq - \epsilon \Vert \Delta X(t) \Vert^2 - 2 \Delta X(t)^\top P \mathcal{L}_r \Delta r(t) \leq \frac{\Vert P\mathcal{L}_r \Vert^2}{\epsilon} \Delta r(t)^2$ where Young's inequality has been used to derive the latter estimate. After integration, we obtain for any $\eta \in [0,1)$ the existence of a constant $M_1 > 0$ such that $V(t) \leq e^{-2\delta t} V(0) + M_1 \sup_{\tau\in[0,t]} e^{-2\eta\delta(t-\tau)} \Delta r(\tau)^2$ for all $t \geq 0$. The claimed estimate (\ref{eq: dirichlet boundary measurement - stab result}) easily follows from the definition (\ref{eq: Lyap function for H1 stab}) of the Lyapunov function, the use of (\ref{eq: inner product Af and f}), Poincar{\'e}'s inequality, and the change of variable (\ref{eq: dirichlet boundary measurement - dynamics of deviations - 1}).
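More precisely, the constant $M_1$ can be taken as $M_1 = \frac{\Vert P \mathcal{L}_r \Vert^2}{2(1-\eta)\delta\epsilon}$: writing $\Delta r(s)^2 \leq e^{2\eta\delta(t-s)} \sup_{\tau\in[0,t]} e^{-2\eta\delta(t-\tau)} \Delta r(\tau)^2$ for $s \in [0,t]$, the integration of the above differential inequality yields
\begin{equation*}
\int_0^t e^{-2\delta(t-s)} \Delta r(s)^2 \,\mathrm{d}s \leq \frac{1}{2(1-\eta)\delta} \sup_{\tau\in[0,t]} e^{-2\eta\delta(t-\tau)} \Delta r(\tau)^2 .
\end{equation*}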
We now show that we can always select $N \geq N_0 + 1$, $P \succ 0$ and $\alpha,\beta,\gamma > 0$ such that $\Theta \prec 0$ and $\Gamma_n \leq 0$ for all $n \geq N+1$. By the Schur complement, $\Theta \prec 0$ is equivalent to $F^\top P + P F + 2 \delta P + \alpha\gamma G + \frac{1}{\beta} P \mathcal{L} \mathcal{L}^\top P^\top \prec 0$. We define $F = F_1 + F_2$ where
\begin{subequations}\label{eq: matrices F1 and F2}
\begin{equation}
F_1 = \begin{bmatrix}
A_1 + B_1 K & \tilde{L} C_0 & 0 & \tilde{L} C_1 \\
0 & A_0 - L C_0 & 0 & -L C_1 \\
B_{2,b} K + \begin{bmatrix} B_{2,a} & 0 & 0 \end{bmatrix} & 0 & A_2 & 0 \\
0 & 0 & 0 & A_2
\end{bmatrix} ,
\end{equation}
\begin{equation}
F_2 = \begin{bmatrix}
\alpha_1 \tilde{L} \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} & 0 & 0 & 0 \\
-\alpha_1 L \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\end{equation}
\end{subequations}
with $\Vert F_2 \Vert \rightarrow 0$, because $\alpha_1 \rightarrow 0$, when $N \rightarrow + \infty$. In order to apply the Lemma reported in the Appendix, we note that $A_1 + B_1 K + \delta I$ and $A_0 - L C_0 + \delta I$ are Hurwitz while $\Vert e^{(A_2+\delta I) t} \Vert \leq e^{-\kappa_0 t}$ with $\kappa_0 = \lambda_{N_0+1} - q_c -\delta > 0$. Moreover, $\Vert \tilde{L} C_1 \Vert \leq \Vert L \Vert \Vert C_1 \Vert$, $\Vert L C_1 \Vert \leq \Vert L \Vert \Vert C_1 \Vert$, with $\Vert C_1 \Vert = O(1)$ as $N \rightarrow + \infty$ while $\Vert B_{2,b} K + \begin{bmatrix} B_{2,a} & 0 & 0 \end{bmatrix} \Vert \leq \Vert b \Vert_{L^2} \Vert K \Vert + \Vert a \Vert_{L^2}$ where the right-hand side is a constant independent of $N$. Hence, the application of the Lemma reported in the Appendix to $F_1 + \delta I$ yields the existence of $P \succ 0$ such that $F_1^\top P + P F_1 + 2 \delta P = -I$ and $\Vert P \Vert = O(1)$ as $N \rightarrow + \infty$. Therefore, we have $F^\top P + P F + 2 \delta P + \alpha\gamma G + \frac{1}{\beta} P \mathcal{L} \mathcal{L}^\top P^\top = -I + F_2^\top P + P F_2 + \alpha\gamma G + \frac{1}{\beta} P \mathcal{L} \mathcal{L}^\top P^\top$ where $G$ satisfies (\ref{eq: matrix G}) and $\Vert \mathcal{L} \Vert = \sqrt{2} \Vert L \Vert$, which is independent of $N$. Hence, setting $\alpha = \beta = \sqrt{N}$ and $\gamma = N^{-1}$, we infer the existence of $N \geq N_0 + 1$ such that $\Theta \prec 0$ and $\Gamma_n \leq 0$ for all $n \geq N+1$.
\end{proof}
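In practice, once $N$, $\alpha$, $\beta$, $\gamma$ are fixed, the condition $\Theta \prec 0$ is a linear matrix inequality in $P$ and can be checked with any SDP solver. A minimal sketch (the function name \texttt{theta\_feasible} is illustrative; the matrices \texttt{F}, \texttt{Lcal} $= \mathcal{L}$, and \texttt{G} are assumed to have been assembled as above with $n = 2N+2$, and \texttt{cvxpy} with an SDP-capable backend such as SCS is assumed available):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def theta_feasible(F, Lcal, G, delta, alpha, beta, gamma, eps=1e-6):
    """Search for P > 0 rendering Theta of Theorem 1 negative definite."""
    n = F.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    TL = F.T @ P + P @ F + 2 * delta * P + alpha * gamma * G
    theta = cp.bmat([[TL, P @ Lcal], [(P @ Lcal).T, -beta * np.eye(1)]])
    theta = (theta + theta.T) / 2        # enforce symmetry for the solver
    prob = cp.Problem(cp.Minimize(0),
                      [P >> eps * np.eye(n), theta << -eps * np.eye(n + 1)])
    prob.solve()
    return prob.status == cp.OPTIMAL, P.value
\end{verbatim}
Note also that, whenever $\alpha > 1$, $\Gamma_n$ is nonincreasing in $n$, so the countable family of scalar constraints reduces to the single test $\Gamma_{N+1} \leq 0$.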
We can now assess the setpoint regulation control of the left Dirichlet trace.
\begin{theorem}\label{thm: Case of a Dirichlet boundary measurement - reg}
Under both assumptions and conclusions of Theorem~\ref{thm: Case of a Dirichlet boundary measurement - stab}, for any $\eta \in [0,1)$, there exists $M_r > 0$ such that
\begin{align}
\vert y(t) - r(t) \vert & \leq M_r e^{-\delta t} ( \vert \Delta u(0) \vert + \vert \Delta \xi(0) \vert + \Vert \Delta z_0 \Vert_{H^1} ) \label{eq: dirichlet boundary measurement - reg result} \\
& \phantom{\leq}\; + M_r \sup_{\tau\in[0,t]} e^{-\eta\delta(t-\tau)} \vert \Delta r(\tau) \vert \nonumber
\end{align}
for all $t \geq 0$.
\end{theorem}
\begin{proof}
Recalling that $y_e = r_e$, one has $\vert y(t) - r(t) \vert \leq \vert \Delta y(t) \vert + \vert \Delta r(t) \vert$. From (\ref{eq: dirichlet boundary measurement - dynamics of deviations - 6}) and Cauchy-Schwarz inequality, we infer that $\vert \Delta y(t) \vert \leq \sqrt{\sum_{n \geq 1} \frac{\phi_n(0)^2}{\lambda_n}} \sqrt{\sum_{n \geq 1} \lambda_n \Delta w_n(t)^2}$. Using now (\ref{eq: inner product Af and f}) we infer the existence of a constant $M_2 > 0$ such that $\vert \Delta y(t) \vert \leq M_2 \Vert \Delta w(t) \Vert_{H^1}$. The proof is completed by invoking the change of variable (\ref{eq: dirichlet boundary measurement - dynamics of deviations - 1}) and the stability result (\ref{eq: dirichlet boundary measurement - stab result}).
\end{proof}
\section{Neumann measurement and regulation control}\label{sec: Neuman}
We now consider the reaction-diffusion system with Neumann boundary observation described for $t > 0$ and $x \in (0,1)$ by
\begin{subequations}\label{eq: neumann boundary measurement - RD system}
\begin{align}
z_t(t,x) & = \left( p(x) z_x(t,x) \right)_x + (q_c - q(x)) z(t,x) \label{eq: neumann boundary measurement - RD system - 1} \\
z(t,0) & = 0 , \quad z(t,1) = u(t) \label{eq: neumann boundary measurement - RD system - 2} \\
z(0,x) & = z_0(x) \\
y(t) & = z_x(t,0)
\end{align}
\end{subequations}
in the case $p \in \mathcal{C}^2([0,1])$.
\subsection{Control design}
Introducing the change of variable
\begin{equation}\label{eq: neumann boundary measurement - change of variable}
w(t,x) = z(t,x) - x u(t)
\end{equation}
we obtain
\begin{subequations}\label{eq: neumann boundary measurement - homogeneous RD system}
\begin{align}
& w_t(t,x) = ( p(x) w_x(t,x) )_x + (q_c - q(x)) w(t,x) \label{eq: neumann boundary measurement - homogeneous RD system - 1} \\
& \phantom{w_t(t,x) =}\; + a(x) u(t) + b(x) \dot{u}(t) \nonumber \\
& w(t,0) = 0 , \quad w(t,1) = 0 \label{eq: neumann boundary measurement - homogeneous RD system - 2} \\
& w(0,x) = w_0(x) \label{eq: neumann boundary measurement - homogeneous RD system - 3} \\
& \tilde{y}(t) = w_x(t,0) \label{eq: neumann boundary measurement - homogeneous RD system - 4}
\end{align}
\end{subequations}
with $a,b \in L^2(0,1)$ defined by $a(x) = p'(x) + (q_c-q(x))x$ and $b(x) = -x$, respectively, $\tilde{y}(t) = y(t) - u(t)$ , and $w_0(x) = z_0(x) - x u(0)$. Introducing the auxiliary command input $v(t) = \dot{u}(t)$, we infer that (\ref{eq: homogeneous RD system - abstract form}) still holds but the domain of $\mathcal{A}$ is now replaced by $D(\mathcal{A}) = \{ f \in H^2(0,1) \,:\, f(0)=f(1)=0 \}$. Then, considering classical solutions associated with any $z_0 \in H^2(0,1)$ and any $u(0) \in \mathbb{R}$ such that $z_0(0)=0$ and $z_0(1) = u(0)$ (their existence for the upcoming closed-loop dynamics is an immediate consequence of \cite[Chap.~6, Thm.~1.7]{pazy2012semigroups}), (\ref{eq: homogeneous RD system - spectral reduction - 1}-\ref{eq: homogeneous RD system - spectral reduction - 2}) is still valid while (\ref{eq: homogeneous RD system - spectral reduction - 3}) is replaced by
\begin{equation}\label{eq: neumann boundary measurement - measurement bis}
\tilde{y}(t) = \sum_{i \geq 1} \phi_i'(0) w_i(t) .
\end{equation}
Based on similar motivations as the ones reported in Section~\ref{sec: Dirichlet measurement and regulation control}, and recalling that $\tilde{y}(t) = y(t) - u(t)$ so that the tracking error reads $y(t) - r(t) = \tilde{y}(t) + u(t) - r(t)$ (which explains the additional constant $1$ in $\alpha_0$ below), we consider the integral component
\begin{align}\label{eq: neumann boundary measurement - integral component}
\dot{\xi}(t)
& = \sum_{n = 1}^{N_0} \phi_n'(0) \hat{w}_n(t) + \alpha_0 u(t) + \beta_0 v(t) - r(t)
\end{align}
with
\begin{subequations}\label{eq: neumann boundary measurement - def alpha_0 and beta_0}
\begin{align}
\alpha_0 & = 1 - \sum_{n \geq N_0 + 1} \frac{a_n \phi_n'(0)}{-\lambda_n + q_c} , \\
\beta_0 & = - \sum_{n \geq N_0 + 1} \frac{b_n \phi_n'(0)}{-\lambda_n + q_c}
\end{align}
\end{subequations}
and where the observation dynamics, for $1 \leq n \leq N$, take the form:
\begin{align}
\dot{\hat{w}}_n (t)
& = ( -\lambda_n + q_c ) \hat{w}_n(t) + a_n u(t) + b_n v(t) \label{eq: neumann boundary measurement - observer dynamics - 1} \\
& \phantom{=}\; - l_n \left( \sum_{i=1}^N \phi_i'(0) \hat{w}_i(t) - \alpha_1 u(t) - \tilde{y}(t) \right) \nonumber
\end{align}
with
\begin{equation}\label{eq: neumann boundary measurement - def alpha_1}
\alpha_1 = \sum_{n \geq N +1} \dfrac{a_n \phi_n'(0)}{-\lambda_n + q_c}
\end{equation}
and where $l_n \in\mathbb{R}$ are the observer gains. We impose $l_n = 0$ for $N_0+1 \leq n \leq N$ and the initial condition of the observer as $\hat{w}_n(0) = 0$ for all $1 \leq n \leq N$. Proceeding now as in Section~\ref{sec: Dirichlet measurement and regulation control} but with the updated versions of the matrices $C_0$ and $C_1$ now given by $C_0 = \begin{bmatrix} \phi_1'(0) & \ldots & \phi_{N_0}'(0) \end{bmatrix}$ and $C_1 = \begin{bmatrix} \dfrac{\phi_{N_0 +1}'(0)}{\lambda_{N_0 + 1}} & \ldots & \dfrac{\phi_{N}'(0)}{\lambda_{N}} \end{bmatrix}$ while redefining $\tilde{e}_n(t)$ and $\zeta(t)$ as $\tilde{e}_n(t) = \lambda_n e_n(t)$ and $\zeta(t) = \sum_{n \geq N+1} \phi_n'(0) w_n(t)$, we infer that (\ref{eq: dynamics closed-loop system - finite dimensional part}) holds.
\begin{lemma}
$(A_1,B_1)$ is controllable and $(A_0,C_0)$ is observable.
\end{lemma}
The proof of this Lemma is analogous to the one of Lemma~\ref{eq: Dirichlet measurement - Kalman condion} and is thus omitted. We select in the sequel $K \in\mathbb{R}^{1 \times (N_0 +2)}$ and $L \in\mathbb{R}^{N_0}$ such that $A_1 + B_1 K$ and $A_0 - L C_0$ are Hurwitz.
\subsection{Equilibrium condition and dynamics of deviations}
Proceeding similarly to Section~\ref{sec: Dirichlet measurement and regulation control}, we can characterize the equilibrium condition of the closed-loop system composed of the reaction-diffusion system (\ref{eq: neumann boundary measurement - RD system}), the auxiliary command input dynamics (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}), the integral action (\ref{eq: neumann boundary measurement - integral component}), the observer dynamics (\ref{eq: neumann boundary measurement - observer dynamics - 1}), and the state-feedback (\ref{eq: v - state feedback}). In particular, setting $r(t) = r_e \in\mathbb{R}$, it can be shown that there exists a unique solution to:
\begin{subequations}\label{eq: neumann boundary measurement - equilibirum condition}
\begin{align}
0 & = (-\lambda_n+q_c) w_{n,e} + a_n u_e + b_n v_e , \quad n \geq 1 , \label{eq: neumann boundary measurement - equilibirum condition - 1} \\
0 & = v_e = K \hat{W}^{N_0}_{a,e} , \label{eq: neumann boundary measurement - equilibirum condition - 2} \\
0 & = \sum_{n=1}^{N_0} \phi_n'(0) \hat{w}_{n,e} + \alpha_0 u_e + \beta_0 v_e - r_e , \label{eq: neumann boundary measurement - equilibirum condition - 3} \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e \label{eq: neumann boundary measurement - equilibirum condition - 4} \\
& \phantom{=}\; - l_n \left\{ \sum_{i=1}^{N} \phi_i'(0) \hat{w}_{i,e} - \alpha_1 u_e - \tilde{y}_e \right\} , \quad 1 \leq n \leq N_0 , \nonumber \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e , \quad N_0 + 1 \leq n \leq N , \label{eq: neumann boundary measurement - equilibirum condition - 5} \\
\tilde{y}_e & = \sum_{n \geq 1} \phi_n'(0) w_{n,e} . \label{eq: neumann boundary measurement - equilibirum condition - 6}
\end{align}
\end{subequations}
Moreover we can define $w_e \triangleq \sum_{n \geq 1} w_{n,e} \phi_n \in D(\mathcal{A})$. Introducing the change of variable $z_e = w_e + x u_e$, $z_e$ is a static solution of (\ref{eq: neumann boundary measurement - RD system - 1}-\ref{eq: neumann boundary measurement - RD system - 2}) associated with the constant control input $u(t) = u_e$. Denoting by $y_e \triangleq z_e'(0)$, we also infer that $y_e = r_e$, achieving the desired reference tracking. This allows the introduction of the dynamics of deviation of the different quantities w.r.t the equilibrium condition characterized by $r_e \in\mathbb{R}$. We have:
\begin{subequations}\label{eq: neumann boundary measurement - dynamics of deviations}
\begin{align}
& \Delta w(t,x) = \Delta z(t,x) - x \Delta u(t) , \label{eq: neumann boundary measurement - dynamics of deviations - 1} \\
& \Delta \dot{X}(t) = F \Delta X(t) + \mathcal{L} \Delta \zeta(t) - \mathcal{L}_r \Delta r(t) , \label{eq: neumann boundary measurement - dynamics of deviations - 2} \\
& \Delta\zeta(t) = \sum_{n \geq N+1} \phi_n'(0) \Delta w_n(t) , \label{eq: neumann boundary measurement - dynamics of deviations - 3} \\
& \Delta \dot{w}_n(t) = (-\lambda_n + q_c) \Delta w_n(t) + a_n \Delta u(t) + b_n \Delta v(t) , \label{eq: neumann boundary measurement - dynamics of deviations - 4} \\
& \Delta v(t) = K \Delta \hat{W}_a^{N_0}(t) , \label{eq: neumann boundary measurement - dynamics of deviations - 5} \\
& \Delta\tilde{y}(t) = \Delta y(t) - \Delta u(t) = \sum_{n \geq 1} \phi_n'(0) \Delta w_n(t) . \label{eq: neumann boundary measurement - dynamics of deviations - 6}
\end{align}
\end{subequations}
\subsection{Stability analysis and regulation assessment}
We define, for any $\epsilon \in (0,1/2]$, the constant $M_{2,\phi}(\epsilon) = \sum_{n \geq 2} \frac{\phi_n'(0)^2}{\lambda_n^{3/2+\epsilon}}$, which is finite when $p \in \mathcal{C}^2([0,1])$ because we recall that $\phi_n'(0) = O(\sqrt{\lambda_n})$ as $n \rightarrow + \infty$ and (\ref{eq: estimation lambda_n}) hold.
\begin{theorem}\label{thm: Case of a Neumann boundary measurement - stab}
Let $p \in \mathcal{C}^2([0,1])$ with $p > 0$, $q \in \mathcal{C}^0([0,1])$ with $q \geq 0$, and $q_c \in \mathbb{R}$. Consider the reaction-diffusion system described by (\ref{eq: neumann boundary measurement - RD system}). Let $N_0 \geq 1$ and $\delta > 0$ be given such that $- \lambda_n + q_c < -\delta < 0$ for all $n \geq N_0 +1$. Let $K \in\mathbb{R}^{1 \times (N_0 +2)}$ and $L \in\mathbb{R}^{N_0}$ be such that $A_1 + B_1 K$ and $A_0 - L C_0$ are Hurwitz with eigenvalues that have a real part strictly less than $-\delta < 0$. Assume that there exist $N \geq N_0 + 1$, $P \succ 0$, $\epsilon \in (0,1/2]$, and $\alpha,\beta,\gamma > 0$ such that
\begin{equation*}
\Theta \prec 0
\end{equation*}
where $\Theta$ is defined by (\ref{eq: Theta involving gamma}), and
\begin{align*}
\Gamma_n & = -\lambda_n + q_c + \delta + \frac{\lambda_n}{\alpha} + \frac{\beta M_{2,\phi}(\epsilon)}{2\gamma}\lambda_n^{1/2+\epsilon} \leq 0
\end{align*}
for all $n \geq N+1$. Then, for any $\eta \in [0,1)$, there exists $M > 0$ such that, for any $z_0 \in H^2(0,1)$ and any $u(0) \in \mathbb{R}$ such that $z_0(0)=0$ and $z_0(1) = u(0)$, the classical solution of the closed-loop system composed of the plant (\ref{eq: neumann boundary measurement - RD system}), the integral actions (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}) and (\ref{eq: neumann boundary measurement - integral component}), the observer dynamics (\ref{eq: neumann boundary measurement - observer dynamics - 1}) with null initial condition, and the state feedback (\ref{eq: v - state feedback}) satisfies (\ref{eq: dirichlet boundary measurement - stab result}) for all $t \geq 0$. Moreover, the above constraints are always feasible for $N$ large enough.
\end{theorem}
\begin{proof}
Let $P \succ 0$ and $\gamma > 0$ and consider the Lyapunov function candidate defined by (\ref{eq: Lyap function for H1 stab}). Then, proceeding as in~\cite{lhachemi2020finite} but taking into account the extra contribution of the reference signal appearing in (\ref{eq: dynamics closed-loop system - finite dimensional part}), we obtain that (\ref{eq: Lyap time derivative}) holds for all $t \geq 0$. Now the proof of the stability estimate (\ref{eq: dirichlet boundary measurement - stab result}) is analogous to the one reported in the proof of Theorem~\ref{thm: Case of a Dirichlet boundary measurement - stab}.
It remains to show that we can always select $N \geq N_0 + 1$, $P \succ 0$, $\epsilon \in (0,1/2]$, and $\alpha,\beta,\gamma > 0$ such that $\Theta \prec 0$ and $\Gamma_n \leq 0$ for all $n \geq N+1$. To handle the constraint $\Theta \prec 0$, we proceed as in the last part of the proof of Theorem~\ref{thm: Case of a Dirichlet boundary measurement - stab}. This is allowed because $\Vert C_1 \Vert = O(1)$ as $N \rightarrow + \infty$. Noting now that, because $\epsilon \in (0,1/2]$, $\lambda_n^{1/2+\epsilon} = \lambda_n/\lambda_n^{1/2-\epsilon} \leq \lambda_n/\lambda_{N+1}^{1/2-\epsilon}$ for all $n \geq N+1$, we infer that $\Gamma_n \leq - \left( 1 -\frac{1}{\alpha} - \frac{\beta M_{2,\phi}(\epsilon)}{2\gamma\lambda_{N+1}^{1/2-\epsilon}} \right) \lambda_n + q_c + \delta$ for all $n \geq N+1$. Setting $\epsilon = 1/8$, $\alpha = \beta = N^{1/8}$, and $\gamma = N^{-3/16}$, we deduce the existence of an integer $N \geq N_0 + 1$ large enough such that $\Theta \prec 0$ and $\Gamma_n \leq 0$ for all $n \geq N+1$.
\end{proof}
We are now in a position to assess the setpoint regulation control of the left Neumann trace.
\begin{theorem}\label{thm: Case of a Neumann boundary measurement - reg}
Under both assumptions and conclusions of Theorem~\ref{thm: Case of a Neumann boundary measurement - stab}, for any $\eta \in [0,1)$, there exists $M_r > 0$ such that
\begin{align}
& \vert y(t) - r(t) \vert \nonumber \\
& \leq M_r e^{-\delta t} ( \vert \Delta u(0) \vert + \vert \Delta \xi(0) \vert + \Vert \Delta z_0 \Vert_{H^1} + \Vert \mathcal{A} \Delta w_0 \Vert_{L^2} ) \nonumber \\
& \phantom{\leq}\; + M_r \sup_{\tau\in[0,t]} e^{-\eta\delta(t-\tau)} \vert \Delta r(\tau) \vert \label{eq: neumann boundary measurement - reg result}
\end{align}
for all $t \geq 0$ where $\Delta w_0 = \Delta z_0 - x \Delta u(0)$.
\end{theorem}
\begin{proof}
Recalling that $y_e = r_e$, one has $\vert y(t) - r(t) \vert \leq \vert \Delta y(t) \vert + \vert \Delta r(t) \vert$. We infer from (\ref{eq: neumann boundary measurement - dynamics of deviations - 6}) and Cauchy-Schwarz inequality that $\vert \Delta y(t) \vert \leq \sqrt{\sum_{n \geq 1}\frac{\phi_n'(0)^2}{\lambda_n^2}}\sqrt{\sum_{n \geq 1} \lambda_n^2 \Delta w_n(t)^2} + \vert \Delta u(t) \vert$. In view of the stability estimate (\ref{eq: dirichlet boundary measurement - stab result}) provided by Theorem~\ref{thm: Case of a Neumann boundary measurement - stab}, we only need to study the term $\sum_{n \geq 1} \lambda_n^2 \Delta w_n(t)^2$. This can be done as in \cite[Proof of Theorem~2]{lhachemi2020pi}, yielding the claimed estimate (\ref{eq: neumann boundary measurement - reg result}).
\end{proof}
\section{Dirichlet measurement and Neumann regulation control}\label{sec: crossed configuration}
We now consider the reaction-diffusion system described by (\ref{eq: dirichlet boundary measurement - RD system - 1}-\ref{eq: dirichlet boundary measurement - RD system - 3}), still in the case $p \in \mathcal{C}^2([0,1])$, but this time with a boundary measurement $y_m(t)$ and an (unmeasured) to-be-regulated output $y_r(t)$ that are distinct, described by:
\begin{equation}\label{eq: crossed - RD system - measurements}
y_m(t) = z(t,0) , \quad y_r(t) = z_x(t,1) .
\end{equation}
\subsection{Control design}
Using the change of variable (\ref{eq: change of variable}), we obtain that (\ref{eq: homogeneous RD system - 1}-\ref{eq: homogeneous RD system - 3}) still hold while (\ref{eq: homogeneous RD system - 4}) is replaced by
\begin{subequations}\label{eq: crossed - measurement}
\begin{align}
& \tilde{y}_m(t) = w(t,0) = z(t,0) = y_m(t) , \\
& \tilde{y}_r(t) = w_x(t,1) = z_x(t,1) -2u(t) = y_r(t) -2u(t) .
\end{align}
\end{subequations}
Then, considering classical solutions, (\ref{eq: homogeneous RD system - spectral reduction - 1}-\ref{eq: homogeneous RD system - spectral reduction - 2}) is still valid while (\ref{eq: homogeneous RD system - spectral reduction - 3}) is replaced by
\begin{equation}\label{eq: crossed - measurement bis}
\tilde{y}_m(t) = \sum_{i \geq 1} \phi_i(0) w_i(t) , \quad \tilde{y}_r(t) = \sum_{i \geq 1} \phi_i'(1) w_i(t) .
\end{equation}
Based on similar motivations as the ones reported in Section~\ref{sec: Dirichlet measurement and regulation control}, and noting that $\tilde{y}_r(t) = y_r(t) - 2u(t)$ (which explains the constant $2$ in $\alpha_0$ below), we consider the integral component
\begin{align}\label{eq: crossed - integral component}
\dot{\xi}(t)
& = \sum_{n = 1}^{N_0} \phi_n'(1) \hat{w}_n(t) + \alpha_0 u(t) + \beta_0 v(t) - r(t)
\end{align}
with
\begin{subequations}\label{eq: crossed - def alpha_0 and beta_0}
\begin{align}
\alpha_0 & = 2 - \sum_{n \geq N_0 + 1} \frac{a_n \phi_n'(1)}{-\lambda_n + q_c} , \\
\beta_0 & = - \sum_{n \geq N_0 + 1} \frac{b_n \phi_n'(1)}{-\lambda_n + q_c}
\end{align}
\end{subequations}
and where the observation dynamics, for $1 \leq n \leq N$, take the form:
\begin{align}
\dot{\hat{w}}_n (t)
& = ( -\lambda_n + q_c ) \hat{w}_n(t) + a_n u(t) + b_n v(t) \nonumber \\
& \phantom{=}\; - l_n \left( \sum_{i=1}^N \phi_i(0) \hat{w}_i(t) - \alpha_1 u(t) - \tilde{y}_m(t) \right) \label{eq: crossed - observer dynamics - 1}
\end{align}
with
\begin{equation}\label{eq: crossed - def alpha_1}
\alpha_1 = \sum_{n \geq N +1} \dfrac{a_n \phi_n(0)}{-\lambda_n + q_c}
\end{equation}
and where $l_n \in\mathbb{R}$ are the observer gains. We impose $l_n = 0$ for $N_0+1 \leq n \leq N$ and the initial condition of the observer as $\hat{w}_n(0) = 0$ for all $1 \leq n \leq N$. Adopting now the same definitions as the ones used in Section~\ref{sec: Dirichlet measurement and regulation control} except that the matrix $A_1$, originally defined by (\ref{eq: def matrices A1 B1 and Br}), is now replaced by
\begin{equation*}
A_1 = \begin{bmatrix} 0 & 0 & 0 \\ B_{0,a} & A_0 & 0 \\ \alpha_0 & C_0^* & 0 \end{bmatrix}
\end{equation*}
where $C_0^* = \begin{bmatrix} \phi_1'(1) & \ldots & \phi_{N_0}'(1) \end{bmatrix}$, we infer that (\ref{eq: dynamics closed-loop system - finite dimensional part}) holds.
\begin{lemma}\label{lem: crossed - controllability lemma}
The pair $(A_0,C_0)$ is observable. Moreover, if the unique solution of $(pf')' + (q_c - q) f =0$ with $f(1)=1$ and $f'(1)=0$ is such that $f'(0) \neq 0$, then the pair $(A_1,B_1)$ is controllable.
\end{lemma}
\begin{proof}
The observability of $(A_0,C_0)$ was assessed in Lemma~\ref{eq: Dirichlet measurement - Kalman condion}. From~\cite[Lem.~2]{lhachemi2020pi}, and because the pair (\ref{eq: kalman condition previous work}) is controllable, $(A_1,B_1)$ is controllable if and only if the matrix
\begin{equation*}
T = \begin{bmatrix} 0 & 0 & 1 \\ B_{0,a} & A_0 & B_{0,b} \\ \alpha_0 & C_0^* & \beta_0 \end{bmatrix}
\end{equation*}
is invertible. Let $\begin{bmatrix} u_e & w_{1,e} & \ldots & w_{N_0,e} & v_e \end{bmatrix}^\top \in\mathrm{ker}(T)$. We obtain that $v_e = 0$, $a_n u_e + (-\lambda_n +q_c) w_{n,e} = 0$ for all $1 \leq n \leq N_0$, and $\alpha_0 u_e + \sum_{n=1}^{N_0} \phi_n'(1) w_{n,e} = 0$. Defining for $n \geq N_0 +1$ the quantity $w_{n,e} = -\frac{a_n}{-\lambda_n+q_c}u_e$, we have $(-\lambda_n+q_c)w_{n,e} + a_n u_e = 0$ for all $n \geq 1$. Hence $(w_{n,e})_{n \geq 1} , (\lambda_n w_{n,e})_{n \geq 1} \in l^2(\mathbb{N})$, ensuring that $w_e \triangleq \sum_{n \geq 1} w_{n,e} \phi_n \in D(\mathcal{A})$ with $\mathcal{A} w_e = \sum_{n \geq 1} \lambda_n w_{n,e} \phi_n \in L^2(0,1)$. This shows that $- \mathcal{A} w_e + q_c w_e + a u_e = 0$. Moreover, using (\ref{eq: crossed - def alpha_0 and beta_0}), we also have $0 = \alpha_0 u_e + \sum_{n=1}^{N_0} \phi_n'(1) w_{n,e} = 2 u_e + w_e'(1)$. From the two latter identities, we infer that $(p w_e')' + (q_c - q) w_e + a u_e = 0$, $w_e'(0) = w_e(1) = 0$, and $w_e'(1) + 2 u_e = 0$. Introducing the change of variable $z_e(x) = w_e(x) + x^2 u_e$, we deduce that $(p z_e')' + (q_c - q) z_e = 0$, $z_e'(0) = z_e'(1) = 0$, and $z_e(1) = u_e$. From our assumption, we infer that $z_e = 0$, hence $u_e = z_e(1) = 0$. Thus we have $w_e = z_e - x^2 u_e = 0$, hence $w_{n,e} = 0$ for all $n \geq 1$. We deduce that $\mathrm{ker}(T)=\{0\}$. Overall, we have shown that $(A_1,B_1)$ is controllable.
\end{proof}
We now select $K \in\mathbb{R}^{1 \times (N_0 +2)}$ and $L \in\mathbb{R}^{N_0}$ such that $A_1 + B_1 K$ and $A_0 - L C_0$ are Hurwitz.
\subsection{Equilibrium condition and dynamics of deviations}
Proceeding as in Section~\ref{sec: Dirichlet measurement and regulation control}, we can characterize the equilibrium condition of the closed-loop system composed of the reaction-diffusion system (\ref{eq: dirichlet boundary measurement - RD system - 1}-\ref{eq: dirichlet boundary measurement - RD system - 3}) with (\ref{eq: crossed - RD system - measurements}), the auxiliary command input dynamics (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}), the integral action (\ref{eq: crossed - integral component}), the observer dynamics (\ref{eq: crossed - observer dynamics - 1}), and the state-feedback (\ref{eq: v - state feedback}). In particular, setting $r(t) = r_e \in\mathbb{R}$, it can be shown that there exists a unique solution to:
\begin{subequations}\label{eq: crossed - equilibirum condition}
\begin{align}
0 & = (-\lambda_n+q_c) w_{n,e} + a_n u_e + b_n v_e , \quad n \geq 1 , \label{eq: crossed - equilibirum condition - 1} \\
0 & = v_e = K \hat{W}^{N_0}_{a,e} , \label{eq: crossed - equilibirum condition - 2} \\
0 & = \sum_{n=1}^{N_0} \phi_n'(1) \hat{w}_{n,e} + \alpha_0 u_e + \beta_0 v_e - r_e , \label{eq: crossed - equilibirum condition - 3} \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e \label{eq: crossed - equilibirum condition - 4} \\
& \phantom{=}\; - l_n \left\{ \sum_{i=1}^{N} \phi_i(0) \hat{w}_{i,e} - \alpha_1 u_e - \tilde{y}_{m,e} \right\} , \; 1 \leq n \leq N_0 , \nonumber \\
0 & = (-\lambda_n+q_c) \hat{w}_{n,e} + a_n u_e + b_n v_e , \quad N_0 + 1 \leq n \leq N , \label{eq: crossed - equilibirum condition - 5} \\
\tilde{y}_{m,e} & = \sum_{n \geq 1} \phi_n(0) w_{n,e} , \label{eq: crossed - equilibirum condition - 6} \\
\tilde{y}_{r,e} & = \sum_{n \geq 1} \phi_n'(1) w_{n,e} . \label{eq: crossed - equilibirum condition - 7}
\end{align}
\end{subequations}
Moreover, we can define $w_e \triangleq \sum_{n \geq 1} w_{n,e} \phi_n \in D(\mathcal{A})$. Introducing the change of variable $z_e = w_e + x^2 u_e$, $z_e$ is a static solution of (\ref{eq: neumann boundary measurement - RD system - 1}-\ref{eq: neumann boundary measurement - RD system - 2}) associated with the constant control input $u(t) = u_e$. Denoting $y_{r,e} \triangleq z_e'(1)$, we also infer that $y_{r,e} = r_e$ (see the computation recorded below), achieving the desired reference tracking. Consequently, we obtain the following dynamics of deviations:
\begin{subequations}\label{eq: crossed - dynamics of deviations}
\begin{align}
\Delta w(t,x) &= \Delta z(t,x) - x^2 \Delta u(t) , \\
\Delta \dot{X}(t) & = F \Delta X(t) + \mathcal{L} \Delta \zeta(t) - \mathcal{L}_r \Delta r(t) , \\
\Delta\zeta(t) & = \sum_{n \geq N+1} \phi_n(0) \Delta w_n(t) , \\
\Delta \dot{w}_n(t) & = (-\lambda_n + q_c) \Delta w_n(t) + a_n \Delta u(t) + b_n \Delta v(t) , \\
\Delta v(t) & = K \Delta \hat{W}_a^{N_0}(t) , \\
\Delta\tilde{y}_m(t) & = \Delta y_m(t) = \sum_{n \geq 1} \phi_n(0) \Delta w_n(t) , \\
\Delta\tilde{y}_r(t) & = \Delta y_r(t) - 2 \Delta u(t) = \sum_{n \geq 1} \phi_n'(1) \Delta w_n(t) .
\end{align}
\end{subequations}
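For the reader's convenience, let us record the computation behind the identity $y_{r,e} = r_e$ mentioned above. From (\ref{eq: crossed - equilibirum condition - 2}) one has $v_e = 0$, so that (\ref{eq: crossed - equilibirum condition - 1}) yields $w_{n,e} = -\frac{a_n}{-\lambda_n + q_c} u_e$ for all $n \geq 1$; moreover, combining (\ref{eq: crossed - equilibirum condition - 1}) with (\ref{eq: crossed - equilibirum condition - 4}-\ref{eq: crossed - equilibirum condition - 5}), the definition (\ref{eq: crossed - def alpha_1}) of $\alpha_1$, and the fact that $A_0 - LC_0$ is Hurwitz (hence invertible), one checks that $\hat{w}_{n,e} = w_{n,e}$ for $1 \leq n \leq N$. Invoking the definition (\ref{eq: crossed - def alpha_0 and beta_0}) of $\alpha_0$ and then (\ref{eq: crossed - equilibirum condition - 3}), it follows that
\begin{align*}
y_{r,e} = z_e'(1) = w_e'(1) + 2 u_e
& = \sum_{n=1}^{N_0} \phi_n'(1) w_{n,e} + \sum_{n \geq N_0+1} \phi_n'(1) w_{n,e} + 2 u_e \\
& = \sum_{n=1}^{N_0} \phi_n'(1) \hat{w}_{n,e} + (\alpha_0 - 2) u_e + 2 u_e = r_e .
\end{align*}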
\subsection{Stability analysis and regulation assessment}
The proof of the following theorem follows directly from the proofs reported in the previous sections.
\begin{theorem}\label{thm: crossed}
Under the assumption of Lemma~\ref{lem: crossed - controllability lemma}, the stability result stated by Theorem~\ref{thm: Case of a Dirichlet boundary measurement - stab} also applies to the closed-loop system composed of the plant (\ref{eq: dirichlet boundary measurement - RD system - 1}-\ref{eq: dirichlet boundary measurement - RD system - 3}) with (\ref{eq: crossed - RD system - measurements}), the integral actions (\ref{eq: homogeneous RD system - abstract form - auxiliary input v}) and (\ref{eq: crossed - integral component}), the observer dynamics (\ref{eq: crossed - observer dynamics - 1}) with null initial condition, and the state feedback (\ref{eq: v - state feedback}). Moreover, for any $\eta \in [0,1)$, there exists $M_r > 0$ such that
\begin{align}
& \vert y_r(t) - r(t) \vert \nonumber \\
& \leq M_r e^{-\delta t} ( \vert \Delta u(0) \vert + \vert \Delta \xi(0) \vert + \Vert \Delta z_0 \Vert_{H^1} + \Vert \mathcal{A} \Delta w_0 \Vert_{L^2} ) \nonumber \\
& \phantom{\leq}\; + M_r \sup_{\tau\in[0,t]} e^{-\eta\delta(t-\tau)} \vert \Delta r(\tau) \vert \label{eq: crossed - reg result}
\end{align}
for all $t \geq 0$ where $\Delta w_0 = \Delta z_0 - x^2 \Delta u(0)$.
\end{theorem}
\section{Numerical illustration}\label{sec: num}
We illustrate the result of Section~\ref{sec: crossed configuration} corresponding to Dirichlet measurement and Neumann regulation using a modal approximation that captures the 50 dominant modes of the reaction-diffusion plant. We set $p=1$, $q=0$, and $q_c=3$, for which the open-loop plant is unstable. Selecting $\delta = 0.5$, we obtain $N_0 = 1$, the feedback gain $K = \begin{bmatrix} -10.4134 & -11.3747 & 2.3100 \end{bmatrix}$, and the observer gain $L = 1.4373$. The conditions of Theorem~\ref{thm: crossed} are found feasible for $N = 3$. The time-domain evolution of the closed-loop system trajectories is depicted in Fig.~\ref{fig: sim}, confirming the theoretical predictions.
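For the interested reader, we indicate how such a closed-loop modal simulation can be assembled. The Python sketch below is only indicative and is not the code used to generate Fig.~\ref{fig: sim}: the eigenpairs $\lambda_n = ((n-1/2)\pi)^2$ and $\phi_n(x) = \sqrt{2}\cos((n-1/2)\pi x)$, the input shape functions $a(x) = 2 + q_c x^2$ and $b(x) = -x^2$ derived from the change of variable $w = z - x^2 u$ with $p=1$, $q=0$ and the homogeneous boundary conditions $w_x(t,0) = w(t,1) = 0$, the integrator form $\dot{u}(t) = v(t)$ of the auxiliary command input dynamics, the ordering $(u,\hat{w}_1,\xi)$ of the state fed back through $K$ (read off from the structure of the matrix $A_1$), the initial state, and the unit step reference $r \equiv 1$ are all our reconstructions of the above setting.
\begin{verbatim}
# Indicative sketch (ours, not the authors' code): closed-loop modal
# simulation with Dirichlet measurement y_m = z(t,0) and Neumann
# regulated output y_r = z_x(t,1).
import numpy as np
from scipy.integrate import solve_ivp

qc, Np, N, N0 = 3.0, 50, 3, 1                # q_c, plant/observer orders
K = np.array([-10.4134, -11.3747, 2.3100])   # assumed ordering (u, w1_hat, xi)
l1 = 1.4373                                  # observer gain; l_n = 0 for n = 2, 3

n = np.arange(1, 401)                        # long tail for alpha_0, beta_0, alpha_1
k = (n - 0.5) * np.pi                        # phi_n(x) = sqrt(2) cos(k_n x)
lam = k ** 2                                 # lambda_n = k_n^2
sgn = (-1.0) ** (n + 1)                      # sin(k_n); note cos(k_n) = 0
a_n = np.sqrt(2.0) * sgn * ((2.0 + qc) / k - 2.0 * qc / k ** 3)  # <2+qc*x^2, phi_n>
b_n = np.sqrt(2.0) * sgn * (-1.0 / k + 2.0 / k ** 3)             # <-x^2, phi_n>
phi0 = np.sqrt(2.0) * np.ones_like(k)        # phi_n(0)
dphi1 = -np.sqrt(2.0) * k * sgn              # phi_n'(1)

alpha0 = 2.0 - np.sum(a_n[N0:] * dphi1[N0:] / (qc - lam[N0:]))
beta0 = -np.sum(b_n[N0:] * dphi1[N0:] / (qc - lam[N0:]))
alpha1 = np.sum(a_n[N:] * phi0[N:] / (qc - lam[N:]))

def rhs(t, s, r=1.0):
    w, wh, u, xi = s[:Np], s[Np:Np + N], s[-2], s[-1]
    v = K @ np.array([u, wh[0], xi])         # state feedback v = K W_a^{N0}
    innov = phi0[:N] @ wh - alpha1 * u - phi0[:Np] @ w  # y_m = sum phi_n(0) w_n
    dw = (qc - lam[:Np]) * w + a_n[:Np] * u + b_n[:Np] * v
    dwh = (qc - lam[:N]) * wh + a_n[:N] * u + b_n[:N] * v
    dwh[0] -= l1 * innov                     # innovation acts on mode 1 only
    dxi = dphi1[0] * wh[0] + alpha0 * u + beta0 * v - r  # integral component
    return np.concatenate([dw, dwh, [v, dxi]])           # du/dt = v

s0 = np.zeros(Np + N + 2); s0[0] = 1.0       # some nonzero plant initial state
sol = solve_ivp(rhs, (0.0, 10.0), s0, method="BDF", rtol=1e-8, atol=1e-10)
y_r = dphi1[:Np] @ sol.y[:Np] + 2.0 * sol.y[-2]          # y_r = w_x(t,1) + 2u
print("y_r(10) =", y_r[-1])                  # expected to settle near r = 1
\end{verbatim}
With these choices, one can check numerically that the resulting finite-dimensional closed-loop dynamics are stable and that $y_r(t)$ settles at the reference, in line with Theorem~\ref{thm: crossed}.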
\begin{figure}
\centering
\subfigure[State $z(t,x)$]{
\includegraphics[width=3.5in]{sim_state-eps-converted-to.pdf}
}
\subfigure[Observation error $e(t,x) = w(t,x) - \sum_{n=1}^{N} \hat{w}_n(t) \phi_n(x)$]{
\includegraphics[width=3.5in]{sim_err_obs-eps-converted-to.pdf}
}
\subfigure[Command input $u(t) = z(t,1)$]{
\includegraphics[width=3.5in]{sim_u-eps-converted-to.pdf}
}
\subfigure[Regulated output $y_r(t) = z_x(t,1)$]{
\includegraphics[width=3.5in]{sim_reg-eps-converted-to.pdf}
}
\caption{Time evolution in closed-loop with Dirichlet boundary measurement $y_m(t) = z(t,0)$ and Neumann boundary regulation $y_r(t) = z_x(t,1)$ for the reaction-diffusion system (\ref{eq: dirichlet boundary measurement - RD system - 1}-\ref{eq: dirichlet boundary measurement - RD system - 3})}
\label{fig: sim}
\end{figure}
\section{Conclusion}\label{sec: conclusion}
We proposed the design of a finite-dimensional observer-based PI controller in order to achieve the output stabilization of a reaction-diffusion equation, as well as the regulation control of various system outputs such as Dirichlet and Neumann traces. Even though presented in the case of a Dirichlet boundary control input, the results easily extend to Neumann boundary control (the only significant modification consists in adapting the change of variable formula so as to obtain a homogeneous PDE, yielding different $a,b \in L^2(0,1)$). Moreover, the case of in-domain measurements and/or regulated outputs can also be handled with the same approach, provided adequate observability conditions are satisfied.
Suppose that $X$ is a separable topological vector space and $T$
is a continuous linear mapping on $X$. If $x \in X$, then the
orbit of $x$ under $T$ is defined as $Orb(T,x) = \{x, Tx, T^{2}x,
\ldots\}$. An operator $T$ is called hypercyclic if there is a
vector $x$ such that $Orb(T,x)$ is dense in $X$ and in this case
$x$ is called a hypercyclic vector for $T$ (see \cite{14} for an
exhaustive survey on hypercyclicity).
It is interesting that many continuous linear mappings can
actually be hypercyclic. The first example of hypercyclicity
appeared in the space of entire functions, by Birkhoff \cite{3} in 1929. He
showed the hypercyclicity of the translation operator, while
MacLane \cite{19} proved the hypercyclicity of the differentiation
operator in 1952. Hypercyclicity on Banach spaces was discussed in
1969 by Rolewicz \cite{20}, who showed that $\lambda B$ is
hypercyclic whenever $B$ is the unilateral backward shift (on
$\ell^{p}$ and $c_{0}$) and $|\lambda| > 1$.
A nice condition for hypercyclicity is the Hypercyclicity
Criterion (Theorem~1.1 below), which was developed by Kitai
\cite{17} and independently by Gethner and Shapiro \cite{12}. This
criterion has been used to show that certain classes of
composition operators \cite{6}, weighted shifts \cite{21},
adjoints of multiplication operators \cite{7}, and adjoints of
subnormal and hyponormal operators \cite{5}, are hypercyclic.
Hypercyclicity has also been established in various other settings
by means of this criterion \cite{1,4,6,8,12,13,16}. Salas
\cite{21} showed that every perturbation of the identity by a
unilateral weighted backward shift with nonzero bounded weights is
hypercyclic, and he also gave a characterization of the
hypercyclic weighted shifts in terms of their weights.
But then Montes and Leon showed that these hypercyclic operators
do satisfy the criterion as well (\S2 of \cite{17} and
Proposition~4.3 of \cite{18}). Bes and Peris proved that a
continuous linear operator $T$ on a Fr\'{e}chet space satisfies the
Hypercyclicity Criterion if and only if it is hereditarily
hypercyclic. In particular they show that hypercyclic operators
with either a dense generalized kernel or a dense set of periodic
points must satisfy the criterion. Also, they provide a
characterization of those weighted shifts $T$ that are
hereditarily hypercyclic with respect to a given sequence
$\{n_{k}\}_k$ of positive integers, as well as conditions under
which $T$ and $\{T^{n_k}\}_k$ share the same set of hypercyclic
vectors \cite{2}.
\begin{theor}[(The Hypercyclicity Criterion)] Suppose $X$ is a
separable Banach space and $T$ is a continuous linear mapping on
$X$. If there exists two dense subsets $Y$ and $Z$ in $X$ and a
sequence $\{n_{k}\}$ such that{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm \arabic{enumi}.}
\leftskip -.2pc
\item $T^{n_{k}} y \rightarrow 0$ for every $y \in Y${\rm ,}
\item there exist functions $S_{n_{k}}\hbox{\rm :}\ Z \rightarrow X$
such that for every $z \in Z, S_{n_{k}} z \rightarrow 0${\rm ,} and
$T^{n_{k}} S_{n_{k}} z \rightarrow z${\rm ,}\vspace{-.5pc}
\end{enumerate}
then $T$ is hypercyclic.
\end{theor}
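For instance, the criterion applies readily to Rolewicz's operators mentioned above: for $T = \lambda B$ on $\ell^{p}$ with $|\lambda| > 1$, take $Y = Z$ to be the dense subspace of finitely supported sequences, $n_{k} = k$, and $S_{k} := \lambda^{-k} F^{k}$, where $F$ denotes the unilateral forward shift. Then
\begin{equation*}
T^{k}y = \lambda^{k} B^{k} y = 0 \quad \mbox{for all large } k, \qquad \|S_{k}z\| = |\lambda|^{-k} \|z\| \rightarrow 0, \qquad T^{k} S_{k} z = z,
\end{equation*}
so that both hypotheses of Theorem~1.1 are satisfied.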
Note that the sequence $\{n_{k}\}$ in Theorem~1.1 need not be the
entire sequence $\{n_{k}\} = \{k\}$ of positive integers. Salas
\cite{22} and Herrero \cite{15} have shown that there are
hypercyclic operators on Hilbert spaces that do not satisfy the
Hypercyclicity Criterion for the entire sequence $\{k\}$, but so
far no hypercyclic operator has been found that does not satisfy
the Hypercyclicity Criterion in its general form. In this paper
our work was stimulated by the well-known question: Does every
hypercyclic operator satisfy the hypothesis of the Hypercyclicity
Criterion? (see \cite{2}).
We give necessary and sufficient conditions in terms of open
subsets for an operator on a separable Hilbert space to satisfy
the Hypercyclicity Criterion. For this, see Theorem~2.6,
Corollary~2.11 and Proposition~2.12. Also, in the proof of
Theorem~2.6, we pay attention to hypercyclicity on the operator
algebra $B(H)$ and the algebra of Hilbert--Schmidt operators,
$B_{2}(H)$. Recall that if $\{e_{i}\}_{i}$ is an orthonormal basis
for a separable Hilbert space $H,A \in B(H)$ and
\begin{equation*}
\|A\|_{2} = \left[ \sum^{\infty}_{i=1} \|Ae_{i}\|^{2}
\right]^{1/2},
\end{equation*}
then $\|A\|_{2}$ is independent of the basis chosen and hence is
well-defined. If $\|A\|_{2} < \infty$, then $A$ is called a
Hilbert--Schmidt operator and by this norm $B_{2}(H)$ is a Hilbert
space. Indeed, $B_2(H)$ is a special case of the Schatten
$p$-class of $H$ when $p=2$. For more details about these classes
of operators, see \cite{10,23}.
Chan \cite{9} showed that hypercyclicity can occur on the operator
algebra $B(H)$ with the strong operator topology (SOT-topology)
that is not metrizable. For example, when $T$ satisfies the
Hypercyclicity Criterion, then the left multiplication operator
$L_{T}$ is SOT-hypercyclic on $B(H)$, that is, $L_{T}$ is
hypercyclic on $B(H)$ with strong operator topology.
\section{Main results}
From now on we suppose that $H$ is a separable
infinite-dimensional Hilbert space.
\setcounter{theore}{0}
\begin{definit}$\left.\right.$\vspace{.5pc}
\noindent {\rm Let $L\hbox{:}\ B(H) \rightarrow B(H)$ be linear
and bounded. We say that $L$ is SOT-hypercyclic if there exists
some $T \in B(H)$ such that the set $Orb(L,T) = \{T, LT, L^{2}T,
\ldots\}$ is dense in $B(H)$ in the strong operator topology. Also
we say that $L\hbox{:}\ B_{2}(H) \rightarrow B_{2}(H)$ is
$\|\hbox{$\cdot$}\|_{2}$-hypercyclic if there exists some $T \in B_{2}(H)$
such that $Orb(L,T)$ is dense in $B_{2}(H)$ with
$\|\hbox{$\cdot$}\|_{2}$-topology.}
\end{definit}
\begin{definit}$\left.\right.$\vspace{.5pc}
\noindent {\rm For any operator $T \in B(H)$, define the left
multiplication operator $L_{T}\hbox{:}\ B(H) \rightarrow B(H)$ by
$L_{T}(S) = TS$ for every $S \in B(H)$.}
\end{definit}
Note that $B_{2}(H)$ is an ideal of $B(H)$ and hence
$L_{T}\hbox{:}\ B_{2}(H) \rightarrow B_{2}(H)$ is also
well-defined. We show that $B(H)$ and $B_{2}(H)$, respectively
with the strong operator topology and $\|\hbox{$\cdot$}\|_{2}$-topology,
are separable. For this, see the following Lemma~2.3.
Suppose $\{e_{i}\hbox{:}\ i \geq 1\}$ is an orthonormal basis for
a separable Hilbert space $H$ and $S(H)$ denotes the set of all
finite rank operators $T$ such that there exists $N_{T} \in
\mathbb{N}$, satisfying $Te_{i} = 0$ for $i \geq N_T$.
\begin{lem} Suppose $E = \{e_{i}\hbox{\rm :}\ i \geq 1\}$ is a basis for a
separable Hilbert space $H${\rm ,} then $S(H)$ is SOT-dense in $B(H)$
and also $\|\hbox{$\cdot$}\|_{2}$-dense in $B_{2}(H)${\rm ;} moreover{\rm
,} $S(H)$ is\break separable.
\end{lem}
\begin{proof} Suppose that $A \in B_{2}(H)$ and $\varepsilon > 0$.
Then there exist $N \in \mathbb{N}$ such that
$\sum^{\infty}_{i=N+1} \|Ae_{i}\|^{2} < \varepsilon^{2}$. Now
define the finite rank operator $F$ by $F=A$ on $[e_{k}\hbox{:}\ 1
\leq k \leq N]$ and $F=0$ on $[e_{k}\hbox{:}\ 1 \leq k \leq
N]^{\bot}$. ($[e_{k}\hbox{:}\ 1 \leq k \leq N]$ means the linear
span of $\{e_{k}\hbox{:}\ 1 \leq k \leq N\}$). Thus
$\|A-F\|^{2}_{2} = \sum^{\infty}_{i=N+1} \|Ae_{i}\|^{2}<
$\varepsilon^{2}$ and so $S(H)$ is $\|\hbox{$\cdot$}\|_{2}$-dense. Also,
the argument on p.~234 of \cite{9} implies that every $\|\hbox{$\cdot$}\|_{2}$-dense subset
of $B_{2}(H)$ is SOT-dense in $B(H)$, and so it follows that
$S(H)$ is SOT-dense. Finally, the operators in $S(H)$ whose matrix entries
with respect to $E$ lie in $\mathbb{Q} + i\mathbb{Q}$ form a countable
$\|\hbox{$\cdot$}\|_{2}$-dense subset of $S(H)$, and hence $S(H)$ is
separable. Now the proof is complete.\hfill $\Box$
\end{proof}
The following result is the main tool that we used to show that an
operator is hypercyclic. Versions of this result have appeared in
the work of Godefroy and Shapiro (\cite{13}, Theorem~1.2) and
Kitai (\cite{17}, Theorem~2.1).
\begin{propo}$\left.\right.$\vspace{.5pc}
\noindent If $T$ is a continuous operator on a separable Banach
space $X${\rm ,} then $T$ is hypercyclic if and only if for any
two non-void open sets $U$ and $V$ in $X, T^{n} U \cap V \neq
\phi$ for some positive integer $n$.
\end{propo}
Godefroy and Shapiro (\cite{13}, Corollary~1.3) also gave a
sufficient condition for hypercyclicity that is a direct
consequence of Proposition~2.4.
\begin{coro}$\left.\right.$\vspace{.5pc}
\noindent An operator $T$ on a separable Banach space $X$ is
hypercyclic if for each pair $U,V$ of non-void open subsets of
$X${\rm ,} and each neighborhood $W$ of zero in $X${\rm ,} there
are infinitely many positive integers $n$ such that both $T^{n}U
\cap W$ and $T^{n} W \cap V$ are non-empty.
\end{coro}
\begin{remar}$\left.\right.$
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .35pc
\item In Proposition~2.4, the condition $T^{n}U \cap V \neq \phi$ is
equivalent to the condition $U \cap T^{-n}V \neq \phi$.
\item \looseness -1 If an operator $T$ is hypercyclic, then it automatically has
a dense set of hypercyclic vectors. For, if a vector $x$ is
hypercyclic for $T$, then so is $T^{n}x$ for any positive integer
$n$. Thus the condition `$T^{n} U \cap V \neq \phi$ for some
positive integer $n$', in Proposition~2.4, can be replaced by the
condition `$T^{n}U \cap V \neq \phi$ for infinitely many positive
integers $n$'.
\item Equivalent to the hypothesis of Corollary~2.5 is the
apparently weaker requirement that the sets $T^{n}U \cap W$ and
$T^{n}W \cap V$ be non-empty for a single $n$.
\end{enumerate}
\end{remar}
The following theorem shows that the converse of the above
corollary is equivalent to the Hypercyclicity Criterion. Remember
that for vectors $g,h$ in $H$ the operator $g \otimes h$ denotes a
rank one operator and is defined by $(g \otimes h)(f) = \langle f,
h \rangle g$.
\begin{theor}[\!] For any operator $T \in B(H)${\rm ,} the following
are equivalent{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .15pc
\item $T$ satisfies the hypothesis of the Hypercyclicity Criterion.
\item For each pair $U,V$ of non-void open subsets of $H${\rm ,} and
each neighborhood $W$ of zero{\rm ,} $T^{n}U \cap W \neq \phi$ and
$T^{n}W \cap V \neq \phi$ for some integer $n$.
\end{enumerate}
\end{theor}
\begin{proof} It is easy to see that (i) implies (ii) (for
details see Corollary~1.4 in \cite{7}). For the converse, assume
that $T$ satisfies property (ii). First we show that for each pair
$U',V'$ of non-void $\|\hbox{$\cdot$}\|_{2}$-open subsets of $B_{2}(H)$
there is an integer $n \geq 1$ such that $U' \cap L^{-n}_{T} V'
\neq \phi$. For this, fix an orthonormal basis $E=\{e_{i}\hbox{:}\
i \geq 1\}$ for $H$. By using Lemma~2.3 there exist finite rank
operators $A$ and $B$ such that $A \in S(H) \cap U'$ and $B \in
S(H) \cap V'$, whence for a certain integer $N \geq 1$ we have
$A(e_{i}) = B(e_{i})=0$ for $i>N$. But for some $\varepsilon > 0$
we have
\begin{equation*}
\left\{D \in S(H)\hbox{:}\ \|D-A\|_{2} < \varepsilon\right\}
\subseteq S(H) \cap U',
\end{equation*}
and
\begin{equation*}
\left\{D \in S(H)\hbox{:}\ \|D-B\|_{2} < \varepsilon\right\}
\subseteq S(H) \cap V'.
\end{equation*}
Now consider the following open sets:
\begin{equation*}
U_{i} = \left\{ h \in H\hbox{:}\ \|h-Ae_{i}\| <
\frac{\varepsilon}{2\sqrt{N}} \right\}, V_{i} = \left\{ h \in
H\hbox{:}\ \|h-Be_{i}\| < \frac{\varepsilon}{2\sqrt{N}} \right\}
\end{equation*}
$\left.\right.$\vspace{-1.5pc}
\noindent for $i=1,2,\ldots,N$. Note that Corollary~2.5 or remark (iii)
implies that $T$ is hypercyclic. Now by using Proposition~2.4
repeatedly (indeed by remark (ii)), it follows that there exist
integers $0 = n_{0} < n_{1} \leq n_{2} \leq \cdots \leq n_{N-1}$
and $0=m_{0} < m_{1} \leq m_{2} \leq \cdots \leq m_{N-1}$ such
that
\begin{equation}
U=U_{1} \cap T^{-n_{1}} U_{2} \cap T^{-n_{2}} U_{3} \cap \cdots
\cap T^{-n_{N-1}} U_{N} \neq \phi
\end{equation}
and
\begin{equation}
V=V_{1} \cap T^{-m_{1}} V_{2} \cap T^{-m_{2}} V_{3} \cap \cdots
\cap T^{-m_{N-1}} V_{N} \neq \phi.
\end{equation}
Put $W= \{h\hbox{:}\ \|h\|< \delta\}$ where
\begin{equation}
\delta = \min\left\{\frac{\varepsilon}{2\sqrt{N} \|T\|^{n_{i-1}}},
\frac{\varepsilon}{2\sqrt{N}\|T\|^{m_{i-1}}}\hbox{:}\ i =
1,2,\ldots,N\right\}.
\end{equation}
Since $T$ satisfies hypothesis (ii) of Theorem~2.6, there
exist some $x \in W$ and $y \in U$ such that $T^{n}x \in V$ and
$T^{n} y \in W$ for some integer $n$. The relations (1) and (2)
imply that
\begin{equation}
\|T^{n_{i-1}} y - Ae_{i}\| < \frac{\varepsilon}{2\sqrt{N}}; \quad
\|T^{n}(T^{m_{i-1}} x) - Be_{i}\| < \frac{\varepsilon}{2\sqrt{N}}
\end{equation}
for $i=1,2,\ldots,N$. Now define $S_{1} = \sum^{N}_{i=1}
T^{n_{i-1}} y \otimes e_{i}$ and $S_{2} = \sum^{N}_{i=1}
T^{m_{i-1}} x \otimes e_{i}$. Let $S=S_{1} + S_{2}$. Then $S$ is a
Hilbert--Schmidt operator, because it has finite rank. Note that by
(3), $\|T^{m_{i-1}}x\| \leq \|T\|^{m_{i-1}} \|x\| < \delta
\|T\|^{m_{i-1}} < \frac{\varepsilon}{2\sqrt{N}}$. Now by using (4)
we get the following inequalities:
\begin{align*}
\|S-A\|_{2} &\leq \|S_{1} - A\|_{2} + \|S_{2}\|_{2}\\[.4pc]
&= \left\{ \sum^{N}_{i=1} \|S_{1} e_{i} - Ae_{i}\|^{2}\right\}^{1/2} +
\left\{ \sum^{N}_{i=1} \|S_{2} e_{i}\|^{2}\right\}^{1/2}\\[.4pc]
&= \left\{ \sum^{N}_{i=1} \|T^{n_{i-1}} y -
Ae_{i}\|^{2}\right\}^{1/2} + \left\{ \sum^{N}_{i=1}
\|T^{m_{i-1}} x\|^{2}\right\}^{1/2} < \varepsilon.
\end{align*}
Hence $S \in U'$. Also note that since $T^{n}y \in W$, by (3) we
get $\|T^{n_{i-1}}(T^{n}y)\| \leq \|T\|^{n_{i-1}} \delta <
\frac{\varepsilon}{2\sqrt{N}}$, and thus we have
\begin{align*}
\|L^{n}_{T} S-B\|_{2} &\leq \|L^{n}_{T} S_{2} - B\|_{2} +
\|L^{n}_{T} S_{1}\|_{2}\\[.4pc]
&= \left\{ \sum^{N}_{i=1} \|T^{n} S_{2} e_{i} - Be_{i}
\|^{2}\right\}^{1/2} + \left\{ \sum^{N}_{i=1} \|T^{n} S_{1}
e_{i}\|^{2} \right\}^{1/2}\\[.4pc]
&= \left\{ \sum^{N}_{i=1} \|T^{n}(T^{m_{i-1}} x) -
Be_{i}\|^{2}\right\}^{1/2}\\[.4pc]
&\quad\, + \left\{ \sum^{N}_{i=1} \|T^{n_{i-1}}
(T^{n} y)\|^{2}\right\}^{1/2} < \varepsilon.
\end{align*}
So $L^{n}_{T} S \in V'$. Now it follows that $U' \cap L^{-n}_{T}
V' \neq \phi$ and so by Proposition~2.4, $L_{T}$ is
$\|\hbox{$\cdot$}\|_{2}$-hypercyclic. This also implies that
$\bigoplus^{\infty}_{n=1} T\hbox{:}\ \bigoplus^{\infty}_{n=1} H
\rightarrow \bigoplus^{\infty}_{n=1} H$ is hypercyclic, because
the left multiplication operator $L_{T}\hbox{:}\ B_{2}(H)
\rightarrow B_{2}(H)$ is unitarily equivalent to the operator
$\bigoplus^{\infty}_{n=1} T\hbox{:}\ \bigoplus^{\infty}_{n=1} H
\rightarrow \bigoplus^{\infty}_{n=1} H$ (see \cite{11}, p.~6). Now
Theorem~2.3 in \cite{2} implies that $T$ satisfies the
Hypercyclicity Criterion, and so the proof is now complete.\hfill
$\Box$
\end{proof}
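Let us also record why the left multiplication operator $L_{T}\hbox{:}\ B_{2}(H) \rightarrow B_{2}(H)$ is unitarily equivalent to $\bigoplus^{\infty}_{n=1} T$, as used in the above proof: the map $\Phi\hbox{:}\ B_{2}(H) \rightarrow \bigoplus^{\infty}_{n=1} H$ defined by $\Phi(S) = (Se_{n})_{n \geq 1}$ is unitary, since $\|\Phi(S)\|^{2} = \sum^{\infty}_{n=1} \|Se_{n}\|^{2} = \|S\|^{2}_{2}$, and it satisfies
\begin{equation*}
\Phi(L_{T}S) = (TSe_{n})_{n \geq 1} = \left( \bigoplus^{\infty}_{n=1} T \right) \Phi(S)
\end{equation*}
for every $S \in B_{2}(H)$.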
\begin{propo}$\left.\right.$\vspace{.5pc}
\noindent If $T \in B(H)${\rm ,} then the following are
equivalent{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .15pc
\item $T$ satisfies the hypothesis of the Hypercyclicity Criterion.
\item $T$ is hypercyclic and for each non-void open subset $U$ and
each neighborhood $W$ of zero{\rm ,} $T^{n}U \cap W \neq \phi$ and
$T^{-n} U \cap W \neq \phi$ for some integer $n$.
\end{enumerate}
\end{propo}
\begin{proof} It suffices to show that (ii) implies (i). So let
(ii) hold; by Theorem~2.6, it is enough to verify that condition
(ii) of Theorem~2.6 holds. Since $T$ is hypercyclic, by
Proposition~2.4, $U \cap T^{-m} V \neq \phi$ for some positive
integer $m$. Let $G$ be a neighborhood of zero that is contained
in $W \cap T^{-m} W$. By condition (ii), there exists some
positive integer $n$ such that $T^{-n} G \cap (U \cap T^{-m} V)
\neq \phi$ and $G \cap T^{-n} (U \cap T^{-m} V) \neq \phi$. But
$T^{-n}G \cap (U \cap T^{-m}V)$ is a subset of $T^{-n}W \cap U$,
hence $T^{-n}W \cap U \neq \phi$. Also $G \cap T^{-n}(U \cap
T^{-m}V)$ is a subset of $T^{-m}W \cap T^{-n}(T^{-m}V) = T^{-m}(W
\cap T^{-n}V)$ which implies that $T^{-n}V \cap W \neq \phi$.
Thus, hypothesis (ii) of Theorem~2.6 holds and so the proof is
complete.\hfill $\Box$
\end{proof}
\begin{rem}{\rm We say that the sequence
$\{T_{n}\}^{\infty}_{n=1}$ of bounded linear operators on a Hilbert
space $H$ is hypercyclic provided that there exists some $x \in H$
such that the collection of images $\{T_{n}x\hbox{:}\ n=1,2,\ldots\}$ is
dense in $H$. Note that Theorem~1.1, Proposition~2.4 and
Corollary~2.5 can be extended to the case where hypercyclicity of
$T$ is replaced by hypercyclicity for the sequence
$\{T_{n}\}^{\infty}_{n=1}$ of bounded linear operators that have
dense range. In particular we say that $\{T_{n}\}^{\infty}_{n=1}$
satisfies the hypothesis of the Hypercyclicity Criterion if in the
hypothesis of Theorem~1.1, we use $T_{n_{k}}$ instead of
$T^{n_{k}}$. It also implies that if the sequence
$\{T_{n}\}^{\infty}_{n=1}$ satisfies the hypothesis of the
Hypercyclicity Criterion, then $\{T_{n}\}^{\infty}_{n=1}$ is
hypercyclic (see Theorem~1.2, Corollaries~1.3 and 1.5 in
\cite{13}).}
\end{rem}
It is not difficult to see that Theorem~2.6 and Proposition~2.7
work for the sequence $\{T_{n}\}^{\infty}_{n=1}$ of bounded linear
operators provided that $T_{n}T_{m} = T_{m}T_{n}$ for each pair
$m,n$ of positive integers. Hence we can deduce the following
corollary.
\begin{coro}$\left.\right.$\vspace{.5pc}
\noindent Suppose that $\{T_{n}\}^{\infty}_{n=1}$ is a sequence of
bounded linear operators on a Hilbert space $H$ such that
$T_{n}T_{m} = T_{m}T_{n}$ for each pair $m,n$ of positive integers
and each $T_{n}$ has dense range. Then the following are equivalent{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .35pc
\item $\{T_{n}\}^{\infty}_{n=1}$ satisfies the hypothesis of the
Hypercyclicity Criterion.
\item For each pair $U,V$ of non-void open subsets of $H${\rm ,}
and each neighborhood $W$ of zero{\rm ,} $T_{n}U \cap W \neq \phi$
and $T_{n}W \cap V \neq \phi$ for some integer $n$.
\item $\{T_{n}\}^{\infty}_{n=1}$ is hypercyclic and for each
non-void open subset $U$ and each neighborhood $W$ of zero{\rm ,}
$T_{n}U \cap W \neq \phi$ and $T^{-1}_{n}U \cap W \neq \phi$ for
some integer $n$.\vspace{-1pc}
\end{enumerate}
\end{coro}
The following definition is introduced in \cite{2}.
\begin{definit}$\left.\right.$\vspace{.5pc}
\noindent {\rm Suppose that $T \in B(H)$ and $\{n_{k}\}$ is a
sequence of positive integers. We say that $T$ is hereditarily
hypercyclic with respect to $\{n_{k}\}$ if for any subsequence
$\{n_{k_{m}}\}$ of $\{n_{k}\}$, the sequence $\{T^{n_{k_{m}}}\}$
is hypercyclic.}
\end{definit}
Now we summarize all necessary and sufficient conditions for the
Hypercyclicity Criterion in the following corollary.
\begin{coro}$\left.\right.$\vspace{.5pc}
\noindent For any operator $T \in B(H)${\rm ,} the following are
equivalent{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .35pc
\item $T$ satisfies the hypothesis of the Hypercyclicity Criterion.
\item $T$ is hereditarily hypercyclic with respect to a subsequence
$\{n_{k}\}$ of positive integers.
\item $\bigoplus^{\infty}_{i=1} T$ is hypercyclic on
$\bigoplus^{\infty}_{i=1} H$.
\item The left multiplication operator $L_{T}\hbox{\rm :}\ B_{2}(H)
\rightarrow B_{2}(H)$ is $\|\hbox{$\cdot$}\|_{2}$-hypercyclic.
\item For each pair $U,V$ of non-void open subsets of $H${\rm ,} and
each neighborhood $W$ of zero{\rm ,} $T^{n}U \cap W \neq \phi$ and
$T^{n}W \cap V \neq \phi$ for some integer $n$.
\end{enumerate}
\end{coro}
\begin{proof} The proof is an immediate consequence of Theorem~2.6,
Proposition~2.7 and Theorem~2.3 in \cite{2}.\hfill $\Box$
\end{proof}
The following proposition gives further relations between
hypercyclicity and the Hypercyclicity Criterion.
\begin{propo}$\left.\right.$\vspace{.5pc}
\noindent For any operator $T \in B(H)$ the following are
equivalent{\rm :}
\begin{enumerate}
\renewcommand\labelenumi{\rm (\roman{enumi})}
\leftskip .35pc
\item $T$ satisfies the hypothesis of the Hypercyclicity Criterion.
\item There exists a dense subset $Y$ in $H$ and a sequence
$\{n_{k}\}$ such that $\{T^{n_{k}}\}$ is hypercyclic and
$T^{n_{k}}y \rightarrow 0$ for every $y \in Y$.
\item There exists a sequence $\{n_{k}\}$ such that for each pair
$U,V$ of non-void open subsets of $H$, there is $N \geq 1$ such
that $T^{n_{k}} U \cap V \neq \phi$ for any $k \geq N$.
\end{enumerate}
\end{propo}
\begin{proof}$\left.\right.$
\noindent (i) $\rightarrow$ (ii): It follows from condition (ii)
of Corollary~2.11.
\noindent (ii) $\rightarrow$ (i): Let $T_{k} = T^{n_{k}}$, let $U$ be
any non-void open set and let $W$ be any open neighborhood of
zero. Then by Remark~2.8, $\{T_{k}\}_{k}$ is hypercyclic and so
there is some sequence $\{m_{k}\}$ of positive integers such that
$T_{m_{k}}W \cap U \neq \phi$ for every $k \geq 1$. Now if $y \in
U \cap Y$, then $T_{m_{k}}y = T^{n_{m_{k}}}y \rightarrow 0$, which
yields $T_{m_{k}}U \cap W \neq \phi$. Thus condition (iii) of
Corollary~2.9 holds, hence $\{T_{k}\}$ satisfies the hypothesis of the
Hypercyclicity Criterion, and so $\{T^{n_{k}}\}$ and consequently
$T$ satisfy the Hypercyclicity Criterion.
\noindent (iii) $\rightarrow$ (i): It suffices to show that
condition (iii) implies condition (v) of Corollary~2.11. For this
let $U,V$ be a pair of non-void open subsets of $H$ and $W$ be any
neighborhood of zero. Then for some integer $N$, we have
\begin{equation*}
T^{n_{k}} U \cap W \neq \phi; \quad T^{n_{k}} W \cap V \neq \phi
\end{equation*}
for any $k>N$. Thus condition (v) of Corollary~2.11 is indeed
satisfied.
\noindent (i) $\rightarrow$ (iii): Note that by condition (ii)
of Corollary~2.11, $T$ is hereditarily hypercyclic with respect to
a sequence $\{n_{k}\}$ of positive integers. Now suppose that
(iii) does not hold. So there exist some pair $U,V$ of non-void
open sets such that $T^{n_{k_{m}}} U \cap V = \phi$ for some
subsequence $\{n_{k_{m}}\}$ of $\{n_{k}\}$. But
$\{T^{n_{k_{m}}}\}$ is hypercyclic and so it is a contradiction.
Hence for every pair $U,V$ of non-void open sets, there is $N \geq
1$ such that $T^{n_{k}} U \cap V \neq \phi$ for any $k \geq N$.
The proof is now complete.\hfill$\Box$
\end{proof}
\section*{Acknowledgment}
The authors thank the referee for many interesting comments and
helpful suggestions about the paper.
Let $H^\infty(\mathbb{D})$ be the Banach algebra of all bounded analytic functions on the unit disk $\mathbb{D}$ equipped with the supremum norm $\Vert\cdot\Vert_\infty$. It is known (but non-trivial) that $H^\infty(\mathbb{D})$ can be regarded as a closed subalgebra of $L^\infty(\mathbb{T})$ by $f(e^{\sqrt{-1}\theta}) := \lim_{r\nearrow1}f(re^{\sqrt{-1}\theta})$ a.e.~$\theta$. Then, $L^\infty(\mathbb{T})$ is isometrically isomorphic to $C(X)$ with a certain compact Hausdorff space $X$ via the Gel'fand representation $f \mapsto \hat{f}$, and the linear functional $f \in H^\infty(\mathbb{D}) \mapsto \frac{1}{2\pi}\int_0^{2\pi} f(e^{\sqrt{-1}\theta})\,d\theta$ is known to admit a unique representing measure $m$ on $X$ so that $\frac{1}{2\pi}\int_0^{2\pi} f(e^{\sqrt{-1}\theta})\,d\theta = \int_X \hat{f}(x)\,m(dx)$ holds. In this setup, Amar and Lederer \cite{Amar and Lederer:CRParis71} proved that any closed subset $F \subset X$ with $m(F)=0$ admits $f \in H^{\infty}(\mathbb{D})$ with $\Vert f\Vert_\infty \le 1$ such that $P := \{x \in X : \hat{f}(x) = 1\} = \{x \in X : |\hat{f}(x)| = 1 \}$ contains $F$ and $m(P) = 0$ still holds. This is a key in any existing proof of the uniqueness of predual of $H^\infty(\mathbb{D})$. The reader can find some information on Amar and Lederer's result in \cite[\S6]{Pelczynski:CBMS76} and also see \cite{Barbey:ArchMath75}.
The main purpose of these notes is to provide an analog of the above-mentioned result of Amar and Lederer for non-commutative $H^\infty$-algebras introduced by Arveson \cite{Arveson:AJM67} in the 60's under the name of finite maximal subdiagonal algebras. Here a non-commutative $H^\infty$-algebra means a $\sigma$-weakly closed (possibly non-self-adjoint) unital subalgebra $A$ of a finite von Neumann algebra $M$ with a faithful normal tracial state $\tau$ satisfying the following conditions:
\begin{itemize}
\item the unique $\tau$-preserving (i.e., $\tau\circ E = \tau$) conditional expectation $E : M \rightarrow D := A\cap A^*$ is multiplicative on $A$;
\item the $\sigma$-weak closure of $A+A^*$ is exactly $M$,
\end{itemize}
where $A^* := \{a^* \in M : a \in A\}$. (Remark here that an important work due to Exel \cite{Exel:AJM88} plays an important r\^{o}le behind this simple definition.) In what follows we write $A = H^\infty(M,\tau)$ and call $D$ the diagonal subalgebra. Recently, in their series of papers Blecher and Labuschagne established many fundamental properties of these non-commutative $H^\infty$-algebras, analogous to classical theories modeled after $H^\infty(\mathbb{D})$, all of which are nicely summarized in \cite{BlecherLabuschagne:Survey07}. The reader can also find a nice exposition (especially, on the non-commutative Hilbert transform in the framework of $H^\infty(M,\tau)$) in Pisier and Xu's survey on non-commutative $L^p$-spaces \cite[\S8]{PisierXu:Handbook03}.
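A guiding finite-dimensional example is $M = M_n(\mathbf{C})$ equipped with its normalized trace, with $A$ the algebra of upper triangular matrices: then $D = A \cap A^*$ consists of the diagonal matrices, $E$ is the projection onto the main diagonal, which is clearly multiplicative on $A$, and $A + A^*$ is all of $M$.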
More precisely, what we want to prove here is that for any non-zero singular $\varphi \in M^\star$ in the sense of Takesaki \cite{Takesaki:TohokuMathJ58} one can find a ``peak'' projection $p$ for $A$ in the sense of Hay \cite{Hay:IntegralEqOpTh07} such that $p$ dominates the (right) support projection of $\varphi$ but is smaller than the central support projection $z_s \in M^{\star\star}$ of the singular part $M^\star\ominus M_\star$. This is not exactly the same as Amar and Lederer's result, but is enough for usual applications (even in classical theory for $H^\infty(\mathbb{D})$). Indeed, we will demonstrate it by proving that any non-commutative $H^\infty$-algebra $A = H^\infty(M,\tau)$ has the unique predual $M_\star/A_\perp$ with $A_\perp := \{\psi \in M_\star : \psi|_A = 0\}$. Proving it is our initial motivation; in fact, it can be regarded as an affirmative answer to the following natural (at least for us) question: Is the relative topology on $A$ induced from $\sigma(M,M_\star)$, which is the most important one, intrinsic to $A$? Also, our unique predual result may provide a new perspective in the direction of establishing the uniqueness of preduals by Grothendieck \cite{Grothendieck:CanadianJMath55} for $L^\infty$-spaces, by Dixmier \cite{Dixmier:BullFrance53} and Sakai \cite{Sakai:Pacific56} for von Neumann algebras or $W^*$-algebras, and then by Ando \cite{Ando:CommentMath78} and, a little bit later but independently, by Wojtaszczyk \cite{Wojtaszczyk:Studia79} for $H^\infty(\mathbb{D})$. In particular, our result can be regarded as a simultaneous generalization of those classical results. Moreover, our result is an affirmative answer to a question posed by Godefroy and stated in \cite{BlecherLabuschagne:Survey07}, and more importantly it covers any existing generalization like \cite{Chaumat:CRAcadParis79}, \cite{Godefroy:TAMS84} of the above-mentioned work for $H^\infty(\mathbb{D})$ as a particular case. A natural ``Lebesgue decomposition'' or ``normal/singular decomposition'' for the dual of $H^\infty(M,\tau)$ is also given. The decomposition was first given by our ex-student Shintaro Sewatari in his master thesis \cite{Sewatari:MasterThesis} as a simple application of the non-commutative F.~and M.~Riesz theorem recently established by Blecher and Labuschagne \cite{BlecherLabuschagne:Studia07}, so that the finite dimensionality assumption for the diagonal subalgebra $D$ was necessary there. Here it is established in full generality based on our Amar--Lederer type result instead of the non-commutative F.~and M.~Riesz theorem. After the completion of the presented work, the author found the paper \cite{Pfitzner:BLMS07} of H.~Pfitzner, where it is shown that any separable $L$-embedded Banach space $X$ becomes the unique predual of its dual $X^\star$. This means that establishing the Lebesgue decomposition is enough to show the uniqueness of predual for any non-commutative $H^\infty$-algebra $A = H^\infty(M,\tau)$ with $M_\star$ separable.
Our Amar--Lederer type result also enables us to remove the finite dimensionality assumption for the diagonal subalgebra $D$ from the results in \cite{BlecherLabuschagne:Studia07} numbered 3.5, 4.1, 4.2 and 4.3 there, including the non-commutative Gleason--Whitney theorem. Moreover, it gives a nice variant of Blecher and Labuschagne's non-commutative F.~and M.~Riesz theorem. Thus, it unexpectedly brings the current theory of non-commutative $H^p$-spaces due to Blecher and Labuschagne (see \cite{BlecherLabuschagne:Survey07}), which was already somewhat complete and satisfying, to an even more perfect and satisfactory form, though the presented work was initially aimed to prove the unique predual result for $H^\infty(M,\tau)$ as mentioned above.
In closing, we should note that a bit different syntax has been (and will be) used for dual spaces. For a Banach space $X$ we denote by $X^\star$ and $X_\star$ its dual and predual instead of the usual $X^*$ and $X_*$, while $X^*$ stands for the set of adjoints of elements in $X$ when $X$ is a subset of a $C^*$-algebra.
\medskip\noindent
{\it Acknowledgment.} We thank Professor Timur Oikhberg for kindly advising us to mention explicitly that the unique predual $M_\star/A_\perp$ possesses Pe{\l}czy\'nski's property {\rm(V$^*$)} in Corollary \ref{C3.3}. We also thank the anonymous referee for his or her critical reading and a number of fruitful suggestions, which especially enabled us to improve the presentation of the materials given in \S4.
\section{Amar--Lederer Type Result for $H^\infty(M,\tau)$}
Let $A = H^\infty(M,\tau)$ be a non-commutative $H^\infty$-algebra with a finite von Neumann algebra $M$ and a faithful normal tracial state $\tau$ on $M$.
\begin{theorem} \label{T2.1} For any non-zero singular $\varphi \in M^\star$ there is a contraction $a \in A$ and a projection $p \in M^{\star\star}$ such that
\begin{itemize}
\item[(2.1.1)] $a^n$ converges to $p$ in the $w^*$-topology $\sigma(M^{\star\star},M^\star)$ as $n\rightarrow\infty$;
\item[(2.1.2)] $\langle|\varphi|,p\rangle = |\varphi|(1)$;
\item[(2.1.3)] $\langle\psi,p\rangle = 0$ for all $\psi \in M_\star$ {\rm(}regarded as a subspace of $M^\star${\rm)}, or equivalently $a^n$ converges to $0$ in $\sigma(M,M_\star)$ as $n\rightarrow\infty$. This, in particular, shows that $p \le z_s$.
\end{itemize}
Here, $\langle\cdot,\cdot\rangle : M^\star \times M^{\star\star} \rightarrow \mathbf{C}$ is the dual pairing and $|\varphi|$ denotes the absolute value of $\varphi$ with the polar decomposition $\varphi = v\cdot|\varphi|$ due to Sakai \cite{Sakai:ProcJapanAcad58} and Tomita \cite{Tomita:MathJOkayama59}, when regarding $\varphi$ as an element in the predual of the enveloping von Neumann algebra $M^{\star\star}$ by $(M^{\star\star})_\star = M^\star$.
\end{theorem}
\begin{proof}
Note that $|\varphi|$ is still singular. In fact, $|\varphi| = v^*\cdot\varphi \in v^* z_s M^\star \subset z_s M^\star$ since $z_s$ is a central projection. Here $z_s$ stands for the central support projection of $M^\star\ominus M_\star$ as in \S1. The orthogonal families of non-zero projections in $\mathrm{Ker}|\varphi|$ clearly form an inductive set by inclusion, and then Zorn's lemma ensures the existence of a maximal family $\{q_k\}$, which is at most countable since $M$ is $\sigma$-finite. Let $q_0 := \sum_k q_k$ in $M$. If $q_0\neq1$, then Takesaki's criterion \cite{Takesaki:PJapanAcad59} shows the existence of a non-zero projection $r \in \mathrm{Ker}|\varphi|$ with $r \le 1-q_0$, a contradiction to the maximality. Thus, $q_0 = 1$. Moreover, if $\{q_k\}$ is a finite set, then $|\varphi|(1) = \sum_k |\varphi|(q_k) = 0$, a contradiction. Therefore, $\{q_k\}$ is a countably infinite family with $\sum_k q_k = 1$ in $M$. Letting $p_n := 1-\sum_{k\le n} q_k$ we have $p_n \rightarrow 0$ $\sigma$-weakly as $n\rightarrow\infty$ but $|\varphi|(p_n) = |\varphi|(1) $ for all $n$. Set $p_0 := \bigwedge_n p_n$ in $M^{\star\star}$. Then, $\langle|\varphi|,p_0\rangle = \lim_n \langle|\varphi|,p_n\rangle = \lim_n |\varphi|(p_n) = |\varphi|(1) \neq 0$, and in particular, $p_0 \neq 0$.
Choosing a subsequence if necessary, we may and do assume $\tau(p_n) \le n^{-6}$. Then we can define an element $g := \sum_{n=1}^\infty n p_n \in L^2(M,\tau)$, the non-commutative $L^2$-space associated with $(M,\tau)$, since $\sum_{n=1}^\infty \Vert n p_n \Vert_{2,\tau} \le \sum_{n=1}^\infty n^{-2} < +\infty$. By the non-commutative Riesz theorem \cite[Theorem 1]{Randrianantoanina:JAustral98} and \cite[Theorem 5.4]{MarsalliWest:JOT98} there is an element $\tilde{g}=\tilde{g}^* \in L^2(M,\tau)$ such that $f := g+\sqrt{-1}\tilde{g}$ falls in the closure $[A]_{2,\tau}$ of $A$ in $L^2(M,\tau)$ via the canonical embedding $M\hookrightarrow L^2(M,\tau)$. We can regard $g,\tilde{g},f \in L^2(M,\tau)$ as unbounded operators, affiliated with $M$, on the Hilbert space $\mathcal{H} := L^2(M,\tau)$ with a common core $\mathcal{D}$. Let $\xi \in \mathcal{D}$ be chosen arbitrary. Since $g \ge 0$ and $\tilde{g} = \tilde{g}^*$, one has $\Vert(1+f)\xi\Vert_{2,\tau}\Vert\xi\Vert_{2,\tau} \ge |((1+f)\xi|\xi)_\tau| = |(\xi|\xi)_\tau + (g\xi|\xi)_\tau + \sqrt{-1}(\tilde{g}\xi|\xi)_\tau| \ge \Vert\xi\Vert_{2,\tau}^2$ and similarly $\Vert(1+f)^*\xi\Vert_{2,\tau}\Vert\xi\Vert_{2,\tau} \ge \Vert\xi\Vert_{2,\tau}^2$, and hence $(1+f)^{-1} \in M$ exists and $\Vert(1+f)^{-1}\Vert_\infty \le 1$. Also, similarly one has $\Vert(1+f)\xi\Vert_{2,\tau}\Vert\xi\Vert_{2,\tau} \ge |((1+f)\xi|f\xi)_\tau| = |(\xi|f\xi)_\tau + (f\xi|f\xi)_\tau| = |(\xi|g\xi)_\tau - \sqrt{-1}(\xi|\tilde{g}\xi)_\tau + (f\xi|f\xi)_\tau| \ge \Vert f\xi\Vert_{2,\tau}^2$ so that $\Vert f\xi\Vert_{2,\tau} \le \Vert(1+f)\xi\Vert_{2,\tau}$ holds. Therefore, $\Vert f(1+f)^{-1}\zeta\Vert_{2,\tau} \le \Vert\zeta\Vert_{2,\tau}$ for all $\zeta \in \mathcal{H}$, and thus $b:=f(1+f)^{-1} \in M$ is a contraction.
We will then prove that $b$ actually falls in $A$. First, recall the following standard but non-trivial fact: any bounded element in the closure $[A]_{p,\tau}$ of $A$ in $L^p(M,\tau)$, the non-commutative $L^p$-space, falls in $A$. In fact, let $x \in [A]_{p,\tau}$ be a bounded element, i.e., $x \in M$, and then there is a sequence $\{a_n\}$ in $A$ with $\Vert a_n - x\Vert_{p,\tau} \longrightarrow
0$ as $n\rightarrow\infty$. For each $y \in A$ with $E(y)=0$ one has $\Vert a_n y - xy\Vert_{p,\tau} \longrightarrow 0$ as $n\rightarrow\infty$ so that $\tau(xy) = \lim_n\tau(a_n y) = 0$ implying $x \in A$, where we use $A = \{ x \in M : \tau(xy) = 0\ \text{for all $y \in A$ with $E(y)=0$}\}$ due to Arveson \cite{Arveson:AJM67}. (It seems that this fact is used but not mentioned explicitly in the final step of the proof of \cite[Lemma 2]{Randrianantoanina:JAustral98} that we need here). Letting $g_N := \sum_{n=1}^N np_n \in M$ with its conjugate $\widetilde{g_N}$ we have $f_N = g_N + \sqrt{-1}\widetilde{g_N} \longrightarrow f$ in $L^2(M,\tau)$ as $N\rightarrow\infty$ thanks to the non-commutative Riesz theorem \cite[Theorem 1]{Randrianantoanina:JAustral98} and \cite[Theorem 5.4]{MarsalliWest:JOT98}. As before, for each $N$ one has $(1+f_N)^{-1} \in M$ and $\Vert(1+f_N)^{-1}\Vert_\infty\le1$, and moreover the discussion in \cite[Lemma 2]{Randrianantoanina:JAustral98} shows that $(1+f_N)^{-1}$ indeed falls in $A$. Since $(1+f)^{-1} \in M$ and $\Vert(1+f)^{-1}\Vert_\infty\le1$ as shown before, we have, for each $\xi \in M \subset L^2(M,\tau)$ (a right-bounded vector in $L^2(M,\tau)$), $\Vert((1+f_N)^{-1}-(1+f)^{-1})\xi\Vert_{2,\tau} = \Vert (1+f_N)^{-1} (f-f_N)(1+f)^{-1}\xi\Vert_{2,\tau} \le \Vert \xi \Vert_\infty \Vert f - f_N\Vert_{2,\tau} \longrightarrow 0$ as $N\rightarrow\infty$ so that $(1+f)^{-1} = \lim_N (1+f_N)^{-1} \in A$ in strong operator topology, implying $b = f(1+f)^{-1} \in M \cap [A]_{2,\tau} = A$ as claimed above.
As before we have $\Vert(1+f)\xi\Vert_{2,\tau}\Vert\xi\Vert_{2,\tau} \ge |((1+f)\xi|\xi)_\tau| \ge (g\xi|\xi)_\tau \ge n(p_n\xi|\xi)_\tau = n\Vert p_n\xi\Vert_{2,\tau}^2$ for each $\xi \in \mathcal{D}$. Here the inequality $(g\eta|\eta)_\tau \ge n(p_n\eta|\eta)_\tau$ for $\eta$ in the domain of $g$ is used. (This can be easily checked when $\eta$ is in $M \subset L^2(M,\tau)$, and $M \subset L^2(M,\tau)$ is known to form a core of $g$ thanks to a classical result, see, e.g.~\cite[Theorem 9.8]{StratilaZsido:Book}). Thus, letting $\xi := (1+f)^{-1}\zeta$ for each $\zeta \in \mathcal{H}$ we get $\Vert p_n(1+f)^{-1}\zeta\Vert_{2,\tau}^2 \le n^{-1}\Vert\zeta\Vert_{2,\tau}\Vert(1+f)^{-1}\zeta\Vert_{2,\tau} \le n^{-1}\Vert\zeta\Vert_{2,\tau}^2$ so that $\Vert p_n - p_n b \Vert_\infty = \Vert p_n(1+f)^{-1}\Vert_\infty \le n^{-1/2}$. In the universal representation $M \curvearrowright \mathcal{H}_u$ we have $\Vert(p_0 - p_0 b)\zeta\Vert_{\mathcal{H}_u} \le \Vert p_0\zeta - p_n\zeta\Vert_{\mathcal{H}_u} + \Vert p_n - p_n b\Vert_\infty\Vert\zeta\Vert_{\mathcal{H}_u} + \Vert p_n(b\zeta) - p_0(b\zeta)\Vert_{\mathcal{H}_u} \le \Vert p_0\zeta - p_n\zeta\Vert_{\mathcal{H}_u} + n^{-1/2}\Vert\zeta\Vert_{\mathcal{H}_u} + \Vert p_n(b\zeta) - p_0(b\zeta)\Vert_{\mathcal{H}_u} \longrightarrow 0$ as $n \rightarrow \infty$ for each $\zeta \in \mathcal{H}_u$ since $p_0 = \bigwedge_n p_n$ in $M^{\star\star} = M''$ on $\mathcal{H}_u$. Since $b$ is a contraction, we get $p_0 = p_0 b = bp_0 = p_0 bp_0$. Then, by \cite[Lemma 3.7]{Hay:IntegralEqOpTh07} the new contraction $a := (1+b)/2$ satisfies that $a^n$ converges to a certain projection $p \in M^{\star\star}$ in $\sigma(M^{\star\star},M^\star)$ as $n\rightarrow\infty$, and $p_0 \le p$ so that $\langle|\varphi|,p\rangle = |\varphi|(1)$. If a vector $\xi \in \mathcal{H}$ satisfies $\Vert a\xi\Vert_{2,\tau} = \Vert\xi\Vert_{2,\tau}$, then $2\Vert\xi\Vert_{2,\tau} = \Vert \xi + b\xi\Vert_{2,\tau} \le \Vert\xi\Vert_{2,\tau}+\Vert b\xi\Vert_{2,\tau} \le 2\Vert\xi\Vert_{2,\tau}$, which implies $\Vert b\xi\Vert_{2,\tau} = \Vert\xi\Vert_{2,\tau}$ and $\Vert\xi + b\xi\Vert_{2,\tau} = \Vert\xi\Vert_{2,\tau}+\Vert b\xi\Vert_{2,\tau}$. Then, it is plain to see that these two norm conditions imply $b\xi=\xi$. However, $(1+f)^{-1}\xi = (1-b)\xi = 0$ so that $\xi = 0$. Therefore, there is no reducing subspace of $a$ in $\mathcal{H}$, on which $a$ acts as a unitary. Hence the so-called Foguel decomposition (\cite{Foguel:PacificJMath63}) shows that $a^n \longrightarrow 0$ $\sigma$-weakly as $n\rightarrow\infty$. In particular, $\langle \psi,p\rangle = \lim_n \langle\psi,a^n\rangle = \lim_n \psi(a^n) = 0$ for all $\psi \in M_\star$.
\end{proof}
Choose $\varphi \in M^\star$, and decompose it into the normal and singular parts $\varphi = \varphi_n + \varphi_s$ with $\varphi_n := (1-z_s)\cdot\varphi \in M_\star$ and $\varphi_s := z_s\cdot\varphi \in M^\star\ominus M_\star$. Assume that $\varphi_s \neq 0$, and let $p \in M^{\star\star}$ be a projection for $\varphi_s$ as in Theorem \ref{T2.1}. By (2.1.2) and the polar decomposition $\varphi_s = v\cdot|\varphi_s|$ we have $|\langle \varphi_s,(1-p)x\rangle| = |\langle v\cdot|\varphi_s|,(1-p)x\rangle| \le \langle|\varphi_s|,1-p\rangle^{1/2}\langle|\varphi_s|,v^* x^* x v\rangle^{1/2} = 0$ for every $x \in M^{\star\star}$ so that $\varphi_s\cdot(1-p) = 0$, i.e., $\varphi_s = \varphi_s\cdot p$. Moreover, by (2.1.3) a similar estimate shows $\varphi_n\cdot p = 0$. Hence, we get $\varphi_s = \varphi\cdot p$. Therefore we have the following corollary:
\begin{corollary}\label{C2.2} If $\varphi \in M^\star$ has the non-zero singular part $\varphi_s \in M^\star\ominus M_\star$, then there is a contraction $a \in A$ and a projection $p \in M^{\star\star}$ such that $a^n \longrightarrow p$ in $\sigma(M^{\star\star},M^\star)$, $a^n \longrightarrow 0$ in $\sigma(M,M_\star)$ as $n\rightarrow\infty$ and $\varphi_s = \varphi\cdot p$.
\end{corollary}
We next examine the contraction $a$ and the projection $p$ in Theorem \ref{T2.1} and/or Corollary \ref{C2.2}. By \cite[Lemma 3.6]{Hay:IntegralEqOpTh07}, $a$ peaks at $p$ and moreover $(a^* a)^n \searrow p$ in $\sigma(M^{\star\star},M^\star)$ as $n\rightarrow\infty$ so that $p$ is a closed projection in the sense of Akemann \cite{Akemann:JFA69},\cite{Akemann:JFA70}. For any positive $\psi \in M^\star$ one has $\sum_{n=2}^N |\psi((a^* a)^n - (a^* a)^{n-1})| = -\sum_{n=2}^N \psi((a^* a)^n - (a^* a)^{n-1}) = \psi(a^* a) - \psi((a^* a)^N) \longrightarrow \langle \psi, a^* a - p\rangle$ as $N\rightarrow\infty$, from which one easily observes that the sequence $\{(a^* a)^n\}$ is weakly unconditionally convergent, see, e.g.~\cite[D\'{e}finition 1]{GodefroyTalagrand:CRParis81}. This fact is necessary in the course of proving that $M_\star/A_\perp$ is the unique predual of $A$.
\section{First Applications: Predual of $H^\infty(M,\tau)$}
We first establish the following theorem:
\begin{theorem}\label{T3.1} $M_\star/A_\perp$ is the unique predual of $A=H^\infty(M,\tau)$.
\end{theorem}
A Banach space $E$ is said to have a unique predual when the following property holds: If the duals $F^\star$ and $G^\star$ of two other Banach spaces $F$ and $G$ are isometrically isomorphic to $E$, then $F = G$ must hold in the dual $E^{\star}$ via the canonical embeddings. Our discussion will be done in the line presented in \cite[IV]{Godefroy:TAMS84} so that what we will actually prove is that $M_\star/A_\perp$ has property (X) in the sense of Godefroy and Talagrand and the desired assertion immediately follows from their result, see \cite[D\'{e}finition 3, Th\'{e}or\`{e}me 5]{GodefroyTalagrand:CRParis81}.
\begin{proof} Choose $\varphi \in A^\star$, and then one can extend it to $\tilde{\varphi} \in M^\star$ by the Hahn--Banach extension theorem. Decompose $\tilde{\varphi}$ into the normal and singular parts $\tilde{\varphi} = \tilde{\varphi}_n + \tilde{\varphi}_s$. It suffices to show the following: If $\lim_n \varphi(x_n) = 0$ for any weakly unconditionally convergent sequence $\{x_n\}$ in $A$ with $x_n \longrightarrow 0$ in $\sigma(A,M_\star/A_{\perp})$ or the relative topology from $\sigma(M,M_\star)$ as $n\rightarrow\infty$, then $\tilde{\varphi}_s|_A = 0$, that is, $\varphi = \tilde{\varphi}_n|_A$ must hold. We may assume $\tilde{\varphi}_s \neq 0$. By Corollary \ref{C2.2} together with the discussion just below it, we can find two sequences $\{a_n\}$ and $\{b_n\}$ and a projection $p \in M^{\star\star}$ such that (i) the $a_n$'s are in $A$; (ii) the $b_n$'s are in $M$ and $\{b_n\}$ is weakly (in $\sigma(M,M^\star)$) unconditionally convergent; (iii) both $a_n$ and $b_n$ converge to $p$ in $\sigma(M^{\star\star},M^\star)$ but to $0$ in $\sigma(M,M_\star)$; (iv) $\tilde{\varphi}_s = \tilde{\varphi}\cdot p$. Then, in the same way as in \cite[Th\'{e}or\`{e}me 33]{Godefroy:TAMS84} (by using a trick in \cite[the proof of Proposition 1.c.3 on p.~32]{LindenstraussTzafriri:BookVolII}) we may and do assume that $\{a_n\}$ is also weakly unconditionally convergent. Let $x \in A$ be chosen arbitrarily, and then $\{a_n x\}$ clearly becomes weakly unconditionally convergent. Moreover, it trivially holds that $a_n x \longrightarrow 0$ in $\sigma(M,M_\star)$ as $n\rightarrow\infty$. Therefore, we have $\tilde{\varphi}_s(x) = \langle \tilde{\varphi},px\rangle = \lim_n \langle \tilde{\varphi},a_n x\rangle = \lim_n \varphi(a_n x) = 0$ by the assumption here.
\end{proof}
As is well known, the predual $M_\star$ of a von Neumann algebra $M$ can be naturally embedded into the dual $M^\star$ as the range of an $L$-projection, see \cite{Takesaki:TohokuMathJ58}. Hence it is natural to ask whether the predual $M_\star/A_\perp$ of $A = H^\infty(M,\tau)$ can also be embedded into the dual $A^\star$ as the range of an $L$-projection. This is indeed true in general. Here we will explain it as an application of our Amar--Lederer type result.
Denote by $A_n^\star$ the set of all $\varphi \in A^\star$ that can be extended to $\tilde{\varphi} \in M_\star$, and also by $A_s^\star$ the set of all $\psi \in A^\star$ that can be extended to $\tilde{\psi} \in M^\star\ominus M_\star$. This definition agrees with \cite[p.35]{Ando:CommentMath78}. For any $\varphi \in A^\star$, by the Hahn--Banach extension theorem one can extend it to $\tilde{\varphi} \in M^\star$. Then, decompose $\tilde{\varphi}$ into the normal and singular parts $\tilde{\varphi} = \tilde{\varphi}_n + \tilde{\varphi}_s$. We set $\varphi_n := \tilde{\varphi}_n|_A \in A_n^\star$ and $\varphi_s := \tilde{\varphi}_s|_A \in A_s^\star$. Then we call $\varphi = \varphi_n + \varphi_s$ an ``$(M\supset A)$-Lebesgue decomposition'' of $\varphi$. At first glance, it may seem that this decomposition depends on the particular choice of the extension $\tilde{\varphi}$. However, we have:
\begin{proposition}\label{P3.2} The following hold true{\rm:}
\begin{itemize}
\item[(3.4.1)] $A_n^\star\cap A_s^\star = \{0\}$.
\item[(3.4.2)] The notion of $(M\supset A)$-Lebesgue decomposition $\varphi = \varphi_n+\varphi_s$ of $\varphi \in A^\star$ is well-defined, that is, $\varphi_n$ and $\varphi_s$ are uniquely determined by $\varphi$. Moreover, $\Vert\varphi\Vert = \Vert\varphi_n\Vert + \Vert\varphi_s\Vert$ holds.
\end{itemize}
\end{proposition}
\begin{proof} (3.4.1) On the contrary, suppose that there is a non-zero $\varphi \in A_n^\star\cap A_s^\star$, and then one can choose $\tilde{\varphi}_n \in M_\star$ and $\tilde{\varphi}_s \in M^\star\ominus M_\star$ in such a way that $\varphi = \tilde{\varphi}_n|_A = \tilde{\varphi}_s|_A$. Since $\varphi \neq 0$ implies $\tilde{\varphi}_s\neq0$, one can find, by Corollary \ref{C2.2}, a contraction $a \in A$ and a projection $p \in M^{\star\star}$ so that $a^n \longrightarrow p$ in $\sigma(M^{\star\star},M^\star)$, $a^n \longrightarrow 0$ in $\sigma(M,M_\star)$ as $n\rightarrow\infty$ and $\tilde{\varphi}_s = \tilde{\varphi}_s\cdot p$. Let $x \in A$ be arbitrary, and $a^n x \longrightarrow 0$ in $\sigma(M,M_\star)$ clearly holds. Then one has $\varphi(x) = \tilde{\varphi}_s(x) = \langle\tilde{\varphi}_s,px\rangle = \lim_n \langle\tilde{\varphi}_s,a^n x\rangle = \lim_n \varphi(a^n x) = \lim_n \tilde{\varphi}_n(a^n x) = 0$, a contradiction.
(3.4.2) Assume that we have two $(M\supset A)$-Lebesgue decompositions $\varphi = \varphi_{n1} + \varphi_{s1} = \varphi_{n2} + \varphi_{s2}$. Then $\varphi_{n1}-\varphi_{n2} = \varphi_{s2}-\varphi_{s1} \in A_n^\star\cap A_s^\star = \{0\}$ by (3.4.1) so that $\varphi_{n1}=\varphi_{n2}$ and $\varphi_{s1}=\varphi_{s2}$. Hence the $(M\supset A)$-Lebesgue decomposition is well-defined. Let $\tilde{\varphi} \in M^\star$ be a Hahn--Banach extension of $\varphi$, i.e., $\Vert\tilde{\varphi}\Vert = \Vert\varphi\Vert$. By definition we have $\varphi_n = \tilde{\varphi}_n|_A$ and $\varphi_s = \tilde{\varphi}_s|_A$. Then one has $\Vert\varphi\Vert = \Vert\tilde{\varphi}\Vert = \Vert\tilde{\varphi}_n\Vert + \Vert\tilde{\varphi}_s\Vert \ge \Vert\varphi_n\Vert + \Vert\varphi_s\Vert \ge \Vert \varphi_n + \varphi_s\Vert = \Vert\varphi\Vert$ so that the desired norm equation follows.
\end{proof}
\begin{corollary}\label{C3.3} The predual $M_\star/A_\perp$ of $A = H^\infty(M,\tau)$ is the range of an $L$-projection from $A^\star$. Hence $M_\star/A_\perp$ has Pe\l czy\'nski's property {\rm(V$^*$)}, and, in particular, is sequentially weakly complete.
\end{corollary}
\begin{proof} The first part is immediate from the above proposition since $A_n^\star = M_\star/A_\perp$ trivially holds. The latter half is due to Pfitzner's theorem \cite{Pfitzner:Studia93} and an observation of Pe{\l}czy\'nski \cite[Proposition 6]{Pelczynski:BullAcadPolon62}.
\end{proof}
It seems natural to ask for an ``intrinsic characterization'' of singularity for elements in $A^\star$ in the spirit of Takesaki's criterion \cite{Takesaki:PJapanAcad59}; apparently no such result is available even in the classical theory.
\section{Second Applications: Noncommutative Function Algebra Theory}
In this section we will explain how our Amar--Lederer type result nicely complements the non-commutative function algebra theory due to Blecher and Labuschagne \cite{BlecherLabuschagne:Studia07}. The key is the following variant of Blecher and Labuschagne's F.~and M.~Riesz theorem, which was given implicitly in the previous version of these notes. The current, explicit formulation was suggested by the referee.
\begin{theorem}\label{T4.1} Any non-commutative $H^\infty$-algebra $A=H^\infty(M,\tau)$ satisfies the following property: Whenever $\varphi \in M^\star$ annihilates $A$, the normal and singular parts $\varphi_n$ and $\varphi_s$ annihilate $A$ separately.
\end{theorem}
\begin{proof} Although the proof is essentially the same as that of Proposition \ref{P3.2}, we give it for the sake of completeness. Let us choose $\varphi \in M^\star$ in such a way that $\varphi|_A = 0$, and decompose it into the normal and singular parts $\varphi = \varphi_n + \varphi_s$. Suppose, to the contrary, that $\varphi_n|_A \neq0$ or $\varphi_s|_A\neq0$. If there existed $x \in A$ with $\varphi_n(x) \neq 0$, then it would follow that $\varphi_s(x) = -\varphi_n(x) \neq 0$. Thus we may assume that $\varphi_s|_A \neq 0$. Then, by Corollary \ref{C2.2} one can find a contraction $a \in A$ and a projection $p \in M^{\star\star}$ so that $a^n \longrightarrow p$ in $\sigma(M^{\star\star},M^\star)$, $a^n \longrightarrow 0$ in $\sigma(M,M_\star)$ as $n\rightarrow\infty$, and $\varphi_s = \varphi_s\cdot p$. For any $x \in A$, the $a^n x$'s still fall in $A$, so that $\varphi(a^n x) = 0$ for every $n$; on the other hand, since $\varphi_n \in M_\star$ and $a^n x \longrightarrow 0$ in $\sigma(M,M_\star)$, while $a^n x \longrightarrow px$ in $\sigma(M^{\star\star},M^\star)$, one has $0 = \lim_n \varphi(a^n x) = \lim_n \varphi_n(a^n x) + \lim_n \varphi_s(a^n x) = \langle\varphi_s,px\rangle = \varphi_s(x)$. Consequently $\varphi_s|_A = 0$, a contradiction.
\end{proof}
Blecher and Labuschagne's F.~and M.~Riesz theorem \cite[\S3]{BlecherLabuschagne:Studia07}, which originates in the classical theory, asserts a quite similar property, namely that whenever $\varphi \in M^\star$ annihilates $A_0 := \{ a \in A : E(a) = 0\}$, the normal and singular parts $\varphi_n$ and $\varphi_s$ annihilate $A_0$ and $A$, respectively; moreover, this property holds if and only if $D$ is finite dimensional. Note that this F.~and M.~Riesz property is apparently stronger than the consequence of Theorem \ref{T4.1} here, and it should be remarked that the proofs of Corollary 3.5, Theorem 4.1, Theorem 4.2 and Corollary 4.3 in \cite{BlecherLabuschagne:Studia07} need only the consequence of the above theorem but do not use Blecher and Labuschagne's F.~and M.~Riesz theorem itself. Thus, they all hold true without any assumption. Consequently, we get the next theorem.
\begin{theorem} Any non-commutative $H^\infty$-algebra $A=H^\infty(M,\tau)$ enjoys the following{\rm:}
\begin{itemize}
\item[(4.2.1)] If $\varphi \in M^\star$ annihilates $A+A^*$, then $\varphi$ must be singular. {\rm (}cf. \cite[Corollary 3.5]{BlecherLabuschagne:Studia07}.{\rm)}
\item[(4.2.2)] Every Hahn--Banach extension to $M$ of any normal {\rm (}i.e., continuous in the relative topology induced from $\sigma(M,M_\star)${\rm )} functional on $A$ must fall in $M_\star$. {\rm (}cf. the second part of \cite[Theorem 4.1]{BlecherLabuschagne:Studia07}. {\rm)}
\item[(4.2.3)] Any $\varphi \in M_\star$ is the unique Hahn--Banach extension of its restriction to $A+A^*$. In particular, $\Vert\varphi\Vert=\Vert\varphi|_{A+A^*}\Vert$ for any $\varphi \in M_\star$. {\rm (}cf. \cite[Theorem 4.2]{BlecherLabuschagne:Studia07}.{\rm)}
\item[(4.2.4)] Any element in $M$ can be $\sigma$-weakly approximated by a norm-bounded net consisting of elements in $A+A^*$. {\rm (}cf. \cite[Corollary 4.3]{BlecherLabuschagne:Studia07}.{\rm)}
\end{itemize}
\end{theorem}
The above (4.2.2), called the non-commutative Gleason--Whitney theorem, might sound contradictory to what Pe\l czy\'nski pointed out in \cite[Proposition 6.3]{Pelczynski:CBMS76}, a comment on Amar and Lederer's result. However, this is not the case since (4.2.2) concerns only Hahn--Banach extensions.
\begin{remarks} Following the referee's suggestion, let us call a subalgebra $A$ of a finite von Neumann algebra $M$ with a faithful normal tracial state an F.~and M.~Riesz algebra if it satisfies the consequence of Theorem \ref{T4.1} of these notes. (Clearly, $H^\infty(M,\tau)$ with $D$ finite dimensional has been an example of an F.~and M.~Riesz algebra since the appearance of \cite{BlecherLabuschagne:Studia07}.) Then, any F.~and M.~Riesz algebra has (GW1) of \cite{BlecherLabuschagne:Studia07}, and furthermore has (GW) of \cite{BlecherLabuschagne:Studia07} if $A+A^*$ is $\sigma$-weakly dense in $M$. Also, Corollary 3.5, Theorem 4.1, Theorem 4.2 and Corollary 4.3 in \cite{BlecherLabuschagne:Studia07} hold true if $A+A^*$ is $\sigma$-weakly dense in $M$. The proofs in \cite{BlecherLabuschagne:Studia07} still work without any change. The referee communicated to us that he or she had noticed these observations in 2007, together with the fact that any F.~and M.~Riesz algebra with separable predual has a unique predual.
\end{remarks}
}
|
1,108,101,565,781 | arxiv | \section{Introduction}
Simultaneous wireless information and power transfer (SWIPT) has recently received a lot of attention. Compared to conventional energy harvesting techniques, SWIPT can be used even if wireless nodes do not have access to external energy sources, such as solar and wind power.
The key idea of SWIPT is to collect energy from radio frequency (RF) signals, and this new concept of energy harvesting was first proposed in \cite{varshney08} and \cite{Grover10}. Particularly by assuming that the receiver has the capability to carry out energy harvesting and information decoding at the same time, the tradeoff between information rate and harvested energy has been characterized in \cite{varshney08} and \cite{Grover10}. Motivated by the difficulty of designing a circuit performing both energy harvesting and signal detection simultaneously, a practical receiver architecture has been developed in \cite{Zhouzhang13}, where two receiver strategies, power splitting and time sharing, have been proposed and their performance have been analyzed.
The concept of SWIPT was initially studied in simple scenarios with one source-destination pair, where the use of co-channel interference for energy harvesting was considered in \cite{LiangLiu12} and the combination of multiple-input multiple-output (MIMO) technologies with SWIPT was investigated in \cite{Xiangzt12}. SWIPT has been recently applied to various important communication scenarios more complicated than the case with one source-destination pair. For example, in \cite{Juzhangh13} the application of SWIPT to multiple access channels has been considered, where a few solutions for system throughput maximization have been proposed. Broadcasting scenarios have been considered in \cite{Ruizhangbroadcast13} and \cite{Huangl13}, where one transmitter is to serve two types of users, energy receivers and information receivers, simultaneously. In \cite{Liuzhangmiso2014} the joint design of uplink information transfer and downlink energy transfer has been considered, where sophisticated algorithms for energy beamforming, power allocation and throughput maximization have been proposed. The idea of SWIPT has also been applied to wireless cognitive radio systems, where opportunistic energy harvesting from RF signals has been studied in \cite{Ruicogswipt}.
The application of SWIPT to cooperative networks is important since the lifetime of the relay batteries can be extended by efficiently using the energy harvested from the relay observations. In \cite{Krikidis12} a greedy switching approach between data decoding and energy harvesting has been proposed for the case with one source-destination pair and one relay. In \cite{Nasirzhou} the outage performance achieved by amplify-and-forward (AF) relaying protocols has been analyzed, and the use of decode-and-forward (DF) strategies has been investigated in multi-user energy harvesting cooperative networks \cite{Dingpoor133}. Relay selection has been studied in a broadcasting scenario where energy harvesting was carried out at the destinations, instead of relays \cite{Himal141}. The impact of the random locations of wireless nodes on the path loss and the outage performance has been characterized by applying stochastic geometry in \cite{Dingpoor132}.
In conventional cooperative networks, the max-min criterion has been recognized as a diversity-optimal selection strategy \cite{Krikidis091,Jingmaxmin1,Song11maxmin}. Take a DF cooperative network with one source-destination pair and $M$ relays as an example. Provided that the $i$-th relay is used, the capacity of a DF relay channel is $\min\{\log(1+\rho |h_i|^2), \log(1+\rho |g_i|^2)\}$, where $\rho$ is the transmit signal-to-noise ratio (SNR), $h_i$ is the channel gain between the source and the relay, and $g_i$ is the channel gain between the relay and the destination. Obviously the max-min criterion, i.e. $\max\{\min\{|h_i|^2,|g_i|^2\}, 1\leq i \leq M\}$, is capacity optimal and can achieve the maximal diversity gain, $M$. But is this conclusion still valid when energy harvesting relays are used?
The main contribution of this paper is to characterize the performance of the max-min selection criterion in energy harvesting cooperative networks. We first construct a general framework of energy harvesting cooperative networks, where $M$ pairs of sources and destinations communicate with each other via a relay. Among the $M$ user pairs, the relay will schedule $m$ of them to transmit. It is important to point out that the problem of relay selection for the scenario with one source-destination pair is a special case of the formulated framework by setting $m=1$. When only a single user is scheduled, the exact expression for the outage probability achieved by the max-min criterion is developed by carefully grouping the possible outage events and then applying order statistics. Based on this obtained expression, asymptotic studies of the outage probability are carried out to show that the diversity gain achieved by the max-min criterion is only $\frac{M+1}{2}$, much less than the full diversity gain, $M$.
The reason for this loss of the diversity gain is that the max-min criterion treats the source-relay channels and the relay-destination channels equally. However, when an energy harvesting relay is used, it is important to observe that the source-relay channels become more important. For example, the source-relay channels impact not only the reception reliability at the relay, but also the relay transmission power. Recognizing this fact, a few modified user scheduling approaches are developed, which is the second contribution of this paper. Particularly for the case of $m=1$, an efficient user scheduling approach is proposed, and analytical results are developed to demonstrate that this approach can achieve the maximal diversity gain. This approach can be extended to the case of $m>1$, by applying exhaustive search. A greedy user scheduling approach is also developed by assuming that the relay always has data to be sent to all the destinations. The use of this greedy approach yields closed-form expressions for the outage probability and diversity order, which can be used as an upper bound for the other approaches. Simulation results are also provided to demonstrate the accuracy of the developed analytical results and facilitate the performance comparison among the addressed user scheduling approaches.
\section{System Model}\label{section II}
Consider a cooperative communication scenario with $M$ source-destination pairs and one {\it energy harvesting} relay. The $M$ users compete for the wireless medium, and the relay will schedule $m$ user pairs over $2m$ time slots, $0\leq m \leq M$. All the channels are assumed to be independent and identically distributed (i.i.d.) quasi-static Rayleigh fading, and this indoor slow fading model is valid for many applications of wireless energy transfer, such as wireless body area networks and smart homes \cite{Himal141} and \cite{Dingpoor132}. In Section \ref{section simulation}, the impact of the path loss and the random locations of the users on the outage performance will be studied\footnote{ Note that when the users are randomly deployed, the effective channel gains, i.e. the combinations of Rayleigh fading and large scale path loss, can be still approximated as independent and identically exponentially distributed variables \cite{Dingpoor132}.}. It is assumed that the relay has access to global channel state information (CSI), which is important for the relay to carry out user scheduling.
During the $j$-th time slot, consider that the $i$-th user pair is scheduled to transmit its message $s_i$, where the details for user scheduling will be provided in the next two sections. The power splitting strategy will be used at the DF relay. Particularly the relay will first direct the observation flow to the detection circuit, and then to the energy harvesting circuit if there is any energy left after successful detection \cite{Zhouzhang13} and \cite{Dingpoor133}. Therefore the observation at the relay is given by
\begin{eqnarray}
y_{ri} = \sqrt{P(1-\theta_i)}h_is_i+n_{ri},
\end{eqnarray}
where $\theta_i$ is the power splitting factor, $P$ is the transmission power at the source, $h_i$ denotes the channel gain from the $i$-th source to the relay, and $n_{ri}$ denotes the additive white Gaussian noise. As discussed in \cite{Dingpoor133}, the optimal value of $\theta_i$ for a DF relay is $\max\left\{1-\frac{\epsilon}{|h_i|^2},0\right\}$, the maximal value of $\theta_i$ constrained by successful detection at the relay, where $\epsilon=\frac{2^{2R}-1}{P}$ and $R$ denotes the targeted data rate. The power obtained at the relay after carrying out energy harvesting from the $i$-th user pair is given by
\begin{equation}
P_{ri} = \eta P\left[|h_i|^2-\epsilon\right]^+,
\end{equation}
where $\eta$ denotes the energy harvesting coefficient, and $[x]^+$ denotes $\max\{x,0\}$. At the $(m+j)$-th time slot, the relay forwards $s_i$ to the $i$-th destination, and the receive SNR at this destination is given by
\begin{eqnarray}
SNR_i = P_{i} |g_i|^2,
\end{eqnarray}
where $P_{i}$ denotes the relay transmission power allocated to the $i$-th destination, and $g_i$ denotes the channel gain between the relay and the $i$-th destination. Note that $P_{ri}$ is not necessarily equal to $P_i$, depending on the used relay strategy, as discussed in the following sections.
\section{The Performance Achieved by The Max-min Criterion}\label{section III}
\subsection{User scheduling based on the max-min criterion}
In this section, the performance achieved by the user scheduling strategy based on the max-min criterion is studied. Particularly we will focus on the case that the relay selects only one user pair, i.e. $m=1$, and more discussions about the case with $m>1$ will be provided in the next section. Note that the scenario addressed in this section can be shown to be mathematically equivalent to the problem of relay selection for the case with one source-destination pair and $M$ relays. Therefore the results obtained for the addressed scheduling problem will also be applicable to the max-min relay selection cases.
Since only one user pair is scheduled, the energy harvested from the $i$-th source will be used to power the relay transmission to the $i$-th destination, i.e. $P_{i}=P_{ri}$. The max-min user scheduling strategy can be described as follows:
\begin{itemize}
\item The relay first finds out the worst link of each user pair. Denote $z_i= \min\{|h_i|^2, |g_i|^2\}$.
\item The user pair with the strongest worst link is selected, i.e. the $i^*$-th user pair with $i^*=\arg \max \left\{z_1, \ldots, z_M\right\}$ is selected.
\end{itemize}
Provided that the relay can decode the $i^*$-th source's message correctly, the SNR at the corresponding destination is given by
\begin{eqnarray}
SNR_{i^*} = \eta P\left(|h_{i^*}|^2-\epsilon\right) |g_{i^*}|^2.
\end{eqnarray}
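As a numerical illustration, the selection rule and the induced SNR can be simulated directly. The sketch below (NumPy, with unit-mean exponentially distributed channel gains and illustrative parameter values) is only a sanity check, not part of the analysis:
\begin{verbatim}
import numpy as np

M, P, eta, R = 4, 100.0, 1.0, 2.0      # illustrative values
eps = (2**(2*R) - 1) / P

h2 = np.random.exponential(1.0, M)     # |h_i|^2: source-relay gains
g2 = np.random.exponential(1.0, M)     # |g_i|^2: relay-destination gains

i_star = np.argmax(np.minimum(h2, g2)) # max-min selection
decoded = h2[i_star] >= eps            # can the relay decode?
snr = eta * P * (h2[i_star] - eps) * g2[i_star] if decoded else 0.0
outage = (not decoded) or (snr < 2**(2*R) - 1)
\end{verbatim}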
\subsection{Performance evaluation}
The outage probability achieved by the max-min based scheduling scheme can be written as follows:
\begin{eqnarray}\label{outage probability}
\mathrm{P}_{o}\triangleq\mathrm{P}\left(|h_{i^*}|^2<\epsilon\right) + \mathrm{P}\left((|h_{i^*}|^2-\epsilon)|g_{i^*}|^2<\epsilon_1, |h_{i^*}|^2>\epsilon \right),
\end{eqnarray}
where $\epsilon_1=\frac{\epsilon}{\eta}$. Although the outage probability achieved by the max-min criterion takes the simple form in \eqref{outage probability}, it is challenging to evaluate this probability. The reason is that the use of the scheduling strategy has changed the statistical properties of the channel gains. For example, $|h_{i^*}|^2$ is no longer exponentially distributed. The density function of $\min\{|h_{i^*}|^2,|g_{i^*}|^2\}$ can be found by using order statistics, and the key step is to restructure the expression of the outage probability shown in \eqref{outage probability} into a form to which the density function of $\min\{|h_{i^*}|^2,|g_{i^*}|^2\}$ can be applied. In the following theorem, the exact expression for the outage probability achieved by the max-min scheme is provided.
\begin{theorem}\label{theorm1}
When a single user is scheduled, the outage probability achieved by the max-min user scheduling strategy is given by
\begin{eqnarray}\label{theorem eq}
\mathrm{P}_o&=& \frac{e^{-\epsilon}}{2}\sum_{i=0}^{M}{M \choose i} \frac{(-1)^i}{2i-1}\left(1-e^{-(2i-1)\epsilon}\right) \\\nonumber &&+M \sum^{M-1}_{i=0}{M-1 \choose i} (-1)^i \left(\frac{e^{-\epsilon}-e^{-(2i+2)\epsilon}}{2i+1} +\frac{e^{-(2i+2)\epsilon}-e^{-(2i+2)\epsilon_0}}{2i+2} - e^{-\epsilon} \beta(\epsilon_0,i) \right)
\\ \nonumber &&+ \frac{\left(1-e^{-2\epsilon}\right)^M}{2}+ M\sum^{M-1}_{i=0} {M-1\choose i} (-1)^i \left(\frac{e^{-2(i+1)\epsilon}-e^{-2(i+1)\epsilon_0}}{2(i+1)} - e^{-(2i+1)\epsilon}\beta(\epsilon_0-\epsilon, i) \right),
\end{eqnarray}
where $\beta(y,i)=\int^{\epsilon_0}_{0}e^{-(2i+1)y-\frac{\epsilon_1}{y}} dy$, and $\epsilon_0=\frac{\epsilon+\sqrt{\epsilon^2+4\epsilon_1}}{2}$.
\end{theorem}
\begin{proof}
See the appendix.
\end{proof}
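Before discussing \eqref{theorem eq} further, note that such a closed form is easily cross-checked by simulating the scheduling rule directly; a minimal Monte Carlo sketch (NumPy, unit-mean exponential channel gains) reads:
\begin{verbatim}
import numpy as np

def maxmin_outage_mc(M, P, eta=1.0, R=2.0, runs=10**6):
    eps = (2**(2*R) - 1) / P
    eps1 = eps / eta
    h2 = np.random.exponential(1.0, (runs, M))
    g2 = np.random.exponential(1.0, (runs, M))
    i = np.argmax(np.minimum(h2, g2), axis=1)
    h = h2[np.arange(runs), i]
    g = g2[np.arange(runs), i]
    return np.mean((h < eps) | ((h - eps) * g < eps1))
\end{verbatim}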
The expression shown in \eqref{theorem eq} can be used to numerically evaluate the outage probability achieved by the max-min scheduling approach, as shown in Section \ref{section simulation}. In addition, it can also be used for the analysis of the diversity gain achieved by the max-min approach, as shown in the following theorem.
\begin{theorem}\label{theorem 2}
When a single user pair is scheduled, the diversity order achieved by the max-min user scheduling approach is $\frac{M+1}{2}$.
\end{theorem}
\begin{proof}
See the appendix.
\end{proof}
For the addressed topology, there are $M$ independent paths given $M$ user pairs, which means that the maximal diversity gain is $M$. Theorem \ref{theorem 2} indicates that the max-min scheduling approach cannot achieve this maximum diversity.
As a benchmark scheme, consider a conventional cooperative network that has the same topology as the one described in Section \ref{section II}. Without loss of generality, let $P_i=P$, i.e. the relay transmission power is the same as the source power. It can be easily verified that the max-min approach can achieve the optimal diversity gain, $M$, as shown in the following. The outage probability achieved by the max-min approach is
\begin{eqnarray}\nonumber
\mathrm{P}_o&=&\mathrm{P}(|h_{i^*}|^2<\epsilon)+\mathrm{P}(|g_{i^*}|^2<\epsilon,|h_{i^*}|^2>\epsilon) \\ \nonumber
&=&\mathrm{P}(|h_{i^*}|^2<\epsilon,|g_{i^*}|^2>\epsilon)+\mathrm{P}(|h_{i^*}|^2<\epsilon,|g_{i^*}|^2<\epsilon)
+\mathrm{P}(|g_{i^*}|^2<\epsilon,|h_{i^*}|^2>\epsilon)\\ &=& \mathrm{P}(\min\{|h_{i^*}|^2,|g_{i^*}|^2\}<\epsilon)\rightarrow \epsilon^M,\label{maxmin conventional}
\end{eqnarray}
where the last step is obtained by using the probability density function (pdf) shown in \eqref{short1} and applying the high SNR approximation. Comparing \eqref{maxmin conventional} with Theorem \ref{theorem 2}, one can observe that the performance of the max-min scheduling approach in the two system setups is significantly different, and new efficient user scheduling strategies are needed for energy harvesting cooperative networks.
\section{Modified User Scheduling Strategies}\label{section IV}
\subsection{Scheduling a single user pair}\label{subsection 1}
A straightforward approach of user scheduling for the energy harvesting scenario is described as follows:
\begin{itemize}
\item Construct a subset of user pairs containing all the destinations whose source information can be decoded correctly at the relay. Denote this subset as $\mathcal{S}\triangleq \{i : |h_i|^2\geq \epsilon, \ 1\leq i\leq M\}$.
\item Select a destination from $\mathcal{S}$ to minimize the outage probability of the relay transmission. Denote the index of the selected user by $i^*$, i.e. $i^*=\arg \max \{(|h_i|^2-\epsilon)|g_i|^2, i\in \mathcal{S}\}$.
\end{itemize}
The outage probability achieved by this user scheduling strategy can be expressed as follows:
\begin{eqnarray}
\mathrm{P}_o &\triangleq& \mathrm{P}\left( |\mathcal{S}|=0 \right)+\mathrm{P}\left( (|h_{i^*}|^2-\epsilon)|g_{i^*}|^2 <\epsilon_1,|\mathcal{S}|>0\right) \\ \nonumber &=&
\mathrm{P}\left( |\mathcal{S}|=0 \right)+\sum^{M}_{n=1}\underset{T_1}{\underbrace{\mathrm{P}\left( (|h_{i^*}|^2-\epsilon)|g_{i^*}|^2 <\epsilon_1||\mathcal{S}|=n\right)}}\mathrm{P}(|\mathcal{S}|=n),
\end{eqnarray}
where $|\mathcal{S}|$ denotes the cardinality of the set.
Denote $x_i=|h_i|^2$, and order $x_i$ as $x_{(1)}\leq \cdots \leq x_{(M)}$. The probability of $\mathrm{P}(|\mathcal{S}|=n)$ can be calculated as follows:
\begin{eqnarray}\label{sn}
\mathrm{P}(|\mathcal{S}|=n) &=& \mathrm{P}(x_{(M-n)}<\epsilon, x_{(M-n+1)}>\epsilon)\\ \nonumber &=& \frac{M!}{(M-n)!n!} \left(1-e^{-\epsilon}\right)^{M-n} e^{-n\epsilon},
\end{eqnarray}
for $0\leq n \leq M$, where the last equation is obtained by applying the joint pdf of $x_{(M-n)}$ and $x_{(M-n+1)}$ \cite{David03} and \cite{Dingtcom07}. On the other hand, $T_1$ can be simply expressed as follows:
\begin{eqnarray}
T_1 &=& \left[\mathrm{P}\left( \left.(x_i-\epsilon)y_i <\epsilon_1 \right| i\in \mathcal{S}, |\mathcal{S}|=n\right)\right]^{n},
\end{eqnarray}
where $y_i=|g_i|^2$. In the following we first consider the case of $n\geq 1$. The conditions of $T_1$, $i\in\mathcal{S}$ and $ |\mathcal{S}|=n$, imply $x_i\geq \epsilon$, which means that the conditional CDF of $x_i$ is given by
\begin{eqnarray}
F_{x_i|i\in\mathcal{S}, |\mathcal{S}|\geq 1}(x) = \frac{e^{-\epsilon }-e^{-x}}{e^{-\epsilon}},
\end{eqnarray}
for $x\geq\epsilon$. The two conditions, $i\in\mathcal{S}$ and $|\mathcal{S}|=n$, do not affect $y_i$ which is still exponentially distributed. Therefore the factor $T_1$ can be calculated as follows:
\begin{eqnarray}
T_1 &=& \left(\mathcal{E}_{y}\left(\frac{e^{-\epsilon }-e^{-\frac{\epsilon_1}{y}-\epsilon}}{e^{-\epsilon}}\right)\right)^n\\ \nonumber &=& \left(1 - 2\sqrt{\epsilon_1}\mathbf{K}_{1}\left(2\sqrt{\epsilon_1}\right)\right)^n,
\end{eqnarray}
where $\mathbf{K}_n(\cdot)$ denotes the modified Bessel function of the second kind.
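For completeness, the expectation above is evaluated with a standard Laplace-type table integral (see, e.g., Gradshteyn and Ryzhik, Eq. (3.324.1)), using the fact that $y=|g_i|^2$ is exponentially distributed with unit mean:
\begin{eqnarray}\nonumber
\mathcal{E}_{y}\left(e^{-\frac{\epsilon_1}{y}}\right) = \int^{\infty}_{0} e^{-\frac{\epsilon_1}{y}}e^{-y}dy = 2\sqrt{\epsilon_1}\mathbf{K}_1\left(2\sqrt{\epsilon_1}\right).
\end{eqnarray}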
Recall that $x\mathbf{K}_1(x)\approx 1+\frac{x^2}{2}\ln \frac{x}{2}$ for $x\rightarrow 0$ \cite{Dingpoor133}, which means $T_1\approx \left(\epsilon \ln \frac{1}{\epsilon}\right)^n$. The overall outage probability can be approximated as follows:
\begin{eqnarray}\label{eqlemma1}
\mathrm{P}_o &=&
\left(1-e^{-\epsilon}\right)^M +\sum^{M}_{n=1}\left(1 - 2\sqrt{\epsilon_1}\mathbf{K}_{1}\left(2\sqrt{\epsilon_1}\right)\right)^n \frac{M!}{(M-n)!n!} \left(1-e^{-\epsilon}\right)^{M-n} e^{-n\epsilon}\\ \nonumber &\approx& \epsilon^M +\sum^{M}_{n=1} \epsilon^n \left(\ln \frac{1}{\epsilon}\right)^n \frac{M!}{(M-n)!n!} \epsilon^{M-n}.
\end{eqnarray}
When $\epsilon \rightarrow 0$, every summand in the above expression behaves as $\epsilon^{M}\left(\ln \frac{1}{\epsilon}\right)^n$, and since the logarithmic factors do not affect the decay exponent, it is straightforward to show $\frac{\log \mathrm{P}_o}{\log \epsilon}\rightarrow M$, which results in the following lemma.
\begin{lemma}\label{lemma 1}
The proposed user scheduling strategy can achieve the full diversity gain $M$.
\end{lemma}
Compared to the max-min based approach, the proposed scheduling strategy can achieve a larger diversity gain. The reason for this performance improvement is that the source-relay channels have been given a more important role in user scheduling, compared to the relay-destination channels. Particularly the source-relay channels have been considered when forming $\mathcal{S}$ and also when selecting the best user from the set, whereas the relay-destination channels affect only the second step.
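The two-step rule above also admits a compact implementation; the following sketch (NumPy, where {\tt h2} and {\tt g2} are assumed arrays of channel gains as in the earlier sketches) is illustrative only:
\begin{verbatim}
import numpy as np

def schedule_approach_I(h2, g2, eps):
    S = np.flatnonzero(h2 >= eps)   # sources the relay can decode
    if S.size == 0:
        return None                 # empty candidate set: outage
    return S[np.argmax((h2[S] - eps) * g2[S])]
\end{verbatim}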
\subsection{Scheduling $m$ user pairs}\label{subsection 3}
The approach proposed in the previous subsection can be extended to the case of scheduling $m$ user pairs, as described in the following.
\begin{itemize}
\item Construct a subset of user pairs, $\mathcal{S}$, as defined in Section \ref{subsection 1}.
\item Find all possible combinations of the users in $ \mathcal{S} $, denoted by $\{\pi_1, \cdots, \pi_{{|\mathcal{S}| \choose \min\{m,|\mathcal{S}|\}}}\}$, where each set contains $\min\{m,|\mathcal{S}|\}$ users, i.e. $\pi_i=\{\pi_i(1),\ldots, \pi_i(\min\{m,|\mathcal{S}|\})\}$.
\item For each possible combination, $\pi_i$, $1\leq i \leq {|\mathcal{S}| \choose \min\{m,|\mathcal{S}|\}}$
\begin{itemize}
\item Calculate the accumulated power obtained from energy harvesting, $\sum_{j=1}^{\min\{m,|\mathcal{S}|\}}P_{r\pi_i(j)}$.
\item Distribute the overall power among the $\min\{m,|\mathcal{S}|\}$ destinations equally, i.e. $P_i=\frac{\sum_{j=1}^{\min\{m,|\mathcal{S}|\}}P_{r\pi_i(j)}}{\min\{m,|\mathcal{S}|\}}$.
\item Find the worst outage performance among the $\min\{m,|\mathcal{S}|\}$ users in $\pi_i$, denoted by $\mathrm{P}_{o,\pi_i}$.
\end{itemize}
\item Select the combination which minimizes the worst-user outage performance, i.e. $i^*=\arg \min \{\mathrm{P}_{o,\pi_1}, \cdots, \mathrm{P}_{o,\pi_{{|\mathcal{S}| \choose \min\{m,|\mathcal{S}|\}}}}\}$.
\end{itemize}
This scheduling approach exhaustively searches all possible combinations of the $|\mathcal{S}|$ user pairs, and selects the combination that minimizes the outage probability of the worst user. When there is a large number of users to be scheduled, the complexity of this exhaustive search can become infeasible due to the large number of possible combinations. Note that in this paper, we consider only the equal power allocation strategy, whereas other power allocation strategies, such as the sequential water filling scheme proposed in \cite{Dingpoor133}, can also be applied.
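A sketch of the exhaustive search is given below (NumPy/itertools, same assumed conventions as before); selecting the combination with the largest worst-user SNR amounts, for a common target rate, to minimizing the worst-user outage probability:
\begin{verbatim}
from itertools import combinations
import numpy as np

def schedule_approach_II(h2, g2, eps, eta, P, m):
    S = np.flatnonzero(h2 >= eps)
    k = min(m, S.size)
    if k == 0:
        return None
    best, best_snr = None, -np.inf
    for pi in map(np.asarray, combinations(S, k)):
        Pi = eta * P * (h2[pi] - eps).sum() / k   # equal power split
        worst = np.min(Pi * g2[pi])               # worst-user SNR
        if worst > best_snr:
            best, best_snr = pi, worst
    return best
\end{verbatim}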
It is difficult to analyze the performance achieved by the exhaustive search approach, since the channel gains from different combinations might be correlated. Instead, we will propose a greedy approach which is applicable to delay tolerant networks, and also serves as an upper bound for the system performance.
\subsection{Greedy user scheduling approach}\label{subsection 2}
First order all the source-relay channels and the relay-destination channels, i.e. $|h_{(1)}|^2\leq \ldots \leq |h_{(M)}|^2$ and $|g_{(1)}|^2\leq \ldots \leq |g_{(M)}|^2$. The greedy user scheduling approach can be described as follows:
\begin{itemize}
\item Construct a subset of user pairs, $\mathcal{S}$, as defined in Section \ref{subsection 1}.
\item Schedule $\min\{m,|\mathcal{S}|\}$ sources with the best source-relay channel conditions during the first $\min\{m,|\mathcal{S}|\}$ time slots, i.e. the $\min\{m,|\mathcal{S}|\}$ sources with the following channels, $|h_{(M-\min\{m,|\mathcal{S}|\}+1)}|^2\leq \ldots \leq |h_{(M)}|^2$.
\item Calculate the accumulated power obtained from energy harvesting, $\sum_{j=1}^{\min\{m,|\mathcal{S}|\}}P_{r (M-j+1)}$.
\item Schedule $\min\{m,|\mathcal{S}|\}$ destinations with the best relay-destination channel conditions during the second $\min\{m,|\mathcal{S}|\}$ time slots, i.e. the $\min\{m,|\mathcal{S}|\}$ destinations with the following channels, $|g_{(M-\min\{m,|\mathcal{S}|\}+1)}|^2\leq \ldots \leq |g_{(M)}|^2$, with equally allocated transmission power, denoted by $P_{\min\{m,|\mathcal{S}|\}}=\frac{\sum_{j=1}^{\min\{m,|\mathcal{S}|\}}P_{r (M-j+1)}}{\min\{m,|\mathcal{S}|\}}$.
\end{itemize}
Note that the scheduled destinations are not necessarily the partners of the scheduled sources, so this greedy approach assumes that the relay always has data to be transmitted to all the destinations.
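A sketch of the greedy rule (NumPy, same assumed conventions) returns the scheduled sources, the scheduled destinations, and the common relay transmission power:
\begin{verbatim}
import numpy as np

def schedule_greedy(h2, g2, eps, eta, P, m):
    S = np.flatnonzero(h2 >= eps)
    k = min(m, S.size)
    if k == 0:
        return None
    src = np.argsort(h2)[-k:]   # k strongest source-relay links
    dst = np.argsort(g2)[-k:]   # k strongest relay-destination links
    Pk = eta * P * (h2[src] - eps).sum() / k
    return src, dst, Pk
\end{verbatim}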
Based on the above strategy description, the outage probability at the $i$-th best destination, $1\leq i \leq \min\{m, |\mathcal{S}|\}$, can be written as follows:
\begin{eqnarray}
\mathrm{P}_{oi}&\triangleq& \mathrm{P}\left(|\mathcal{S}|=0\right) +\sum^{M}_{n=1}\mathrm{P}\left(\left.P_{\min\{m,|\mathcal{S}|\}}|g_{(M-i+1)}|^2<(2^{2R}-1)\right||\mathcal{S}|=n\right) \mathrm{P}\left(|\mathcal{S}|=n\right) .
\end{eqnarray}
The following lemma provides the exact expression of the above outage probability.
\begin{lemma}\label{lemma 2}
The outage probability achieved by the greedy user scheduling approach is given by:
\begin{eqnarray}\label{eqlemma2}
\mathrm{P}_{oi}&\triangleq& \mathrm{P}\left(|\mathcal{S}|=0\right) +\sum^{m}_{n=1} T_2\mathrm{P}\left(|\mathcal{S}|=n\right)+\sum^{M}_{n=m+1}T_3\mathrm{P}\left(|\mathcal{S}|=n\right),
\end{eqnarray}
where $\mathrm{P}(|\mathcal{S}|=n)$ is defined in \eqref{sn}, $T_2= i{M \choose i}\sum^{M-i}_{k=0} {M-i \choose k} \frac{ (-1)^k}{k+i} \left(1- \frac{2\left((k+i)n\epsilon_1\right)^{\frac{n}{2}}}{(n-1)!}\mathbf{K}_n\left(2\sqrt{(k+i)n\epsilon_1}\right) \right)$, $T_3=\frac{M!}{(M-i)!(i-1)!}\sum^{M-i}_{l=0}{M-i \choose l} \frac{(-1)^l}{l+i} \left( 1- T_4\right)$, $T_4=\sum^{n-m-1}_{k=0} d_{m,k}\left(\sum^{m}_{j=1}\frac{2a_{j,k}\left(m\epsilon_1(l+i)\right)^{\frac{j}{2}} \mathbf{K}_{j}\left(2\sqrt{m\epsilon_1(l+i)}\right) }{(j-1)!} \right.$
$ \left. +2b_k \sqrt{\frac{m\epsilon_1(l+i)}{1+\frac{k+1}{m}}} \mathbf{K}_1\left(2\sqrt{\epsilon_1(l+i)\left(m+ k+1 \right)}\right)\right)$, $d_{m,k}=\frac{n!}{(n-m-1)!m!m}{n-m-1 \choose k} (-1)^k$, $b_k=(-1)^m\frac{m^{m}}{(k+1)^{m}}$, and $a_{j,k}= \frac{(-1)^{m-j}m^{m-j+1}}{(k+1)^{m-j+1}}$.
\end{lemma}
\begin{proof}
See the appendix.
\end{proof}
Although the outage probability expression in Lemma \ref{lemma 2} can be used for numerical studies, this form is quite complicated and cannot be used for analyzing diversity gains. For the special case of $m=1$, asymptotic studies can be carried out and the achievable diversity gain can be obtained, as shown in the following lemma.
\begin{lemma}\label{lemma 3}
When scheduling only a single user pair, i.e. $m=1$, the diversity gain achieved by the greedy user scheduling approach is $M$.
\end{lemma}
\begin{proof}
See the appendix.
\end{proof}
The fact that the greedy user scheduling approach can achieve the full diversity gain is not surprising, since the greedy approach outperforms the diversity-optimal one described in Section \ref{subsection 1}.
\section{Numerical Results}\label{section simulation}
In this section, computer simulations will be carried out to evaluate the performance of the user scheduling approaches addressed in this paper. To simplify the exposition, we refer to the user scheduling approaches described in Sections \ref{subsection 1}, \ref{subsection 3} and \ref{subsection 2} as ``Approach I", ``Approach II", and ``Approach III", respectively.
We first focus on the scenario where only a single user is scheduled. In Fig. \ref{fig_1} the accuracy of the developed analytical results about the outage probability shown in Theorem \ref{theorm1}, \eqref{eqlemma1}, and Lemma \ref{lemma 2}, is verified by using simulation results, where the targeted data rate is $R=4$ bits per channel use (BPCU), and the energy harvesting efficiency coefficient is $\eta=1$. As can be seen from the figure, the developed analytical results match the simulation results exactly. In Fig. \ref{fig_2} the outage probabilities achieved by different user scheduling approaches are examined with more details, where analytical results are used to generate the figure. As a benchmark, the scheme with a random selected user is also shown in the figure, and its outage performance is the worst among all the scheduling approaches. On the other hand, Approach III, the greedy user scheduling approach, can achieve the best outage performance. The max-min scheduling approach can outperform random relaying, since its diversity gain can be improved when more users join in the competition, as shown in Theorem \ref{theorem 2}. However, it will result in some performance loss compared to Approach I and Approach III, since it cannot achieve the full diversity gain, as indicated in Theorem \ref{theorem 2}.
\begin{figure}[!htbp]\centering
\epsfig{file=single_accuracyR4.eps, width=0.48\textwidth, clip=}
\caption{Analytical results vs computer simulations. Only one user pair is scheduled, $\eta=1$. The targeted data rate is $R=4$ BPCU. }\label{fig_1}\vspace{-1em}
\end{figure}
\begin{figure}[!htbp]\centering
\epsfig{file=single_com_R2.eps, width=0.48\textwidth, clip=}
\caption{Comparison of various user scheduling approaches. Only one user pair is scheduled. $\eta=1$. The targeted data rate is $R=2$ BPCU. }\label{fig_2}\vspace{-1em}
\end{figure}
Since the main focus of this paper is to study the performance of the max-min user scheduling approach, Fig. \ref{fig_3} is provided in order to closely examine the diversity order achieved by this approach. Particularly the analytical results developed in Theorem \ref{theorm1} are used to generate the curves of outage probabilities. To clearly demonstrate achievable diversity gains, auxiliary lines with the diversity order of $\frac{M+1}{2}$ are also shown as a benchmark. As can be seen from the figure, the outage probability curves for the max-min approach are always parallel to the benchmarking curves. Recall that the diversity order is indicated by the slope of an outage probability curve. Therefore Fig. \ref{fig_3} confirms that the diversity order achieved by the max-min approach is $\frac{M+1}{2}$, as indicated by Theorem \ref{theorem 2}. The reason for this loss of diversity gain is that the max-min approach treats the source-relay channels and the relay-destination channels as equally important when user scheduling is carried out. However, when an energy harvesting relay is used, the source-relay channels become more important, since they affect not only the transmission reliability during the first phase, but also the transmission power for the second phase.
\begin{figure}[!htbp]\centering
\epsfig{file=single_diversity_R2.eps, width=0.48\textwidth, clip=}
\caption{Verification of the diversity order for the max-min scheduling approach. Only one user pair is scheduled. $\eta=1$. The targeted data rate is $R=2$ BPCU.}\label{fig_3}\vspace{-1em}
\end{figure}
\begin{figure}[!htbp]\centering
\epsfig{file=mullti_com_R2m2M10.eps, width=0.48\textwidth, clip=}
\caption{Comparison of various user scheduling approaches. The total number of user pairs is $M=10$, $\eta=1$ and two user pairs are scheduled, $m=2$. }\label{fig_4}\vspace{-1em}
\end{figure}
\begin{figure}[!htbp]\centering
\epsfig{file=multiple_accuracy_M6m3.eps, width=0.48\textwidth, clip=}
\caption{Analytical results vs computer simulations. The total number of user pairs is $M=6$, $\eta=1$ and three user pairs are scheduled, $m=3$. }\label{fig_5}\vspace{-1em}
\end{figure}
In Figs. \ref{fig_4} and \ref{fig_5} we focus on the scenario where multiple user pairs are scheduled. Particularly, in Fig. \ref{fig_4} we compare the outage performance achieved by the three schemes, the max-min approach and the two approaches proposed in Section \ref{section IV}. The total number of the user pairs is $M=10$ and two user pairs will be scheduled. Since the scheduled users experience different outage performance, in the figure we show the outage performance for the user with the strongest SNR and also the user with the weakest SNR. As can be observed from the figure, Approach III, the greedy user scheduling approach, can achieve the best outage performance, and the max-min approach achieves the worst performance. It is worth pointing out that Approach II outperforms the max-min approach at the price of high computational complexity, since Approach II needs to enumerate all possible combinations of the user pairs. In Fig. \ref{fig_5}, we evaluate the accuracy of the analytical results developed in Lemma \ref{lemma 2}, by comparing the outage probability calculated using \eqref{eqlemma2} to computer simulations. The total number of the user pairs is $M=6$ and three user pairs will be scheduled. As can be observed from the figure, the developed analytical results match the computer simulations exactly.
Finally we present some simulation results when $\eta<1$ and the large scale path loss is considered. Particularly consider a disk with the relay at its center and a diameter of $4$ meters. The $M$ pairs of sources and destinations are uniformly deployed in this disk, and the used path loss exponent is $2$. In Fig. \ref{fig_6} and Fig. \ref{fig_7}, the performance of the user scheduling approaches for the cases of $m=1$ and $m=2$ is shown, respectively. As can be seen from Fig. \ref{fig_6}, the use of the user scheduling approaches can improve the system performance compared to the random relaying scheme. Another observation from both figures is that, among all the opportunistic scheduling approaches, the max-min approach achieves the worst performance, and the greedy approach outperforms the other user scheduling approaches, which is consistent with the previous figures.
\begin{figure}[!htbp]\centering
\epsfig{file=mullti_eta_R2R3m1M6.eps, width=0.48\textwidth, clip=}
\caption{Comparison of various user scheduling approaches. $\eta=0.5$. The total number of user pairs is $M=6$, and one user pair is scheduled, $m=1$. }\label{fig_6}\vspace{-1em}
\end{figure}
\begin{figure}[!htbp]\centering
\epsfig{file=mullti_eta_R2m2M6.eps, width=0.48\textwidth, clip=}
\caption{Comparison of various user scheduling approaches. $\eta=0.5$. The total number of user pairs is $M=6$, and two user pairs are scheduled, $m=2$. The targeted data rate is $R=2$ BPCU. }\label{fig_7}\vspace{-1em}
\end{figure}
\section{Conclusions} \label{section conclusion}
In this paper, we considered an energy harvesting cooperative network with $M$ source-destination pairs and one relay, where the relay schedules only $m$ user pairs for transmissions. It is important to point out that for the special case of $m=1$, the addressed scheduling problem is the same as relay selection for the scenario with one source-destination pair and $M$ relays. The main contribution of this paper is to show that the use of the max-min criterion will result in loss of diversity gains, when an energy harvesting relay is employed. Particularly when only one user is scheduled, analytical results have been developed to demonstrate that the diversity gain achieved by the max-min criterion is only $\frac{M+1}{2}$, much less than the maximal diversity gain $M$. Motivated by this performance loss, a few user scheduling approaches tailored to energy harvesting networks have been proposed and their performance is analyzed. Simulation results have been provided to demonstrate the accuracy of the developed analytical results and facilitate the performance comparison. When developing user scheduling approaches, only reception reliability is considered, and it is assumed that the network is delay tolerant. It is a promising future direction to study how to achieve a balanced tradeoff between reception reliability and user delay.
\bibliographystyle{IEEEtran}
|
1,108,101,565,782 | arxiv | \section{Abstract}
Shared e-scooters have become a familiar sight in many cities around the world. Yet the role they play in the mobility space is still poorly understood. This paper presents a study of the use of Bird e-scooters in the city of Atlanta. Starting with raw data which contains the location of available Birds over time, the study identifies trips and leverages the Google {\it Places} API to associate each trip origin and destination with a Point of Interest (POI). The resulting trip data is then used to understand the role of e-scooters in mobility
by clustering trips using 10 collections of POIs, including business, food and recreation, parking, transit, health, and residential. The trips between these POI clusters reveal some surprising, albeit sensible, findings about the role of e-scooters in mobility, as well as the time of the day where they are most popular.
\hfill\break%
\noindent\textit{Keywords}: Shared E-scooters, Mobility, Origin-Destination Pairs, Point of Interest, Time of day.
\newpage
\section{Introduction}
E-scooters have become a familiar sight in cities around the world. When they are not seen traveling on streets, bicycle lanes, and around campuses, they can be found parked at transit stations, in front of stores, restaurants, and apartment buildings, or next to parks, stadium, and parking lots.
Despite their growing popularity, the role of e-scooters in mobility remains poorly understood \cite{StuttgartScooters}. Are they used as a complement to transit in order to address its first/last problem? Are e-scooters mostly convenience vehicles whose trips replace a short walk or are they part of daily mobility activities of a segment of the population? These are important questions to address as e-scooters have the potential to change the mobility landscape in cities but may also require investment in infrastructures and regulations in order to ensure the safety of their riders \cite{BestPractice}.
This paper is an attempt to answer some of these questions, using a data-driven approach. Starting with raw data describing where e-scooters are located when they are idle, the study first identifies e-scooter trips, i.e., origin-destination pairs over time. The trips are then filtered to remove the substantial noise present in the dataset and, in particular, a large number of short trips (less than a few meters) that do not represent actual e-scooter use. The origin and destination of e-scooter trips are then mapped onto a Point of Interest (POI), obtained using the {\it Google Places API} and, in particular, its {\it Nearby Search} and {\it Text Search} functionalities. Google Places provides a number of predefined types for POIs, which this study augments to capture sites that are potentially important for mobility. Once each trip is associated with two POIs, it becomes possible to group POIs in meaningful categories and to try isolating the purpose of each trip. These trip purposes offer unique insights of how e-scooters are currently being used and what their roles could be in the future of mobility. Although the data-driven methodology presented in this paper is general, the paper focuses on the use of Bird e-scooters in the city of Atlanta and, more specifically, its midtown and downtown sections and their neighboring neighborhoods which capture about 70\% of the ridership.
The trip purpose analysis is rather illuminating, and somewhat surprising: It reveals that, at this point, {\it e-scooters seem to fill specific mobility needs that complement existing modes}, by offering a convenient and affordable transportation option, at least for some population segments. The analysis also reveals the importance of {\em the time of day} in e-scooter use. The main contributions of the paper can be summarized as follows:
\begin{enumerate}
\item It presents a data-driven methodology for inferring the purposes of e-scooter trips, starting from raw data on idle e-scooter locations and the Google Places API for identifying POIs.
\item It provides an analysis of trip purposes for e-scooters in the city of Atlanta, grouping trips along 10 broad categories that include public transit, business, recreation, food, residential locations, and health facilities.
\item It analyzes the main trip categories in more depth, diving into more detailed trip purposes that reveal some interesting insights on how e-scooters are currently used.
\end{enumerate}
The rest of the paper is structured as follows. It starts with a brief background on the operations of {\it Bird}, a ride-sharing company for e-scooters, and a review of the existing literature on e-scooters. The paper then describes the dataset used for the analysis and the methodology adopted for identifying meaningful trip purposes. The methodology involves the transformation of the raw data into trips, each of which consists of an origin and a destination and their times, the identification of potential POIs using the Google Places API, as well as the association of trips with two POIs. Each of these steps requires some careful cleaning and calibration, which are documented in each of the sections, given the distance between the raw data and actual e-scooter trips, as well as the difficulty in associating POIs with trips. The trip purposes are then presented in matrix form, where the rows and columns represent broad categories of POIs. Deep dives on the most popular categories, and on the time of use, are also presented. The paper then concludes with implications for mobility.
\section{Background}
Bird \cite{Bird} is a ride-sharing company whose business model emulates bike- and car-sharing companies. A rider can activate a Bird e-scooter (a Bird) whenever they see one, ride it to their destination, and drop it anywhere. The cost is \$1 to start a trip and 29 cents per minute. Birds can be operated from 7 A.M. to 9 P.M. Every night after hours, Birds are picked up and taken to charge. The following morning they are dropped off by 7 A.M.
\section{Literature Review}
As mentioned in the introduction, the role of e-scooters in mobility remains poorly understood and has not been studied in depth. The study in \cite{StuttgartScooters} is the closest related work: It concerns the analysis of e-scooter riders in the city of Stuttgart in Germany using various clusterings of the population. Starting with a dataset which contains the e-scooter trips, as well as demographic data, they identified four classes of riders: Power Users, Casual Users (Generation X), Casual Users (Generation Y), and One-Time users. The Power Users account for 40\% of the revenue in their dataset but form a rather small group compared to the others. The analysis in this paper is orthogonal in scope and complements those findings nicely: It is not customer-centric, since no such data was available. Instead, this paper focuses on trip purposes and the role of e-scooters in the mobility landscape. In addition, the methodology followed in our study is fundamentally different: It relies on POIs to identify trip purposes. Interestingly, the analysis provided in this paper also reveals, albeit in an indirect way, the main customer groups for e-scooters. A study of best practices in the management of e-scooters is presented in \cite{BestPractice}. It studies the correlation between e-scooter uses and accidents and provides recommendations on how to improve safety for e-scooter riders. The optimal location of charging stations for e-scooters is studied in \cite{CHEN2018519}. The way riders park their e-scooters in the city of San Jose is studied in \cite{ScooterPark}. E-scooters were also a focus in the study of measuring equitable access to mobility described in \cite{ScooterEquity}. The study reports where e-scooters are available and concludes that e-scooter availability is aligned with vehicle availability and that e-scooters are not evenly distributed or used in low-income neighborhoods. These observations are consistent with the trip purposes identified in this paper.
\section{The Dataset}
Bird stores data for all of their e-scooters and had this information publicly available for a short time. This data was available for cities such as Atlanta, Miami, Los Angeles, Portland, and Charlotte. However, for the purposes of this study, only trips in Atlanta were considered. During the data collection, Bird's server updated approximately every 10 minutes and stored the time of the last update, as well as the data for all Birds in the city. Each Bird is associated with a unique e-scooter ID and has latitude and longitude coordinates for each system update. However, if the Bird is in use at the time of the system update, the e-scooter ID will not be present on the server and instead will have 'null' in its place. Additionally, Bird's server does not store historical data, so the extraction program had to pull the data in real time. The data collection was conducted using Bird's API over the following dates in 2019: January 26 -- February 1, February 2--5, 10--13 and 15--19, and February 26 -- March 5.
\section{The Methodology}
The goal of this project is to understand how e-scooters are being used and, in particular, to identify the purpose of every e-scooter trip. To achieve this objective, the analysis of the Bird dataset requires a number of steps that can be summarized as follows:
\begin{enumerate}
\item Extraction of e-scooter trips from the dataset;
\item Extraction of Points of Interest (POI for short) using the Google Places API;
\item Association of POIs to the origin and destination of each trip;
\item Determination of the purpose of each trip;
\item Aggregation of trip purposes by POI categories and presentation in matrix form.
\end{enumerate}
Many of these steps are complicated by various factors that are highlighted in subsequent sections.
\section{Trip Identification}
This section reviews the first step of the methodology, trip identification, which includes both extraction and cleaning.
\subsection{Trip Extraction}
The first step consists in extracting trips from e-scooter traces. Each e-scooter is tracked independently on a daily basis. The data provides successive coordinate locations and time-stamps for each e-scooter; whenever the reported location of an e-scooter changes between two successive updates, a trip is identified. The two locations and their times are then used as the origin and destination of the trip. As a result, each obtained trip has an origin, destination, start and end times (month/day/year/time), and a distance of displacement. There are 2.6 million data points in the dataset which spans the days mentioned earlier.
Due to the nature of Bird's server, which only updates once every 10 minutes, these trips are necessarily an approximation. First observe that, if a trip starts after a given time-stamp $t_1$ and ends before the system updates again at $t_2$, the trip has a start time of $t_1$ regardless of how close the actual start time was to $t_2$. Similarly, any ride that ends after a given time-stamp (say $t_3$) but before the system updates again (at $t_4$) produces a trip with an end time of $t_4$. As a result, this definition of trip cannot capture a situation where two trips start and end in between two time-stamps.
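A minimal sketch of this reconstruction is given below (pandas; the column names are assumptions, not Bird's actual schema). A trip is emitted whenever a scooter's reported position changes between two consecutive snapshots:
\begin{verbatim}
import pandas as pd

# One row per (timestamp, scooter_id, lat, lon) server snapshot.
snaps = pd.read_csv("bird_snapshots.csv", parse_dates=["timestamp"])
snaps = snaps.sort_values(["scooter_id", "timestamp"])

prev = snaps.groupby("scooter_id").shift(1)  # previous snapshot
moved = ((snaps["lat"] != prev["lat"]) |
         (snaps["lon"] != prev["lon"])) & prev["lat"].notna()

trips = pd.DataFrame({
    "scooter_id": snaps["scooter_id"],
    "start_time": prev["timestamp"],  # last sighting at the origin
    "end_time": snaps["timestamp"],   # first sighting at destination
    "o_lat": prev["lat"], "o_lon": prev["lon"],
    "d_lat": snaps["lat"], "d_lon": snaps["lon"],
})[moved]
\end{verbatim}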
\subsection{Trip Cleaning}
The raw trips then need to be cleaned. First, because of Bird's operating hours, any trip that has a start time or end time outside of the 7am to 9pm range is excluded. This decreases the chance of classifying an e-scooter being picked up for charging as a trip. All trips under 75 meters and over 3000 meters are also eliminated. This is done in part to eliminate any charging trip that occurs during the day, as well as short distance trips that are assumed to be noise. Surprisingly, in the dataset, there are 1,516,631 rides under 5 meters, 1,983,723 under 10 meters, and 2,354,293 under 20 meters, which provides compelling evidence that these rides do not correspond to actual trips. After these filtering steps, there are around 23,000 trips left, which is much more reasonable.
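Continuing the sketch above, the filters amount to a few vectorized predicates, with distances computed by a standard haversine formula:
\begin{verbatim}
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    # great-circle distance in meters
    r = 6371000.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    a = (np.sin((p2 - p1) / 2)**2 +
         np.cos(p1) * np.cos(p2) *
         np.sin(np.radians(lon2 - lon1) / 2)**2)
    return 2 * r * np.arcsin(np.sqrt(a))

trips["dist_m"] = haversine_m(trips["o_lat"], trips["o_lon"],
                              trips["d_lat"], trips["d_lon"])
in_hours = (trips["start_time"].dt.hour.between(7, 20) &
            trips["end_time"].dt.hour.between(7, 20))
clean = trips[in_hours & trips["dist_m"].between(75, 3000)]
\end{verbatim}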
Figure \ref{fig:trips} highlights the raw and cleaned trips.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.0in]{Raw_Trip_Distances.png}
\caption{The Raw Trips.}
\end{subfigure}%
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.0in]{Cleaned_Trip_Distances.png}
\caption{The Cleaned Trips.}
\end{subfigure}%
\caption{Illustrating the Trip Cleaning Process.}
\label{fig:trips}
\end{figure}
\section{Points of Interest}
This section describes the methodology to obtain the POIs from the Google Places API.
\subsection{Google Places}
This study uses {\it Google Places} \cite{GooglePlaces} to obtain POIs:
{\em Places} has various capabilities and this study employs two of them: the {\it Nearby Search} and the {\it Text Search}. Both of these conduct a search within a given radius of a specific coordinate. {\it Nearby Search} relies on {\bf predefined types} provided by Google in \cite{GooglePlacesPredefined}: Hence {\it Nearby Search} has the benefit of returning results that match the specific predefined type specified by queries. The {\it Text Search} pulls information on the POIs that match the text string in the query input: It has a slightly lower degree of reliability.
To keep the Google search budget reasonable, the city is divided into a grid. Once the city is embedded in this grid, it quickly becomes apparent that one grid cell is responsible for 70\% of the total traffic: That cell captured the (important) midtown and downtown areas. As a result, only trips in this region are analyzed. The upper right and lower left coordinate boundaries for this specific grid are [33.789279,-84.35961499999999] and [33.74837933333333,-84.40562333333332]. This leaves a final count of 16,217 total trips.
Although Google Places is powerful, it is not without limitations. To prevent mass data extraction, Google returns at most 20 results per query. To overcome this, an even finer grid is created out of the new coordinate boundaries (8x8, 64 total grid points). For each grid point, a search is conducted using the center of the grid point and a calculated radius that ensures the grid cell is inscribed within the query circle.
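The gridding logic can be sketched as follows (the boundary coordinates are those given above; the meters-per-degree constant is a standard approximation). Each resulting center and radius then parameterize one query:
\begin{verbatim}
import numpy as np

LAT_MIN, LON_MIN = 33.74837933, -84.40562333
LAT_MAX, LON_MAX = 33.78927900, -84.35961500
N = 8                                  # 8x8 grid, 64 cells

lat_step = (LAT_MAX - LAT_MIN) / N
lon_step = (LON_MAX - LON_MIN) / N
centers = [(LAT_MIN + (i + 0.5) * lat_step,
            LON_MIN + (j + 0.5) * lon_step)
           for i in range(N) for j in range(N)]

# Query radius (m): half the cell diagonal, so the query circle
# circumscribes the cell; ~111.32 km per degree of latitude.
m_per_deg = 111320.0
cell_h = lat_step * m_per_deg
cell_w = lon_step * m_per_deg * np.cos(
    np.radians((LAT_MIN + LAT_MAX) / 2))
radius = 0.5 * float(np.hypot(cell_h, cell_w))
\end{verbatim}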
\subsection{General Points of Interest}
Out of the 90 predefined types for {\it Nearby Search}, only those deemed relevant for the purpose of this study are used in order to allow for as many queries as possible for a given query budget. There are, however, gaps in predefined types for {\it Nearby Search}. More precisely, there is no residential type. POIs such as apartments and condos are thus obtained using the {\it Text Search}. Note also that the {\it Nearby Search} associates with each POI its name, location, predefined types, and vicinity (address). In general, there are about 2--3 predefined types returned for a given POI.
\subsection{Additional Points of Interest}
To increase the overall POI density in the selected region, some additional procedures are conducted. After reducing the selected region to a 15x15 grid, more queries are run for the types "condo", "lodging", "park", "restaurant", and "subway station" in the grid points with the lowest POI density. Thirteen business points of interest, corresponding to major corporations, were also manually created to overcome spurious parking associations. Lastly, three neighborhoods were manually added, one being in Midtown, one being Old Fourth Ward, and one being Virginia Highlands.
\subsection{Primary Types}
For classification purposes, this study also associates with each POI a {\em primary} type. The set of primary types is composed of Google's predefined types and the types introduced by this study (e.g., apartment). The primary type of a POI obtained by the {\it Nearby Search} is the first predefined type returned by Google. This selection is based on the assumption that Google returns the list of associated types in decreasing order of relevance.
For those added through the {\it Text Search} whose respective types do not exist in the Google API, their primary type was programmatically added to their list of associated types.
\subsection{POI Buffers}
There are some limitations with using a specific coordinate point. Consider the Mercedes-Benz stadium in Atlanta. Because its coordinates are at its center, the stadium has a low probability of being associated with Birds, which are not allowed inside the building. As a result, a restaurant or coffee shop with a closer coordinate could end up being associated with a Bird that was parked right next to the Benz. This concept extends to other points of interest such as MARTA stations, the Georgia Aquarium, and neighborhoods. To address this limitation, a series of "buffer" points of interest are programmatically added around these key POIs using an azimuthal equidistant projection.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.0in]{MercedesBenz.jpeg}
\caption{The Mercedes-Benz Stadium.}
\end{subfigure}%
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.0in]{GAAquarium.jpeg}
\caption{The Georgia Aquarium.}
\end{subfigure}%
\caption{Illustrating the POI Buffers.}
\label{fig:buffers}
\end{figure}
Eight buffers are given to the Georgia Aquarium (radius = 90 m), Mercedes-Benz Dome (radius = 140 m), MARTA stations (radius = 20 m), and neighborhoods (radius = 140 m). To reduce the distance between the POIs associated with a trip within a neighborhood, an additional ring of 16 buffers are added around all 3 neighborhoods (radius = 290 m). These buffer points inherit the same data as their parent POIs. POI buffers are illustrated in Figure \ref{fig:buffers}.
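One possible implementation of the buffer construction uses {\tt pyproj} (the stadium coordinates below are approximate and purely illustrative):
\begin{verbatim}
import numpy as np
from pyproj import Proj

def buffer_points(lat, lon, radius_m, n=8):
    # Azimuthal equidistant projection centered on the parent POI:
    # points at distance radius_m map back to a circle of that
    # radius around (lat, lon).
    aeqd = Proj(proj="aeqd", lat_0=lat, lon_0=lon)
    angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    lons, lats = aeqd(radius_m * np.cos(angles),
                      radius_m * np.sin(angles), inverse=True)
    return list(zip(lats, lons))

# e.g. eight 140 m buffers around the stadium
stadium_buffers = buffer_points(33.7554, -84.4008, 140.0)
\end{verbatim}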
\subsection{Duplicate Removal}
After collecting all POIs, it is necessary to address duplicate entries as well as other issues. Indeed, a search on "restaurant" and a search on "bar" may return the same POI. In this case, the {\it Nearby Search} result whose primary type does not match the query type is removed. More generally, in the presence of duplicates, only the POIs whose primary type matches their first predefined type are kept. Of course, none of the results from the {\it Text Search} can be post-processed in this way since these queries are not based on predefined types.
Another complication comes from the fact that there are many POIs with only Atlanta in their vicinity field as opposed to a proper address. Upon further investigation, none of these POIs are assigned proper coordinates, and it appears Google does not have accurate location information for them. All of these are also excluded.
The last change to the POI data is motivated by the fact that different POIs may share the same exact location. As a result, it is not possible to associate a trip purpose to an O-D pair with that location. To capture this uncertainty, a new primary type is created and called {\em multiple}. All POIs sharing a location are then replaced by a single {\em multiple} POI. The new instance inherits all of the primary types, including duplicates, from its predecessors. Note that, when the two POIs are of the same primary type, the merging process still occurs but the primary type is kept.
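This merging rule can be summarized by the following sketch (a minimal illustration; the field names are hypothetical):
\begin{verbatim}
from collections import defaultdict

def merge_colocated(pois):
    """pois: list of dicts with 'lat', 'lon' and 'primary_type'.
    POIs sharing the exact same location collapse into one POI;
    if their primary types differ, the merged POI gets the new
    primary type 'multiple' and inherits all primary types of its
    predecessors, duplicates included."""
    by_loc = defaultdict(list)
    for p in pois:
        by_loc[(p['lat'], p['lon'])].append(p)
    merged = []
    for (lat, lon), group in by_loc.items():
        types = [p['primary_type'] for p in group]
        primary = types[0] if len(set(types)) == 1 else 'multiple'
        merged.append({'lat': lat, 'lon': lon,
                       'primary_type': primary,
                       'all_types': types})
    return merged
\end{verbatim}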
\subsection{Grouping POIs}
\begin{table}[!t]
\centering
\includegraphics[height=2.5in]{Types_Into_Groups.png}
\caption{The 10 POI Groups Used in This Study.}
\label{table:group}
\end{table}
Because this study relies on 42 primary types of POIs, these are further categorized into 10 different groups described in Table \ref{table:group}. Bus stops could also be included, but this would result in an extremely high volume of associations, most of which are deemed to be unreasonable. Future work should cross-analyze Automatic Passenger Count and GTFS data with bus stops to determine potential associations to bus stops. After this step, every POI is thus endowed with a group and a primary type.
\section{Linking Trips and POIs}
With clean trips and POIs, it becomes possible to associate a POI with the origin and destination of a trip. For each trip coordinate (i.e., origin or destination), the closest POI is found using a k-dimensional tree. Because {\tt scipy}'s k-dimensional tree uses Euclidean distances, the POIs, origins, and destinations are first converted to a Cartesian plane. The trips are then augmented with their origin and destination POIs. Moreover, the distances between the origin and destination and their POIs are also added. Indeed, these distances help in establishing the likelihood of the inferred trip purpose. In the case where the same POI is returned for both origin and destination, an additional Google query is run on the origin location. The choice of the origin for reassignment is motivated by the fact that a Bird rider has more control over her destination than her origin.
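A minimal version of this association step is sketched below. The conversion to three-dimensional Cartesian coordinates shown here is one way to make Euclidean distances approximate great-circle distances at city scale; the input arrays are illustrative placeholders for the actual POI and trip data:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

EARTH_R = 6371000.0  # meters

def to_cartesian(lat, lon):
    """Degrees lat/lon -> points on a sphere of radius EARTH_R."""
    phi, lam = np.radians(lat), np.radians(lon)
    return np.column_stack((EARTH_R * np.cos(phi) * np.cos(lam),
                            EARTH_R * np.cos(phi) * np.sin(lam),
                            EARTH_R * np.sin(phi)))

# illustrative inputs (degrees); real arrays come from the data
poi_lat = np.array([33.7490, 33.7756])
poi_lon = np.array([-84.3880, -84.3963])
end_lat = np.array([33.7491])
end_lon = np.array([-84.3882])

tree = cKDTree(to_cartesian(poi_lat, poi_lon))
# nearest POI index and its distance (meters) per trip endpoint
dist_m, poi_idx = tree.query(to_cartesian(end_lat, end_lon))
\end{verbatim}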
\subsection{Distance Threshold Sensitivity Analysis}
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.3in]{distance_origin_hist.png}
\caption{Distances between Origins and POIs.}
\end{subfigure}%
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.3in]{distance_destination_hist.png}
\caption{Distances between Destinations and POIs.}
\end{subfigure}%
\caption{Distances between Origins/Destinations and POIs.}
\label{fig:distances}
\end{figure}
Although the origins and destinations of all trips are assigned a POI, there are significant variations in distances between origins/destinations and POIs, as shown in Figure \ref{fig:distances}. In general, the larger the distance between an origin/destination and its POI, the lower the confidence in the association. This means that, in order to eliminate spurious associations, a certain distance threshold needs to be established. To find such a suitable cutoff point, it is useful to look at how the number of trips increases with distance for various groups of POIs.
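The sensitivity analysis below amounts to a cumulative count of associations per POI group; a minimal sketch (array names are hypothetical):
\begin{verbatim}
import numpy as np

def trips_within(dist_m, group, thresholds):
    """dist_m: POI distance for each trip endpoint; group: POI
    group for each endpoint. Returns, per group, the number of
    endpoints whose POI lies within each distance threshold."""
    counts = {}
    for g in np.unique(group):
        d = np.sort(dist_m[group == g])
        counts[g] = np.searchsorted(d, thresholds, side='right')
    return counts

# e.g., thresholds = np.arange(0, 205, 5) for curves like below
\end{verbatim}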
\begin{figure}[!t]
\centering
\includegraphics[width=10cm]{number_rides_destination_sensitivity.png}
\caption{Number of Trip Destinations per POI Group as a Function of POI Distance.}
\label{fig:POIgroups}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=10cm]{percentage_destination_sensitivity.png}
\caption{Percentage of Trip Destinations per POI Group as a Function of POI Distance.}
\label{fig:POIgroupsPercent}
\end{figure}
Figure \ref{fig:POIgroups} depicts the results: Across POI groups, the number of trips at a given threshold displays a similar trend of logarithmic growth as the distance increases. This means that the main increase in trip/POI associations occurs for distances within the 0 to 50 meter range. Beyond 50 meters, the increase in trip/POI associations is minimal and probably not reliable.
The same information expressed in percentages is shown in Figure \ref{fig:POIgroupsPercent}: It tells a slightly different, though still reaffirming, story. The percentages of trips associated with each POI group outside of {\it business}, {\it parking}, and {\it residence} have minor fluctuations. {\it Business} shows a sharp decrease as the distance threshold increases, and {\it parking} and {\it residential} POIs compensate for its decrease. As all groups find a relatively stable percentage at 50 meters, this is determined to be the most suitable threshold going forward. Note that the percentage decrease in the POI group {\it Business} indicates that business associations correspond to small distances and hence should be highly reliable. As will become clear, this is reassuring given the conclusions of this study.
As e-scooters provide a high amount of flexibility for transportation, 50 meters may seem to be quite far. However, this is not necessarily the distance from the origin/destination to the nearest POI. In reality, the coordinates returned by Google are generally in the center of a point of interest. A rider, therefore, would drop off her e-scooter near an entry point to the location, rather than at the exact coordinate in the middle of the building. When this is taken into account, a 50-meter threshold becomes very reasonable. In total, out of the 16,217 paths, 9,531 have both an origin and a destination associated at 50 meters or less.
\section{Trip Purposes}
It is now possible to try to understand the purposes of Bird trips.
\subsection{Overall Trip Purposes}
\begin{table}[!t]
\centering
\includegraphics[height=1.7in]{full_matrix.jpg}
\caption{Trip Purposes: A 2-Dimensional View. The x-axis represents the origins and the y-axis the destinations.}
\label{table:TP}
\end{table}
Table \ref{table:TP} presents a 2-dimensional view of the trip purposes: Each row considers the trips with an origin specified by its POI group and reports the number of trips going to each POI group specified by the columns. Observe that the POI groups {\em Parking}, {\em Food}, {\em Recreation}, and {\em Business} consistently stand above the rest. This holds true for both origins and destinations. The rest of this section dives deeper into the information provided by this matrix.
\subsubsection{Business to Business}
Surprisingly, {\it Business} to {\it Business} is the trip purpose with the highest counts. This suggests that Birds are being used for travel to and from business meetings in the city. Birds have a financial advantage over Uber and Lyft and a convenience advantage over MARTA (the Metropolitan Atlanta Rapid Transit Authority), making them an easy and affordable way to move quickly between businesses.
\begin{table}[!t]
\centering
\includegraphics[height=1.25in]{business_matrix.jpg}
\caption{Zooming into the Business Trips.}
\label{table:business}
\end{table}
Drilling down further into the business group, out of all of the Business to Business trips, a total of 145 trips are coming from the {\it lawyer} type and a total of 137 are going to a {\it lawyer} type. {\it Real Estate} was the second highest type, with 74 and 68 respectively. This is shown in Table \ref{table:business}. What is unique about these jobs is their requirement for a certain degree of mobility. Many lawyers represent a vast array of clients, and this means that they could be visiting all sorts of potential POI types. It could be for a sit-down meeting in a place of business, a casual meeting, or even accompanying a client to court. Real estate agents will frequently leave the office to show a prospective buyer a new property. Ultimately, they have to return to the agency, and 44 of the residential to business trips have a real estate agency as the associated destination.
From the opposite perspective, these are occupations that may involve client visits. The same principle could be applied to the bank type, which is responsible for 65 origins and 55 destinations for {\em Business}. A Bird would then provide a flexible mode of transportation for the client. It is then possible that these clients are in a tax bracket where the convenience incentive of a Bird outweighs the financial burden. There is also no waiting time for a Bird if it is available. If it is not, the rider may choose another, more expensive, option. {\em In summary, Birds seem to have addressed a mobility gap in first/last mile business trips, for which they bring convenience and affordability, at least for some population segments.}
\subsubsection{Business from/to Parking}
This idea can be expanded upon when examining both the high volume of {\em parking} to {\em business} trips and their inverse. In this case, workers are in a situation where they can drive their own car to work. However, no parking lots may be available close to their places of work, and those that are can be quite expensive. A Bird can serve as a way of alleviating the stress and financial burden of finding a parking spot close to work. Instead, a commuter could park further away and use the Bird to quickly get through the last leg of the trip. {\em In this case, unlike with Public Transportation, Birds are being used as a last-mile solution. If confirmed, this would be encouraging for cities, which could relocate parking lots outside the main business areas and use Birds for the last mile.}
\subsubsection{Leisure}
Moving on, {\em recreation} and {\em food} are two of the other most dominant categories, and even more so in the evening and at night. Out of the 205 recreation to recreation trips, 92 involved leaving a bar and 103 were going to a bar. In addition, there were 89 instances of a trip going from a restaurant to a bar and 78 going from a bar to a restaurant. There are a couple of explanations for this, and they are not necessarily mutually exclusive.
Riders for these trips could be operating under the same incentives as those contributing to the business to business trips. For those individuals who may have to Uber/Lyft into the city or live in the city, using a Bird is a good trade-off between cost and convenience. Indeed, if all of their destinations are in a relatively similar area, the fixed cost of ordering an Uber/Lyft would not be competitive. {\em Again, if confirmed, Birds may have found a sweet spot in mobility for food and recreation trips.}
The second explanation would again incorporate parking into the equation. 84 trips go from a parking lot to a bar and 93 go from a bar to parking. On top of that, 136 are from a restaurant to parking and 145 are from parking to a restaurant. In this situation, a user would be coming into the city with their own car. Although they have an additional trip to the parking lot, the same kind of incentives are at work. Note that the data shows that e-scooters are used in combination with parking to go to bars and restaurants: It is not simply visiting a bar or a restaurant before returning home after an e-scooter trip to parking. {\em Once again, e-scooters seem to address a last-mile need.}
\subsubsection{Transit}
Looking at this trend, it does not appear that Birds are being commonly used as a last-mile complement to MARTA, the Atlanta Public Transportation system. This does not rule out compatibility between the two services in the future but it may suggest that the convenience incentive for MARTA riders to use Birds is outweighed by its financial burden.
\subsection{Trips Over Time}
The effect of time of day on the number of trips is profound, as shown in Figure \ref{fig:timeofday}. The figure depicts Bird's trip locations in the morning (left) and in the evening (right) respectively. The morning has a much lower density of trips in the city, which is also confined to the Midtown and Downtown clusters. By the end of the day, the trip density both inside and outside of these two clusters has increased dramatically. There is also a notable number of trips on major roads and the BeltLine in the eastern part of central Atlanta. From examining these images, it is thus not surprising that a significant portion of trips is associated with recreation, business, and food.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.5in]{Morning_Origins.jpg}
\caption{Trip Origins in the Morning.}
\end{subfigure}%
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[height=2.5in]{Evening_Destinations.jpg}
\caption{Trip Destinations in the Evening.}
\end{subfigure}%
\caption{The Effect of Time of Day on the Number of Trips.}
\label{fig:timeofday}
\end{figure}
\subsection{Trip Purposes By Time of Day}
Since mornings and evenings behave rather differently, this section explores the relationship between trip purposes and time of day. To study this dependency, trips are divided into 5 different time slots: the 7:00 AM - 10:59 AM slot is designed to capture the morning commute; 11:00 AM - 1:59 PM is aimed at capturing the lunch hour; 2:00 PM - 4:59 PM is used for the afternoon hours; and 5:00 PM - 6:59 PM is used for commuters and restaurant patrons. The last time slot, 7:00 PM - 8:59 PM, is primarily geared towards capturing night life activity before Bird's end of operational hours.
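For reference, the slot binning can be written as a simple function (a sketch; the slot labels are ours):
\begin{verbatim}
def time_slot(hour):
    """Map a trip's start hour (0-23) to the five study slots."""
    if 7 <= hour < 11:
        return 'morning commute'   # 7:00 AM - 10:59 AM
    if 11 <= hour < 14:
        return 'lunch'             # 11:00 AM - 1:59 PM
    if 14 <= hour < 17:
        return 'afternoon'         # 2:00 PM - 4:59 PM
    if 17 <= hour < 19:
        return 'evening'           # 5:00 PM - 6:59 PM
    if 19 <= hour < 21:
        return 'night'             # 7:00 PM - 8:59 PM
    return 'outside operating hours'
\end{verbatim}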
\begin{table}[!t]
\centering
\includegraphics[height=2.4in]{time_matrix.jpg}
\caption{Trip Purposes by Time of Days.}
\label{table:TPTOD}
\end{table}
The results are shown in Table \ref{table:TPTOD}. The results indicate that the {\em Parking}, {\em Food}, {\em Recreation}, and {\em Business} groups are relatively similar when comparing the origin and destination for a given time slot.
The main takeaway from Table \ref{table:TPTOD} is the magnitude of changes in ridership throughout the day. The ride counts in the morning and at lunch are fairly modest; there is a large increase in the afternoon, which is generally maintained in the evening and at night. Observe that the afternoon time slot has the benefit of an extra hour when compared to the night and evening slots. In general, the number of rides increases throughout the day and there is an overall lack of symmetry in rides between the early and later hours.
It is also interesting to observe that business travel peaks in the afternoon, which could suggest that Birds are being used as afternoon travel to and from meetings in the city. Similarly, recreation and food are especially prominent in the evening and at night.
Indeed, the high overall volume of trips after hours suggests that Birds are being primarily used as a vehicle of leisure at that point.
These time-of-day results further increase the confidence in the POI associations. The fact that recreation and food arise in the evening and at night and business travel in the afternoon is particularly reassuring.
\subsection{Weekdays Versus Weekends}
\begin{table}[!t]
\centering
\includegraphics[height=1.35in]{weekday_matrix.jpg}
\caption{Trip Purposes on Weekdays.}
\label{table:TPWD}
\end{table}
\begin{table}[!t]
\centering
\includegraphics[height=1.35in]{weekend_matrix.jpg}
\caption{Trip Purposes on Weekends.}
\label{table:TPWE}
\end{table}
Tables \ref{table:TPWD} and \ref{table:TPWE} report the trip purposes for weekdays and weekends. The same trends as before can be observed, with {\em Parking}, {\em Food}, {\em Recreation}, and {\em Business} consistently standing above the rest. The main difference between weekdays and weekends is of course the share of business trips which decreases significantly on weekends as expected. It also emphasizes the role of e-scooters for leisure: Food and recreation have significant ridership both on weekdays and on weekends.
\section{Implications}
This analysis indicates that Birds are a highly versatile method of transportation. Not surprisingly, given their ubiquity, they fill a number of gaps in the mobility space. In particular, this paper has shown that Birds are heavily used for business to business trips, business from/to parking, and for recreation. For these POI groups, Birds provide a new option for last-mile mobility. The Business from/to Parking trips are especially interesting for cities: They indicate that Birds may be a technology enabler to relocate parking spaces outside the city center. Birds, together with infrastructure improvements to ensure safety, may thus have interesting consequences for urban planning. Note however that Bird usage currently appears to be limited to a certain demographic: generally individuals who can afford an option that is more convenient and flexible than transit/walking and yet remains affordable. However, cities may have incentives to make Birds more accessible.
It is important to emphasize that there is also a significant time-of-day aspect, with the afternoon and evening hours having significantly more rides than the mornings. To a large extent, this is a consequence of the trip purposes discussed earlier.
The use of Birds for transit is, however, relatively small, although Birds are typically available around public transit stations. It is possible that, at its current price, the financial burden of using a Bird as part of a commute is too high for transit users. Indeed, in Atlanta, a simple Bird trip from the Midtown MARTA station to Georgia Tech is \$4. In contrast, for business trips, there is a demonstrated ability for Birds to fulfill the last-mile need.
A packaged deal between Bird and Transit (MARTA) could open the door to a previously untapped market. In order to gain access to more consistent and reliable usage, Bird could offer a financial incentive to MARTA users. This could come in the form of a free ride every 10 trips or discounted fares. Potentially when users purchase a MARTA pass, for an extra fee, a Bird subscription could be added.
In order to accomplish a working relationship, there are still two main issues to tackle. From an economic perspective, removing the financial deterrent would require the price to be low enough to satisfy users, but high enough for Bird to have a sustainable business model. After addressing this, the next main issue would be the reliability of a Bird commute. If users are going to consistently commit to commuting with Bird, they will want Bird to commit to consistently having an e-scooter available for them. This could be accomplished with some sort of reservation feature or by Bird calculating the expected demand for e-scooters at each station. This would cause a few things to change. First, Bird could see a consistent, reliable revenue stream unlike before. Second, a whole new group of users would now have an affordable and flexible transportation option. Third, MARTA by extension would become a much more flexible and convenient option. Instead of seeing MARTA as limited, this could open the door to a more encompassing public transportation system.
\section{Conclusion}
Shared e-scooters have become a familiar sight in many cities around the world. Yet few studies have analyzed the role they play in the mobility space. This paper originated from an attempt to fill this gap. Starting from raw Bird data for the city of Atlanta, it compiles the underlying trips, including their origins, destinations, and times. Using Google Places, this paper also collected a wide variety of POIs relevant for mobility and clustered them into 10 broad categories. POIs were then associated with each trip origin and destination, which led to the creation of 2-dimensional matrices identifying trip purposes.
The results of the study are particularly illuminating. They indicate that e-scooters fill specific last-mile needs for certain mobility trips. In particular, the results indicate that e-scooters are primarily used for business and leisure. Business to business trips are prominent for e-scooters, which was a surprise. These trips occur primarily in the afternoon and provide both an affordable and convenient mobility for this population segment. Business from/to parking trips are also significant, with potentially interesting implications for cities and urban planning. Birds are in heavy use for leisure trips to bars and restaurants, especially in the evening and at night (before 9pm).
The use of e-scooters in connection with transit is small overall, probably due to the relatively high additional cost. However, the use of e-scooters in conjunction with parking suggests that proper financial incentives (e.g., joint monthly passes for transit and e-scooters) may change this equation. The lack of e-scooter use together with transit is not due to any technical limitation, and hence cities may have a significant opportunity to address first/last mile mobility with e-scooters: It requires, however, the proper incentive structure.
E-scooters also exhibit distinctive time-of-day patterns, with a surge in use in the afternoon that is maintained through the evening. This is naturally explained by the trip purposes.
E-scooters are a recent addition to the mobility landscape and it is likely that their use will continue to evolve quickly, given their proliferation in many cities around the world. It would be interesting to perform a similar study in the next few months again to see how ridership is evolving.
\section{Acknowledgements}
This research is partly funded by NSF grant 1854684. Many thanks to Connor Riley for providing us with the raw data.
\bibliographystyle{plain}
\section{Introduction}
\glsreset{LVCSR}
\glsreset{BNF}
\glsreset{DSP}
\Gls{LVCSR} can be used to extract rich context about a user's interests, intents, and state. If run on a mobile device, this has the potential to revolutionize the quality of on-device services they interact with. In order for this to become practical, hardware-level optimization is required to preserve the battery life of portable devices.
In this paper, we present a new \gls{LVCSR} model architecture that takes advantage of a low-power, fixed point, always-on \gls{DSP} to significantly reduce power consumption. Our goal is to use the \gls{DSP} to optimally compress incoming speech into its \glspl{BNF} representation which is cached for as long a period as possible. By increasing the amount of cached input, we reduce the wake-up frequency of the device's main processor, which is used to complete the inference.
We start with a state-of-the-art \gls{LAS} end-to-end \gls{ASR} model, and effectively split its encoder across the \gls{DSP} and the main processor. Hardware optimization across the \gls{DSP} and main processor has been successfully leveraged in the past to cache features for similar low-power services \cite{gfeller2017now}, though this is the first time that a \gls{DSP} has been used to compute the initial layers in the primary inference model.
This leads to a significant increase in the amount of audio we can cache, with minimal impact to the model's overall \gls{WER}. Furthermore, as a purely on-device model, this design preserves user privacy as well as battery life. The topology is an important step towards practical \gls{LVCSR} in highly power-constrained contexts.
\begin{figure*}[th]
\centering
\includegraphics[width=.8\linewidth]{images/bn_stripped2.png}
\captionsetup{width=\linewidth}
\caption{\textit{The default configuration of a bottleneck layer running on the \gls{DSP}; here we see a kernel size of 4 applied in a frequency separable way, followed by one frequency kernel per output channel. These two convolutions are considered as a single 'layer'.}}
\label{bn_structure}
\vspace{-0.34cm}
\end{figure*}
\section{Related Work}
Fully end-to-end \gls{LVCSR} models are emerging as the state of the art \cite{chiu2018state}, equalling and even surpassing the performance of standard connectionist temporal classification \cite{graves2012connectionist} models. The core architecture for these end-to-end models, called Listen, Attend, and Spell \cite{chan2016listen}, contains three major subgraphs - an encoder, an attention mechanism, and a decoder. Since their proposal in 2015, there has been a substantial amount of work done to optimize these models for on-device use \cite{prabhavalkar2016compression, pang2018compression}, including weight matrix factorization, pruning, and model distillation. Due to these improvements, it is now possible to run a state-of-the-art \gls{LVCSR} model on a mobile device's core processor (at a high power cost).
For the \gls{HMM}-based systems that predate \gls{LAS} architectures, \glspl{NN} were heavily used as part of traditional \gls{ASR} acoustic models. \citet{vesely2011convolutive} show that convolutional bottleneck compression improves system performance in such setups. Typically, these compressed representations are concatenated with small time-window features to provide 'context'.
Additionally, small \gls{HMM}-based keyword spotters have been successfully optimized across a \gls{DSP} and main processor. \citet{shah2018fixed} propose a model which introduces \SI{5}- and \SI{6}{bit} weight quantization for a reduced memory footprint without a significant reduction in accuracy. Although these models have different architectures and applications, their use of convolutional bottleneck features and fixed-point network quantization inform our architecture.
\citet{shah2018fixed} and \citet{gfeller2017now} introduce a split across a fixed-point \gls{DSP} and a main processor motivated by power optimization. A quantized, two-stage, separable convolutional layer running on the \gls{DSP} forms the basis of their music detector. We use the same layer structure in our \gls{DSP} implementation.
The previously mentioned approaches do not attempt to compress audio features before caching, but there are other analyses of the trade-off between feature caching and power savings in the literature. In \citet{priyantha2011littlerock} and \citet{priyantha2010eers}, empirical power consumption drops from \SI{700}{mW} to \SI{25}{mW} as data is cached \SI{50}{x} longer for a pedometer application. Measurements of \citet{gfeller2017now} indicate a full \SI{25}{\%}-\SI{50}{\%} of the power cost at inference time is due to fixed wakeup and sleep overhead. Our goal is to significantly reduce this fixed power cost.
\section{Feature Substitution}
State-of-the-art results are reported in \citet{chiu2018state} with a very large, proprietary corpus. In this paper, we use the Librispeech~100 corpus to train our model \cite{panayotov2015librispeech}. \citet{chiu2018state} report a \gls{WER} of \SI{4.1}{\%} with over 12,500 hours of training data; the same model trained on 100 hours of Librispeech data gives a \gls{WER} of \SI{21.8}{\%}, which we use as the baseline for all further evaluation.
The model from \citet{chiu2018state} is capable of running on a phone using 80-dimensional, \SI{32}{bit} floating point mel spectrum audio features sampled in \SI{25}{ms} windows every \SI{10}{ms}. These features capture a maximum frequency of \SI{7.8}{kHz} and are stacked with delta and double delta features, resulting in an 80 x 3 input vector at each timestep. We replace these features with \textit{\gls{QMF}} that are compact, simple to calculate, and currently in use by other services running on the \gls{DSP}.
\gls{QMF} are log-mel based with a \SI{16}{bit} fixed point representation. We use a default, narrow-band frequency representation that only captures up to \SI{3.8}{kHz} over 32 bins. We test the effect of reducing the bandwidth by simply using fewer log-mel bins. Sampling rate and window size are constant across test input features and, for each case, we train an end-to-end model. The results of training a state-of-the-art \gls{LAS} model with different input representations which can be calculated and cached on the \gls{DSP} can be seen in Table~\ref{feature_table}.
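The bandwidth column of Table~\ref{feature_table} follows from a simple calculation; the sketch below (our own illustration, with the \SI{100}{Hz} frame rate implied by the \SI{10}{ms} feature step) reproduces the reported values:
\begin{verbatim}
def feature_bw_kbps(dims, bits, frames_per_s=100, stacks=1):
    """Cache bandwidth of a feature stream in kbit/s."""
    return dims * stacks * bits * frames_per_s / 1000.0

feature_bw_kbps(80, 32, stacks=3)   # baseline mel + deltas: 768.0
feature_bw_kbps(32, 16)             # standard QMF:          51.2
feature_bw_kbps(24, 16)             # 3/4-bandwidth QMF:     38.4
# raw PCM: 16000 samples/s * 16 bit / 1000 = 256 kbps
\end{verbatim}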
\begin{table*}[t]
\label{features}
\centering
\begin{tabular}{lllll}
\toprule
Model Input & Input Dims & Feature Type & \gls{WER} (\%) & \gls{BW} (kbps) \\
\midrule
\textit{\SI{16}{kHz} \SI{16}{bit} raw PCM audio} & -- & -- & -- & \textit{256} \\
\textbf{Baseline LAS Model} & 80 x 3 & Mel, +$\Delta$ +$\Delta\Delta$ & \textbf{21.79} & \textbf{768} \\
Standard \gls{QMF} + Deltas & 32 x 3 & Mel, +$\Delta$ +$\Delta\Delta$ & 22.42 & 154 \\
\textbf{Standard \gls{QMF}} & 32 x 1 & Mel & \textbf{22.62} & \textbf{51.2} \\
3/4 \gls{BW} \gls{QMF} & 24 x 1 & Mel & 22.80 & 38.4 \\
1/2 \gls{BW} \gls{QMF} & 16 x 1 & Mel & 22.97 & 25.6 \\
1/4 \gls{BW} \gls{QMF} & 8 x 1 & Mel & 24.52 & 12.8 \\
\bottomrule
\end{tabular}
\caption{Comparison of model performance with smaller feature representations.}
\label{feature_table}
\vspace{-0.34cm}
\end{table*}
The results indicate that the baseline model, whose features have not previously been optimized, has a heavily redundant input representation, requiring three times the \gls{BW} of the raw audio after delta stacking. We are able to significantly reduce the input \gls{BW} (and, by extension, the amount of computation in the initial \gls{LAS} layers) without severely affecting the model's \gls{WER}.
Delta- and double delta- feature stacking do not have a large effect relative to their \SI{3}{x} increase in size; thus we will take the \textit{standard} \SI{32}{bin} \gls{QMF} input as our starting point for further exploration. Though we see an incremental trade-off between \gls{BW} and \gls{WER} for smaller raw feature representations, we will use the full \SI{32}{bin} \gls{QMF} as an input to our compressive bottleneck layers in an attempt to preserve \gls{WER} while reducing the \gls{BW} even more drastically.
\begin{figure*}[t]
\centering
\includegraphics[width=0.49\linewidth]{images/quant_vs_bw.png}
\includegraphics[width=0.49\linewidth]{images/bw_vs_structure.png}
\captionsetup{width=.9\linewidth}
\caption{The left plot uses a bottleneck feature extractor with a single hidden layer in which the output layer dimension and quantization level were modified to give a certain bandwidth output (relative to the standard \SI{32} dimensional \SI{16}{bit} \gls{QMF}). We see a trend towards 4-bit quantization, especially at high compression levels. The right plot shows the performance of various architectures (different bottleneck and encoder depths/strides and \gls{BNF} dimension) at 4-bit quantization, plotted against bandwidth. As more drastic compression is demanded, shifting the stride to before the \glspl{BNF} improves performance, which is similar to reducing the frame rate in more traditional models \cite{Pundak45555}.}
\label{bn_results}
\vspace{-0.34cm}
\end{figure*}
\section{Bottleneck Feature Extraction}
Our model uses the convolutional structure outlined by \citet{gfeller2017now}. The structure of a single layer is shown in Figure~\ref{bn_structure}. These simple, separable convolutional layers have been optimized for the \gls{DSP}. Besides minimal computation, all layer weights and intermediate representations are quantized to \SI{8}{bits}. \SI{32}{bit} biases, batch normalization \cite{ioffe2015batch}, and a \gls{ReLU} activation function are included after the second, 1-D separable convolution.
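In floating point, and omitting the bias, batch normalization, and quantization steps, our reading of this two-stage layer can be sketched as follows (the array shapes are our own conventions, and the exact DSP bookkeeping may differ):
\begin{verbatim}
import numpy as np

def separable_layer(x, time_k, freq_k, stride=1):
    """x: (T, F) time-frequency input.
    Stage 1: one length-K kernel per frequency bin, convolved
    along time (frequency-separable convolution).
    Stage 2: one length-F frequency kernel per output channel.
    time_k: (F, K) kernels; freq_k: (C_out, F) kernels."""
    T, F = x.shape
    K = time_k.shape[1]
    T_out = (T - K) // stride + 1
    stage1 = np.empty((T_out, F))
    for f in range(F):
        for t in range(T_out):
            stage1[t, f] = x[t * stride:t * stride + K, f] @ time_k[f]
    out = stage1 @ freq_k.T          # (T_out, C_out)
    return np.maximum(out, 0.0)      # ReLU; bias/batch norm omitted
\end{verbatim}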
To explore the space of bottleneck architectures, we parameterized this architecture along the following axes: output dimension size, output quantization level, convolutional stride (in time), kernel size, and the number of layers in the bottleneck network. The first three axes have the potential to reduce the \gls{BW} of the resulting bottleneck, while the latter two axes are relevant to the size of the resulting model. Reducing the output dimension size is equivalent to reducing the size of the bottleneck layer and can result in a proportional reduction in \gls{BW}. The output quantization level affects how many bits are saved for each of the values in the output, and will also result in a proportional reduction in \gls{BW}. Increasing the stride could exponentially decrease the \gls{BW}, for example, by doubling the stride we generate outputs only half as often.
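The combined effect of the three bandwidth-reducing axes is multiplicative, as the following sketch illustrates (the example output dimensions are assumptions consistent with the 8--16 channel range explored below):
\begin{verbatim}
def bnf_bw_kbps(out_dim, bits, stride, base_frames_per_s=100):
    """Bandwidth of cached bottleneck features in kbit/s: the
    stride divides the frame rate, while the output dimension
    and quantization level scale the size of each frame."""
    return out_dim * bits * (base_frames_per_s / stride) / 1000.0

bnf_bw_kbps(12, 4, 1)   # 4.8 kbps, ~1/10 of the 51.2 kbps QMF
bnf_bw_kbps(12, 4, 2)   # 2.4 kbps, ~1/20
bnf_bw_kbps(8, 4, 2)    # 1.6 kbps, 1/32
\end{verbatim}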
These changes in input lead to a necessary modification of the initial two convolutional layers of the \gls{LAS} encoder, which are designed with 3x3 time-frequency kernels and strides of 2. We replace these (by default) with a 3x1 time kernel along the flattened and modified frequency axis. We also vary the number of initial encoder layers and strides in our analysis.
\section{Results}
\begin{table*}[t]
\centering
\resizebox{0.95\textwidth}{!}{%
\begin{tabular}{lccccc}
\toprule
Model & $\#$ \gls{BNF} Extractor Weights & $\Delta$ \gls{LAS} Encoder Weights & Total Stride\footnotemark & \gls{BW} (kbps) & WER (\%) \\
\midrule
\textit{16kHz 16-bit Raw PCM Audio} & -- & -- & -- & \textit{256} & --\\
Baseline LAS Model & -- & \textit{0 (0)} & 4 & 768 & 21.79 \\
Standard \gls{QMF} & \textit{0 (0)} & -3,072 (-98KB) & 4 & 51.2 & 22.62 \\ [0.2cm]
Best \char`\~1/10 \gls{BW}. \gls{BNF} Model, $\nabla$ & 512 (4KB) & -8,064 (-258KB) & 1 & 4.8 & 22.44 \\
Best \char`\~1/20 \gls{BW}. \gls{BNF} Model, $\nabla$ & 512 (4KB) & -8,064 (-258KB) & 2 & 2.4 & 23.55 \\
Best 1/32 \gls{BW}. \gls{BNF} Model, $\nabla$ & 384 (3KB) & -8,448 (-270KB) & 2 & 1.6 & 24.81 \\ [0.2cm]
Best 1/16 \gls{BW}. \gls{BNF} Model & 640 (5KB) & -7,680 (-246KB) & 4 & 3.2 & 24.02 \\
1/32 \gls{BW}. \gls{BNF} Model & 384 (3KB) & -8,448 (-270KB) & 4 & 1.6 & 25.42 \\
Best 1/64 \gls{BW}. \gls{BNF} Model & 1536 (123KB) & -8,448 (-270KB) & 4 & 0.8 & 28.41 \\
\bottomrule
\end{tabular}}
\caption{ Selection of best performing models for different bandwidths.}
\label{results_table}
\vspace{-0.34cm}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{images/best_models.png}
\captionsetup{width=.9\linewidth}
\caption{Best performing model vs bandwidth. We see a good trade-off around \SI{2}{kbps}.}
\label{final_results}
\vspace{-0.34cm}
\end{figure}
Initial results are based on freezing the \gls{BN} extractor and encoder layer parameters and varying one parameter at a time. This analysis revealed a statistically insignificant effect of \gls{BN} kernel size (across a range from 1 to 10) based on McNemar statistical tests \cite{mcnemar1947note}. Activation function comparisons favored \gls{ReLU} in a default configuration, but at high levels of quantization/compression showed no difference between identity and \gls{ReLU} activation functions.
There was a clear performance loss when increasing \gls{BN} stride without a simultaneous decrease in encoder stride. We hypothesize that the model has already been optimally compressed in the time dimension (the original model has a time step of \SI{10}{ms} fed through two strides of two, resulting in an encoded frame every \SI{40}{ms}). No dependence on encoder depth was noticeable.
In Figure~\ref{bn_results}, we see the results of varying the \gls{BNF} output dimension and quantization level at different rates of compression relative to the \SI{32} dimensional \SI{16}{bit} \gls{QMF}. A quantization of \SI{4}{bits} and 8-12 output dimensions perform the best across compression levels.
The best performing models have been collected in Table~\ref{results_table}. Each of these models has a single hidden layer in the \gls{BNF} extractor, with the exception of the 1/64 \gls{BW} model, and a stride of two in the bottleneck layer, with the exception of the 1/10 \gls{BW} model. All of the models have an output quantization depth of \SI{4}{bits}, a kernel of 4, and output dimensionality between 8 and 16 channels. They use a single convolutional layer with a stride of 1 in the encoder (excepting the 1/16 and 1/32 constant time compression models, which have a stride of 2).
Our optimized \SI{4.8}{kbps} model with a single \gls{BNF} layer actually outperforms the standard \gls{QMF} model (running at \SI{51.2}{kbps}). Compared with the original unoptimized model, this is a \SI{160}{x} reduction in feature bandwidth for a \SI{0.6}{\%} increase in \gls{WER}. We are able to continue to compress our \glspl{BNF} more and more heavily for slight increases in \gls{WER}. Our presented system is able to reduce the footprint of standard fixed point \gls{DSP} spectral features by a factor of 64 for a \SI{5.8}{\%} relative increase in \gls{WER}; compared with the original floating point model, this represents a \SI{960}{x} feature compression for a \SI{6.6}{\%} increase in \gls{WER}. The best performing models at \char`\~1/84 (\SI{0.6}{kbps}) and 1/128 (\SI{0.4}{kbps}) converge to \gls{WER} values of \SI{30.36}{\%} and \SI{36.59}{\%} respectively, which represents the breakdown in performance (Figure~\ref{final_results}).
\footnotetext[1]{These models have a reduced overall stride compared to the original model. While the weights of the \gls{LAS} model are reduced, intermediate representations feeding the Attention model will grow \SI{2}{x} and \SI{4}{x} respectively in the time dimension. This incurs a nontrivial computational cost for the main processor, and lengthens training time.}
\section{Conclusion}
Our analysis revealed that time compression was initially the limiting factor in our model, and a \SI{40}{ms} compressed step size seems to be the limit for high accuracy models. We found that kernel dimensionality and activation function had little effect on our results, and \SI{4}{bit} quantization with 8-12 dimensional \glspl{BNF} per timestep performed optimally.
Given these findings, we were able to design several models that effectively compress audio features on the \gls{DSP} and allow them to be cached in severely reduced memory footprints. We designed a model that successfully compresses the original \gls{DSP} \gls{QMF} to 1/10 the size without any loss in accuracy. As we compress the features further, we find an inflection point in \gls{WER} around \SI{1}{kbps}.
While the models we have designed can increase the interval between main processor wake-ups by \SI{10}{x}-\SI{64}{x}, empirical data is necessary to understand the full effect on battery consumption. Some of our models require slightly more computation in the attention/decoder (because of decreased time compression), which alone may have an adverse effect on battery life. Further tuning should be done once these are tested in-situ.
These \glspl{BNF} may be useful for other compressed speech models, and the end-to-end training paradigm, while time-consuming, provides an optimal means for on-\gls{DSP} compression. We hope this architecture is adopted in portable applications as a standard technique for speech compression.
\subsection*{Acknowledgments}
The authors would like to acknowledge Ron Weiss and the Google Brain and Speech teams for their \gls{LAS} implementation, F\'elix de Chaumont Quitry and Dick Lyon for their feedback and support, and the Google AI Z\"urich team for their help throughout the project.
\printbibliography
\end{document}
\section{Introduction}
The pentagram map is defined on planar, convex n-gons, a space we will denote by $\mathcal{C}_n$. The map $T$ takes a vertex $x_n$ to the intersection of two segments: one is created by joining the vertices to the right and to the left of the original one, $\overline{x_{n-1}x_{n+1}}$, the second one by joining the original vertex to the second vertex to its right $\overline{x_nx_{n+2}}$ (see Fig. 1). These newly found vertices form a new n-gon. The pentagram map takes the first n-gon to this newly formed one. As surprisingly simple as this map is, it has an astonishingly large number of properties, see \cite{S1, S2, S3}, \cite{ST}, \cite{OST} for a thorough description.
The name pentagram map comes from the following classical fact: if $P \in \mathcal{C}_5$ is a pentagon, then $T(P)$ is projectively equivalent to $P$. Other relations also seem to be classical: if $P\in \mathcal{C}_6$ is a hexagon, then $T^2(P)$ is also projectively equivalent to $P$. The constructions performed to define the pentagram map can be equally carried out in the projective plane. In that case $T:\mathcal{C}_5\to \mathcal{C}_5$ is the identity, while $T:\mathcal{C}_6\to \mathcal{C}_6$ is an involution. In general, one should not expect to obtain a closed orbit for any $\mathcal{C}_n$; in fact orbits seem to exhibit a quasi-periodic behavior classically associated to completely integrable systems. This was conjectured in \cite{S3}.
A recent number of papers (\cite{OST, S1, S2, S3, ST}) have studied the pentagram map and established its completely integrable nature, in the Arnold-Liouville sense. The authors of \cite{OST} defined the pentagram map on what they called {\it n-twisted polygons}, that is, infinite polygons with vertices $x_m$, for which $x_{n+m} = M(x_m)$, where $M$ is the {\it monodromy}, a projective automorphism of $\mathbb {RP}^2$. They proved that, when written in terms of the {\it projective invariants} of twisted polygons, the pentagram map was in fact Hamiltonian and completely integrable. They displayed a set of preserved quantities and proved that almost every universally convex n-gon lies on a smooth torus with a $T$-invariant affine structure, implying that almost all the orbits have a quasi-periodic motion under the map. Perhaps more relevant to this paper, the authors showed that the pentagram map, when expressed in terms of projective invariants, is a discretization of the Boussinesq equation, a well-known completely integrable PDE.
The Boussinesq equation is one of the best known completely integrable PDEs. It is one of the simplest among the so-called Adler--Gel'fand--Dickey (AGD) flows. These flows are biHamiltonian and completely integrable. Their first Hamiltonian structure was originally defined by Adler in \cite{A} and proved to be Poisson by Gel'fand and Dickey in \cite{GD}. The structure itself is defined on the space $\mathcal{L}$ of periodic and scalar differential operators of the form
\[
L = D^{n+1} + k_{n-1}D^{n-1}+\dots + k_1 D+k_0,
\]
Hamiltonian functionals on $\mathcal{L}$ can be written as
\[
\mathcal{H}_R(L) = \int_{S^1} \mathrm{res}(RL) dx
\]
for some pseudo-differential operator $R$, where res denotes the residue, i.e., the coefficient of $D^{-1}$. The Hamiltonian functionals for the AGD flows are given by
\begin{equation}\label{H}
\mathcal{H}(L) = \int_{S^1}\mathrm{res}(L^{k/(m+1)})dx
\end{equation}
$k=1,2,3,4,\dots, m$. The simplest case, $m=1$ and $k=1$, corresponds to the KdV equation, while the case $m = 2$ and $k = 2$ corresponds to the Boussinesq equation. Some authors give the name AGD flow to the choice $k = m$ only.
The author of this paper linked the first AGD Hamiltonian structure and the AGD flows to projective geometry, first in \cite{HLM} and \cite{M4}, and later in \cite{M1} and \cite{M3}, where she established a geometric connection between Poisson structures and homogeneous manifolds of the form $G/H$ with $G$ semisimple. In particular, she found geometric realizations of the Hamiltonian flows as curve flows in $G/H$. The case $G = \rm{PSL}(n+1)$ produces projective realizations of the AGD flow. From the work in \cite{OST} one can see that the continuous limit of the pentagram map itself is the projective realization of the Boussinesq equation.
In this paper we investigate possible generalizations of the pentagram map to higher dimensional projective spaces. Some discretizations of AGD flows have already appeared in the literature (see for example \cite{LN}), but it is not clear to us how they are related to ours. In particular, we investigate maps defined as intersections of different types of subspaces in $\mathbb {RP}^m$ whose continuous limit are projective realizations of AGD flows. In section 2 we describe the connection between projective geometry and AGD flows and, in particular, we detail the relation between the AGD Hamiltonian functional and the projective realization of the AGD flow. In section 3 we describe the pentagram map and other maps with the same continuous limit as a simple case in which to describe our approach to finding these discretizations. In section 4 we describe some of the possible generalizations.
In particular, in Theorems 4.1 and 4.2 we show that the projective realization of the AGD flow associated to
\[
\mathcal{H}(L) = \int_{S^1}\rm{res}(L^{2/(m+1)}) dx
\]
has a discretization defined through the intersection of one segment and one hyperplane in $\mathbb {RP}^m$. In section 4.2 we analyze the particular cases of $\mathbb {RP}^3$ and $\mathbb {RP}^4$. We show that the projective realization of the flow associated to $\mathcal{H}(L) = \int_{S^1}\rm{res}(L^{3/4})dx$ can be discretized using the intersection of three planes in $\mathbb {RP}^3$, while the one associated to $\mathcal{H}(L) = \int_{S^1}\rm{res}(L^{3/5})dx$ can be done using the intersection of one plane and two 3-dimensional subspaces in $\mathbb {RP}^4$ (we also show that it cannot be done with a different choice of subspaces). In view of these results, we conjecture that the projective realization of the AGD Hamiltonian flow corresponding to (\ref{H}) can be discretized using the intersection of one $(k-1)$-dimensional subspace and $k-1$ subspaces of dimension $m-1$ in $\mathbb {RP}^m$.
These results are found by re-formulating the problem of finding the discretizations as solving a system of Diophantine equations whose solutions determine the choices of vertices needed to define the subspaces. These systems are increasingly difficult to solve as the dimension goes up, hence it is not clear how to solve the general case with this algebraic approach. Furthermore, as surprising as it is that we get a solution at all, the solutions are not unique (nor is that surprising once we get one solution). As we said before, the pentagram map has extraordinary properties and we will need to search among the solutions in this paper to hopefully find the appropriate map that will inherit at least some of them. One should also check which maps among the possibilities presented here are completely integrable in their own right, that is, as discrete systems. It would be very exciting if these two aspects were connected, as we feel this should be the way to learn more before attempting the general case. This is highly non-trivial as not even the invariants of twisted polygons in $\mathbb {RP}^3$ are known.
This paper is supported by the author's NSF grant DMS \#0804541.
\section{Projective geometry and the Adler-Gelfand-Dikii flow}
\subsection{Projective Group-based moving frames}
Assume we have a curve $\gamma:\mathbb R \to \mathbb {RP}^{n}$ and assume the curve has a {\it monodromy}, that is, there exists $M\in \mathrm{PSL}(n+1)$ such that $\gamma(x+T) = M\cdot \gamma(x)$ where $T$ is the period. The (periodic) differential invariants for this curve are well-known and can be described as follows. Let $\Gamma:\mathbb R\to\mathbb R^{n+1}$ be the unique lift of $\gamma$ with the condition
\begin{equation}\label{norm}
\det(\Gamma, \Gamma',\dots,\Gamma^{(n)}) = 1.
\end{equation}
$\Gamma^{(n+1)}$ will be a combination of previous derivatives. Since the coefficient of $\Gamma^{(n)}$ in that combination is the derivative of $\det(\Gamma, \Gamma',\dots,\Gamma^{(n)})$, and hence zero, there exist periodic functions $k_i$ such that
\begin{equation}\label{Wilc}
\Gamma^{(n+1)} + k_{n-1}\Gamma^{(n-1)}+\dots+k_1\Gamma'+k_0\Gamma=0.
\end{equation}
The functions $k_i$ are independent generators for the projective differential invariants of $\gamma$ and they are usually called the {\it Wilczynski invariants} (\cite{Wi}).
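For instance, for curves in $\mathbb{RP}^2$ (the case $n=2$), equation (\ref{Wilc}) reduces to
\[
\Gamma''' + k_1\Gamma' + k_0\Gamma = 0,
\]
so that a projective curve carries two independent invariants $k_0$ and $k_1$.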
\begin{definition}(\cite{FO}) A $k$th order left - resp. right - {\it group-based} moving frame is a map
\[
\rho: J^{(k)}(\mathbb R, \mathbb {RP}^n) \to \mathrm{PSL}(n+1)
\]
equivariant with respect to the {\it prolonged} action of $\mathrm{PSL}(n+1)$ on the jet space $J^{(k)}(\mathbb R, \mathbb {RP}^n)$ (i.e. the action $g\cdot(x,u,u',u'', \dots, u^{(k)}) = (g\cdot u, (g\cdot u)', \dots, (g\cdot u)^{(k)})$) and the left - resp. right - action of $\mathrm{PSL}(n+1)$ on itself.
The matrix $K = \rho^{-1}\rho'$ (resp. $K = \rho'\rho^{-1}$) is called the {\it Maurer-Cartan matrix} associated to $\rho$. For any moving frame, the entries of the Maurer-Cartan matrix generate all differential invariants of the curve (see \cite{Hu}). The equation $\rho' = \rho K$ is called the {\it Serret-Frenet equation} for $\rho$.
\end{definition}
The projective action on $\gamma$ induces an action of $\mathrm{SL}(n+1)$ on the lift $\Gamma$. This action is linear and therefore the matrix $\hat\rho = (\Gamma, \Gamma',\dots,\Gamma^{(n)})\in \mathrm{SL}(n+1)$ is in fact a left moving frame for the curve $\gamma$. We can write equation (\ref{Wilc}) as the system $\hat\rho' = \hat\rho \hat K$
where
\begin{equation}\label{wilcinv}
\hat K = \begin{pmatrix} 0&0&\dots&0&-k_0\\ 1&0&\dots&0&-k_1\\ \vdots&\ddots&\ddots&\vdots&\vdots\\ 0&\dots&1&0&-k_{n-1}\\0&\dots&0&1&0\end{pmatrix},
\end{equation}
is the Maurer-Cartan matrix generating the Wilczynski invariants.
Group-based moving frames also provide a formula for a general invariant evolution of projective curves. The description is as follows.
We know that $\mathbb {RP}^n \approx \mathrm{PSL}(n+1)/H$, where $H$ is the isotropic subgroup of the origin. For example, if we choose homogeneous coordinates in $\mathbb {RP}^n$ associated to the lift $u \to \begin{pmatrix} 1\\ u\end{pmatrix}$, the isotropic subgroup $H$ is given by matrices $M\in \mathrm{SL}(n+1)$ such that $e_k^T M e_1 = 0$ for $k = 2,\dots,n+1$, and we can choose a section $\varsigma: \mathbb {RP}^n\to \mathrm{PSL}(n+1)$
\[
\varsigma (u) = \begin{pmatrix} 1&0\\ u&I\end{pmatrix}
\]
satisfying $\varsigma(o) = I$. Let $\Phi_g: \mathbb {RP}^n \to \mathbb {RP}^n$ be defined by the action of $g\in \mathrm{PSL}(n+1)$ on the quotient, that is, $\Phi_g(x) = \Phi_g([y])= [gy] = g\cdot x$. The section $\varsigma$ is compatible with the action of $\mathrm{PSL}(n+1)$ on $\mathbb {RP}^n$, that is,
\begin{equation}\label{actiondef}
g \varsigma(u) = \varsigma(\Phi_g(u)) h
\end{equation}
for some $h \in H$. If ${\mathfrak h}$ is the Lie algebra of $H$, consider the splitting of the Lie algebra
${\mathfrak g} = {\mathfrak h}\oplus {\bf m}$, where ${\bf m}$ is not, in general, a Lie subalgebra. Since $\varsigma$ is a section, $d\varsigma(o)$ is an isomorphism between ${\bf m}$ and $T_o\mathbb {RP}^n$.
The following theorem was proved in \cite{M2} for a general homogeneous manifold and it describes the most general form of invariant
evolutions in terms of left moving frames.
\begin{theorem} \label{invev}Let $\gamma(t,x) \in \mathbb {RP}^n$ be a flow, solution of an invariant evolution of the form
\[
\gamma_t = F(\gamma,\gamma_x, \gamma_{xx}, \gamma_{xxx}, \dots).
\]
Assume the evolution is invariant under the action of $\mathrm{PSL}(n+1)$, that is, $\mathrm{PSL}(n+1)$ takes solutions to solutions.
Let $\rho(t,x)$ be a family of left moving frames along $\gamma(t,x)$ such that $\rho\cdot o = \gamma$.
Then, there exists an invariant family of tangent vectors ${\bf r}(t,x)$, i.e., a family depending on the differential
invariants of $\gamma$ and their derivatives, such that
\begin{equation}\label{projev}
\gamma_ t = d\Phi_\rho(o) {\bf r}.
\end{equation}
\end{theorem}
Assume we choose the section $\varsigma$ above. If $\gamma$ is a curve in homogeneous coordinates in $\mathbb {RP}^n$, then $\Gamma = W^{-\frac1{n+1}}\begin{pmatrix} 1\\ \gamma\end{pmatrix}$ with $W = \det(\gamma',\dots,\gamma^{(n)})$. In that case, if
$\gamma$ is a solution of (\ref{projev}), with $\rho = (\Gamma, \Gamma',\dots,\Gamma^{(n)})$, after minor calculations one can directly obtain that $\Gamma$ is a solution of
\[
\Gamma_t = (\Gamma',\dots,\Gamma^{(n)}){\bf r} + r_0 \Gamma
\]
where, if ${\bf r} = (r_i)$,
\[
r_0 = -\frac1{n+1} \frac{W_t}W-\sum_{s=1}^n\left(W^{-\frac1{n+1}}\right)^{(s)}r_s.
\]
The coefficient $r_0$ can be written in terms of ${\bf r}$ and $\gamma$ once the normalization condition (\ref{norm}) is imposed to the flow $\Gamma$.
Summarizing, the most general form for an invariant evolution of projective curves is given by the projectivization of the lifted evolution
\[
\Gamma_t = r_{n}\Gamma^{(n)}+\dots+r_1\Gamma'+r_0\Gamma
\]
where $r_i$ are functions of the Wilczynski invariants and their derivatives, and where $r_0$ is uniquely determined by the other entries $r_i$ once we enforce (\ref{norm}) on $\Gamma$.
\subsection{AGD Hamiltonian flows and their projective realizations}\label{AGD}
Drinfeld and Sokolov proved in \cite{DS} that the Adler-Gelfand-Dickey (AGD) bracket and its symplectic companion are the reduction of two well-known compatible Poisson brackets defined on the space of loops in $\mathfrak{sl}(n+1)^\ast$ (by compatible we mean that their sum is also a Poisson bracket). The author of this paper later proved (\cite{M3}) that the reduction of the main one of the two brackets can always be achieved for any homogeneous manifold $G/H$ with $G$ semisimple, resulting in a Poisson bracket defined on the space of differential invariants of the flow. The symplectic companion reduces only in some cases. She called the reduced brackets Geometric Poisson brackets. The AGD bracket is the projective Geometric Poisson bracket (\cite{M3}).
Furthermore, geometric Poisson brackets are closely linked to invariant evolutions of curves, as we explain next.
\begin{definition}
Given a $G$-invariant evolution of curves in $G/H$, $\gamma_t = F(\gamma, \gamma', \gamma'', \dots)$, there exists an evolution on the differential invariants induced by the flow $\gamma(t,x)$ of the form ${\bf k}_t = R({\bf k}, {\bf k}', {\bf k}'',\dots)$. We say ${\bf k}(t,s)$ is the {\it invariantization} of $\gamma(t,x)$. We also say that $\gamma(t,x)$ is a {\it $G/H$-realization} of ${\bf k}(t,x)$.
\end{definition}
As proved in \cite{M3}, any geometric Hamiltonian evolution with respect to a Geometric Poisson bracket can be realized as an invariant evolution of curves in $G/H$. Furthermore, the geometric realization of the Hamiltonian system could be algebraically obtained directly from the moving frame and the Hamiltonian functional. This realization is not unique: a given evolution ${\bf k}_t = R({\bf k}, {\bf k}', \dots)$ could be realized as $\gamma_t = F(\gamma, \gamma', \dots)$ for more than one choice of manifold $G/H$. For more information, see \cite{M3}.
We next describe this relation in the particular case of $\mathbb {RP}^n$.
\begin{lemma}\label{gauge}
There exists an invariant gauge matrix $g$ (i.e. a matrix in $\mathrm{SL}(n+1)$ such that its entries are differential invariants) such that
\[
g^{-1} g_x + g^{-1} \hat K g = K
\]
where $\hat K$ is as in (\ref{wilcinv}) and where
\begin{equation}\label{K}
K = \begin{pmatrix} 0&\kappa_{n-1}&\kappa_{n-2}&\dots &\kappa_1&\kappa_0\\ 1&0&0&\dots &0&0\\ 0&1&0&\dots&0&0\\
\vdots&\ddots&\ddots &\ddots&\vdots&\vdots\\ 0&\dots&0&1&0&0\\ 0&\dots&0&0&1&0\end{pmatrix},
\end{equation}
for some choice of invariants $\kappa_i$. The invariants $\kappa_i$ form a generating and functional independent set of differential invariants.
\end{lemma}
\begin{proof}
This lemma is a direct consequence of results in \cite{DS}. In particular, of the remark following Proposition 3.1 and its Corollary (page 1989).
The authors remark that the choice of canonical form for the matrix $K$ is not unique and other canonical forms can be obtained using a gauge. In their paper the matrix $K$ is denoted by $q^{can}+\Lambda$ where $\Lambda = \sum_1^{n-1} e_{i+1,i}$. If we denote by $\mathfrak{b}$ the subalgebra of upper triangular matrices in $\mathrm{SL}(n)$ and by $\mathfrak{b}_r$ the $r$th diagonal (that is, matrices $(a_{ij})$ such that $a_{ij}=0$ except when $j-i = r$) then we can choose $q^{can} = \sum q_r$ where $q_r \in V_r$ and $V_r$ are $1$-dimensional vector subspaces of $\mathfrak{b}_r$ satisfying $\mathfrak{b}_r = [\Lambda, \mathfrak{b}_{r+1}] \oplus V_r$, $r=0, \dots, n-1$.
Since $[\Lambda, \mathfrak{b}_{r+1}]$ is given by diagonal matrices in $\mathfrak{b}_r$ whose entries add up to zero, there are many such choices, and one of them is the one displayed in (\ref{K}).
\end{proof}
Straightforward calculations show that in the $\mathbb {RP}^3$ case ($n=3$) $k_2 = -\kappa_2$, $k_1 = -\kappa_1-\kappa_2'$ and $k_0=-\kappa_0-\kappa_1'$, while
\begin{equation}\label{g3}
g = \begin{pmatrix} 1&0&k_2&k_1-k_2'\\ 0&1&0&k_2\\ 0&0&1&0\\0&0&0&1\end{pmatrix}.
\end{equation}
In the $\mathbb {RP}^4$ case $k_3 =-\kappa_3$, $k_2 = -\kappa_2-3\kappa_3'$, $k_1= -\kappa_1-2\kappa_2'-3\kappa_3''$ and $k_0 = -\kappa_0-\kappa_1'-\kappa_2''-\kappa_3'''-\kappa_3\kappa_3'$, while
\begin{equation}\label{g4}
g = \begin{pmatrix}1&0&k_3&k_2-2k_3'&k_1-k_2'-k_3''\\ 0&1&0&k_3&k_2-k_3'\\0&0&1&0&k_3\\0&0&0&1&0\\0&0&0&0&1\end{pmatrix}.
\end{equation}
One can also easily see that the first nonzero upper diagonal of $g$ is always given by $k_{m-1}$ for any dimension $m$.
\begin{theorem}\label{projreal} (\cite{M2}, \cite{M3}) Let $\hat\rho$ be the Wilczynski moving frame, and let $\rho =\hat\rho g$ be the moving frame associated to $K$, where $g$ and $K$ are given as in Lemma \ref{gauge}. Then, the invariant evolution
\begin{equation}\label{uevpr}
u_t = d\Phi_\rho(o) {\bf r}
\end{equation}
is the projective realization of the evolution
\[
\kappa_t = P {\bf r}
\]
where $P$ is the Hamiltonian operator associated to the AGD bracket as given in \cite{A}.
\end{theorem}
The choice of $K$ is determined by the invariants ${\bf\kappa}$ being in the dual position to the tangent to the section $\varsigma$ (see \cite{M1}). According to this theorem, in order to determine the projective realizations of AGD Hamiltonian flows, we simply need to fix the moving frame $\rho$ using $\hat\rho$ and $g$, and to find the Hamiltonian functional $\mathcal{H}$ corresponding to the flow. Notice that, if ${\bf \kappa}_t = P\delta_\kappa\mathcal{H}$ for some Hamiltonian functional $\mathcal{H}$, then the lift of (\ref{uevpr}) is given by
\begin{equation}\label{liftev}
\Gamma_t = \rho\begin{pmatrix} r_0\\ {\bf r}\end{pmatrix} = \hat\rho g\begin{pmatrix}r_0 \\ \delta_\kappa\mathcal{H}\end{pmatrix}
\end{equation}
where $r_0$ is determined uniquely by property (\ref{norm}).
Next we recall the definition of the AGD Hamiltonian functionals.
Let
\begin{equation}\label{Winv}
L = D^{n+1} + k_{n-1}D^{n-1}+\dots + k_1 D+k_0
\end{equation}
be a scalar differential operator, where $D= \frac d{dx}$, and assume $k_i$ are all periodic. The AGD flow is the AGD Hamiltonian evolution with Hamiltonian functional given by
\[
\mathcal{H}({\bf k}) = \int_{S^1} \mathrm{res}(L^{\frac r{n+1}}) dx
\]
$r=2,3,\dots, n$, where $\mathrm{res}$ stands for the residue (that is, the coefficient of $D^{-1}$) of the pseudo-differential operator $L^{r/(n+1)}$. One often refers to the AGD flow as the one associated to $r = n$, but any choice of $r$ will define {\it biHamiltonian} flows (see \cite{DS}). Therefore, any choice of $r$ will result in a completely integrable flow. For particular cases this Hamiltonian can be found explicitly.
\begin{proposition} The AGD Hamiltonian functional for $n=3=r$, is given by
\[
\mathcal{H}({\bf k}) = \frac34\int_{S^1} k_0-\frac18 k_2^2 dx
\]
while the one for $n=4=r$ is given by
\[
\mathcal{H}({\bf k}) = \frac45 \int_{S^1} k_0-\frac{1}5 k_2k_3 dx.
\]
\end{proposition}
\begin{proof} To prove this proposition we simply need to calculate the corresponding residues. We will illustrate the case $n = 3$; the case $n=4$ is resolved identically. Assume
\[
L = D^4+k_2 D^2+k_1D+k_0
\]
so that
\[
L^{1/4} = D + \ell_1 D^{-1}+\ell_2D^{-2}+\ell_3D^{-3} + o(D^{-3}).
\]
Using the relation $L = (L^{1/4})^4$ we can find the uniquely determined coefficients to be $\ell_1 = \frac 14 k_2$, $\ell_2 = \frac14(k_1-\frac32 k_2')$ and $\ell_3 = \frac14(k_0+\frac54k_2''-\frac32 k_1'-\frac38 k_2^2)$. We also find that
\[
L^{3/4} = D^3 + 3\ell_1D+3(\ell_1'+\ell_2)+(\ell_1''+3\ell_2'+3\ell_1^2+3\ell_3)D^{-1} + o(D^{-1}).
\]
Using the periodicity of the invariants, we conclude that
\[
\mathcal{H}({\bf k}) = \int_{S^1} (\ell_1''+3\ell_2'+3\ell_1^2+3\ell_3) dx = 3 \int_{S^1} (\ell_1^2+\ell_3)dx = \frac34\int_{S^1} k_0-\frac18 k_2^2 dx.
\]
\end{proof}
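Readers who wish to double-check this pseudo-differential arithmetic can do so symbolically. The following Python/SymPy script is our own verification sketch (the dictionary encoding, the truncation order and the names are our choices, not part of the original argument); it implements the composition rule $fD^a\circ gD^b = \sum_{j\ge 0}\binom aj f g^{(j)} D^{a+b-j}$, recomposes $(L^{1/4})^4$ from the coefficients $\ell_1$, $\ell_2$, $\ell_3$ above, and recovers the residue of $L^{3/4}$.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
k0, k1, k2 = (sp.Function(n)(x) for n in ('k0', 'k1', 'k2'))

LOW = -2  # keep pseudo-differential terms down to D^LOW

def compose(P, Q):
    # P, Q encoded as {power: coefficient}; composition via the rule
    # f D^a o g D^b = sum_j binom(a, j) f g^(j) D^(a+b-j)
    R = {}
    for a, f in P.items():
        for b, g in Q.items():
            for j in range(a + b - LOW + 1):
                c = sp.binomial(a, j) * f * sp.diff(g, x, j)
                R[a + b - j] = sp.expand(R.get(a + b - j, 0) + c)
    return {p: c for p, c in R.items() if p >= LOW}

# the coefficients of L^{1/4} computed in the proof above
l1 = k2 / 4
l2 = (k1 - sp.Rational(3, 2) * sp.diff(k2, x)) / 4
l3 = (k0 + sp.Rational(5, 4) * sp.diff(k2, x, 2)
         - sp.Rational(3, 2) * sp.diff(k1, x)
         - sp.Rational(3, 8) * k2**2) / 4
P = {1: sp.Integer(1), -1: l1, -2: l2, -3: l3}

P2 = compose(P, P)
P4 = compose(P2, P2)
# the D^4 ... D^0 coefficients of (L^{1/4})^4 must reproduce L;
# the truncation only pollutes D^{-1} and below
for p, expected in [(4, 1), (3, 0), (2, k2), (1, k1), (0, k0)]:
    assert sp.simplify(P4.get(p, 0) - expected) == 0

P3 = compose(P2, P)  # L^{3/4}
res = sp.diff(l1, x, 2) + 3*sp.diff(l2, x) + 3*l1**2 + 3*l3
assert sp.simplify(P3.get(-1, 0) - res) == 0
print('residue of L^{3/4} confirmed')
\end{verbatim}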
Notice that, if we are to connect these Hamiltonian flows to their projective realizations, we will need to express their Hamiltonian functionals in terms of the invariants $\kappa_i$; in that case ${\bf r} = \delta_\kappa \mathcal{H}$ for the Hamiltonian functional defining our system in the new coordinates. If we wish to write the projective realizations in terms of Wilczynski invariants, we can always revert to them once the realizations are found.
\section{The pentagram map} The pentagram map takes its name from an apparently classical result in projective geometry. If we have any given convex pentagon $\{x_1, x_2, x_3, x_4, x_5\}$, we can associate a second pentagon obtained from the first one by joining $x_i$ to $x_{i+2}$ using a segment $\overline{x_{i}x_{i+2}}$, and defining $x_i^\ast$ to be the intersection of $\overline{x_{i-1}x_{i+1}}$ with $\overline{x_{i}x_{i+2}}$ as in figure 1. The new pentagon $\{x_1^\ast,x_2^\ast,x_3^\ast,x_4^\ast,x_5^\ast\}$ is projectively equivalent to the first one, that is, there exists a projective transformation taking one to the other. The map $T(x_i) = x_i^\ast$ is the pentagram map, defined on the space of closed convex $n$-gons, denoted by $\mathcal{C}_n$. If instead of a pentagon ($n=5$) we consider a hexagon ($n=6$), we need to apply the pentagram map twice to obtain a projectively equivalent hexagon. See \cite{OST} for more details.
\vskip 2ex
\centerline{\includegraphics[height=1.3in]{pentagram4.pdf}}
\vskip 1ex
\centerline{Fig. 1}
\vskip 2ex
If we consider the polygons to exist in the projective plane instead (all the constructions are transferable to the projective counterpart), we conclude that the pentagram map is the identity on $\mathcal{C}_5$ in $\mathbb {RP}^2$ and involutive on $\mathcal{C}_6$. This property does not hold for general $n$-gons; in fact, the map generically exhibits a quasi-periodic behavior similar to that of completely integrable systems (as shown in \cite{OST}).
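The classical statement is easy to test numerically. The short Python sketch below is our own illustration, with our own indexing conventions (it is not taken from \cite{OST}); it computes $T$ in homogeneous coordinates, where lines and intersections are cross products, and checks that the projective transformation matching four vertices of a pentagon with their images also matches the fifth.
\begin{verbatim}
import numpy as np

def line(p, q):  return np.cross(p, q)    # line through two points of RP^2
def meet(l, m):  return np.cross(l, m)    # intersection point of two lines

def pentagram(P):
    # T(x_i) = (x_{i-1} x_{i+1}) cap (x_i x_{i+2}), indices mod n
    n = len(P)
    return [meet(line(P[i - 1], P[(i + 1) % n]),
                 line(P[i], P[(i + 2) % n])) for i in range(n)]

def proj_map(ps, qs):
    # the unique projective transformation sending ps[0..3] to qs[0..3]
    def frame(f):
        A = np.column_stack(f[:3])
        return A * np.linalg.solve(A, f[3])   # rescale the columns
    return frame(qs) @ np.linalg.inv(frame(ps))

def equivalent(ps, qs):
    u, v = proj_map(ps, qs) @ ps[4], qs[4]
    return np.allclose(np.cross(u / np.linalg.norm(u),
                                v / np.linalg.norm(v)), 0)

# a generic convex pentagon in homogeneous coordinates
pts = [np.array([np.cos(2*np.pi*k/5), np.sin(2*np.pi*k/5), 1.0])
       for k in range(5)]
pts[0] += np.array([0.3, 0.1, 0.0])            # break the symmetry
img = pentagram(pts)
# the image is projectively equivalent to the original pentagon:
print([s for s in range(5) if equivalent(pts, img[s:] + img[:s])])
\end{verbatim}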
The authors of \cite{OST} defined the pentagram map on the space of polygons {\it with a monodromy}. They called these {\it $n$-twisted polygons} and denoted them by $\mathcal{P}_n$. These are infinite polygons such that $x_{i+n} = M(x_i)$ for all $i$ and for some projective automorphism $M$ of $\mathbb {RP}^2$. They proved that, defined on this space and when written in terms of projective invariants of $\mathcal{P}_n$, the pentagram map is completely integrable in the Arnold-Liouville sense. They also proved that, again when written in terms of projective invariants, the pentagram map is a discretization of {\it the projective realization} of the Boussinesq equation, a well-known completely integrable PDE; in fact, their calculations show that a certain unique lift of $T$ to $\mathbb R^3$ is a discretization of the unique lift of the projective realization described in the previous section.
The pentagram map is not the only one that is a planar discretization of the Boussinesq evolution. In fact, other combinations of segments also are. For example, instead of the pentagram map, consider the following map: $T:\mathcal{P}_m \to \mathcal{P}_m$ where $T(x_n) = \overline{x_{n-2}x_{n+1}}\cap \overline{x_{n-1}x_{n+2}}$. Clearly this map coincides with the pentagram map when defined over pentagons, and it will be degenerate over $\mathcal{C}_{2r}$ for any $r$.
\begin{proposition}\label{syst2} The map $T$ is also a discretization of the projective realization of Boussinesq's equation.
\end{proposition}
\begin{proof} Although the result is intuitive and the proof can perhaps be done in a simpler form, as in \cite{OST}, the following process will be repeated in higher dimensions, and it is perhaps easier to follow when first illustrated in dimension 2. Instead of working with the projective realization of the Boussinesq equation, we will prove that a unique lifting of the projective realization to $\mathbb R^3$ is the continuous limit of the corresponding unique lifting of the map $T$ to $\mathbb R^3$. Thus, we move from the integrable system to its projective realization and from there to its unique lift, and we find continuous limits at that level.
Assume $\Gamma(x)$ is a continuous map on $\mathbb R^3$ and assume $\det(\Gamma, \Gamma', \Gamma'') = 1$ so that $\Gamma$ can be considered as the unique lift of a projective curve $\gamma$, as in \cite{OST}. Let $x_{n+k} = \gamma(x+k\epsilon)$ and assume $\gamma_\epsilon$ is the continuous limit of the map $T$ above (as in \cite{OST} we are discretizing both $t$ and $x$). Denote by $\Gamma_\epsilon$ the unique lift of $\gamma_\epsilon$ to $\mathbb R^3$ as in the previous section and assume further that
\[
\Gamma_\epsilon = \Gamma + \epsilon A+\epsilon^2 B + \epsilon^3 C +o(\epsilon^3)
\]
where $A = \sum_{i=0}^2 \alpha_i \Gamma^{(i)}$, $B = \sum_{i=0}^2 \beta_i \Gamma^{(i)}$ and $C =\sum_{i=0}^2 \gamma_i\Gamma^{(i)}$. Then, the definition of $T$ assumes that $\Gamma_{\epsilon}$ lies in the intersection of both the plane generated by $\Gamma(x-\epsilon)$ and $\Gamma(x+2\epsilon)$ and the one generated by $\Gamma(x-2\epsilon)$ and $\Gamma(x+\epsilon)$. If $\Gamma_\epsilon = a_1 \Gamma(x-\epsilon) + a_2 \Gamma(x+2\epsilon) = b_1 \Gamma(x-2\epsilon)+b_2\Gamma(x+\epsilon)$ for some functions $a_i(x, \epsilon), b_i(x, \epsilon)$, $i=1,2$, then equating the coefficients of $\Gamma, \Gamma'$ and $\Gamma''$ we obtain the relations
\begin{eqnarray*}\label{2d}
1+ \alpha_0\epsilon + \beta_0\epsilon^2 + o(\epsilon^2) &=& a_1+a_2 + o(\epsilon^3) = b_1+b_2+o(\epsilon^3)\\
\alpha_1\epsilon + \beta_1 \epsilon^2 + o(\epsilon^2) &=& (-a_1+2 a_2)\epsilon + o(\epsilon^3) = (-2b_1+b_2)\epsilon +o(\epsilon^3)\\
\alpha_2\epsilon + \beta_2\epsilon^2 +\gamma_2 \epsilon^3 +o(\epsilon^3) &=& (\frac12 a_1+2a_2)\epsilon^2 +o(\epsilon^3) = (2b_1+\frac12b_2) \epsilon^2 + o(\epsilon^3).
\end{eqnarray*}
Here we use the fact that $\Gamma'''$ is a combination of $\Gamma$ and $\Gamma'$ according to the normalization $\det(\Gamma, \Gamma', \Gamma'') = 1$, to conclude that the remaining terms are at least $o(\epsilon^3)$. We obtain immediately $\alpha_2 = 0$.
Let us denote ${\bf a} = (a_1, a_2)^T$ and ${\bf b} = (b_1, b_2)^T$, and ${\bf a} = \sum {\bf a}_i \epsilon^i$, ${\bf b} = \sum {\bf b}_i \epsilon^i$. Then we have the following relations
\[
\begin{pmatrix}1&1\\-1&2\end{pmatrix} {\bf a}_0 = \begin{pmatrix}1&1\\-2&1\end{pmatrix}{\bf b}_0 = \begin{pmatrix} 1\\ \alpha_1\end{pmatrix};
\hskip 1ex
\begin{pmatrix}1&1\\-1&2\end{pmatrix} {\bf a}_1 = \begin{pmatrix}1&1\\-2&1\end{pmatrix}{\bf b}_1 = \begin{pmatrix} \alpha_0\\ \beta_1\end{pmatrix}
\]
and also the extra conditions
\begin{equation}\label{extra}
\begin{pmatrix}\frac12&2\end{pmatrix}{\bf a}_0 = \begin{pmatrix}2&\frac12\end{pmatrix}{\bf b}_0 = \beta_2; \hskip 1ex \begin{pmatrix}\frac12&2\end{pmatrix}{\bf a}_1 = \begin{pmatrix}2&\frac12\end{pmatrix}{\bf b}_1 = \gamma_2.
\end{equation}
Solving for ${\bf a}_i$ and ${\bf b}_i$ and substituting in the first condition in (\ref{extra}) we get
\[
\beta_2 = 1 + \frac12 \alpha_1 = 1-\frac12\alpha_1
\]
which implies $\beta_2 = 1$ and $\alpha_1 =0$.
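This elimination is elementary but easy to get wrong; the following SymPy lines (our own check, with our own variable names) reproduce it:
\begin{verbatim}
import sympy as sp

alpha1 = sp.symbols('alpha_1')
A = sp.Matrix([[1, 1], [-1, 2]])
B = sp.Matrix([[1, 1], [-2, 1]])
v = sp.Matrix([1, alpha1])
beta2_a = (sp.Matrix([[sp.Rational(1, 2), 2]]) * A.solve(v))[0]
beta2_b = (sp.Matrix([[2, sp.Rational(1, 2)]]) * B.solve(v))[0]
print(sp.simplify(beta2_a), sp.simplify(beta2_b))  # 1 + alpha_1/2, 1 - alpha_1/2
print(sp.solve(sp.Eq(beta2_a, beta2_b), alpha1))   # [0]: alpha_1 = 0, beta_2 = 1
\end{verbatim}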
Finally, substituting in the second condition in (\ref{extra}) we get
\[
\gamma_2 = \alpha_0 + \frac12 \beta_1 = \alpha_0 - \frac12 \beta_1
\]
which results in $\gamma_2 = \alpha_0$ and $\beta_1 = 0$. The final conclusion comes from imposing the lifting condition $\det(\Gamma_\epsilon, \Gamma_{\epsilon}',
\Gamma_\epsilon'') = 1$ to our continuous limit. When expanded in terms of $\epsilon$, and after substituting $A = \alpha_0 \Gamma$, $B = \Gamma'' + \beta_0\Gamma$ we obtain
\[
1 = 1+3\alpha_0\epsilon + \left(3\alpha_0^2 + 3\beta_0 + \det(\Gamma, \Gamma',\Gamma^{(4)}) +\det(\Gamma, \Gamma''', \Gamma'')\right)\epsilon^2 + o(\epsilon^2).
\]
Using the fact that $\det(\Gamma, \Gamma',\Gamma^{(4)}) +\det(\Gamma, \Gamma'', \Gamma''') = 0$, and the Wilczynski relation $\Gamma''' = -k_1\Gamma' - k_0\Gamma$,
we obtain $\alpha_0 = 0$ and $\beta_0 = -\frac23 k_1$. From here, $\Gamma_\epsilon = \Gamma + \epsilon^2 (\Gamma'' -\frac23 k_1 \Gamma)+o(\epsilon^2)$. The result of the proposition is now immediate: it is known (\cite{M2}) that the evolution $ \Gamma_t = \Gamma'' -\frac23 k_1 \Gamma$ is the lifting to $\mathbb R^3$ of the projective geometric realization of the Boussinesq equation. (One can also see this limit for the pentagram map in \cite{OST}.)
\end{proof}
\section{Completely Integrable Generalizations of the pentagram map}
\subsection{Discretizations of an $n$-dimensional completely integrable system with a second order projective realization}
In this section we will describe discrete maps defined on $\mathcal{P}_r\subset \mathbb {RP}^{m}$ for which the continuous limit is given by a second order projective realization of a completely integrable PDE. As we described before, we will work with the lift of the projective flows. Assume $m\ge 2$.
Let $\{x_n\}\in \mathcal{P}_r$. Define $\Delta_m$ to be the $(m-1)$-dimensional projective subspace determined uniquely by the points $x_n$ and $x_{n+m_i}$, $i=1,\dots, m-1$, where the integers $m_i$ are all different from each other and different from $\pm 1$. For example, if $m = 2s-1$, we can choose $x_{n-s}, x_{n-s+1}, \dots, x_{n-2}, x_n, x_{n+2}, \dots, x_{n+s}$. Assume that for every $n$ this subspace intersects the segment $\overline{x_{n-1} x_{n+1}}$ at one point. We denote the intersection by $T(x_n)$ and define this way a map $T:\mathcal{P}_r \to \mathcal{P}_r$. The example $m_1 = -2$, $m_2 = 2$ for the case $m = 3$ is shown in figure 2.
\vskip 2ex
\centerline{\includegraphics[height=1.3in]{pentagram1.pdf}}
\vskip 1ex
\centerline{Fig. 2}
\vskip 2ex
Let $\gamma: \mathbb R \to \mathbb {RP}^{m}$ and let $\Gamma: \mathbb R \to \mathbb R^{m+1}$ be the unique lift of $\gamma$, as usual with the normalization condition
(\ref{norm}).
Following the notation in \cite{OST} we call as before $\gamma(x+k\epsilon) = x_{n+k}$ and denote by $\gamma_\epsilon$ the continuous limit of the map $T$. We denote by $\Gamma_\epsilon$ its lifting to $\mathbb R^{m+1}$.
\begin{theorem}\label{basictheorem} The lift to $\mathbb R^{m+1}$ of the map $T$ is given by $\Gamma_\epsilon = \Gamma + \frac12\epsilon^2 (\Gamma'' -\frac {2}{m+1} k_{m-1}\Gamma)+o(\epsilon^2)$.
\end{theorem}
\begin{proof}
As in the case $n=2$, assume
\begin{equation}\label{gexp}
\Gamma_\epsilon = \Gamma + \epsilon A + \epsilon^2B+\epsilon^3 C + o(\epsilon^3),
\end{equation}
and assume further that $A, B$ and $C$ decompose as combinations of $\Gamma^{(k)}$ as
\[
A = \sum_{r=0}^{m}\alpha_{r}\Gamma^{(r)}, \hskip 2ex B = \sum_{r=0}^{m}\beta_{r}\Gamma^{(r)},\hskip 2ex C = \sum_{r=0}^{m}\mu_{r}\Gamma^{(r)}
\]
Since $T(x_n)$ belongs to the segment $\overline{x_{n-1} x_{n+1}}$, we conclude that the line through the origin representing $\Gamma_\epsilon$ belongs to the plane through the origin generated by $\Gamma(x-\epsilon)$ and $\Gamma(x+\epsilon)$. That is, there exist functions $a$ and $b$ such that
\[
\Gamma_\epsilon = a \Gamma(x-\epsilon)+b\Gamma(x+\epsilon).
\]
Putting this relation together with the decomposition of $\Gamma_\epsilon$ according to $\epsilon$, we obtain the following $m+1$ equations relating $\alpha$, $\beta$, $a$ and $b$
\begin{eqnarray*}
a+b &=& 1+\alpha_0\epsilon+\beta_0\epsilon^2 +o(\epsilon^2),\\ (a+b)\epsilon^{2r} &=& (2r)! (\alpha_{2r}\epsilon+\beta_{2r}\epsilon^2 +o(\epsilon^2)),\\ (b-a) \epsilon^{2r-1} &=& (2r-1)!(\alpha_{2r-1}\epsilon+\beta_{2r-1}\epsilon^2 +o(\epsilon^2))
\end{eqnarray*}
$r = 1,\dots, \frac m2$ if $m$ is even, or $r = 1, \dots,\frac{m+1}2$ if $m$ is odd. Directly from these equations we get $\alpha_r = 0$ for any $r = 2, \dots, m$ and $\beta_r = 0$ for any $r=3,\dots,m$. We also get $\beta_2 = \frac12$; that is, $A = \alpha_0\Gamma+\alpha_1\Gamma'$ and $B = \beta_0\Gamma+\beta_1\Gamma'+\frac12\Gamma''$. The remaining relations involve higher order terms of $C$ and other terms that are not relevant.
Since $T(x_n)$ belongs also to the subspace $\Delta_m$, $\Gamma_\epsilon$ belongs to the $m$-dim subspace of $\mathbb R^{m+1}$ generated by $\Gamma$ and $\Gamma(x+m_i\epsilon)$, $i = 1,\dots,m-1$. That is
\begin{equation}\label{ep1}
\det(\Gamma_\epsilon, \Gamma, \Gamma(x+m_1\epsilon),\dots, \Gamma(x+m_{m-1}\epsilon)) = 0.
\end{equation}
We now expand in $\epsilon$ and select the two lowest powers of $\epsilon$ appearing. These are $\epsilon^{1+\dots +m} = \epsilon^{\frac12m(m+1)}$ and $\epsilon^{\frac12m(m+1)+1}$. They appear as coefficients of $\Gamma^{(r)}$, $r = 0,\dots, m$ situated in the different positions in the determinant. The term involving $\Gamma'$ will come from the $\Gamma_\epsilon$ expansion since this is the term with the lowest power of $\epsilon$.
With all this in mind we obtain that the coefficient of $\epsilon^{\frac12m(m+1)}$ is given by
\[
X \alpha_1
\]
for some factor $X$ that we still need to identify. In fact, we only need to know that $X\ne 0$ to conclude that (\ref{ep1}) implies $\alpha_1 = 0$.
The factor $X$ corresponds to the coefficient of $\det(\Gamma,\Gamma',\dots,\Gamma^{(m)})$ when we expand
\[
\det\left(\Gamma',\Gamma, \sum_{r=2}^{m}\frac{m_1^{r}}{r!}\Gamma^{(r)},\dots,\sum_{r=2}^{m} \frac{m_{m-1}^{r}}{r!}\Gamma^{(r)}\right).
\]
When one looks at it this way it is clear that $X$ is the determinant of the coefficients in the basis $\{\Gamma, \Gamma',\dots,\Gamma^{(m)}\}$; that is, the determinant of the matrix
\[
\begin{pmatrix} 0&1&0&0&\dots&0\\ 1&0&0&0&\dots&0\\ 0&0&\frac{m_1^2}{2!}&\frac{m_1^3}{3!}&\dots&\frac{m_1^{m}}{m!}\\ \vdots&\vdots&\vdots&\vdots&\dots&\vdots\\ 0&0&\frac{m_{m-1}^2}{2!}&\frac{m_{m-1}^3}{3!}&\dots&\frac{m_{m-1}^{m}}{m!}\end{pmatrix}.
\]
Using expansion and factoring $m_i^2$ from each row, one can readily see that this determinant is a nonzero multiple of the determinant of the matrix
\[
\begin{pmatrix} 1 & m_1 &m_1^2&\dots &m_1^{m-2}\\ 1&m_2&m_2^2&\dots&m_2^{m-2}\\ \vdots&\vdots&\vdots&\dots&\vdots\\ 1&m_{m-1}&m_{m-1}^2&\dots&m_{m-1}^{m-2}\end{pmatrix}.
\]
This is a Vandermonde matrix with nonzero determinant whenever $m_i\ne m_j$, for all $i\ne j$. Therefore $X\ne 0$ and $\alpha_1 = 0$.
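As a quick sanity check (ours, not part of the original argument), the determinant $X$ can be evaluated symbolically for a sample dimension; for instance, for $m=4$ and offsets $(-2,2,3)$ the matrix above has nonzero determinant:
\begin{verbatim}
import sympy as sp
from math import factorial

m = 4
offsets = [-2, 2, 3]   # m-1 distinct offsets, none equal to +1 or -1
rows = [[0, 1] + [0]*(m - 1), [1, 0] + [0]*(m - 1)]
rows += [[0, 0] + [sp.Rational(k**r, factorial(r)) for r in range(2, m + 1)]
         for k in offsets]
print(sp.Matrix(rows).det())   # nonzero, as the Vandermonde argument predicts
\end{verbatim}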
Using the normalization condition (\ref{norm}) for $\Gamma_\epsilon$, we obtain
\begin{align*}
1 = \det(\Gamma_\epsilon,\Gamma_\epsilon',\dots,\Gamma_\epsilon^{(m)}) &= 1 + \epsilon\left(\det(\alpha_0\Gamma,\Gamma',\dots,\Gamma^{(m)})+\dots+\det(\Gamma,\Gamma',\dots,\alpha_0\Gamma^{(m)})\right)+o(\epsilon)\\
&= 1+(m+1)\epsilon\alpha_0+o(\epsilon)
\end{align*}
and therefore $\alpha_0=0$ and $A = 0$.
Finally, the coefficient of $\epsilon^{\frac12m(m+1)+1}$ in (\ref{ep1}) is given by
\[
X\beta_1 + \frac12 Y
\]
where $Y$ is a sum of terms that are multiples of only one determinant, namely $\det(\Gamma,\Gamma',\Gamma'',\dots,\Gamma^{(m-1)},\Gamma^{(m+1)})$ (since $\Gamma''$ carries $\epsilon^2$ in $B$, one of the derivatives in the remaining vectors needs to be one order higher to obtain $\epsilon^{\frac12m(m+1)+1}$; this determines $Y$ uniquely). This determinant is the derivative of $\det(\Gamma,\Gamma',\dots,\Gamma^{(m)}) = 1$ and hence zero. Therefore, since $X\ne 0$, we also obtain $\beta_1 = 0$. This result will be true for {\it any different choice of the vertices} when constructing $\Delta_m$, as long as $x_n$ belongs to $\Delta_m$ and our choice is non-singular, that is, as long as $\Delta_m$ intersects $\overline{x_{n-1} x_{n+1}}$.
Using (\ref{norm}) for $\Gamma_\epsilon$ again, together with $B = \frac12 \Gamma'' + \beta_0\Gamma$ and $\Gamma^{(m+1)} = \sum_{r=0}^{m-1} k_r \Gamma^{(r)}$, results in the relation
\[
(m+1) \beta_0 + \frac12 2 k_{m-1} = 0
\]
and from here we obtain the continuous limit as stated in the theorem.\end{proof}
One can check that the segment $\overline{x_{n-1}x_{n+1}}$ can be replaced by any choice of the form $\overline{x_{n-r}x_{n+r}}$ to obtain the continuous limit $\Gamma_\epsilon = \Gamma + \frac{r^2}2(\Gamma'' -\frac 2{m+1} k_{m-1}\Gamma)\epsilon^2 + o(\epsilon^2)$ instead. Any segment choice of the form $\overline{x_{n+r}x_{n+s}}$, $s\ne -r$, will give us different evolutions, most of which (although perhaps not all) will be non integrable.
\begin{theorem} The invariantization of the projective evolution corresponding to the lifted curve evolution $\Gamma_t = \Gamma'' -\frac 2{m+1} k_{m-1}\Gamma$ is a completely integrable system in the invariants ${\bf k}$.
\end{theorem}
\begin{proof}
The proof of this theorem is a direct consequence of section \ref{AGD}. Indeed, if
\[
L= D^{n}+k_{n-2}D^{n-2}+\dots+ k_1D+k_0
\]
and if
\[
L^{1/{n}} = D + \ell_1D^{-1} + \ell_2D^{-2} +o(D^{-2})
\]
then,
\[
L^{2/n} = D^2 + 2\ell_1+(\ell_1'+2\ell_2)D^{-1}+ o(D^{-1})
\]
and from here the Hamiltonian $\mathcal{H}(L) = \int_{S^1} \mathrm{res}(L^{2/n}) dx$ is $\mathcal{H}(L) = 2\int_{S^1} \ell_2 dx$. As before, we can use the relation $(L^{1/n})^n = L$ to find the value for $\ell_2$. Indeed, a short induction shows that
\[
L^{k/n} = D^k+k\ell_1D^{k-2}+\left(k\ell_2+\binom{k}2\ell_1'\right)D^{k-3}+o(D^{k-3})
\]
which implies $\ell_1 = \frac1n k_{n-2}$ and $\ell_2 = \frac1n k_{n-3}-\binom n2\frac1{n^2} k_{n-2}'$. This implies
\[
\mathcal{H}(L) = \int_{S^1} \mathrm{res}(L^{2/n}) dx =\frac2n \int_{S^1} k_{n-3} dx
\]
with an associated variational derivative given by
\begin{equation}\label{HL}
\delta_k \mathcal{H} = \frac 2ne_{2}.
\end{equation}
It is known (\cite{DS}) that Hamiltonian evolutions corresponding to Hamiltonian functionals $\mathcal{H}(L) = \int_{S^1} \mathrm{res}(L^{k/n}) dx$, {\it for any $k$}, are biHamiltonian systems and completely integrable in the Liouville sense. Denote by $\delta_{\kappa}\mathcal{H}$ the corresponding variational derivative of $\mathcal{H}$, with respect to $\kappa$.
Recall that from (\ref{liftev}), this Hamiltonian evolution has an $\mathbb {RP}^{n-1}$ projective realization that lifts to the evolution
\[
\Gamma_t = \begin{pmatrix}\Gamma&\Gamma'&\Gamma''&\dots& \Gamma^{(n-1)}\end{pmatrix} g\begin{pmatrix} r_0\\ \delta_\kappa \mathcal{H}\end{pmatrix}
\]
where $g$ is given as in lemma \ref{gauge} and where $r_0$ is uniquely determined by the normalization condition (\ref{norm}) for the flow.
Recall also that according to the comments we made after lemma \ref{gauge}, the upper diagonal of $g$ is given by the entry $k_{n-2}$. That means the lift can be written as
\[
\Gamma_t = \begin{pmatrix}\Gamma&\Gamma'&\Gamma''+k_{n-2}\Gamma&\dots\end{pmatrix} \begin{pmatrix} r_0\\ \delta_\kappa \mathcal{H}\end{pmatrix}.
\]
The normalization condition imposed here to find $r_0$ is the exact same condition we imposed on $\Gamma_\epsilon$ to obtain the coefficient of $\Gamma$ in the continuous limit, and they produce the same value of $r_0$. Therefore, we only need to check that $\delta_\kappa \mathcal{H}$ is also a multiple of $e_2$. The change of variables formula tells us that
\begin{equation}\label{changeofvariable}
\delta_\kappa\mathcal{H} = \left(\frac{\delta {\bf k}}{\delta \kappa}\right)^\ast \delta_k \mathcal{H}
\end{equation}
and one can easily see from the proof in lemma \ref{gauge} that
\begin{equation}\label{kka}
\frac{\delta {\bf k}}{\delta \kappa} = \begin{pmatrix} -1&0&\dots &0\\ \ast & -1&\dots&0\\ \vdots& \ddots& \ddots&\vdots\\ \ast&\dots&\ast&-1\end{pmatrix}
\end{equation}
where the diagonal below the main one has entries which are multiples of the differential operator $D$. Clearly, $\left(\frac{\delta {\bf k}}{\delta \kappa}\right)^\ast e_2 = e_2$.
In conclusion, our integrable system has a geometric realization with a lifting of the form $\Gamma_t = \frac1n(\Gamma'' - \frac2n k_{n-2}\Gamma)$. When $n = m+1$ the theorem follows.
\end{proof}
\subsection{Discretizations of higher order Hamiltonian flows in $\mathbb {RP}^3$ and $\mathbb {RP}^4$}
As we previously said, all Hamiltonian evolutions with Hamiltonian functionals of the form (\ref{H}) induce biHamiltonian and integrable systems in the invariants ${\bf k}$. When looking for discretizations of these flows, the first thing to keep in mind is that these flows have projective realizations of order higher than $2$. That means their lifts (the evolution of $\Gamma$) will involve $\Gamma^{(r)}$, $r>2$. From the calculations done in the previous section we learned two things: first, the evolution appearing in the coefficient $B$ will be second order, and so any hope to recover a third order continuous limit will force us to seek maps for which $A = B = 0$, and to look for the continuous limit in the term involving $\epsilon^3$. That is, if we go higher in the degree, we will need to go higher in the power of $\epsilon$. Second, no map defined through the intersection of a segment and a hyperplane will have $B = 0$, since that combination forces $B$ to have $\Gamma''$ terms. Thus, we need to search for candidates among intersections of other combinations of subspaces.
\subsubsection{The $\mathbb {RP}^3$ case}
A simple dimensional counting process shows us that in $\mathbb R^4$ three $3$-dimensional hyperplanes through the origin generically intersect in one line, as do one $2$-dimensional subspace and a $3$-dimensional hyperplane. These are the only two cases for which the generic intersection of subspaces is a line. The projectivization of the first case shows us that the intersection of three projective planes in $\mathbb {RP}^3$ is a point, and the second case is the one we considered in the previous section. Thus, we are forced to look for discretizations among maps generated by the intersection of three planes in $\mathbb {RP}^3$. Since there are many possible such choices, we will describe initially a general choice of planes and will narrow down to our discretization, which turns out to be not as natural as those in the planar case. The calculations below will show that the choices of these planes need to be very specific to match the evolution associated to the AGD flows. This is perhaps not too surprising: integrable systems are rare and their Hamiltonians are given by very particular choices. One will need to tilt the planes in a very precise way to match those choices.
\vskip 2ex
\centerline{\includegraphics[height=1.5in]{pentagram3.pdf}}
\vskip 1ex
\centerline{Fig. 4}
\vskip 2ex
As before, assume $x_{n+k} = \Gamma(x+k\epsilon)$ and consider three projective planes $\Pi_1$, $\Pi_2$, $\Pi_3$ intersecting at one point that we will call $T(x_n)$. Assume our planes go through the following points
\[
\Pi_1 = \langle x_{n+m_1}, x_{n+m_2},x_{n+m_3}\rangle, \hskip 2ex \Pi_2 = \langle x_{n+n_1}, x_{n+n_2}, x_{n+n_3}\rangle,
\]
\[\Pi_3 = \langle x_{n+r_1}, x_{n+r_2}, x_{n+r_3}\rangle
\]
for some integers $m_i$, $n_i$, $r_i$ that we will need to determine. Figure 4 shows the particular case when $x_{n+m_1} = x_{n+n_1}$, $x_{n+r_2} = x_{n+n_2}$ and $x_{n+m_3} = x_{n+r_3}$.
As before, denote by $\Gamma_\epsilon(x)$ the lifting of $T(x_n)$. Given that $T(x_n)$ is the intersection of the three planes, we obtain the following conditions on $\Gamma_\epsilon$
\begin{eqnarray}\label{cond}
\Gamma_\epsilon &=& a_1\Gamma(x+m_1\epsilon) + a_2\Gamma(x+m_2\epsilon)+a_3\Gamma(x+m_3\epsilon)\nonumber\\
&=& b_1\Gamma(x+n_1\epsilon) + b_2\Gamma(x+n_2\epsilon)+b_3\Gamma(x+n_3\epsilon)\\
&=& c_1\Gamma(x+r_1\epsilon) + c_2\Gamma(x+r_2\epsilon)+c_3\Gamma(x+r_3\epsilon)\nonumber
\end{eqnarray}
for functions $a_i, b_i, c_i$ that depend on $\epsilon$. Also as before, assume $\Gamma_\epsilon = \Gamma+\epsilon A+\epsilon^2B+\epsilon^3C+\epsilon^4D+\epsilon^5E+o(\epsilon^5)$ and assume further that
\[
A = \sum_0^3 \alpha_i\Gamma^{(i)}, \hskip 1ex B = \sum_0^3 \beta_i\Gamma^{(i)},\hskip 1ex C = \sum_0^3 \gamma_i\Gamma^{(i)},\hskip 1ex D = \sum_0^3 \eta_i\Gamma^{(i)},\hskip 1ex E = \sum_0^3 \delta_i\Gamma^{(i)}.
\]
\begin{proposition} If $A = B = 0$, then
\begin{equation}\label{C1}
m_1m_2m_3 = n_1n_2n_3=r_1r_2r_3.
\end{equation}
In this case $\gamma_3 = \frac16 m_1m_2m_3$. Under some regularity conditions, (\ref{C1}) implies $A = B = 0$.
\end{proposition}
\begin{proof} Equating the coefficients of $\Gamma^{(i)}$, $i=0,\dots, 3$, condition (\ref{cond}) implies $\alpha_2 = \alpha_3=\beta_3 = 0$ and the equations
\begin{align}
\label{one} 1+\alpha_0\epsilon+\beta_0\epsilon^2 &= a_1+a_2 + a_3 +o(\epsilon^3)\hskip 2.5in\\
\label{two} \alpha_1+\beta_1\epsilon+\gamma_1\epsilon^2 &= a_1 m_1+a_2m_2+a_3m_3 + o(\epsilon^2)\\
\label{three} 2(\beta_2+\gamma_2\epsilon+\eta_2\epsilon^2) &= a_1m_1^2+a_2m_2^2+a_3m_3^2- \frac2{4!}k_2(a_1m_1^4+a_2m_2^4+a_3m_3^4)\epsilon^2 + o(\epsilon^2)
\end{align}
with the additional condition
\begin{equation}\label{condition}
6(\gamma_3+\eta_3\epsilon+\delta_3\epsilon^2) = a_1m_1^3+a_2m_2^3+a_3m_3^3-\frac1{20} k_2(a_1m_1^5+a_2m_2^5+a_3m_3^5) \epsilon^2 + o(\epsilon^2).
\end{equation}
The terms with $k_2$ appear when we use the Wilczynski relation $\Gamma^{(4)} = -k_2\Gamma''-k_1\Gamma'-k_0\Gamma$. We obtain similar equations with $b_i, n_i$ and with $c_i, r_i$.
Denote by ${\bf a} = (a_i)$ and decompose ${\bf a}$ as ${\bf a} = \sum {\bf a}_i\epsilon^i$, with analogous decompositions for ${\bf b}$ and ${\bf c}$. Then, the first three equations above allow us to solve for ${\bf a}_i$, ${\bf b}_i$ and ${\bf c}_i$, $i=0,1$, namely ${\bf a}_i = A(m)^{-1} v_i$, ${\bf b}_i = A(n)^{-1}v_i$, ${\bf c}_i = A(r)^{-1}v_i$, with
\begin{equation}\label{As}
A(s) = \begin{pmatrix}1&1&1\\ s_1&s_2&s_3\\ s_1^2&s_2^2&s_3^2\end{pmatrix}, v_0 = \begin{pmatrix}1\\\alpha_1\\2\beta_2\end{pmatrix}, v_1 = \begin{pmatrix} \alpha_0\\\beta_1\\2\gamma_2\end{pmatrix}
\end{equation}
We can now use (\ref{condition}) to obtain conditions on the parameters $\alpha_i$ and $\beta_i$. Indeed, these are
\[
6\gamma_3 = \begin{pmatrix}m_1^3&m_2^3&m_3^3\end{pmatrix} {\bf a}_0 = \begin{pmatrix}n_1^3&n_2^3&n_3^3\end{pmatrix} {\bf b}_0= \begin{pmatrix}r_1^3&r_2^3&r_3^3\end{pmatrix} {\bf c}_0.
\]
After substituting the values for ${\bf a}_0, {\bf b}_0$ and ${\bf c}_0$, these three equations can be alternatively written as the system
\begin{equation}\label{M}
6\gamma_3 \begin{pmatrix} 1\\1\\1\end{pmatrix}= \begin{pmatrix} M_1&M_2&M_3\\ N_1&N_2&N_3\\ R_1&R_2&R_3\end{pmatrix}\begin{pmatrix}1\\\alpha_1\\2\beta_2\end{pmatrix}
\end{equation}
where
\[
M_1 = \frac1{\det(A(m))}\det\begin{pmatrix}m_1^3&m_2^3&m_3^3\\ m_1&m_2&m_3\\ m_1^2&m_2^2&m_3^2\end{pmatrix}, M_2 = \frac1{\det(A(m))}\det\begin{pmatrix}1&1&1\\ m_1^3&m_2^3&m_3^3\\ m_1^2&m_2^2&m_3^2\end{pmatrix},
\]
\[
M_3 = \frac1{\det(A(m))}\det\begin{pmatrix}1&1&1\\ m_1&m_2&m_3\\ m_1^3&m_2^3&m_3^3\end{pmatrix}
\]
with similar definitions for $N_i$ (using $n_i$ instead of $m_i$) and $R_i$ (using $r_i$).
\vskip 1ex
Although the following relation must be known, I was unable to find a reference.
\begin{lemma}\label{Vandermonde} Assume $A(m)$ is an $s\times s$ Vandermonde matrix with constants $m_1, \dots, m_s$. Let $M_i = \frac{\det A_i(m)}{\det A(m)}$, where $A_i(m)$ is obtained from $A(m)$ by substituting the $i$th row with $m_1^s, \dots, m_s^s$. Then $M_i = -p_{i-1}$, where the $p_r$ are the coefficients (signed elementary symmetric polynomials) defined by the relation
\[
(x-m_1)\dots (x-m_s) = x^s + p_{s-1} x^{s-1}+\dots + p_1 x+p_0.
\]
\end{lemma}
\begin{proof}[Proof of the lemma] One can see that
\[
(x-m_1)\dots (x-m_s) = (-1)^s \det A(m)^{-1} \det\begin{pmatrix} 1&1&\dots&1\\ x&m_1&\dots&m_s\\ x^2&m_1^2&\dots& m_s^2\\ \vdots&\vdots&\dots&\vdots\\ x^s&m_1^s&\dots&m_s^s\end{pmatrix}
\]
since both polynomials have the same roots and the same leading coefficient. The lemma follows from this relation.
\end{proof}
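The lemma can also be confirmed symbolically. The following SymPy snippet (our own check, not part of the paper) verifies $M_i = -p_{i-1}$ for $s=3$:
\begin{verbatim}
import sympy as sp

mm = sp.symbols('m1:4')                       # m1, m2, m3, so s = 3
s = len(mm)
A = sp.Matrix([[mj**i for mj in mm] for i in range(s)])

x = sp.symbols('x')
poly = sp.Poly(sp.prod([x - mj for mj in mm]), x)
p = poly.all_coeffs()[::-1]                   # p[j] = coefficient of x^j

for i in range(1, s + 1):
    Ai = A.copy()
    Ai[i - 1, :] = sp.Matrix([[mj**s for mj in mm]])  # replace the ith row
    Mi = sp.cancel(Ai.det() / A.det())
    assert sp.expand(Mi + p[i - 1]) == 0      # M_i = -p_{i-1}
print('Lemma verified for s = 3')
\end{verbatim}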
{\it We continue the proof of the Proposition}.
From the lemma we know the values of $M_i$ to be
\begin{equation}\label{Mi}
M_1 = m_1m_2m_3,\hskip 2ex M_2 = -(m_1m_2+m_1m_3+m_2m_3),\hskip 2ex M_3 = m_1+m_2+m_3.
\end{equation}
It is now clear that, if $A = B = 0$, then $\alpha_1 = \beta_2 = 0$ and the system implies $M_1 = N_1=R_1$ as stated in the proposition.
Notice that the condition $M_1=N_1=R_1$ does not guarantee $A = B = 0$; this will depend on the rank of the matrix in (\ref{M}).
Notice also that the same condition applies to ${\bf a}_1$, ${\bf b}_1$ and ${\bf c}_1$; we only need to substitute $v_0$ by $v_1$ in the calculations. Therefore, if the rank of
\begin{equation}\label{check}
\begin{pmatrix} M_2&M_3\\ N_2&N_3\\ R_2&R_3\end{pmatrix}
\end{equation}
is maximal, then $M_1 = N_1= R_1$ if, and only if $\alpha_1 = \beta_2 = 0$, and also $\beta_1 = \gamma_2 = 0$. Assume that the rank of this matrix is 2. Then, $A = \alpha_0\Gamma$ and condition (\ref{norm}) applied to $\Gamma_\epsilon$ as before becomes
\[
0 = 4\alpha_0\epsilon+o(\epsilon)
\]
implying $\alpha_0 = 0$ and $A = 0$. Likewise, if the rank is two then $\beta_1=\gamma_2 = 0$ and $B = \beta_0\Gamma$. Applying (\ref{norm}) again we will obtain $\beta_0=0$ and $B=0$.
\end{proof}
If we now go back to (\ref{one})-(\ref{two})-(\ref{three})-(\ref{condition}) and we compare the powers of $\epsilon^2$, we can solve for ${\bf a}_2, {\bf b}_2, {\bf c}_2$ as ${\bf a}_2 = A(m)^{-1}v^a_2$, ${\bf b}_2 = A(n)^{-1}v^b_2$, ${\bf c}_2 = A(r)^{-1}v^c_2$ with
\[
v^a_2 = \begin{pmatrix} \beta_0\\ \gamma_1\\ 2\eta_2+\frac 1{12}k_2 {\bf m}_4\cdot{\bf a}_0\end{pmatrix}, v^b_2 = \begin{pmatrix} \beta_0\\ \gamma_1\\ 2\eta_2+\frac 1{12}k_2 {\bf n}_4\cdot{\bf b}_0\end{pmatrix}
, v^c_2 = \begin{pmatrix} \beta_0\\ \gamma_1\\ 2\eta_2+\frac 1{12}k_2 {\bf r}_4\cdot{\bf c}_0\end{pmatrix}
\]
where ${\bf m}_4 = (m_1^4, m_2^4, m_3^4)$. From now on we will denote ${\bf m}_i = (m_1^i, m_2^i, m_3^i)$ and we will have analogous notation for ${\bf n}_i$ and ${\bf r}_i$. Using these formulas in the extra condition (\ref{condition}), we obtain a system of three equations.
As before, we can rearrange these equations to look like the system
\begin{equation}\label{Meq}
\mathbf{M}\begin{pmatrix} \gamma_1\\ 2\eta_2\\ -3!\delta_3\end{pmatrix}= \begin{pmatrix} \displaystyle \frac1{20}{\bf m}_5A^{-1}(m)e_1-\frac{M_3}{12}{\bf m}_4A^{-1}(m)e_1\\\\\displaystyle \frac1{20}{\bf n}_5A^{-1}(n)e_1-\frac{N_3}{12}{\bf n}_4A^{-1}(n)e_1\\\\\displaystyle
\frac1{20}{\bf r}_5A^{-1}(r)e_1-\frac{R_3}{12}{\bf r}_4A^{-1}(r)e_1\end{pmatrix} k_2
\end{equation}
where
\[
\mathbf{M} = \begin{pmatrix} M_2&M_3&1\\ N_2&N_3&1\\ R_2&R_3&1\end{pmatrix}
\]
Since $\gamma_3 =\frac{m_1m_2m_3}{6}$, at first glance it seems as if the numbers $(m_1,m_2,m_3) = (-1,-2,3)$, $(n_1,n_2,n_3)=(-1,2,-3)$ and $(r_1,r_2,r_3)=(1,-2,-3)$ would be good choices for a generalization of the pentagram map (it would actually be a direct generalization of the map in Proposition \ref{syst2}). As we see next, the situation is more complicated.
\begin{lemma}\label{square} Assume $m_1^2+m_2^2+m_3^2=n_1^2+n_2^2+n_3^2 = r_1^2+r_2^2+r_3^2$. Then the continuous limit of $\Gamma_\epsilon$ is not the AGD flow and it is not biHamiltonian.
\end{lemma}
\begin{proof} We already have $A = B = 0$ and $\gamma_2 = 0$.
Equation (\ref{Meq}) allows us to solve for $\gamma_1$. After that, $\gamma_0$ will be determined, as usual, by the normalization equation (\ref{norm}). Direct calculations determine ${\bf m}_5A^{-1}(m) e_1 = M_1M_5$ and ${\bf m}_4A^{-1}(m)e_1 = M_1M_3$, where $M_1, M_2, M_3$ are as in (\ref{Mi}) and where
\begin{equation}\label{M5}
M_5 = m_1^2+m_2^2+m_3^2+m_1m_2+m_1m_3+m_2m_3 = M_3^2 + M_2.
\end{equation}
From here we get
\[
\frac1{20}M_5-\frac1{12}M_3^2 = -\frac1{30}(M_5+M_2) + \frac7{60} M_2.
\]
$M_5+M_2 = m_1^2+m_2^2+m_3^2$, and so $M_5+M_2= N_5+N_2=R_5+R_2$ by the hypothesis of the lemma. We have
\[
\gamma_1 = M_1\frac{k_2}{\det\mathbf{M}}\left(-\frac1{30}\det\begin{pmatrix}M_5+M_2&M_3&1\\ N_5+N_2&N_3&1\\ R_5+R_2&R_3&1\end{pmatrix} +\frac 7{60}\det\begin{pmatrix}M_2&M_3&1\\ N_2&N_3&1\\ R_2&R_3&1\end{pmatrix} \right) = \frac{7M_1}{60}k_2,
\]
since the first determinant vanishes (its first column is constant by hypothesis, hence proportional to the third) and the second determinant is $\det\mathbf{M}$ itself.
Since $\gamma_3 = \frac{M_1}{6}$ we finally have that $\Gamma_\epsilon$ is expanded as
\[
\Gamma_\epsilon = \Gamma + \frac{M_1}6(\Gamma'''+\frac7{10}k_2\Gamma'+r_0\Gamma)\epsilon^3+ o(\epsilon^3)
\]
where $r_0$ is uniquely determined by (\ref{norm}).
Let us now find the lifting of the projective realization of the AGD flow. From Lemma \ref{gauge} the moving frame associated to $K$ is given by $\rho=\hat\rho g$, where $g$ is the gauge matrix in the case of $\mathrm{SL}(4)$, given as in (\ref{g3}). We also know that the lifted action of $\mathrm{SL}(4)$ on $\mathbb R^4$ is linear. With this information one can conclude that the lift of the evolution (\ref{uevpr}) to $\mathbb R^4$ is of the form
\[
\Gamma_t = \rho \begin{pmatrix}r_0\\ \delta_\kappa\mathcal{H}\end{pmatrix} = \begin{pmatrix}\Gamma & \Gamma'& \Gamma''+k_2\Gamma & \Gamma'''+k_2\Gamma'+(k_1-k_2')\Gamma\end{pmatrix} \begin{pmatrix}r_0\\ \delta_\kappa\mathcal{H}\end{pmatrix}.
\]
According to (\ref{changeofvariable}), we also know that
\[
\delta_\kappa\mathcal{H} = \frac{\delta {\bf k}}{\delta \kappa}^\ast\delta_k\mathcal{H} = \frac{\delta {\bf k}}{\delta \kappa}^\ast \begin{pmatrix} -\frac14 k_2\\ 0\\ 1\end{pmatrix} = \begin{pmatrix} \frac14 k_2\\ 0\\ -1\end{pmatrix}.
\]
Therefore, the lifting of the projective realization of the AGD flow is $\Gamma_t = -\Gamma''' -\frac34 k_2\Gamma'-r_0\Gamma$, with $r_0$ determined by (\ref{norm}). Although a change in the coefficient of $\Gamma'$ seems like a minor difference, it is a change in the Hamiltonian of the evolution, and any small change usually results in a system that is no longer biHamiltonian, as is the case here. The calculations showing that the resulting Hamiltonian is no longer biHamiltonian are very long and tedious, and we are not including them here.
\end{proof}
According to this lemma, in order to find a discretization of the AGD flow we need to look for planes for which the hypothesis of the previous lemma does not hold true.
\begin{theorem} Assume $(m_1,m_2,m_3) = (-c,a,b)$, $(n_1,n_2,n_3)=(c,-a,b)$ and $(r_1,r_2,r_3)=(c,-1,ab)$. Then, the map T is a discretization of the AGD flow whenever
\begin{equation}\label{level}
\frac{c-1+a(b-1)}{b-c} = -\frac{5}4.
\end{equation}
\end{theorem}
\begin{proof} With the choices above one can check that
\[
\det\begin{pmatrix} M_2&M_3&1\\ N_2&N_3&1\\ R_2&R_3&1\end{pmatrix} = -2(c-a)(a-1)(b+1)(b-c)
\]
while
\[
\det\begin{pmatrix} M_5&M_3&1\\ N_5&N_3&1\\ R_5&R_3&1\end{pmatrix} = -2(c-a)(a-1)(b+1)(c-1+a(b-1)).
\]
With this particular ansatz, the rank of (\ref{check}) is maximal whenever $c\ne a$, $a\ne 1$, $b\ne -1$ and $b\ne c$.
Using the fact that
\[
\frac1{20}M_5-\frac1{12}M_3^2 = -\frac1{30}M_5 + \frac1{12} M_2
\]
we obtain
\[
\gamma_1 = -\frac1{30}M_1k_2\frac{c-1+a(b-1)}{b-c} + \frac1{12}M_1k_2.
\]
As we saw before, the lifting of the realization of the AGD flow is given by $\Gamma_t= \Gamma'''+\frac34k_2\Gamma'+r_0\Gamma$ (after a change in the sign of $t$), where $r_0$ is determined by (\ref{norm}). Since $\gamma_3 = \frac{M_1}{6}$, to match this flow we need $\gamma_1 =\frac{1}{8}k_2M_1$.
That is, we need
\[
\frac{c-1+a(b-1)}{b-c} = -\frac{5}4
\]
as stated in the theorem.
\end{proof}
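Both determinant evaluations in the proof can be reproduced symbolically. The following SymPy sketch is our own verification (the helper names are ours); it builds $M_i$, $N_i$, $R_i$ from the ansatz, factors the two determinants, and confirms that the solution $a=-2$, $b=3$, $c=-5$ found below satisfies (\ref{level}):
\begin{verbatim}
import sympy as sp

a, b, c = sp.symbols('a b c')

def sym3(t):
    # (M1, M2, M3) = (product, minus the sum of pairs, sum) of a triple
    t1, t2, t3 = t
    return (t1*t2*t3, -(t1*t2 + t1*t3 + t2*t3), t1 + t2 + t3)

M = sym3((-c, a, b)); N = sym3((c, -a, b)); R = sym3((c, -1, a*b))
M5 = lambda S: S[2]**2 + S[1]                # M5 = M3^2 + M2

D1 = sp.Matrix([[M[1], M[2], 1], [N[1], N[2], 1], [R[1], R[2], 1]]).det()
D2 = sp.Matrix([[M5(M), M[2], 1], [M5(N), N[2], 1], [M5(R), R[2], 1]]).det()
print(sp.factor(D1))  # -2(c-a)(a-1)(b+1)(b-c), up to ordering of the factors
print(sp.factor(D2))  # -2(c-a)(a-1)(b+1)(c-1+a(b-1))
print(sp.cancel(D2/D1).subs({a: -2, b: 3, c: -5}))   # -5/4, i.e. (\ref{level})
\end{verbatim}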
Equation (\ref{level}) can be rewritten as
\[
z + xy = 1
\]
where $z = c$, $x={4a+5}$, $y = 1-b$. In principle there are many choices like $c = -2, a = -2, b=2$ that solve these equations, but these are not valid choices since the planes associated to $(-c,a,b)$, $(c,a,-b)$ are not well defined (they are determined by only two points). Thus, looking for appropriate values is simple, but we have to be careful. In particular, we cannot choose any vanishing value, since $m_1m_2m_3=n_1n_2n_3=r_1r_2r_3$ would imply that all three planes intersect at $x_n$ (the zero value for $m_i$, $n_i$ and $r_i$) and $T$ would be the identity. We also do not want to have the condition in lemma \ref{square} (hence $a$ and $b$ cannot be $\pm1$), plus we want to have the matrix (\ref{check}) to have full rank, which implies $c\ne a$, $a\ne 1$, $b\ne -1$ and $b\ne c$. A possible choice of lowest order is
\[\begin{array}{cc}
a = -2, b=3, c=-5
\end{array}
\]
and other combinations involving higher values. Choosing this simplest value, we see that the planes are $\Pi_1 = \langle x_{n-2}, x_{n+3}, x_{n+5}\rangle$, $\Pi_2 = \langle x_{n-5}, x_{n+2}, x_{n+3}\rangle$ and $\Pi_3=\langle x_{n-5}, x_{n-1}, x_{n-6}\rangle$, which shows just how precise one needs to be when choosing them. Of course, these are not necessarily the simplest choices, just the ones given by our ansatz. Using a simple C-program and Maple one can show that $6$ is the lowest integer value that needs to be included, so our choice is in fact minimal in that sense (there is no solution if we only use $-5,-4, \dots, 4, 5$ for $m_i$, $n_i$ and $r_i$). One can check that, for example, $\Pi_1 = \langle x_{n+2}, x_{n-3}, x_{n+5}\rangle$, $\Pi_2 = \langle x_{n+5}, x_{n-2}, x_{n+3}\rangle$, $\Pi_3=\langle x_{n+5}, x_{n+1}, x_{n+6}\rangle$ and $\Pi_1 = \langle x_{n+1}, x_{n-3}, x_{n-4}\rangle$, $\Pi_2 = \langle x_{n-1}, x_{n-3}, x_{n+4}\rangle$ and $\Pi_3=\langle x_{n+1}, x_{n+2}, x_{n+6}\rangle$ are also valid choices.
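For completeness, here is a minimal Python stand-in (ours, not the original C-program) for the brute-force search just described; it enumerates small integer triples satisfying (\ref{level}) together with the non-degeneracy restrictions above:
\begin{verbatim}
RANGE = range(-7, 8)
for a in RANGE:
    for b in RANGE:
        for c in RANGE:
            if 0 in (a, b, c) or a in (1, -1) or b in (1, -1):
                continue
            if c == a or b == c:                 # rank conditions on (check)
                continue
            planes = [(-c, a, b), (c, -a, b), (c, -1, a*b)]
            if any(len(set(t)) < 3 for t in planes):
                continue                         # each plane needs 3 points
            if 4*(c - 1 + a*(b - 1)) == -5*(b - c):
                print(a, b, c, planes)           # prints (-2, 3, -5) et al.
\end{verbatim}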
It would be valuable and very interesting to learn the geometric significance (if any) of this condition, and whether or not the map $T$, when written in terms of the projective invariants of elements of $\mathcal{P}_n$, is also completely integrable, as is the case with the pentagram map. Learning about the possible discrete structure of these maps might clarify condition (\ref{level}) and would greatly aid the understanding of the general case. Doing this study is non-trivial, as even the projective invariants of twisted polygons in three dimensions are not known.
\subsubsection{The $\mathbb {RP}^4$ case} One thing we learned from the $\mathbb {RP}^3$ case is that choosing an appropriate set of linear subspaces whose intersection matches discretizations of biHamiltonian flows involves solving Diophantine problems. This will also become clear next. These Diophantine problems grow increasingly complicated very fast but, nevertheless, they seem to be solvable. Our calculations are telling: a general Diophantine problem of high order is unlikely to have solutions. Ours can be simplified using the symmetry in the values $m_i$, $n_i$ and $r_i$, which reduces the order once we write the equations in terms of elementary symmetric polynomials. Still, as we see next, in cases when, based on degree and number of variables, one should expect a solution, we have none; in other apparently similar cases we have an infinite number. Hence, the fact that our cases have solutions hints at a probable underlying reason why they do.
In this section we find a discretization for the second integrable AGD flow in $\mathbb {RP}^4$, and we will draw from it a conjecture for the general case.
\begin{proposition} The projective geometric realization of the AGD Hamiltonian system associated to the Hamiltonian
\[
\mathcal{H}(L) = \int_{S^1} \mathrm{res}(L^{3/5}) dx
\]
has a lift given by
\begin{equation}\label{ev4}
\Gamma_t = \Gamma'''+\frac{27}5k_3\Gamma'+r_0\Gamma
\end{equation}
where again $r_0$ is determined by the property (\ref{norm}) of the flow.
\end{proposition}
\begin{proof}
As before, we need to find $\delta_k \mathcal{H}$, change the variable to $\delta_{\kappa}\mathcal{H}$ (to relate it to the coefficients of the realizing flow), and write these coefficients in terms of the Wilczynski invariants $k_i$.
If $L^{1/5} = D+\ell_1D^{-1}+\ell_2D^{-2}+\ell_3D^{-3}+o(D^{-3})$, then
\[
L^{3/5} = D^3+3\ell_1D+3(\ell_1'+\ell_2)+(\ell_1''+3\ell_2'+3\ell_1^2+3\ell_3)D^{-1} + o(D^{-1})
\]
so that
\[
\mathcal{H}(L) = \int_{S^1}\mathrm{res}(L^{3/5})dx = 3\int_{S^1} (\ell_1^2+\ell_3)dx.
\]
Using $(L^{1/5})^5 = L$ we find directly that
\[
\ell_1 = \frac15 k_3, \hskip 2ex\ell_3 = \frac15(k_1-2k_2'+2k_3''+2k_3^2).
\]
Therefore
\[
\mathcal{H}(L) = \frac35\int_{S^1} \frac{11}5k_3^2+k_1 dx
\]
and $\delta_k\mathcal{H} = \frac35(e_3 + \frac {22}5 k_3 e_1)$.
Using expression (\ref{kka}) and Lemma \ref{gauge} we get
\[
\frac{\delta k}{\delta\kappa} = \begin{pmatrix} -1&0&0&0\\ -3D&-1&0&0\\ -3D^2&-2D&-1&0\\ -D^3-\kappa_3'-\kappa_3D&-D^2&-D&-1\end{pmatrix}.
\]
\[
\delta_\kappa\mathcal{H}= \left(\frac{\delta{\bf k}}{\delta \kappa}\right)^\ast\delta\mathcal{H} = \begin{pmatrix}-\frac {22}5 k_3\\0\\-1\\0\end{pmatrix}.
\]
Finally, the matrix $g$ appearing in Lemma \ref{gauge} is in this case given by (\ref{g4}) and the lifting of the projective realization associated to the $\mathcal{H}$ Hamiltonian evolution is given by
\[
\Gamma_t = (\Gamma, \Gamma', \Gamma'', \Gamma''', \Gamma^{(4)}) g \begin{pmatrix}\hat r_0\\\delta_\kappa\mathcal{H}\end{pmatrix} = -\Gamma'''-\frac{27}5k_3\Gamma'-r_0\Gamma.
\]
A simple change of sign in $t$ proves the proposition.
\end{proof}
There are 4 possible combinations of linear submanifolds in $\mathbb {RP}^4$ intersecting at a point: four $3$-dimensional subspaces, two $2$-dimensional planes, one $2$-dimensional plane and two $3$-dimensional subspaces, and one line and one $3$-dimensional subspace. These correspond to the following subspaces through the origin in $\mathbb R^5$ that generically intersect in a line:
\begin{enumerate}
\item\label{1} Four 4-dimensional subspaces.
\item\label{2} Two 3-dimensional subspaces.
\item\label{3} One 3-dimensional subspace and two 4-dimensional ones.
\item\label{4} One 2-dimensional subspace and one 4-dimensional one.
\end{enumerate}
Case (\ref{4}) corresponds to the intersection of a projective line and hyperplane, the case we studied first. Case (\ref{1}) is a natural choice for the fourth order AGD flow, and one can easily check that it cannot have a third order limit but a fourth order one. Therefore, we have choices (\ref{2}) and (\ref{3}) left.
\begin{theorem} \label{case2} If a nondegenerate map $T:\mathcal{P}^n\to \mathcal{P}^n$ is defined using combination (\ref{2}) in $\mathbb {RP}^4$ and its lifting has a continuous limit of the form $\Gamma_t = a\Gamma'''+ b\Gamma'+c\Gamma$, then $b = \frac3{10}k_3a$ and $c$ is determined by (\ref{norm}).
\end{theorem}
\begin{proof} Two $3$-dimensional subspaces through the origin in $\mathbb R^5$ are determined by three points each. Assume $x_{n+m_i}$, $i=1,2,3$, are the points in one of them, while the $x_{n+n_i}$ are the points in the other one. If $\Gamma(x+ r\epsilon)= x_{n+r}$ for any $r$, then $\Gamma_\epsilon$ belongs to the intersection of these two subspaces whenever
\begin{eqnarray}\label{first}\Gamma_\epsilon &=& a_1\Gamma(x+m_1\epsilon)+a_2\Gamma(x+m_2\epsilon)+a_3\Gamma(x+m_3\epsilon)\\ &=& b_1\Gamma(x+n_1\epsilon)+b_2\Gamma(x+n_2\epsilon)+b_3\Gamma(x+n_3\epsilon)
\end{eqnarray}
If, as before, $\Gamma_\epsilon = \Gamma+\epsilon A+\epsilon^2B+\epsilon^3C +\epsilon^4D+ \epsilon^5E+\epsilon^6 F +o(\epsilon^6)$ and $A = \sum_{i=0}^4 \alpha_i\Gamma^{(i)}$, $B = \sum_{i=0}^4 \beta_i\Gamma^{(i)}$, $C = \sum_{i=0}^4 \gamma_i\Gamma^{(i)}$, $D=\sum_{i=0}^4\delta_i\Gamma^{(i)}$, $E=\sum_{i=0}^4\eta_i\Gamma^{(i)}$ and $F=\sum_{i=0}^4\nu_i\Gamma^{(i)}$ then $\alpha_2=\alpha_3=\alpha_4=0=\beta_3=\beta_4=\gamma_4$ and (\ref{first}) can be split into
\begin{eqnarray*}\label{1-2-3}
1+\alpha_0\epsilon+\beta_0\epsilon^2+\gamma_0\epsilon^3 +o(\epsilon^3) &=& a_1+a_2+a_3+o(\epsilon^3) = b_1+b_2+b_3+o(\epsilon^3)\\
\alpha_1+\epsilon\beta_1+\epsilon^2\gamma_1+o(\epsilon^2) &=& {\bf m}_1\cdot {\bf a}+o(\epsilon^2) = {\bf n}_1\cdot {\bf b}+o(\epsilon^2)\\
\beta_2+\epsilon\gamma_2+\epsilon^2\delta_2+o(\epsilon^2) &=&\frac1{2!}{\bf m}_{2}\cdot {\bf a}+o(\epsilon^2) = \frac1{2!}{\bf n}_{2}\cdot {\bf b} + o(\epsilon^2)
\end{eqnarray*}
corresponding to the coefficients of $\Gamma$, $\Gamma'$ and $\Gamma''$ and
\begin{eqnarray*}\label{4-5}
\gamma_3+\epsilon\delta_3+\epsilon^2\eta_3 + o(\epsilon^2) &=& \frac1{3!} {\bf m}_{3}\cdot {\bf a}-\frac{\epsilon^2}{5!}k_3 {\bf m}_{5}\cdot {\bf a} +o(\epsilon^2)\\ &=& \frac1{3!} {\bf n}_{3}\cdot {\bf b}-\frac{\epsilon^2}{5!}k_3 {\bf n}_{5}\cdot {\bf b} +o(\epsilon^2)\\
\delta_4+\epsilon\eta_4+\epsilon^2\nu_4+o(\epsilon^2) &=& \frac1{4!} {\bf m}_{4}\cdot {\bf a}-\frac{\epsilon^2}{6!}k_3 {\bf m}_{6}\cdot {\bf a} + o(\epsilon^2)\\ &=& \frac1{4!} {\bf n}_{4}\cdot {\bf b}-\frac{\epsilon^2}{6!}k_3 {\bf n}_{6}\cdot {\bf b} + o(\epsilon^2)
\end{eqnarray*}
corresponding to the coefficients of $\Gamma^{(3)}$ and $\Gamma^{(4)}$. Here we have used the relation $\Gamma^{(5)} = -k_3\Gamma'''-k_2\Gamma''-k_1\Gamma'-k_0\Gamma$ and we have used the notation ${\bf m}_r= (m_1^r,m_2^r, m_3^r)$, as we did in the previous case. Likewise with ${\bf n}$.
If, as before, we use the notation ${\bf a} = \sum_{j=0}^\infty {\bf a}_j\epsilon^j$, ${\bf b} = \sum_{j=0}^\infty {\bf b}_j\epsilon^j$, then the first three equations completely determine ${\bf a}_i$ and ${\bf b}_i$, $i=0,1,2$. Indeed, they are given by ${\bf a}_i = A(m)^{-1}v_i$, ${\bf b}_i = A(n)^{-1}v_i$, where $A(s)$ is given as in (\ref{As}) and where
\begin{equation}\label{vi}
v_0 = \begin{pmatrix} 1\\ \alpha_1\\ 2!\beta_2\end{pmatrix}, v_1 = \begin{pmatrix} \alpha_0\\ \beta_1\\ 2!\gamma_2\end{pmatrix}, v_2 = \begin{pmatrix} \beta_0\\ \gamma_1\\ 2!\delta_2\end{pmatrix}.
\end{equation}
The last four equations above are extra conditions that we need to impose on these coefficients. They can be rewritten as
\begin{eqnarray}
\label{uno} 3!\gamma_3 &=& {\bf m}_{3}\cdot {\bf a}_0 = {\bf n}_{3}\cdot {\bf b}_0\\
\label{dos} 3! \delta_3 &=&{\bf m}_{3}\cdot {\bf a}_1 = {\bf n}_{3}\cdot {\bf b}_1\\
\label{tres} 3!\eta_3&=& {\bf m}_{3}\cdot{\bf a}_2-\frac{3!}{5!} k_3{\bf m}_{5}\cdot {\bf a}_0 = {\bf n}_{3}\cdot{\bf b}_2-\frac{3!}{5!} k_3{\bf n}_{5}\cdot {\bf b}_0
\end{eqnarray}
and
\begin{eqnarray}
\label{cuatro} 4!\delta_4 &=& {\bf m}_{4}\cdot{\bf a}_0 = {\bf n}_4\cdot {\bf b}_0\\ \label{cinco} 4!\eta_4 &=&{\bf m}_4\cdot {\bf a}_1={\bf n}_4\cdot {\bf b}_1\\\label{seis} 4!\nu_4&=&{\bf m}_4\cdot {\bf a}_2-\frac{4!}{6!} k_3 {\bf m}_6\cdot{\bf a}_0 = {\bf n}_4\cdot {\bf b}_2-\frac{4!}{6!} k_3 {\bf n}_6\cdot{\bf b}_0.
\end{eqnarray}
If as before we denote $(m_1^3, m_2^3, m_3^3) A(m)^{-1} = (M_1, M_2, M_3)$, where $M_i$ are the negative of the basic symmetric polynomials as shown in Lemma \ref{Vandermonde}, then equations (\ref{uno}) imply
\[
\begin{pmatrix} M_1&M_2&M_3\end{pmatrix}\begin{pmatrix} 1\\\alpha_1\\2!\beta_2\end{pmatrix} = \begin{pmatrix} N_1&N_2&N_3\end{pmatrix}\begin{pmatrix} 1\\\alpha_1\\2!\beta_2\end{pmatrix} =3!\gamma_3.
\]
Since our continuous limit needs to be third order, and hence to appear in $C$, to have the proper continuous limit we need $A = B = 0$; thus $\alpha_1 = \beta_2 = 0$ and $M_1 = N_1 = 3!\gamma_3$.
From the proof of Lemma \ref{square} and direct calculations, we know that ${\bf m}_4A(m)^{-1} = (M_1M_3, M_4, M_5)$, where $M_4 = -(m_1+m_2)(m_2+m_3)(m_1+m_3) = M_3M_2+M_1$ and $M_5 = M_3^2+M_2$ is as in (\ref{M5}). Therefore, equation (\ref{cuatro}) can be rewritten as
\[
\begin{pmatrix} M_1M_3&M_4&M_5\end{pmatrix}\begin{pmatrix} 1\\\alpha_1\\2!\beta_2\end{pmatrix} = \begin{pmatrix} N_1N_3&N_4&N_5\end{pmatrix}\begin{pmatrix} 1\\\alpha_1\\2!\beta_2\end{pmatrix} =4!\delta_4
\]
from which we can conclude that, if $\alpha_1 = 0 = \beta_2$, then $M_1M_3 = N_1 N_3$, that is $M_3 = N_3$. (Notice that $M_1=N_1\ne 0$ since $M_1=N_1=0$ implies both planes intersect at $x_n$ and $T$ is the identity.)
In order to have (\ref{ev4}) as continuous limit we will need $v_1 = 0$ which implies ${\bf a}_1 = {\bf b}_1 = 0$. Using (\ref{dos}) and (\ref{cinco}) we get $\delta_3 = \eta_4 = 0$.
Finally $v_2 = \gamma_1e_2+2!\delta_2e_3$ and ${\bf m}_5A^{-1}(m)e_1 = M_1 M_5 = M_1(M_3^2+M_2)$ as in the proof of Lemma \ref{square}. Therefore (\ref{tres}) becomes
\[
3!\eta_3 = M_2\gamma_1+2!M_3\delta_2 - \frac{3!}{5!}k_3 M_1(M_3^2+M_2) = N_2\gamma_1+2!N_3\delta_2 - \frac{3!}{5!}k_3 N_1(N_3^2+N_2).
\]
Since we already know that $M_1 = N_1$ and $M_3 = N_3$, this equation becomes
\[
M_2\left(\gamma_1-\frac{3!}{5!}k_3M_1\right) = N_2 \left(\gamma_1-\frac{3!}{5!}k_3M_1\right).
\]
From here, either $\gamma_1= \frac{3!}{5!}k_3M_1 = \frac{(3!)^2}{5!}k_3\gamma_3 = \frac3{10}k_3\gamma_3$, resulting in the continuous limit displayed in the statement of the theorem, or $M_2 = N_2$. But, from Lemma \ref{Vandermonde}, the conditions $M_1 = N_1$, $M_2=N_2$ and $M_3=N_3$ imply $\{m_1,m_2,m_3\} = \{n_1,n_2,n_3\}$. This is a degenerate case: the two subspaces coincide.
\end{proof}
\begin{theorem} There exists a map $T: \mathcal{P}^n \to \mathcal{P}^n$ in $\mathbb {RP}^4$ defined using option (\ref{3}) whose continuous limit (as previously defined) is integrable and has a lifting given by (\ref{ev4}). The map is not unique.
\end{theorem}
Before we prove this theorem, let me point out the apparent pattern we see here: the $L^{\frac2{m+1}}$-Hamiltonian flow, $m\ge 2$, is the continuous limit of a map obtained by intersecting a projective line and an $(m-1)$-dimensional subspace of $\mathbb {RP}^m$. The $L^{\frac3{m+1}}$-Hamiltonian flow, $m = 3,4$, is the continuous limit of a map defined by intersecting one $2$-dimensional plane and two $(m-1)$-dimensional subspaces in $\mathbb {RP}^m$. This pattern leads us to the following conjecture.
\begin{conjecture}
The AGD Hamiltonian flow associated to the $L^{\frac k{m+1}}$-Hamiltonian is the continuous limit of maps defined analogously to the pentagram map through the intersection of one $(k-1)$-dimensional subspace and $k-1$ subspaces of dimension $m-1$ in $\mathbb {RP}^m$.
\end{conjecture}
\begin{proof}[Proof of the theorem] The proof is a calculation similar to the one in the previous Theorem. Because of the symmetry in the integers $m_i$, $n_i$ and $r_i$, we will write the equations for the planes in terms of the elementary symmetric polynomials $M_i$, $N_i$ and $R_i$. This will both simplify the equations and will reduce their order, making it easier to solve.
If we use the notation in Theorem \ref{case2} we get similar initial equations, except for the fact that the plane corresponding to integers $m_1,m_2,m_3$ and associated to ${\bf a} = (a_1,a_2,a_3)$ coefficients is three dimensional, while the ones associated to integers $n_1,n_2,n_3,n_4$ and $r_1,r_2,r_3,r_4$ and associated to ${\bf b}$ and ${\bf c}$ are four dimensional, ${\bf b}=(b_1, b_2, b_3, b_4)$, ${\bf c}=(c_1,c_2,c_3,c_4)$. Therefore, instead of (\ref{1-2-3}) and subsequent equations, we have
\begin{eqnarray*}\label{6-7-8}
1+\alpha_0\epsilon+\beta_0\epsilon^2+\gamma_0\epsilon^3 +o(\epsilon^3) &=& a_1+a_2+a_3+o(\epsilon^3) \\&=& b_1+b_2+b_3+b_4+o(\epsilon^3)\\&=& c_1+c_2+c_3+c_4+o(\epsilon^3)\\
\alpha_1+\epsilon\beta_1+\epsilon^2\gamma_1+o(\epsilon^2) &=& {\bf m}_1\cdot {\bf a}+o(\epsilon^2) = {\bf n}_1\cdot {\bf b}+o(\epsilon^2)= {\bf r}_1\cdot {\bf c} +o(\epsilon^2)\\
\beta_2+\epsilon\gamma_2+\epsilon^2\delta_2+o(\epsilon^2) &=&\frac1{2!}{\bf m}_{2}\cdot {\bf a}+o(\epsilon^2) \\&=& \frac1{2!}{\bf n}_{2}\cdot {\bf b} + o(\epsilon^2)=\frac1{2!}{\bf r}_{2}\cdot {\bf c}+o(\epsilon^2)
\end{eqnarray*}
corresponding to the coefficients of $\Gamma$, $\Gamma'$ and $\Gamma''$ and
\begin{eqnarray*}\label{4-5}
\gamma_3+\epsilon\delta_3+\epsilon^2\eta_3 + o(\epsilon^2) &=& \frac1{3!} {\bf m}_{3}\cdot {\bf a}-\frac{\epsilon^2}{5!}k_3 {\bf m}_{5}\cdot {\bf a} +o(\epsilon^2)\\ &=& \frac1{3!} {\bf n}_{3}\cdot {\bf b}-\frac{\epsilon^2}{5!}k_3 {\bf n}_{5}\cdot {\bf b} +o(\epsilon^2)\\ &=& \frac1{3!} {\bf r}_{3}\cdot {\bf c}-\frac{\epsilon^2}{5!}k_3 {\bf r}_{5}\cdot {\bf c} +o(\epsilon^2)\\
\delta_4+\epsilon\eta_4+\epsilon^2\nu_4+o(\epsilon^2) &=& \frac1{4!} {\bf m}_{4}\cdot {\bf a}-\frac{\epsilon^2}{6!}k_3 {\bf m}_{6}\cdot {\bf a} + o(\epsilon^2)\\ &=& \frac1{4!} {\bf n}_{4}\cdot {\bf b}-\frac{\epsilon^2}{6!}k_3 {\bf n}_{6}\cdot {\bf b} + o(\epsilon^2)\\ &=& \frac1{4!} {\bf r}_{4}\cdot {\bf c}-\frac{\epsilon^2}{6!}k_3 {\bf r}_{6}\cdot {\bf c} + o(\epsilon^2)
\end{eqnarray*}
corresponding to the coefficients of $\Gamma'''$ and $\Gamma^{(4)}$. Using the first three equations we can solve for ${\bf a}_0$, ${\bf a}_1$ and ${\bf a}_2$ as ${\bf a}_i = A_3^{-1}(m)v_i$, where the subindex in the $A_3(m)$ refers to the size of the Vandermonde matrix, and where $v_i$ are as in (\ref{vi}). Using the fourth equation for ${\bf a}$ we can solve for $\gamma_3, \delta_3$ and $\eta_3$, values that we will use in our next step. Using the first four equations for ${\bf b}$ and ${\bf c}$, we can solve for ${\bf b}_i$ and ${\bf c}_i$, $i = 0,1$ as ${\bf b}_i = A_4^{-1}(n)w_i$, ${\bf c}_i = A_4^{-1}(r)w_i$ with
\[
w_0 = \begin{pmatrix}1\\\alpha_1\\2!\beta_2\\ 3!\gamma_3\end{pmatrix}, w_1 = \begin{pmatrix} \alpha_0\\\beta_1\\2!\gamma_2\\3!\delta_3\end{pmatrix}
\]
and we can also solve for ${\bf b}_2 = A^{-1}_4(n)w_2^b$, ${\bf c}_2 = A^{-1}_4(r)w_2^c$ where
\[
w_2^b = \begin{pmatrix}\beta_0\\\gamma_1\\2!\delta_2\\3!\eta_3+\frac{3!}{5!}k_3{\bf n}_5A^{-1}_4(n)w_0\end{pmatrix}, w_2^c = \begin{pmatrix}\beta_0\\\gamma_1\\2!\delta_2\\3!\eta_3+\frac{3!}{5!}k_3{\bf r}_5A^{-1}_4(r)w_0\end{pmatrix}.
\]
Substituting all these values in the last equation gives us a number of equations that will help us identify the parameter values for $\alpha_i, \beta_i, \gamma_i$, etc.
There are three zeroth order equations for $\delta_4$. We can eliminate $\delta_4$ to obtain the following two equations for $\alpha_1$ and $\beta_2$. (The calculations are a little long, but otherwise straightforward.)
\begin{eqnarray*}
M_1M_3-(N_1+N_4M_1)+[M_4-(N_2+N_4M_2)] \alpha_1+2![M_5-(N_3+N_4M_3)]\beta_2 &=& 0\\
M_1M_3-(R_1+R_4M_1)+[M_4-(R_2+R_4M_2)] \alpha_1+2![M_5-(R_3+R_4M_3)]\beta_2 &=& 0
\end{eqnarray*}
where $M_i$, $i=1,2,3$, and $N_i$, $R_i$, $i=1,2,3,4$, are as in Lemma \ref{Vandermonde}, and $M_4 = M_3M_2+M_1$, $M_5 = M_3^2+M_2$ were given in the proof of the previous theorem.
From here, conditions
\begin{equation}\label{condition1}
M_1M_3 = N_1+N_4M_1,\hskip .2in M_1M_3 = R_1+R_4M_1
\end{equation}
together with the matrix
\begin{equation}\label{full rank}
M = \begin{pmatrix} M_4-(N_2+N_4M_2)&M_5-(N_3+N_4M_3)\\ M_4-(R_2+R_4M_2)&M_5-(R_3+R_4M_3)\end{pmatrix}
\end{equation}
having full rank, will ensure that $\alpha_1 = \beta_2 = 0$. As a result of this we have
\[
v_0 = e_1,\hskip.2in \gamma_3= \frac 1{3!} M_1,\hskip.2in w_0 = e_1 + M_1 e_4.
\]
There are also three first order equations for $\eta_4$. Notice that $\alpha_0 = 0$ once we impose condition (\ref{norm}) on $\Gamma_\epsilon$. Again, after some rewriting we get the system
\begin{eqnarray*}
0&=&[M_4-(N_2+N_4M_2)] \beta_1+2![M_5-(N_3+N_4M_3)]\gamma_2 \\
0&=&[M_4-(R_2+R_4M_2)] \beta_1+2![M_5-(R_3+R_4M_3)]\gamma_2
\end{eqnarray*}
and hence the rank condition on matrix (\ref{full rank}) will ensure $\beta_1 = \gamma_2 = 0$. With these values we also get $v_1 = 0$, $\delta_3 = 0$ and $w_1=0$. Using the normalization of $\Gamma_\epsilon$ again we obtain $\beta_0 = 0$, and from here
\[
A = B = 0,\hskip 2ex C = \frac1{3!}\Gamma''' + \gamma_1\Gamma'+\gamma_0\Gamma.
\]
We are left with the determination of $\gamma_1$, since $\gamma_0$ is determined by normalizing conditions.
From now on, let us assume that both the $n$-plane and the $r$-plane contain $x_n$, so that we can assume that $n_4 = 0 = r_4$. In that case, $N_1 = R_1 = 0$, and so conditions (\ref{condition1}) become $M_3 = N_4 = R_4$. With this assumption, and using the fact that $M_4 = M_1+M_3M_2$ and $M_5 = M_3^2+M_2$, the matrix (\ref{full rank}) becomes
\begin{equation}\label{full rank2}
M=\begin{pmatrix} M_1-N_2&M_2-N_3\\ M_1-R_2&M_2-R_3\end{pmatrix}.
\end{equation}
Finally, we use the equations involving $\nu_4$. We have three of them, and so we can eliminate $\nu_4$ and obtain a system of equations for $\gamma_1$ and $\delta_2$. The calculations are a little long, but they become routine if we use the following relations, each obtained by repeatedly multiplying the identity defining the reduction coefficients by the corresponding variable and reducing again.
\begin{eqnarray*}{\bf m}_6A_3^{-1}(m)e_1 &=& M_1(M_3^3+M_1+2M_2M_3)\\ {\bf n}_5A_4^{-1}(n)e_1 &=& N_1N_4\\ {\bf n}_5A_4^{-1}(n)e_4 &=& N_4^2+N_3\\
{\bf n}_6A_4^{-1}(n)e_1 &=& N_1(N_4^2+N_3)\\ {\bf n}_6A_4^{-1}(n)e_4 &=& N_4^3+2N_4N_3+N_2\end{eqnarray*}
and the fact that $N_4 = M_3$. The resulting system is given by $M \begin{pmatrix}\gamma_1\\ 2!\delta_2\end{pmatrix} = N$, where $M$ is the matrix in (\ref{full rank2}) and where
\[
N = -\frac1{60}k_3M_1\begin{pmatrix}M_3(N_3-M_2)+2(N_2-3M_1)\\ M_3(R_3-M_2)+2(R_2-3M_1)\end{pmatrix}.
\]
Clearly, $\gamma_1$ is then given by
\[
\gamma_1 = -\frac1{60\det M} k_3M_1\det\begin{pmatrix}M_3(N_3-M_2)+2(N_2-3M_1)&M_2-N_3\\ M_3(R_3-M_2)+2(R_2-3M_1)&M_2-R_3\end{pmatrix}.
\]
If we now impose the condition \[\gamma_1 = \frac{27}5\gamma_3 = \frac{27}{6!5} M_1 k_3\] we get the equation
\[
20\det\begin{pmatrix}M_3(N_3-M_2)+2(N_2-3M_1)&M_2-N_3\\ M_3(R_3-M_2)+2(R_2-3M_1)&M_2-R_3\end{pmatrix} = -9\det\begin{pmatrix}M_1-N_2&M_2-N_3\\ M_1-R_2&M_2-R_3\end{pmatrix}.
\]
This equation can easily be programmed; a minimal search sketch is given after the list of solutions below. Using Maple to calculate the equations and a simple C-program to solve them, one can check that the smallest solutions involve integer values up to 7. Some of these solutions are
\[\begin{array}{ccc}
m_1 = 7, m_2 = -1, m_3 = -7,&\hskip .1in n_1 = 3, n_2 = -1, n_3 = -3,&\hskip .1in r_1 = 6, r_2 = -3, r_3 = -4,\\
m_1 = 7, m_2 = -1, m_3 = -7,&\hskip .1in n_1 = 3, n_2 = -1, n_3 = -3,&\hskip .1in r_1 = 4, r_2 = -2, r_3 = -3\\
m_1 = 7, m_2 = -1, m_3 = -7,&\hskip .1in n_1 = 6, n_2 = -3, n_3 = -4,&\hskip .1in r_1 = 4, r_2 = -2, r_3 = -3.
\end{array}
\]
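A minimal brute-force sketch of this search (in Python rather than Maple/C) is the following; the formulas for $M_i$, $N_i$, $R_i$ below are our reading of Lemma \ref{Vandermonde}, namely $m_j^3=M_1+M_2m_j+M_3m_j^2$ and $n_j^4=N_1+N_2n_j+N_3n_j^2+N_4n_j^3$ with $n_4=r_4=0$, so the overall sign of the right-hand side should be matched against that lemma.
\begin{verbatim}
# Reduction coefficients: x^3 = M1 + M2*x + M3*x^2 on the roots m1, m2, m3,
# i.e. M1 = m1*m2*m3, M2 = -(m1*m2 + m1*m3 + m2*m3), M3 = m1 + m2 + m3.
def coeffs3(m):
    m1, m2, m3 = m
    return m1*m2*m3, -(m1*m2 + m1*m3 + m2*m3), m1 + m2 + m3

# Four-variable analogue with the fourth root equal to 0 (n4 = r4 = 0),
# so that N1 = 0 and N2, N3, N4 are read off from the remaining roots.
def coeffs4(n):
    n1, n2, n3 = n
    return 0, n1*n2*n3, -(n1*n2 + n1*n3 + n2*n3), n1 + n2 + n3

def det2(a, b, c, d):
    return a*d - b*c

# Evaluate both sides of the displayed equation; returns None when the
# necessary conditions (condition1) fail.
def sides(m, n, r):
    M1, M2, M3 = coeffs3(m)
    N1, N2, N3, N4 = coeffs4(n)
    R1, R2, R3, R4 = coeffs4(r)
    if not (M1*M3 == N1 + N4*M1 and M1*M3 == R1 + R4*M1):
        return None
    lhs = 20*det2(M3*(N3 - M2) + 2*(N2 - 3*M1), M2 - N3,
                  M3*(R3 - M2) + 2*(R2 - 3*M1), M2 - R3)
    rhs = -9*det2(M1 - N2, M2 - N3,
                  M1 - R2, M2 - R3)
    return lhs, rhs

print(sides((7, -1, -7), (3, -1, -3), (6, -3, -4)))  # -> (15120, -15120)

# An exhaustive search loops over itertools.product(range(-7, 8), repeat=3)
# for each of m, n, r and keeps the tuples where both quantities agree (up
# to the sign convention of the Vandermonde lemma).
\end{verbatim}
For each of the triples listed above, the two printed quantities agree up to sign.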
\end{proof}
\section{Introduction}
It is classically known that any vector bundle on the projective line over an arbitrary algebraically closed field $k$ splits as a direct sum of line bundles. However, when the dimension of the projective space is at least two, the situation is considerably more involved, and the splitting of vector bundles on higher dimensional projective spaces has long been a central problem in the theory of vector bundles in algebraic geometry. For non-splitting vector bundles, there are many classification results for special classes of bundles. One of the most widely studied classes is that of \emph{uniform} vector bundles, that is, bundles whose splitting type is independent of the chosen line. The notion of a uniform vector bundle first appears in a paper of Schwarzenberger \cite{ref1}. In characteristic zero, much work has been done on the classification of uniform vector bundles over projective spaces. In 1972, Van de Ven \cite{ref2} proved that for $n>2$ the uniform 2-bundles over $\mathbb{P}_k^n$ split, and that the uniform 2-bundles over $\mathbb{P}_k^2$ are precisely the bundles $\mathcal{O}_{\mathbb{P}_k^2}(a)\bigoplus\mathcal{O}_{\mathbb{P}_k^2}(b)$
and $T_{\mathbb{P}_k^2}(a),a,b\in\mathbb{Z}$. In 1976, Sato \cite{ref3} proved that for $2<r<n$, the uniform $r$-bundles over $\mathbb{P}_k^n$ split by using a theorem of Tango \cite{ref16} about holomorphic mappings from projective spaces to Grassmannians. In 1978, Elencwajg \cite{ref4} extended the investigations of Van de Ven to show that uniform vector bundles of rank 3 over $\mathbb{P}_k^2$ are of the form
$$\mathcal{O}_{\mathbb{P}_k^2}(a)\bigoplus\mathcal{O}_{\mathbb{P}_k^2}(b)\bigoplus\mathcal{O}_{\mathbb{P}_k^2}(c), ~ T_{\mathbb{P}_k^2}(a)\bigoplus\mathcal{O}_{\mathbb{P}_k^2}(b)\quad\text{or}\quad S^2T_{\mathbb{P}_k^2}(a),$$ where $a$, $b$, $c\in \mathbb{Z}$. Sato \cite{ref3} had previously shown that for $n$ odd, uniform $n$-bundles over $\mathbb{P}_k^n$ are of the forms
$$\oplus_{i=1}^{n}\mathcal{O}_{\mathbb{P}_k^n}(a_i),~ T_{\mathbb{P}_k^n}(a)\quad\text{or}\quad \Omega^1_{\mathbb{P}_k^n}(b),$$ where $a_i,a, b\in \mathbb{Z}$. So the results of Elencwajg and Sato yield a complete classification of uniform 3-bundles over $\mathbb{P}_k^n$. In particular all uniform 3-bundles over $\mathbb{P}_k^n$ are homogeneous. Later, Elencwajg, Hirschowitz and Schneider \cite{ref5} showed that Sato's result is also true for $n$ even. Around 1982, Ellia \cite{ref7} and Ballico \cite{ref8} independently proved that for $n\ge 3$, the uniform $(n+1)$-bundles over $\mathbb{P}_k^{n}$ are of the form
$$\oplus_{i=1}^{n+1}\mathcal{O}_{\mathbb{P}_k^n}(a_i),~ T_{\mathbb{P}_k^n}(a)\bigoplus\mathcal{O}_{\mathbb{P}_k^n}(b)\quad\text{or}\quad\Omega^1_{\mathbb{P}_k^n}(c)\bigoplus\mathcal{O}_{\mathbb{P}_k^n}(d), $$ where $a_i, a,b,c,d\in\mathbb{Z}$. One can go over the good reference by Okonek, Schneider and Spindler \cite{ref14} for related topics. Later, similar results have been extensively studied for uniform vector bundles on varieties swept out by lines, such as quadrics \cite{ref17, ref18}, Grassmannians\cite{ref15} and special Fano manifolds \cite{ref9}.
In positive characteristic, for $2\le r<n$, the uniform $r$-bundles over $\mathbb{P}_k^n$ split by Sato's result\cite{ref3}. The classification problem of uniform $n$-bundles on $\mathbb{P}_k^n$ has been solved by Lange \cite{ref10} for $n=2$ and Ein \cite{ref6} for all $n$. It seems that the classification of uniform vector bundles over other projective manifolds covered by lines in characteristic $p$ is still open. In the first part of the paper, we consider the uniform vector bundles on Grassmannians in positive characteristic and prove the following main theorem.
\begin{thm}\emph{(Theorem \ref{b} and Theorem \ref{z})}\label{m}
Let $G=G_k(d,n)$ $(d\leq n-d)$ be the Grassmannian manifold parameterizing linear subspaces of dimension $d$ in $k^n$, where $k$ is an algebraically closed field of characteristic $p>0$. Let $E$ be a uniform vector bundle over $G$ of rank $r\le d$.
\begin{itemize}
\item If $r<d$, then $E$ is a direct sum of line bundles.
\item If $r=d$, then $E$ is either a direct sum of line bundles or a twist of a pull back of the universal bundle $H_d$ or its dual $H_d^{\vee}$ by a series of absolute Frobenius maps.
\end{itemize}
\end{thm}
\begin{rem}
\emph{The first part of the theorem holds over any algebraically closed field. The result in characteristic zero is due to Guyot \cite{ref15}. We generalize the method of Elencwajg-Hirschowitz-Schneider \cite{ref5}, in which the authors considered the case of projective spaces, to deal with Grassmannians over any algebraically closed field. Our basic idea follows the strategy of \cite{ref5}, but the complexity in the case of Grassmannians is far greater than in the case of projective spaces. For example, Proposition \ref{e} cannot be generalized from the case of projective spaces directly; we need to strengthen the hypothesis and use a different argument. However, for $r<d$, our method is independent of the characteristic of the field.
For the case $r=d$, we checked that the first part of Guyot's paper \cite{ref15}, which is independent of the characteristic of the field $k$, is devoted to studying the Chow ring and axiomatically computing Chern classes of vector bundles on flag varieties. However, Guyot's method can only handle the case $b=-1$, i.e. the splitting type $(0,\ldots,0,-1)$ in the proof of Theorem \ref{m}, because this is the only case that occurs in characteristic zero. In characteristic $p$, we need to use Katz's Lemma.
So for the second part of the theorem, we mainly use Ein's \cite{ref6} ideas for projective spaces and Katz's \cite{ref11} key lemma to study vector bundles in positive characteristic. }
\end{rem}
The proof of the first part of the above theorem has a surprising consequence for vector bundles of arbitrary rank.
\begin{cor}\emph{(Corollary \ref{d})}
Let $E$ be a vector bundle on $G=G(d,n)$ $(d\ge 2)$ over an algebraically closed field. If $E$ splits as a direct sum of line bundles when restricted to every $\mathbb{P}_k^{d}\subseteq G$, then $E$ splits as a direct sum of line bundles over $G$.
\end{cor}
In the second part of the paper, we consider vector bundles over flag varieties in characteristic zero.
For fixed integer $n$, let $F:=F(d_1,\ldots,d_s)$ be the flag manifold parameterizing flags \[V_{d_1}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n\]
where $dim(V_{d_i})=d_i, 1\le i \le s$. Let $F^{(i)}:=F(d_1,\ldots,d_{i-1},d_i-1,d_i+1,d_{i+1},\ldots,d_s)$ be the $i$-th irreducible component of the manifold of lines in $F$. (From now on we specify that if two adjacent integers in the expression of a flag variety such as $F^{(i)}$ are equal, we keep only one of them. Please see Section \ref{flag} for the notations.)
We separate our discussion into two cases:
Case I: $d_i-1=d_{i-1}$ and $d_i+1=d_{i+1}$, then we have the natural projection $F\rightarrow F^{(i)}$;
Case II: $d_i-1\neq d_{i-1}$ or $d_i+1\neq d_{i+1}$, then we have the \emph{standard diagram}
\begin{align}
\xymatrix{
F(d_1,\ldots,d_{i-1},d_i-1,d_i,d_i+1,d_{i+1},\ldots d_s)\ar[d]^{q_1} \ar[r]^-{q_2} & F^{(i)}\\
F=F(d_1,\ldots,d_s).
}
\end{align}
\begin{thm}\emph{(Theorem \ref{main})}
Fix an integer $i$, $1\le i \le s$. Let $E$ be an algebraic $r$-bundle over $F$ of type $\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)}), a_1^{(i)}\geq\cdots\geq a_r^{(i)}$ with respect to $F^{(i)}$. If for some $t<r$,
\[
a_t^{(i)}-a_{t+1}^{(i)}\geq
\left\{
\begin{array}{ll}
1, & \text{if}~F^{(i)}~ \text{is in Case I}\\
2, & \text{if}~F^{(i)}~ \text{is in Case II},
\end{array}
\right.\]
then there is a normal subsheaf $K\subseteq E$ of rank $t$ with the following properties: over the open set $V_E^{(i)}=q_1({q_2}^{-1}(U_E^{(i)}))\subseteq F$, the sheaf $K$ is a subbundle of $E$, which on the line $L\subseteq F$ given by $l\in U_E^{(i)}$ has the form\[
K|L\cong\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)}).
\]
\end{thm}
\begin{rem}
\emph{Please see Definition \ref{UEi} for the notation $U_E^{(i)}$. A coherent sheaf $\mathcal{F}$ is said to be "normal" in the sense of Barth (\cite{ref21}, p.128) if the restriction $\mathcal{F}(U)\rightarrow\mathcal{F}(U\backslash A)$ is bijective for every open subset $U$ and a closed subset $A$ of $U$ of codimension at least $2$ (cf. Definition \ref{normal}).}
\end{rem}
Using the above theorem, we generalize the Grauert-M$\ddot{\text{u}}$lich-Barth theorem (cf. \cite{ref21}, Theorem 1) to flag varieties as follows.
\begin{cor}\emph{(Corollary \ref{gap})}
Fix an integer $i$, $1\le i \le s$. For an $i$-semistable $r$-bundle $E$ over $F$ of type $\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)})$, $a_1^{(i)}\geq\cdots\geq a_r^{(i)}$, with respect to $F^{(i)}$, we have \[
a_j^{(i)}-a_{j+1}^{(i)}\leq 1~~ \text{for all}~ j=1,\ldots, r-1.
\]
In particular, if $F^{(i)}$ is in Case I, then we have $a_j^{(i)}$'s are constant for all $1\leq j\leq r$.
\end{cor}
\begin{rem}
\emph{A generalization of the Grauert-M$\ddot{\text{u}}$lich-Barth theorem to normal projective varieties in characteristic zero is proven in Huybrechts and Lehn's famous book (see \cite{ref24} Theorem 3.1.2). However, our definition of semistability (and of the slope) differs slightly from the usual ones. The canonical approach, due to Mumford and Takemoto, is to embed the variety into a projective space and fix an ample divisor in order to define the degree and the slope of vector bundles. Another approach, the so-called Gieseker-semistability, replaces the degree in Mumford-Takemoto's definition by the Hilbert polynomial (see for example \cite{ref24} Definition 1.2.3). However, both of these notions depend on the embedding. Our definition of semistability for flag varieties (see Definition \ref{i}) is intrinsic because flag varieties are uniruled. If one embeds the flag variety into a projective space and takes ample divisors whose intersection is a single line on the flag variety, then our definition agrees with Mumford-Takemoto's.}
\end{rem}
\begin{definition}
If there exists some integer $i$ such that the splitting type of a vector bundle $E$ is the same for all lines $L\subseteq F$ given by $l\in F^{(i)}$, then $E$ is called a uniform vector bundle with respect to $F^{(i)}$. $E$ is called strongly uniform on $F$ if the splitting type of $E$ is the same for all lines $L\subseteq F$.
\end{definition}
\begin{cor}\emph{(Corollary \ref{x})}
If $E$ is a strongly uniform $i$-semistable $(1\le i\le n-1)$ $r$-bundle over the complete flag manifold $F$, then $E$ splits as a direct sum of line bundles. In addition, $E|L\cong \mathcal{O}_L(a)^{\oplus r}$ for every line $L\subseteq F$, where $a\in \mathbb{Z}$.
\end{cor}
\section{Preliminaries}
Denote by $G$ the Grassmannian $G_k(d, n)$ of $d$-dimensional linear
subspaces in $V=k^n$, where $k$ is an algebraically closed field. Of course we may also consider $G_k(d, n)$ in its
projective guise as $\mathbb{G}_k(d-1, n-1)$, the Grassmannian of
projective $(d-1)$-planes in $\mathbb{P}_k^{n-1}$.
Let $\mathcal{V}:=G\times V$ be the trivial vector bundle of rank $n$ on $G$ whose fiber
at every point is the vector space $V$. We write $H_d$ for the $d$-subbundle of $\mathcal{V}$ whose
fiber at a point $[\Lambda]\in G$ is the subspace $\Lambda$ itself; that is,
\[(H_d)_{[\Lambda]}=\Lambda\subseteq V=\mathcal{V}_{[\Lambda]}.\]
$H_d$ is called the \emph{universal subbundle} on $G$; the rank $n-d$ quotient bundle $Q_{n-d}=\mathcal{V}/H_d$ is called the \emph{universal quotient bundle}, i.e.,
\begin{equation}\label{tau}
0\rightarrow H_d\rightarrow \mathcal{V}\rightarrow Q_{n-d}\rightarrow 0.
\end{equation}
We write $\mathcal{O}_G(1)$, called the \emph{Pl$\ddot{u}$cker bundle} on $G$, for the pull back of $\mathcal{O}_{\mathcal{P}}(1)$ under the \emph{Pl$\ddot{u}$cker embedding} \[
\iota: G_k(d,n)\rightarrow\mathcal{P}=\mathbb{P}(\bigwedge^{d}k^n).
\]
\begin{definition}
Let $F:=F(d_1,\ldots,d_s)$ be the flag manifold parameterizing flags \[V_{d_1}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n\]
where $dim(V_{d_i})=d_i, 1\le i \le s$. In particular, flag manifold $F(1, \ldots, n-1)$ is called the complete flag manifold.
\end{definition}
Given a flag $V_{d-1}\subseteq V_{d+1}\subseteq k^n$, the set of $d$-dimensional subspaces $W\subseteq k^n$ such that\[
V_{d-1}\subseteq W\subseteq V_{d+1}
\]
is the projectivization of the quotient space $V_{d+1}/V_{d-1}$, so the set is isomorphic to $\mathbb{P}_k^1$. It follows that the flag manifold $F(d-1,d+1)$ is the manifold of lines in $G$ and that the flag manifold $F(d-1,d,d+1)$ can be written as\[
F(d-1,d,d+1)=\{(x,L)\in G\times F(d-1,d+1) |x\in L\}.
\]
Let $\bar{F}:=F(d-1,d,d+1)$, $F_1:=F(d-1,d)$ and $F_2:=F(d,d+1)$. Then we have the following two diagrams:
\begin{align}
\xymatrix{
F_1 \ar[rrdd]_{q_{11}} && \bar{F}\ar[dd]^{q_1}\ar[ll]_{pr_1} \ar[rr]^{pr_2} && F_2 \ar[lldd]^{q_{12}} \\
\\
&&G
}
\end{align}
and
\begin{align}
\xymatrix{
\bar{F}\ar[d]^{q_1} \ar[r]^-{q_2} & F(d-1,d+1),\\
G
}
\end{align}
where all morphisms in the above diagrams are projections.
\begin{rem}\label{g}\emph{(\cite{ref15} Section~ 2.2)}
\emph{The mapping $q_{11}~(resp.~q_{12})$ identifies $F_1~(resp.~F_2)$ with the projective bundle $\mathbb{P}(H_{d})~(resp.~\mathbb{P}(Q_{n-d}^{\vee}))$ of $G$.
Let $\mathcal{H}_{H_{d}^{\vee}}~(resp.~ \mathcal{H}_{Q_{n-d}})$ be the tautological line bundle on $\bar{F}$ associated to $F_1~(resp.~ F_2)$, i.e. \[{pr_1}_{*}\mathcal{H}_{H_{d}^{\vee}}=\mathcal{O}_{F_1}(-1)~(resp.~ {pr_2}_{*}\mathcal{H}_{Q_{n-d}}=\mathcal{O}_{F_2}(-1)).\]}
\end{rem}
\begin{definition}
Let $X$ be a noetherian scheme in characteristic $p$ and $E$ be a locally free coherent sheaf on $X$. We define the associated projective space bundle $\mathbb{P}(E)$ as follows.
\[
\mathbb{P}(E)=Proj(\oplus_{l\geq 0}{S^{l}E}).
\]
\end{definition}
\begin{definition}
Let $X$ be a scheme in characteristic $p$. We define the absolute Frobenius map of $X$ to be $F_X:X\rightarrow X$ such that $F_X=id_X$ as a map between two topological spaces and on each open set $U$, $F_{X}^{\sharp}:\mathcal{O}_X(U)\rightarrow \mathcal{O}_X(U)$ takes $f$ to $f^p$ for any $f\in \mathcal{O}_X(U)$.
\end{definition}
\begin{definition}
Let $S$ be a scheme in characteristic $p$ and $X$ be an $S$-scheme. Consider the following diagram:
\[
\xymatrix{
X \ar[drr]_{F_{X/S}} \ar[rrrrd]^{F_X} \ar[rrddd]_f\\
&&X^{(p)}\ar[rr] \ar[dd]^{f'}&&X\ar[dd]_f\\
\\
&&S\ar[rr]^{F_S}&&S,
}
\]
where $X^{(p)}$ is defined as the fibre product of $X$ and $S$ in the diagram. The induced map $F_{X/S}$ is called the Frobenius morphism of $X$ relative to $S$.
\end{definition}
\begin{rem}\emph{(\cite{ref6} Lemma ~1.5)}\label{F}
\emph{Let $S$ be a noetherian scheme in characteristic $p$, $E$ be a locally free coherent sheaf on $S$ and $X=\mathbb{P}(E)$. Then $X^{(p^m)}=\mathbb{P}({F^m}^{\ast}E)$.}
\end{rem}
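For instance, on projective space the absolute Frobenius multiplies degrees by $p$,
\[F_X^{\ast}\mathcal{O}_{\mathbb{P}_k^n}(a)\cong\mathcal{O}_{\mathbb{P}_k^n}(pa),\qquad {F_X^m}^{\ast}\mathcal{O}_{\mathbb{P}_k^n}(a)\cong\mathcal{O}_{\mathbb{P}_k^n}(p^ma);\]
this is why, in the proof of Theorem \ref{z} below, pulling back along a power of the Frobenius multiplies the degrees of line bundles on the fibres $\widetilde{L}\cong\mathbb{P}_k^1$ by $p^m$.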
One of the key tools in studying vector bundles in characteristic $p$ is the following lemma of Katz.
\begin{lemma}\emph{(\cite{ref11} Lemma~ 1.4)}\label{ka}
Let $X$ and $Y$ be two varieties smooth over $S$, a noetherian scheme in characteristic $p$, and $f$ be a $S$-morphism from $X$ to $Y$. If the induced map on differentials, $df:f^{\ast}\Omega_{Y/S}\rightarrow\Omega_{X/S}$ is the zero map, then $f$ can be factored through the relative Frobenius morphism $F_{X/S}$.
\end{lemma}
\begin{definition}
Denote by $H_i$ the universal subbundle of rank $i$ $(1\leq i \leq n)$ on the complete flag manifold $F(1,\cdots,n-1)$, and set $X_i=c_1(H_i/H_{i-1})$.
\end{definition}
Although M. Guyot's paper \cite{ref15} works over a field of characteristic zero, the following conclusions remain true in positive characteristic.
\begin{lemma}\emph{(\cite{ref15} Theorem ~3.2)}\label{zq}
Let $\mathbf{Z}[X_1,\ldots,X_{d-1};X_d;X_{d+1}]$ be the ring of polynomials in $d+1$ variables with integral coefficients which are symmetric in $X_1,\ldots,X_{d-1}$, and let $A(\bar{F})$ be the Chow ring of $\bar{F}$.
The natural morphism $\mathbf{Z}[X_1,\ldots,X_{d-1};X_d;X_{d+1}]\rightarrow A(\bar{F})$ is surjective and its kernel is the ideal generated by
$\sum_{i}(X_1,\ldots,X_{d+1}), (n-d-1)<i\leq n$, where\[\sum_{i}(X_1,\ldots,X_{d+1}):=\sum_{\alpha_1+\cdots+\alpha_{d+1}=i}X_1^{\alpha_1}\cdots X_{d+1}^{\alpha_{d+1}},\] and $\alpha_1,\ldots,\alpha_{d+1}$ are nonnegative integers.
\end{lemma}
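For instance, in two variables and degree two,
\[\sum_2(X_1,X_2)=X_1^2+X_1X_2+X_2^2;\]
in general $\sum_{i}$ is just the complete homogeneous symmetric polynomial of degree $i$ in its arguments.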
\begin{lemma}\emph{(\cite{ref15} Lemma~ 5.1)}\label{pic}
The Picard group of $\bar{F}$ is generated by ${q_1}^{\ast}\mathcal{O}_{G}(1)$, $\mathcal{H}_{H_{d}^{\vee}}$ and $\mathcal{H}_{Q_{n-d}}$. The Chern polynomials are \[c_{\mathcal{H}_{H_{d}^{\vee}}}(T)=T+X_d, ~c_{\mathcal{H}_{Q_{n-d}}}(T)=T-X_{d+1}.\] Here
$c_{E}(T):=T^r-c_1(E)T^{r-1}+\cdots+(-1)^{r}c_r(E)$ is the Chern polynomial of a rank $r$ bundle $E$.
\end{lemma}
\begin{lemma}\emph{(\cite{ref15} Proposition ~2.3, 2.5)} \label{ta}
The restriction of the relative cotangent bundle $\Omega_{\bar{F}/G}$ to every $q_2$-fibre $\widetilde{L}={q_2}^{-1}(L)\subseteq \bar{F}$ has the following form\[
\Omega_{\bar{F}/G}|\widetilde{L}=\mathcal{O}_{\widetilde{L}}(1)^{\oplus n-2}.
\]
\end{lemma}
\section{Uniform vector bundles of rank $r$ $(r<d)$ on $G$}
In this section, we suppose $k$ is an algebraically closed field of arbitrary characteristic.
\begin{prop}\label{e}
Let $E$ be an algebraic vector bundle of rank $r$ over $G=G_k(d,n)$ and assume $E|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L\subseteq G$. Then $E$ is trivial.
\end{prop}
\begin{proof}
We prove the theorem by induction on $d$. For $d=1$, the Grassmannian is just $\mathbb{P}_k^{n-1}$, the result holds (see \cite{ref14} Theorem 3.2.1). Let's consider the diagram
\begin{align}
\xymatrix{
F(d-1,d)\ar[d]^{q_1} \ar[r]^-{q_2} & G_k(d-1,n).\\
G
}
\end{align}
It's not hard to see that every $q_2$-fibre $q_2^{-1}(x)$ is isomorphic to $\mathbb{P}_k^{n-d}$ and that $q_1(q_2^{-1}(x))\cong \mathbb{P}_k^{n-d}$ consists of the $d$-dimensional linear subspaces containing the $(d-1)$-dimensional linear subspace corresponding to $x$. By assumption, the restriction of $E$ to every line in $q_1(q_2^{-1}(x))$ is trivial, thus $E|q_1(q_2^{-1}(x))$ is trivial. Next, let's consider the coherent sheaf\[
E'={q_2}_{*}q_1^{*}E.
\]
Note that $E'$ is an algebraic vector bundle of rank $r$ over $G_k(d-1,n)$ and $q_1^{*}E\cong q_2^{*}E'$, because $q_1^{*}E|q_2^{-1}(x)\cong E|q_1(q_2^{-1}(x))$ is trivial on all $q_2$-fibres.
\textbf{Claim.} $E'|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$ in $G_k(d-1,n)$.
In fact, because $q_1^{*}E|q_1^{-1}(y)$ is trivial for every $y\in G$,\[
E'|q_2(q_1^{-1}(y))\cong q_2^{*}E'|q_1^{-1}(y)\cong q_1^{*}E|q_1^{-1}(y)
\]
is trivial for every $y\in G$. Since every line $L\subseteq G_k(d-1,n)$ is contained in some set $q_2(q_1^{-1}(y))$, the restriction $E'|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$ in $G_k(d-1,n)$.
By the induction hypothesis, $E'$ is trivial. Thus $q_1^{*}E\cong q_2^{*}E'$ is trivial, so is $E\cong {q_1}_{*}q_1^{*}E$.
\end{proof}
\begin{cor}\label{trivial}
If $E$ is a globally generated vector bundle of rank $r$ over $G$ with $c_1(E)=0$, then $E$ is trivial.
\end{cor}
\begin{proof}
Since $E$ is globally generated, we have an exact sequence
\[0\rightarrow K \rightarrow \mathcal{O}_G^{\oplus N}\rightarrow E\rightarrow 0.\]
Restricting this sequence to a line $L\subseteq G$ we get\[
0\rightarrow K|L \rightarrow \mathcal{O}_L^{\oplus N}\rightarrow E|L\rightarrow 0.
\]
Suppose $E|L=\oplus_{i=1}^r \mathcal{O}_L(a_i)$; since $E|L$ is globally generated as well, we have $a_i\ge 0$, $1\le i\le r$. If $c_1(E)=0$, then we must have $a_i=0$ for all $i$. Thus $E$ is trivial on every line and hence trivial by Proposition \ref{e}.
\end{proof}
\begin{thm}\label{b}
For $r<d$ every uniform $r$-bundle over $G$ splits as a sum of line bundles.
\end{thm}
\begin{proof}
We prove this theorem by induction on $r$. For $r=1$, there is nothing to prove. Suppose the assertion is true for all uniform $r'$-bundles with $1\leq r'<r<d$. If $E$ is a uniform $r$-bundle, after twisting with an appropriate line bundle and dualizing if necessary, we can assume that $E$ has the splitting type
\[\underline{a}_E=(a_1,\ldots,a_r),
\]
where $a_1\leq\cdots\leq a_r$ and $a_1=\cdots=a_t=0, ~a_{t+1}>0$. If $t=r$, then $E$ is trivial by Proposition \ref{e}. Therefore let $t<r$, i.e.,
\[\underline{a}_E=(0,\ldots,0,a_{t+1},\ldots,a_r),~a_{t+i}>0, ~\text{for}~i=1,\ldots,r-t.
\]
Let's consider the standard diagram
\begin{align}\label{key}
\xymatrix{
\bar{F}\ar[d]^{q_1} \ar[rr]^-{q_2} && F(d-1,d+1).\\
G
}
\end{align}
For $L\in F(d-1,d+1)$, the $q_2$-fibre
\[\widetilde{L}={q_2}^{-1}(L)=\{(x,L)|x\in L\},\]
is mapped isomorphically under $q_1$ to the line $L$ in $G$ and we have \[{q_1}^\ast E|\widetilde{L}\cong E|L.\]
For $x\in G$, the $q_1$-fibre over $x$,
\[{q_1}^{-1}(x)=\{(x,L)|x\in L\},
\]
is mapped isomorphically under $q_2$ to the subvariety
\[\text{VMRT}_x=\{L\in F(d-1,d+1)|x\in L\}\cong \mathbb{P}_k^{d-1}\times\mathbb{P}_k^{n-d-1}\]
(VMRT$_x$ means the variety of minimal rational tangents at $x$. We refer to \cite{ref22} for a complete account on the VMRT).
Because \[E|L\cong\mathcal{O}_L^{\oplus t}\oplus\bigoplus\limits_{i=1}^{r-t} \mathcal{O}_L(a_{t+i}),~ a_{t+i}>0,\]
\[h^0\left({q_2}^{-1}(L),{q_1}^\ast (E^{\vee})|_{q_2^{-1}(L)}\right)=t\] for all $L\in F(d-1,d+1)$.
Thus the direct image ${q_2}_{\ast}{q_1}^{\ast}(E^{\vee})$ is a vector bundle of rank $t$ over $F(d-1,d+1)$.
The canonical homomorphism of sheaves\[
{q_2}^{\ast}{q_2}_{\ast}{q_1}^{\ast}(E^{\vee})\rightarrow {q_1}^{\ast}(E^{\vee})
\]
makes $\widetilde{N}^{\vee}:={q_2}^{\ast}{q_2}_{\ast}{q_1}^{\ast}(E^{\vee})$ a subbundle of ${q_1}^{\ast}(E^{\vee})$. Because over each ${q_2}$-fibre $\widetilde{L}$, the evaluation map \[
\widetilde{N}^{\vee}|\widetilde{L}=H^0(\widetilde{L},{q_1}^{\ast}(E^{\vee})|\widetilde{L})\otimes_{k}\mathcal{O}_{\widetilde{L}}\rightarrow {q_1}^{\ast}(E^{\vee})|\widetilde{L}
\]
identifies $\widetilde{N}^{\vee}|\widetilde{L}$ with $\mathcal{O}_L^{\oplus t}\subseteq\mathcal{O}_L^{\oplus t}\oplus\bigoplus_{i=1}^{r-t} \mathcal{O}_L(-a_{t+i})=E^{\vee}|L.$
Over $\bar{F}$ we thus obtain an exact sequence\[
0\rightarrow\widetilde{M}\rightarrow {q_1}^{\ast}E\rightarrow\widetilde{N}\rightarrow0
\]
of vector bundles, whose restriction to ${q_2}$-fibres $\widetilde{L}$ looks as follows:
$$
\xymatrix{
0\ar[r] &\widetilde{M}|\widetilde{L} \ar[r] \ar[d]_\cong& {q_1}^{\ast}E|\widetilde{L} \ar[r] \ar[d]_\cong&\widetilde{N}|\widetilde{L}\ar[r]\ar[d]_\cong&0\\
0\ar[r] &\bigoplus_{i=1}^{r-t} \mathcal{O}_L(a_{t+i})\ar[r] &\mathcal{O}_L^{\oplus t}\oplus\bigoplus_{i=1}^{r-t} \mathcal{O}_L(a_{t+i}) \ar[r] &\mathcal{O}_L^{\oplus t}\ar[r]&0.\\
}
$$
By Lemma \ref{KQ} below, there are bundles $M={q_1}_{\ast}\widetilde{M}$, $N={q_1}_{\ast}\widetilde{N}$ over $G$ with \[\widetilde{M}={q_1}^{\ast}M,\widetilde{N}={q_1}^{\ast}N.
\]
$M$ and $N$ are then necessarily uniform, and by projecting the bundle sequence\[
0\rightarrow {q_1}^{\ast}M\rightarrow {q_1}^{\ast}E\rightarrow {q_1}^{\ast}N\rightarrow0
\]
onto $G$ we obtain the exact sequence
\begin{align}\label{c}
0\rightarrow M\rightarrow E\rightarrow N\rightarrow 0.
\end{align}
Because $M$ and $N$ are uniform vector bundles of rank smaller than $r$, by the induction hypothesis,
\[
M=\bigoplus\limits_{i=1}^{r-t} \mathcal{O}_{G}(a_{t+i}),N=\mathcal{O}_{G}^{\oplus t}.
\]
It follows from the Kempf vanishing theorem that $H^1(G,N^{\vee}\otimes M)=H^1\big(G,\bigoplus_{i=1}^{r-t}\mathcal{O}_{G}(a_{t+i})\big)=0$. Thus the exact sequence (\ref{c}) splits and hence so does $E$.
\end{proof}
\begin{lemma}\label{KQ}
There are bundles $M$, $N$ over $G$ with $\widetilde{M}={q_1}^{\ast}M,\widetilde{N}={q_1}^{\ast}N$.
\end{lemma}
\begin{proof}
To prove the lemma, it suffices to show that $\widetilde{M}$,$\widetilde{N}$ are trivial on all ${q_1}$-fibres (the canonical morphisms ${q_1}^{\ast}{q_1}_{\ast}\widetilde{M}\rightarrow\widetilde{M}$, ${q_1}^{\ast}{q_1}_{\ast}\widetilde{N}\rightarrow\widetilde{N}$ are then isomorphisms).
Because $\widetilde{N}^{\vee}$ is a subbundle of ${q_1}^{\ast}(E^{\vee})$ of rank $t$, for every point $x\in G$, it provides a morphism\[
\varphi:\text{VMRT}_x\rightarrow\mathbb{G}_k(t-1,\mathbb{P}_k(E^{\vee}_x)).
\]
We claim that $\psi:=\varphi|\mathbb{P}_k^{d-1}$ is constant for any $\mathbb{P}_k^{d-1}\subseteq \text{VMRT}_x$.
Let's consider $\psi^{\ast}H_t$ and $\psi^{\ast}Q_{r-t}$ (the pull backs of the universal bundle $H_t$ and the universal quotient bundle $Q_{r-t}$ under $\psi$), which are vector bundles on $\mathbb{P}_k^{d-1}$. We have the exact sequence\[
0\rightarrow\psi^{\ast}H_t\rightarrow\mathcal{O}^{\oplus r}_{\mathbb{P}_k^{d-1}}\rightarrow\psi^{\ast}Q_{r-t}\rightarrow 0.
\]
Then \[
c(\psi^{\ast}H_t).c(\psi^{\ast}Q_{r-t})=1,
\]
i.e. $\big(1+c_1(\psi^{\ast}H_t)+\cdots+c_t(\psi^{\ast}H_t)\big).\big(1+c_1(\psi^{\ast}Q_{r-t})+\cdots+c_{r-t}(\psi^{\ast}Q_{r-t})\big)=1$.
Because $r<d$ and the Chow ring of $\mathbb{P}_k^{d-1}$ is $A(\mathbb{P}_k^{d-1})=\mathbb{Z}[\mathcal{H}]/{(\mathcal{H}^d)}$, where $\mathcal{H}$ is the rational equivalence class of a hyperplane, all terms in the above identity have degree smaller than $d$, so the relation $\mathcal{H}^d=0$ plays no role and the identity already holds in the polynomial ring $\mathbb{Z}[\mathcal{H}]$; since both factors are polynomials with constant term $1$ whose product is $1$, this must imply \[
c(\psi^{\ast}H_t)=1, ~ c(\psi^{\ast}Q_{r-t})=1.
\]
(Note that here we use the condition $r<d$; otherwise the proof doesn't work.)
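This step can also be sanity-checked symbolically; the following sketch (in Python/SymPy, for the hypothetical values $t=2$, $r=3$, $d=4$, which play no role in the proof) solves the coefficient equations coming from $c(\psi^{\ast}H_t)\cdot c(\psi^{\ast}Q_{r-t})=1$:
\begin{verbatim}
import sympy as sp

H, a1, a2, b1 = sp.symbols('H a1 a2 b1')
c_sub  = 1 + a1*H + a2*H**2   # c(psi^* H_t),     t = 2
c_quot = 1 + b1*H             # c(psi^* Q_{r-t}), r - t = 1
# In A(P^{d-1}) = Z[H]/(H^d) with d = 4 the product equals 1; since
# r = 3 < d = 4 no truncation occurs, so the coefficients of H, H^2, H^3
# must vanish already in Z[H]:
prod = sp.expand(c_sub * c_quot)
eqs = [prod.coeff(H, k) for k in range(1, 4)]
print(sp.solve(eqs, [a1, a2, b1]))   # -> [(0, 0, 0)]
\end{verbatim}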
In particular\[
c_1(\psi^{\ast}H_t)=0 , ~c_1(\psi^{\ast}Q_{r-t})=0.
\]
Let $\mathcal{O}_{\mathbb{G}_k(t-1,r-1)}(1)$ be the Pl$\ddot{u}$cker bundle of $\mathbb{G}_k(t-1,r-1)$; since $\deg \psi^{\ast}\mathcal{O}_{\mathbb{G}_k(t-1,r-1)}(1)=-c_1(\psi^{\ast}H_t)=0$, $\psi$ is constant.
Because $\text{VMRT}_{x}\cong \mathbb{P}_k^{d-1}\times\mathbb{P}_k^{n-d-1}$ is covered by the family of subvarieties $\mathbb{P}_k^{d-1}\times\{y\}$ and $\{z\}\times\mathbb{P}_k^{d-1}$ (the latter exist because $d-1\le n-d-1$), and any two members of this family can be connected by a chain of members, each meeting the next, we obtain that $\varphi$ is constant. Thus $\widetilde{N}$ is trivial on all ${q_1}$-fibres. Moreover, for every point $x\in G$, $\widetilde{M}^{\vee}|{q_1}^{-1}(x)$ is globally generated and $c_1(\widetilde{M}^{\vee}|{q_1}^{-1}(x))=0$, so $\widetilde{M}^{\vee}|{q_1}^{-1}(x)$ is trivial due to Corollary \ref{trivial}.
\end{proof}
\begin{cor}\label{d}
Let $E$ be a vector bundle on $G=G(d,n)$ $(d\ge 2)$ over an algebraically closed field. If $E$ splits as a direct sum of line bundles when it restricts to every $\mathbb{P}_k^{d}\subseteq G$, then $E$ splits as a direct sum of line bundles on $G$.
\end{cor}
\begin{proof}
The condition implies that $E$ is uniform. In fact, any line $L$ in $G$ is given by a $(d-1)$-dimensional vector space $V_{d-1}$ and a $(d+1)$-dimensional vector space $V_{d+1}$. Since $G(d, V_{d+1})$ and $\{W\in G(d, U)\,|\,W\supseteq V_{d-1}\}$, where $U$ is any $2d$-dimensional subspace containing $V_{d+1}$ (such $U$ exists because $n\ge 2d$), are two different subsets of $G$ which are both isomorphic to $\mathbb{P}^d$ and contain $L$, we can see that $E$ has uniform splitting type.
We use the same notations as in Theorem \ref{b} to prove the corollary by induction on $r$ (the rank of $E$). If we have the exact sequence of vector bundles on $G$
\begin{align}\label{a}
0\rightarrow M\rightarrow E\rightarrow N\rightarrow 0,
\end{align}
where the rank of $M$ and $N$ are smaller than $r$,
such that \[
M|Z=\bigoplus\limits_{i=1}^{r-t} \mathcal{O}_Z(a_{t+i}),N|Z=\mathcal{O}_Z^{\oplus t},
\]
for every projective subspace $Z$ of dimension $d$, then by the induction hypothesis, $M$ and $N$ split. It follows from the Kempf vanishing theorem that $H^1(G,N^{\vee}\otimes M)=0$. Thus the above exact sequence splits and hence also $E$.
Similar to the proof of Theorem \ref{b}, we can obtain an exact sequence in $\bar{F}$\[
0\rightarrow\widetilde{M}\rightarrow {q_1}^{\ast}E\rightarrow\widetilde{N}\rightarrow0. \]
If we prove that the morphism $\varphi$ is constant for every $x\in G$, then there exist two bundles $M$, $N$ over $G$ with $\widetilde{M}={q_1}^{\ast}M,\widetilde{N}={q_1}^{\ast}N$. By projecting the bundle sequence\[
0\rightarrow {q_1}^{\ast}M\rightarrow {q_1}^{\ast}E\rightarrow {q_1}^{\ast}N\rightarrow0
\]
onto $G$, we can get the desired exact sequence (\ref{a}).
Thus, to prove the existence of the above exact sequence, it suffices to show that the map\[
\varphi:\text{VMRT}_x\rightarrow\mathbb{G}_k(t-1,\mathbb{P}_k(E^{\vee}_x))
\]
is constant for every $x\in G$.
Given a projective subspace $Z$ of dimension $d$ and a line $L\subseteq Z$, we take any point $x\in L$ and denote by $Z'$ the subspace of $\text{VMRT}_x$ corresponding to the tangent directions to $Z$ at $x$. By the hypothesis, $E|Z$ is the direct sum of line bundles, so
\[
\varphi|Z':Z'\rightarrow\mathbb{G}_k(t-1,\mathbb{P}_k(E^{\vee}_x))
\]
is constant. Since $G$ is covered by linear projective subspaces of dimension $d$ and $\text{VMRT}_{x}\cong \mathbb{P}_k^{d-1}\times\mathbb{P}_k^{n-d-1}$, $\varphi$ is constant for every $x\in G$ for the same reason as in the proof of Lemma \ref{KQ}.
\end{proof}
\section{Uniform vector bundles of rank $d$ on $G$}
In this section, we suppose that the field $k$ has positive characteristic $p$. Although Guyot's paper \cite{ref15} works over a field of characteristic zero, many conclusions remain true in positive characteristic, except for the argument that the splitting type of a uniform non-splitting $d$-bundle over $G$ is $(0,\ldots,0,-1)$. The main reason is that the first part of Guyot's paper, which is independent of the characteristic of the field $k$, is devoted to studying the Chow ring and axiomatically computing Chern classes of vector bundles on flag varieties.
The uniform vector bundle $E$ can be characterized as follows \cite{ref15}. Let (\ref{key}) be the standard diagram and let $L$ be a line in $G$; then $E$ is uniform with
$$E|L=\mathcal{O}_L(u_1)^{\oplus r_1}\oplus\cdots\oplus\mathcal{O}_L(u_t)^{\oplus r_t}, ~u_1>\cdots>u_t,$$
if and only if there is a filtration
$$0=H N_{q}^{0}\left({q_1}^{*} E\right)\subseteq H N_{q}^{1}\left({q_1}^{*} E\right)\subseteq\cdots\subseteq H N_{q}^{t}\left({q_1}^{*} E\right)={q_1}^*E$$
of ${q_1}^{\ast}E$ by subbundles $ H N_{q}^{i}\left({q_1}^{*} E\right)$ such that $H N_{q}^{i}\left({q_1}^{*} E\right)/H N_{q}^{i-1}\left({q_1}^{*} E\right)\cong {q_2}^*(E_i)\otimes O_{q}(u_i)$, where
$E_i$ is an algebraic vector bundle of rank $r_i$ over $F(d-1, d+1)$, $O_{q}(1)= \mathcal{H}_{H_{d}^{\vee}}$ and\[
H N_{q}^{i}\left({q_1}^{*} E\right)=\operatorname{Im}\left[ {q_2}^{*} {q_2}_{*}\left({q_1}^{*} E \otimes O_{q}\left(-u_{i}\right)\right) \otimes O_{q}\left(u_{i}\right) \rightarrow {q_1}^{*} E\right].
\]
This filtration is the relative Harder-Narasimhan filtration of ${q_1}^*E$ (cf. \cite{ref12,ref13,ref14}).
Since rank $E=d\le n-d$, by Whitney's formula and Lemma \ref{zq}, we have
\begin{equation}\label{chow}
c_{{q_1}^{*} E}(T)=\prod_{i=1}^{t}c_{H N_{q}^{i}\left({q_1}^{*} E\right)/H N_{q}^{i-1}\left({q_1}^{*} E\right)}(T)+a\sum_{n-d}(X_1,\ldots,X_{d+1}).
\end{equation}
\begin{thm}\label{z}
If $E$ is a uniform vector bundle over $G$ of rank $d$, then $E\cong\oplus_{i=1}^{d}\mathcal{O}_G(a_i)$, $E\cong {F^m}^{\ast}H_d\otimes\mathcal{O}_G(v_1)$ or $E\cong {F^m}^{\ast}(H_d^{\vee})\otimes\mathcal{O}_G(v_2)$, $m\geq0$ and $a_i, v_1, v_2 \in \mathbb{Z}$.
\end{thm}
\begin{proof}
If $a=0$ in the equality (\ref{chow}) and $E$ can't split as a sum of line bundles, then by the assertion of Guyot (\cite{ref15} Corollary 4.1.1 b), the expression of $c_{{q_1}^{*} E}(T)$ can only contain $u_1$ and $u_2$ and $r_1=d-1$, $r_2=1$. So, after twisting with an appropriate power of $\mathcal{O}_{G}(1)$ and dualizing if necessary, we can let $u_1=0$ and assume $E$ is of type $(0,0,\ldots,0,b), b<0$. Then we can write \[
c_{{q_1}^{*} E}(T)=\prod_{i=1}^{d}(T+bX_i).
\]
So we get \[
c_{H N_{q}^{1}\left({q_1}^{*} E\right)}(T)=\prod_{i=1}^{d-1}(T+bX_i)~ \text{and} ~ c_{{q_1}^{*} E/H N_{q}^{1}\left({q_1}^{*} E\right)}(T)=T+bX_d.
\]
By Lemma \ref{pic}, we get ${q_1}^{*} E/H N_{q}^{1}\left({q_1}^{*} E\right)\cong \big((\mathcal{H}_{H_{d}^{\vee}})^{\vee}\big)^{\otimes(-b)}$. Hence on $\bar{F}$, we have the following exact sequence
\begin{equation}\label{o}
0\rightarrow H N_{q}^{1}\left({q_1}^{*} E\right)\rightarrow {q_1}^\ast E\rightarrow \big(({\mathcal{H}_{H_{d}^{\vee}})^{\vee}}\big)^{\otimes(-b)}\rightarrow 0.
\end{equation}
By the universal property of $\mathbb{P}(E)$, there is a unique $G$-morphism $\sigma:\bar{F}\rightarrow \mathbb{P}(E)$ such that \[
\sigma^{\ast}\mathcal{O}_{\mathbb{P}(E)}(1)= \big((\mathcal{H}_{H_{d}^{\vee}})^{\vee}\big)^{\otimes(-b)}, \sigma^{\ast}\Omega_{\mathbb{P}(E)/G}=H N_{q}^{1}\left({q_1}^{*} E\right)\otimes \big(\mathcal{H}_{H_{d}^{\vee}}\big)^{\otimes (-b)}.
\]
Let's consider the following diagram
\begin{align}
\xymatrix{
F_1 \ar[rrdd]_{q_{11}} && \bar{F}\ar[dd]^{q_1}\ar[ll]_{pr_1} \ar[rr]^{pr_2} && F_2 \ar[lldd]^{q_{12}} \\
\\
&&G.
}
\end{align}
Case 1. $b=-1$:
Projecting the exact sequence\[
0\rightarrow H N_{q}^{1}\left({q_1}^{*} E\right)\rightarrow {q_1}^\ast E\rightarrow ({\mathcal{H}_{H_{d}^{\vee}})^{\vee}}\rightarrow 0
\]
onto $F_1$ and by Remark \ref{g}, we get the exact sequence ($R^1{pr_1}_{*}H N_{q}^{1}\left({q_1}^{*} E\right)=0$)
\[
0\rightarrow {pr_1}_{*}H N_{q}^{1}\left({q_1}^{*} E\right)\rightarrow {q_{11}}^\ast E\rightarrow \mathcal{O}_{F_1}(1)\rightarrow 0.
\]
Restricting the above exact sequence to a fibre of ${q_{11}}$, ${q_{11}}^{-1}(x)\cong \mathbb{P}_k^{d-1}$, we see that \[
{pr_1}_{*}H N_{q}^{1}\left({q_1}^{*} E\right)|_ {q_{11}^{-1}(x)}\cong \Omega_{\mathbb{P}_k^{d-1}}(1).
\]
Hence we get ${q_{11}}_{*}({pr_1}_{*}H N_{q}^{1}\left({q_1}^{*} E\right))=\textbf{R}^{1}{q_{11}}_{*}({pr_1}_{*}H N_{q}^{1}\left({q_1}^{*} E\right))=0$. In particular, \[E\cong {q_{11}}_{*}\mathcal{O}_{F_1}(1)\cong H_d.\]
Case 2. $b<-1$:
By restricting the induced map $d\sigma: \sigma^{\ast}\Omega_{\mathbb{P}(E)/G}=H N_{q}^{1}\left({q_1}^{*} E\right)\otimes \big({\mathcal{H}_{H_{d}^{\vee}}}\big)^{\otimes(-b)}\rightarrow\Omega_{\bar{F}/G}$ to any $q_2$-fibre $\widetilde{L}={q_2}^{-1}(L)\subseteq \bar{F}$, and by Lemma \ref{ta}, we get \[
d\sigma|\widetilde{L}:\mathcal{O}_{\widetilde{L}}(-b)^{\oplus d-1}\rightarrow\mathcal{O}_{\widetilde{L}}(1)^{\oplus n-2}.
\]
Because $b<-1$, we have $d\sigma|{q_2}^{-1}(L)=0$ for all $L\in F(d-1,d+1)$, i.e. $d\sigma=0$. By Lemma \ref{ka}, $\sigma$ can be factored through the relative Frobenius morphism $F_{\bar{F}/G}^m$ for some positive integer $m$:
\[
\xymatrix{
\bar{F} \ar@/^2pc/[rrrr]^{\sigma} \ar[rrdd]_{q_1}\ar[rr]^{F_{\bar{F}/G}^m} && \bar{F}^{(p^m)}\ar[dd]^{q_1'} \ar[rr]^{\sigma'} && \mathbb{P}(E) \ar[lldd]^{\pi} \\
\\
&&G.
}
\]
Let's consider the following diagram:
\begin{align}
\xymatrix{
\bar{F}_1^{(p^m)} \ar[rrdd]_{q_{11}'} && \bar{F}^{(p^m)}\ar[dd]^{q_1'}\ar[ll]_{pr_1'}\ar[rr]^{pr_2'} && \bar{F}_2^{(p^m)} \ar[lldd]^{q_{12}'}. \\
\\
&&G.
}
\end{align}
By Remark \ref{F}, we have $\bar{F}_1^{(p^m)}=\mathbb{P}({F^m}^{\ast}H_d)$, $\bar{F}_2^{(p^m)}=\mathbb{P}({F^m}^{\ast}(Q_{n-d}^{\vee}))$.
Because $\sigma$ factors through the relative Frobenius morphism $F_{\bar{F}/G}^m$, on $\bar{F}^{(p^m)}$ we have the exact sequence
\begin{equation}\label{q}
0\rightarrow (H N_{q}^{1}\left({q_1}^{*} E\right))'\rightarrow {q_1'}^\ast E\rightarrow\big((\mathcal{H}'_{H_{d}^{\vee}})^{\vee}\big)^{\otimes(\frac{-b}{p^m})}\rightarrow0.
\end{equation}
where $\mathcal{H}'_{H_{d}^{\vee}}$ is the tautological bundle on $\bar{F}^{(p^m)}$ associated to $\bar{F}_1^{(p^m)}$, and the pull back of the exact sequence (\ref{q}) under $F_{\bar{F}/G}^m$ is the exact sequence (\ref{o}).
We also have the induced map
\[d{\sigma'}:{\sigma'}^\ast\Omega_{\mathbb{P}(E)/G}\rightarrow \Omega_{\bar{F}^{(p^m)}/G}.\]
By restricting the map to any $q_2$-fibre, we get
\[
d\sigma'|\widetilde{L}:\mathcal{O}_{\widetilde{L}}(-b)^{\oplus d-1}\rightarrow\mathcal{O}_{\widetilde{L}}(p^m)^{\oplus n-2}.
\]
By Lemma \ref{ka}, we may assume $-b\leq p^m$. On the other hand, we have $p^m|-b$, thus $-b=p^m$. On $\bar{F}^{(p^m)}$, we now have the following exact sequence\[
0\rightarrow (H N_{q}^{1}\left({q_1}^{*} E\right))'\rightarrow {q_1'}^\ast E\rightarrow({\mathcal{H}'_{H_{d}^{\vee}})^{\vee}}\rightarrow0.
\]
By projecting the exact sequence to $\bar{F}_1^{(p^m)}$, we get an exact sequence ($R^1{pr_1'}_{*}(H N_{q}^{1}\left({q_1}^{*} E\right))'=0$)
\[
0\rightarrow {pr_1'}_{*}(H N_{q}^{1}\left({q_1}^{*} E\right))'\rightarrow {q_{11}'}^\ast E\rightarrow \mathcal{O}_{\bar{F}_1^{(p^m)}}(1)\rightarrow 0.
\]
As in Case 1, we find ${q_{11}'}_{*}({pr_1'}_{*}(H N_{q}^{1}\left({q_1}^{*} E\right))')=\textbf{R}^{1}{q_{11}'}_{*}({pr_1'}_{*}(H N_{q}^{1}\left({q_1}^{*} E\right))')=0$. In particular, \[
E\cong {q_{11}'}_{\ast}\mathcal{O}_{\bar{F}_1^{(p^m)}}(1)\cong {F^m}^{\ast}H_d.\]
If $a\neq0$, the equation (\ref{chow}) implies that $d=n-d$. Suppose that $E$ can't split as a sum of line bundles. After twisting with an appropriate power of $\mathcal{O}_{G}(1)$ and dualizing if necessary, we can assume $E$ is of type $(0,0,\ldots,0,\beta)(\beta<0)$. By Proposition \ref{prop} below, we get $E\cong {F^m}^{\ast}Q_{n-d}^{\vee}(m\geq 0)$, where $Q_{n-d}^{\vee}$ can be viewed as the universal subbundle of the dual Grassmannian. Hence we get the desired result.
\end{proof}
\begin{prop}\label{prop}
Let $E$ be a uniform vector bundle over $G$ of rank $n-d$, and suppose $a\neq0$. Then $E\cong\oplus_{i=1}^{n-d}\mathcal{O}_G(a_i)$, $E\cong {F^m}^{\ast}Q_{n-d}^{\vee}\otimes\mathcal{O}_G(v_1)$ or $E\cong {F^m}^{\ast}Q_{n-d}\otimes\mathcal{O}_G(v_2)$, where $m\geq 0$ and $v_1, v_2\in \mathbb{Z}$.
\end{prop}
\begin{proof}
By the assertion of Guyot (\cite{ref15} Lemma 4.2.3), if $E$ can't split as a sum of line bundles, then there are two cases:
1) $r_1=n-d-1$, $r_2=1$: after tensoring $E$ with a line bundle, we may assume $c_{{q_1}^{*}E}(T)=\sum_{n-d}(T,\beta X_1,\ldots,\beta X_d)~(\beta<0).$
2) $r_1=1$, $r_2=n-d-1$: after tensoring $E$ with a line bundle, we may assume $c_{{q_1}^{*}E}(T)=\sum_{n-d}(T,-\beta X_1,\ldots,-\beta X_d)~(\beta<0).$
In the first case, we can write
\begin{eqnarray}
&&c_{{q_1}^{*}E}(T)-\beta^{n-d}\sum_{n-d}( X_1,\ldots, X_{d+1})\\
&=&(T-\beta X_{d+1})\big(\sum_{n-d-1}(T,\beta X_1,\ldots,\beta X_{d+1})\big).
\end{eqnarray}
So we get \[
c_{H N_{q}^{1}\left({q_1}^{*} E\right)}(T)=\sum_{n-d-1}(T,\beta X_1,\ldots,\beta X_{d+1}) ~\text{and}~ c_{{q_1}^{*} E/H N_{q}^{1}\left({q_1}^{*} E\right)}(T)=T-\beta X_{d+1}.
\]
By Lemma \ref{pic}, we get ${q_1}^{*} E/H N_{q}^{1}\left({q_1}^{*} E\right)\cong \big(({\mathcal{H}_{Q_{n-d}})^{\vee}}\big)^{\otimes(-\beta)}$. Hence on $\bar{F}$, we have the following exact sequence
\begin{equation}
0\rightarrow H N_{q}^{1}\left({q_1}^{*} E\right)\rightarrow {q_1}^\ast E\rightarrow \big(({\mathcal{H}_{Q_{n-d}})^{\vee}}\big)^{\otimes(-\beta)}\rightarrow 0.
\end{equation}
According to the proof in the above theorem, we get $E\cong {F^m}^{\ast}Q_{n-d}^{\vee}$.
For the second case, we get $E\cong {F^m}^{\ast}Q_{n-d}~(m\geq 0)$ for the similar reason.
\end{proof}
Therefore, we have proved Theorem \ref{m} completely.
According to \cite{ref24} Section 3.3, the family of semistable torsion free sheaves over normal projective varieties in characteristic zero forms a bounded family. It is still unknown whether the same result is true in characteristic $p$, but we conjecture that it is true over the Grassmannian $G$.
\begin{conj}
Denote $\mathfrak{F}^{ss}_G(r, c_1, c_2)$ to be the class of all semistable (in any sense) torsion free sheaves $\mathcal{F}$ on $G$ of rank $r$ with $c_i(\mathcal{F})=c_i$ for $i=1,2$ in characteristic $p$. Then $\mathfrak{F}^{ss}_G(r, c_1, c_2)$ forms a bounded family.
\end{conj}
\begin{rem}
\emph{Langer (\cite{ref25}) proved that the family of strongly semistable torsion free sheaves over normal projective varieties in arbitrary characteristic forms a bounded family. But it is still unknown, even for Grassmannians, whether the statement is true for general semistable torsion free sheaves in characteristic $p$.}
\end{rem}
\section{Vector bundles on flag varieties}\label{flag}
Let $F:=F(d_1,\ldots,d_s)$ be the flag manifold parameterizing flags \[V_{d_1}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n\]
where $dim(V_{d_i})=d_i, 1\le i \le s$. In this section, we suppose that the characteristic of $k$ is zero.
For any integer $i$ $(1\le i \le s)$, there is a flag $V_{d_1}\subseteq\cdots\subseteq V_{d_{i-1}}\subseteq V_{d_i-1}\subseteq V_{d_i+1}\subseteq V_{d_{i+1}}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n$. It's not hard to see that the set of flags $V_{d_1}\subseteq\cdots\subseteq V_{d_{i-1}}\subseteq W\subseteq V_{d_{i+1}}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n$ such that\[
V_{d_i-1}\subseteq W\subseteq V_{d_i+1},~dim(W)=d_i
\]
is the projectivization of the quotient space $V_{d_i+1}/V_{d_i-1}$, so the set of such flags is isomorphic to $\mathbb{P}_k^1$. It follows that any $l\in F(d_1,\ldots,d_{i-1},d_i-1,d_i+1,d_{i+1},\ldots,d_s)$ determines a line $L\subseteq F$. We denote by $F^{(i)}:=F(d_1,\ldots,d_{i-1},d_i-1,d_i+1,d_{i+1},\ldots,d_s)$ the $i$-th irreducible component of the manifold of lines in $F$. (We re-emphasize that if the two adjacent integers in the expression of flag varieties such as $F^{(i)}$ are equal, we keep only one of them.)
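For example, with the conventions $d_0=0$ and $d_{s+1}=n$, on the complete flag manifold $F(1,\ldots,n-1)$ we have $d_j=j$, hence $d_i-1=d_{i-1}$ and $d_i+1=d_{i+1}$ for every $i$, and after deleting the repeated integers \[F^{(i)}=F(1,\ldots,i-1,i+1,\ldots,n-1);\] every $F^{(i)}$ therefore falls in Case I below.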
We can consider $F^{(i)}$ in the following two cases:
Case I: $d_i-1=d_{i-1}$ and $d_i+1=d_{i+1}$, then we have the natural projection $F\rightarrow F^{(i)}$.
Case II: $d_i-1\neq d_{i-1}$ or $d_i+1\neq d_{i+1}$, then we have the \emph{standard diagram}
\begin{align}
\xymatrix{
F(d_1,\ldots,d_{i-1},d_i-1,d_i,d_i+1,d_{i+1},\ldots d_s)\ar[d]^{q_1} \ar[r]^-{q_2} & F^{(i)}\\
F=F(d_1,\ldots,d_s).
}
\end{align}
On the flag manifold $F$, we denote by $H_{d_i}$ the \emph{universal subbundle} whose fiber at a point $[\Lambda]=[V_{d_1}\subseteq\cdots\subseteq V_{d_s}\subseteq k^n]\in F$ is the subspace $V_{d_i}$; that is,
\[(H_{d_i})_{[\Lambda]}=V_{d_i}.\]
Set $H_{d_i,d_j}=H_{d_j}/H_{d_i}$ $(j\ge i)$ as the \emph{universal quotient bundles}.
Let $E$ be an algebraic $r$-bundle over $F$. According to the theorem of Grothendieck, for every $l\in F^{(i)}$, there is an $r$-tuple\[
a_E^{(i)}(l)=(a_1^{(i)}(l),\ldots,a_r^{(i)}(l))\in\mathbb{Z}^{r}~\text{with}~ a_1^{(i)}(l)\geq\cdots\geq a_r^{(i)}(l)
\]
such that $E|L\cong\bigoplus_{j=1}^{r}\mathcal{O}_{L}(a_j^{(i)}(l))$. We give $\mathbb{Z}^{r}$ the lexicographical ordering, i.e., $(a_1,\ldots,a_r)\le(b_1,\ldots,b_r)$ if the first non-zero difference $b_i-a_i$ is positive. Let \[
\underline{a}_E^{(i)}=\inf_{l\in F^{(i)}}a_E^{(i)}(l).
\]
\begin{definition}\label{UEi}
$\underline{a}_E^{(i)}$ is the generic splitting type of $E$ with respect to $F^{(i)}$, $S_E^{(i)}=\{l\in F^{(i)}|a_E^{(i)}(l)>\underline{a}_E^{(i)}\}$ is the set of jump lines with respect to $F^{(i)}$. We define $U_E^{(i)}:=F^{(i)}\backslash S_E^{(i)}$.
\end{definition}
\begin{rem}\label{u}
\emph{Fix an integer $i$, $1\le i \le s$. Let \[
M_t(a_1,\ldots,a_t)=\{l\in F^{(i)}|(a_1^{(i)}(l),\ldots,a_t^{(i)}(l))>(a_1,\ldots,a_t)\}.
\]
Because of the semicontinuity theorem, the set $M_1(a_1)=\{l\in F^{(i)}\,|\,h^0(L,E(-a_1-1)|L)>0\}$ is Zariski-closed in $F^{(i)}$. By induction on $t$ we see that $S_E^{(i)}=M_r{(\underline{a}_E^{(i)})}$ is Zariski-closed in $F^{(i)}$. Thus $U_E^{(i)}$ is a non-empty Zariski-open subset of $F^{(i)}$ (see \cite{ref14} Lemma 3.2.2).}
\end{rem}
\begin{definition}(\cite{ref21} p.128 or ~\cite{ref23} Proposition 1.6)\label{normal}
Let $X$ be a nonsingular variety over $k$. A coherent sheaf $\mathcal{F}$ over $X$ is called normal if for every open $U\subseteq X$ and every closed subset $A\subseteq U$ of codimension at least $2$, the restriction map\[
\mathcal{F}(U)\rightarrow \mathcal{F}(U\backslash A)
\]
is an isomorphism.
\end{definition}
The following result holds over any algebraically closed field.
\begin{prop}\label{y}
Let $E$ be an algebraic vector bundle of rank $r$ over $F:=F(d_1,\ldots,d_s)$ and assume $E|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$. Then $E$ is trivial.
\end{prop}
\begin{proof}
We prove the theorem by induction on $s$. For $s=1$, the flag manifold is just $G(d_1,n)$ and the result holds by Proposition \ref{e}. Suppose the assertion is true for all flag manifolds $F(d_1',\cdots,d_{s-1}')$. Let's consider the natural projection \[
q:F=F(d_1,\ldots,d_s)\rightarrow F(d_2,\ldots,d_s).\]
It's not hard to see that every $q$-fibre $q^{-1}(x)$ is isomorphic to the Grassmannian $G(d_1,d_2)$. Since the restriction of $E$ to every line in the $q$-fibre $q^{-1}(x)$ is trivial by assumption, $E$ is trivial on all $q$-fibres by Proposition \ref{e}. It follows that $E'=q_{*}E$ is an algebraic vector bundle of rank $r$ over $F(d_2,\ldots,d_s)$ and $E\cong q^{*}E'$.
\textbf{Claim.} $E'|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$ in $F(d_2,\ldots,d_s)$.
In fact, let $L$ be a line in the $i$-th $(2\le i\le s)$ irreducible component of the manifold of lines in $F$. Then $q(L)$ is a line in the $(i-1)$-th irreducible component of the manifold of lines in $F(d_2,\ldots,d_s)$. When $L$ runs through all lines in the $i$-th $(2\le i\le s)$ component of the manifold of lines in $F$, $q(L)$ runs through all lines in the $(i-1)$-th $(1\le i-1\le s-1)$ component of the manifold of lines in $F(d_2,\ldots,d_s)$. The projection $q$ induces an isomorphism\[
E'|q(L)\cong q^{*}E'|L\cong E|L.
\]
Since $E|L$ is trivial for all lines $L$ in the $i$-th $(2\le i\le s)$ irreducible component of the manifold of lines in $F$ by assumption, $E'|L$ is trivial for all lines $L$ in the $i$-th $(1\le i\le s-1)$ irreducible component of the manifold of lines in $F(d_2,\ldots,d_s)$. It follows that $E'|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$ in $F(d_2,\ldots,d_s)$.
By the induction hypothesis, $E'$ is trivial. Thus $E\cong q^{*}E'$ is trivial.
\end{proof}
\begin{lemma}(Descent Lemma \cite{ref19})\label{Descent}
Let $X$, $Y$ be nonsingular varieties over $k$, $f:X\rightarrow Y$ be a surjective submersion with connected fibres and $E$ be an algebraic $r$-bundle over $Y$. Let $\widetilde{K}\subseteq f^{\ast}E$ be a subbundle of rank $t$ in $f^{\ast}E$ and $\widetilde{Q}=f^{\ast}E/\widetilde{K}$ be its quotient. If \[
Hom(T_{X/Y},\mathcal{H}om(\widetilde{K},\widetilde{Q}))=0,
\]
then $\widetilde{K}$ is of the form $\widetilde{K}=f^{\ast}K$ for some algebraic subbundle $K\subseteq E$ of rank $t$.
\end{lemma}
\begin{lemma}\label{rt}
Let $\widetilde{F^{(i)}}:=F(d_1,\ldots,d_{i-1},d_i-1,d_i,d_i+1,d_{i+1},\ldots,d_s)$,
$\widetilde{L}={q_2}^{-1}(l)\subseteq\widetilde{F^{(i)}}$ for $l\in F^{(i)}$. If $F^{(i)}$ is in Case II, then for the relative cotangent bundle $\Omega_{\widetilde{F^{(i)}}/F}$, we have\[
\Omega_{\widetilde{F^{(i)}}/F}|\widetilde{L}=\mathcal{O}_{\widetilde{L}}(1)^{\oplus d_{i+1}-d_{i-1}-2}.
\]
\end{lemma}
\begin{proof}
By the definition of the vector bundles $H_{d_i,d_j}$ $(j>i)$, it's easy to check that on $\widetilde{F^{(i)}}$ we have the following two exact sequences:
\begin{align}\label{1}
0\rightarrow H_{d_i-1,d_i}^{\vee}\rightarrow {q_1}^{\ast}H_{d_{i-1},d_i}^{\vee} \rightarrow {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee} \rightarrow 0,
\end{align}
\begin{align}
0\rightarrow H_{d_i,d_i+1}\rightarrow {q_1}^{\ast}H_{d_i,d_{i+1}} \rightarrow {q_2}^{\ast}H_{d_i+1,d_{i+1}} \rightarrow 0.
\end{align}
Let $\widetilde{F_1^{(i)}}:=F(d_1,\ldots,d_{i-1},d_i-1,d_i,d_{i+1},\ldots d_s)$, $\widetilde{F_2^{(i)}}:=F(d_1,\ldots,d_{i-1},d_i,d_i+1,d_{i+1},\ldots d_s)$ and consider the following diagram
\begin{align}
\xymatrix{
\widetilde{F_1^{(i)}} \ar[rrdd]_{q_{11}} && \widetilde{F^{(i)}}\ar[ll]_{pr_1}\ar[rr]^{pr_2}\ar[dd]^{q_1} && \widetilde{F_2^{(i)}} \ar[lldd]^{q_{12}} \\
\\
&&F.
}
\end{align}
All morphisms in the above diagram are projections. It is not hard to see that $\widetilde{F_1^{(i)}}=\mathbb{P}(H_{d_{i-1},d_i})$, $\widetilde{F_2^{(i)}}=\mathbb{P}(H_{d_i,d_{i+1}}^{\vee})$ and $H_{d_i-1,d_i}^{\vee}~(resp.~ H_{d_i,d_i+1})$ is the tautological line bundle on $\widetilde{F^{(i)}}$ associated to $\widetilde{F_1^{(i)}}~(resp.~ \widetilde{F_2^{(i)}})$, i.e. \[{pr_1}_{*}H_{d_i-1,d_i}^{\vee}=\mathcal{O}_{\widetilde{F_1^{(i)}}}(-1)~(resp.~ {pr_2}_{*}H_{d_i,d_i+1}=\mathcal{O}_{\widetilde{F_2^{(i)}}}(-1)).\]
Projecting the exact sequence (\ref{1}) onto $\widetilde{F_1^{(i)}}$ and considering the relative Euler sequence, we have the diagram of exact sequences ($R^1{pr_1}_{*}H_{d_i-1,d_i}^{\vee}=0$)
$$
\xymatrix{
0\ar[r] &{pr_1}_{*}H_{d_i-1,d_i}^{\vee}\ar[r] \ar[d]_\cong& q_{11}^{\ast}H_{d_{i-1},d_i}^{\vee} \ar[r] \ar[d]_\cong&{pr_1}_{*}{q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee}\ar[r]\ar[d]_\cong&0\\
0\ar[r] &\mathcal{O}_{\widetilde{F_1^{(i)}}}(-1) \ar[r] &q_{11}^{\ast}H_{d_{i-1},d_i}^{\vee}\ar[r] & \mathcal{O}_{\widetilde{F_1^{(i)}}}(-1)\otimes T_{\widetilde{F_1^{(i)}}/F}\ar[r]&0.
}
$$
We get $T_{\widetilde{F_1^{(i)}}/F}\cong {pr_1}_{*}(H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee})$, since $H_{d_i-1,d_i}={pr_1}^{*}\mathcal{O}_{\widetilde{F_1^{(i)}}}(1)$.
Similarly, on $\widetilde{F_2^{(i)}}$, we have the exact sequences ($R^1{pr_2}_{*}H_{d_i,d_i+1}=0$)
$$
\xymatrix{
0\ar[r] &{pr_2}_{*}H_{d_i,d_i+1}\ar[r] \ar[d]_\cong& q_{12}^{\ast}H_{d_i,d_{i+1}} \ar[r] \ar[d]_\cong&{pr_2}_{*}{q_2}^{\ast}H_{d_i+1,d_{i+1}}\ar[r]\ar[d]_\cong&0\\
0\ar[r] &\mathcal{O}_{\widetilde{F_2^{(i)}}}(-1) \ar[r] &q_{12}^{\ast}H_{d_i,d_{i+1}}\ar[r] & \mathcal{O}_{\widetilde{F_2^{(i)}}}(-1)\otimes T_{\widetilde{F_2^{(i)}}/F}\ar[r]&0.
}
$$
We get $T_{\widetilde{F_2^{(i)}}/F}\cong {pr_2}_{*}(H_{d_i,d_i+1}^{\vee}\otimes {q_2}^{\ast}H_{d_i+1,d_{i+1}})$.
Since $\widetilde{F^{(i)}}$ as $F$-scheme is the fiber product of two $F$-schemes $\widetilde{F_1^{(i)}}$ and $\widetilde{F_2^{(i)}}$, we have
\begin{align}
T_{\widetilde{F^{(i)}}/F}
&\cong \big({pr_1}^{*}T_{\widetilde{F_1^{(i)}}/F}\big)\oplus\big({pr_2}^{*} T_{\widetilde{F_2^{(i)}}/F}\big)\\
&\cong \big({pr_1}^{*}{pr_1}_{*}(H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee})\big)\oplus\big({pr_2}^{*}{pr_2}_{*} (H_{d_i,d_i+1}^{\vee}\otimes {q_2}^{\ast}H_{d_i+1,d_{i+1}})\big).
\end{align}
The canonical homomorphism ${pr_1}^{*}{pr_1}_{*}(H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee})\rightarrow H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee}$ is an isomorphism, because over each $pr_1$-fibre ${pr_1}^{-1}(l)$, the evaluation map
\begin{align}
&{pr_1}^{*}{pr_1}_{*}(H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee})|{pr_1}^{-1}(l)\\
&\cong H^0\big({pr_1}^{-1}(l),H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee}|{pr_1}^{-1}(l)\big)\otimes_{k}\mathcal{O}_{{pr_1}^{-1}(l)}\\
&\cong (H_{d_i-1,d_i}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1}^{\vee})|{pr_1}^{-1}(l)
\end{align}
is an isomorphism. Similarly, ${pr_2}^{*}{pr_2}_{*} (H_{d_i,d_i+1}^{\vee}\otimes {q_2}^{\ast}H_{d_i+1,d_{i+1}})\cong H_{d_i,d_i+1}^{\vee}\otimes {q_2}^{\ast}H_{d_i+1,d_{i+1}}$.
Hence,\[
\Omega_{\widetilde{F^{(i)}}/F}\cong (H_{d_i-1,d_i}^{\vee}\otimes {q_2}^{\ast}H_{d_{i-1},d_i-1})\oplus (H_{d_i,d_i+1}\otimes {q_2}^{\ast}H_{d_i+1,d_{i+1}}^{\vee}).
\]
Finally, restricting to $\widetilde{L}$, the line bundles $H_{d_i-1,d_i}^{\vee}$ and $H_{d_i,d_i+1}$ both restrict to $\mathcal{O}_{\widetilde{L}}(1)$, while the ${q_2}^{\ast}$-factors restrict to trivial bundles of ranks $d_i-1-d_{i-1}$ and $d_{i+1}-d_i-1$ respectively, so we get \[
\Omega_{\widetilde{F^{(i)}}/F}|\widetilde{L}=\mathcal{O}_{\widetilde{L}}(1)^{\oplus d_{i+1}-d_{i-1}-2}.
\]
\end{proof}
\begin{thm}\label{main}
Fix an integer $i, 1\le i \le s$. Let $E$ be an algebraic $r$-bundle over $F$ of type $\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)}), a_1^{(i)}\geq\cdots\geq a_r^{(i)}$ with respect to $F^{(i)}$. If for some $t<r$,
\[
a_t^{(i)}-a_{t+1}^{(i)}\geq
\left\{
\begin{array}{ll}
1, & \text{if}~F^{(i)}~ \text{is in Case I}\\
2, & \text{if}~F^{(i)}~ \text{is in Case II},
\end{array}
\right.\]
then there is a normal subsheaf $K\subseteq E$ of rank $t$ with the following properties: over the open set $V_E^{(i)}=q_1({q_2}^{-1}(U_E^{(i)}))\subseteq F$, the sheaf $K$ is a subbundle of $E$, which on the line $L\subseteq F$ given by $l\in U_E^{(i)}$ has the form\[
K|L\cong\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)}).
\]
\end{thm}
\begin{proof}
After tensoring with an appropriate line bundle, we may assume $a_t^{(i)}=0,a_{t+1}^{(i)}<0$.
\begin{enumerate}
\item If $F^{(i)}$ is in Case I, then we have the natural projection $F\xrightarrow{q} F^{(i)}$. By the hypothesis, for every point $l\in U_E^{(i)}$ \[
E|q^{-1}(l)\cong\oplus_{j=1}^{r}\mathcal{O}_{q^{-1}(l)}(a_j^{(i)}).
\]
Then $q_{\ast}E$ is a coherent sheaf over $F^{(i)}$ which is locally free over $U_E^{(i)}$. The morphism $\phi:q^{\ast}q_{\ast}E\rightarrow E$ on each $\widetilde{L}=q^{-1}(l)\cong L$ for an $l\in U_E^{(i)}$ is given by evaluating the sections of $E|L$. Thus the image of $\phi|\widetilde{L}$ is the subbundle\[
\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)})\subseteq E|L
\]
of rank $t$. Hence over the open set $q^{-1}(U_E^{(i)})$, $\phi$ is a morphism of constant rank $t$ and thus its image $Im \phi\subseteq E$ over $q^{-1}(U_E^{(i)})$ is a subbundle of rank $t$.
Let $Q'=E/Im\phi$ and $T(Q')$ be the torsion subsheaf of $Q'$ and \[
K=ker(E\rightarrow Q'/T(Q')).
\]
Because $\widetilde{Q}=Q'/T(Q')$ is a torsion-free sheaf, $K$ is a normal subsheaf of rank $t$. Over the open set $V_E^{(i)}=q^{-1}(U_E^{(i)})\subseteq F$ the sheaf $K$ is a subbundle of $E$, which on the line $L\subseteq F$ given by $l\in U_E^{(i)}$ has the form\[
K|L\cong\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)}).
\]
\item If $F^{(i)}$ is in Case II, then we have the standard diagram
\begin{align}
\xymatrix{
\widetilde{F^{(i)}}\ar[d]^{q_1} \ar[r]^-{q_2} & F^{(i)}=F(d_1,\ldots,d_{i-1},d_i-1,d_i+1,d_{i+1},\ldots d_s)\\
F=F(d_1,\ldots,d_s).
}
\end{align}
For every point $l\in U_E^{(i)}$, we have\[
{q_1}^{\ast}E|{q_2}^{-1}(l)\cong E|L\cong\oplus_{j=1}^{r}\mathcal{O}_{L}(a_j^{(i)}).
\]
Then ${q_2}_{\ast}{q_1}^{\ast}E$ is a coherent sheaf over $F^{(i)}$ which is locally free over $U_E^{(i)}$. The morphism $\phi:{q_2}^{\ast}{q_2}_{\ast}{q_1}^{\ast}E\rightarrow {q_1}^{\ast}E$ on each $\widetilde{L}={q_2}^{-1}(l)\cong L$ for an $l\in U_E^{(i)}$ is given by the evaluation of the section of $E|L$. Thus the image of $\phi|\widetilde{L}$ is the subbundle\[
\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)})\subseteq E|L
\]
of rank $t$. Hence over the open set ${q_2}^{-1}(U_E^{(i)})$, $\phi$ is a morphism of constant rank $t$ and thus its image $Im \phi\subseteq {q_1}^{\ast}E$ over ${q_2}^{-1}(U_E^{(i)})$ is a subbundle of rank $t$.
Let $Q'={q_1}^{\ast}E/Im\phi$ and $T(Q')$ be the torsion subsheaf of $Q'$ and \[
\widetilde{K}=ker({q_1}^{\ast}E\rightarrow Q'/T(Q')).
\]
Because $\widetilde{Q}=Q'/T(Q')$ is a torsion-free sheaf, $\widetilde{K}$ is a normal subsheaf of rank $t$, and outside the singularity set $S(\widetilde{Q})$ of $\widetilde{Q}$, the sheaf $\widetilde{K}$ is a subbundle of ${q_1}^{\ast}E$, which on each $\widetilde{L}={q_2}^{-1}(l)\cong L$ given by $l\in U_E^{(i)}$ has the form\[
\widetilde{K}|\widetilde{L}\cong\oplus_{j=1}^{t}\mathcal{O}_{\widetilde{L}}(a_j^{(i)}).
\]
Let $X=\widetilde{F^{(i)}}\backslash S(\widetilde{Q})$. $X$ is open in $\widetilde{F^{(i)}}$ and contains ${q_2}^{-1}(U_E^{(i)})$. We have the following commutative diagram
\begin{align}\label{tt}
\xymatrix{
&X \ar@{^{(}->}[r]^{\zeta}\ar[d]_{f} & \widetilde{F^{(i)}}\ar[d]^{q_1}
\\
&Y=q_1(X) \ar@{^{(}->}[r]^-{\eta} & F
}
\end{align}
with a surjective submersion $f$ with connected fibres.
In order to apply the Descent Lemma to the subbundle $\widetilde{K}|X\subseteq f^{\ast}(E|Y)$, we need to show that \[
Hom(T_{X/Y},\mathcal{H}om(\widetilde{K}|X,\widetilde{Q}|X))=0.
\]
By the following Claim \ref{cl}, the hypothesis of the Descent Lemma is satisfied. Hence over the open set $Y\subseteq F$, we get a subbundle $K'\subseteq E|Y$
with\[
f^{\ast}K'=\widetilde{K}|X\subseteq f^{\ast}(E|Y).
\]
$K'$ can be extended to a normal subsheaf $K={q_1}_{\ast}\widetilde{K}\subseteq E$ on $F$:
To prove this, we need to consider the above diagram \ref{tt} again. From the diagram and Zariski's Main Theorem, we deduce that\[
f_{\ast}\mathcal{O}_X=\eta^{\ast}\eta_{\ast}f_{\ast}\mathcal{O}_X=\eta^{\ast}{q_1}_{\ast}\zeta_{\ast}\mathcal{O}_X=\eta^{\ast}{q_1}_{\ast}\mathcal{O}_{\widetilde{F^{(i)}}}=\eta^{\ast}\mathcal{O}_{F}=\mathcal{O}_{Y}.
\]
Next, we only need to prove $K|Y=K'$, i.e. $\eta_{\ast}K'={q_1}_{\ast}\widetilde{K}$.
Because $S(\widetilde{Q})$ is of codimension at least 2 and $\widetilde{K}$ is a normal sheaf, we have $\zeta_{\ast}(\widetilde{K}|X)=\widetilde{K}$. Thus\[
\eta_{\ast}K'=\eta_{\ast}(K'\otimes f_{\ast}\mathcal{O}_X)=\eta_{\ast}f_{\ast}f^{\ast}K'={q_1}_{\ast}\zeta_{\ast}(f^{\ast}K')={q_1}_{\ast}\zeta_{\ast}(\widetilde{K}|X)={q_1}_{\ast}\widetilde{K}.
\]
It is easy to see that over the open set $V_E^{(i)}=q_1({q_2}^{-1}(U_E^{(i)}))\subseteq F$, the sheaf $K$ is a subbundle of $E$, which on the line $L\subseteq F$ given by $l\in U_E^{(i)}$ has the form\[
K|L\cong\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)}).
\]
\end{enumerate}
\begin{claim}\label{cl}
If $a_{t+1}^{(i)}<-1$, then\[
Hom(T_{X/Y},\mathcal{H}om(\widetilde{K}|X,\widetilde{Q}|X))=0.
\]
\end{claim}
Indeed, it is equivalent to prove \[
H^{0}(X,\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q})=0.
\]
Since the codimension of $X\backslash{q_2}^{-1}(U_E^{(i)})$ in $X$ is at least $2$ and $\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q}$ is torsion free, the restriction\[
H^{0}(X,\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q})\rightarrow H^{0}({q_2}^{-1}(U_E^{(i)}),\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q})
\]
is injective, so it suffices to show that $\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q}$ has no section over ${q_2}^{-1}(U_E^{(i)})$.
Let $l\in U_E^{(i)}$ and $\widetilde{L}={q_2}^{-1}(l)\cong L$. By the previous assertion, \[
\widetilde{K}^{\vee}|\widetilde{L}\cong\oplus_{j=1}^{t}\mathcal{O}_{\widetilde{L}}(-a_j^{(i)}),~\widetilde{Q}|\widetilde{L}\cong\oplus_{j=t+1}^{r}\mathcal{O}_{\widetilde{L}}(a_j^{(i)}),
\]
and by Lemma \ref{rt}, we have \[
\Omega_{\widetilde{F^{(i)}}/F}|\widetilde{L}=\mathcal{O}_{\widetilde{L}}(1)^{\oplus d_{i+1}-d_{i-1}-2}.
\]
Thus \[
H^{0}(\widetilde{L},\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q}|\widetilde{L})=0,~\text{if}~ a_{t+1}^{(i)}<-1.
\]
Then every section of $(\Omega_{\widetilde{F^{(i)}}/F}\otimes\widetilde{K}^{\vee}\otimes\widetilde{Q}|X)$ is zero over ${q_2}^{-1}(U_E^{(i)})$ and hence over $X$.
\end{proof}
The above theorem has far-reaching consequences. We first give a series of immediate deductions.
\begin{cor}
Fix an integer $i, 1\le i \le s$. Let $E$ be a uniform $r$-bundle with respect to $F^{(i)}$ of type\[
\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)}),\quad a_1^{(i)}\geq\cdots\geq a_r^{(i)}.
\]
If for some $t<r$,
\[
a_t^{(i)}-a_{t+1}^{(i)}\geq
\left\{
\begin{array}{ll}
1, & \text{if}~F^{(i)}~ \text{is in Case I}\\
2, & \text{if}~F^{(i)}~ \text{is in Case II},
\end{array}
\right.\]
then we can write $E$ as an extension of uniform bundles with respect to $F^{(i)}$.
\end{cor}
\begin{proof}
By the above Theorem \ref{main}, there is a uniform bundle $K\subseteq E$ of type $\underline{a}_K^{(i)}=(a_1^{(i)},\ldots,a_t^{(i)})$ with respect to $F^{(i)}$. Then the quotient bundle $Q=E/K$ is uniform of type $(a_{t+1}^{(i)},\ldots,a_r^{(i)})$ with respect to $F^{(i)}$. We have the following exact sequence\[
0\rightarrow K\rightarrow E\rightarrow Q\rightarrow 0.
\]
\end{proof}
Let $\mathcal{F}$ be a torsion free coherent sheaf of rank $r$ over $F$.
Since the singularity set $S(\mathcal{F})$ of $\mathcal{F}$ has codimension at least $2$, there is some integer $i(1\le i \le s)$ and lines $L\subseteq F$ given by $l\in F^{(i)}$ which do not meet $S(\mathcal{F})$. If
\[\mathcal{F}|L\cong \mathcal{O}_{L}(a_1^{(i)})\oplus\cdots\oplus \mathcal{O}_{L}(a_r^{(i)}),\]
we set \[c_1^{(i)}(\mathcal{F})=a_1^{(i)}+\cdots+a_r^{(i)},\] which is independent of the choice of $L$.
We set
\[\mu^{(i)}(\mathcal{F})=\frac{c_1^{(i)}(\mathcal{F})}{\text{rk}(\mathcal{F})}.\]
\begin{definition}\label{i}
A torsion free coherent sheaf $\mathcal{E}$ over $F$ is i-semistable if for every coherent subsheaf $ \mathcal{F}\subseteq \mathcal{E}$ with $0< \text{rk}(\mathcal{F})< \text{rk}(\mathcal{E})$, we have
\[\mu^{(i)}(\mathcal{F})\le \mu^{(i)}(\mathcal{E}).\]
\end{definition}
\begin{cor}\label{gap}
Fix an integer $i, 1\le i \le s$. For an i-semistable $r$-bundle $E$ over $F$ of type $\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)}), a_1^{(i)}\geq\cdots\geq a_r^{(i)}$ with respect to $F^{(i)}$, we have \[
a_j^{(i)}-a_{j+1}^{(i)}\leq 1~~ \text{for all}~ j=1,\ldots, r-1.
\]
In particular, if $F^{(i)}$ is in Case I, then the $a_j^{(i)}$, $1\leq j\leq r$, are all equal.
\end{cor}
\begin{proof}
If for the fixed $i$, $E$ is of type $\underline{a}_E^{(i)}=(a_1^{(i)},\ldots,a_r^{(i)})$ with $a_t^{(i)}-a_{t+1}^{(i)}\geq 2$ for some $t<r$, then by Theorem \ref{main} we can find a normal sheaf $K\subseteq E$ which is of the form
\[
K|L\cong\oplus_{j=1}^{t}\mathcal{O}_L(a_j^{(i)})
\]
over the line $L\subseteq F$ given by $l\in U_E^{(i)}$. Then we have $\mu^{(i)}(E)<\mu^{(i)}(K)$, hence $E$ is not $i$-semistable.
In particular, if the $i$-th irreducible component $F^{(i)}$ of the manifold of lines is in Case I and there is some $t<r$ such that $a_t^{(i)}\neq a_{t+1}^{(i)}$, then we can find a normal sheaf $K\subseteq E$ such that $\mu^{(i)}(E)<\mu^{(i)}(K)$, hence $E$ is not $i$-semistable.
\end{proof}
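The slope comparison used in the proof is elementary: since $a_1^{(i)}\geq\cdots\geq a_r^{(i)}$ and $a_t^{(i)}>a_{t+1}^{(i)}$, we have\[
\mu^{(i)}(K)=\frac{1}{t}\sum_{j=1}^{t}a_j^{(i)}\geq a_t^{(i)}>a_{t+1}^{(i)}\geq\frac{1}{r-t}\sum_{j=t+1}^{r}a_j^{(i)}=\mu^{(i)}(E/K),
\]
and $\mu^{(i)}(E)=\frac{t}{r}\,\mu^{(i)}(K)+\frac{r-t}{r}\,\mu^{(i)}(E/K)<\mu^{(i)}(K)$.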
\begin{cor}\label{x}
If $E$ is a strongly uniform $i$-semistable $(1\le i\le n-1)$ $r$-bundle over the complete flag variety $F$, then $E$ splits as a direct sum of line bundles. In addition $E|L\cong \mathcal{O}_L(a)^{\oplus r}$ for every line $L\subseteq F$, where $a\in \mathbb{Z}$.
\end{cor}
\begin{proof}
By Corollary \ref{gap}, we get $E|L=\mathcal{O}_L(a)^{\oplus r}$ for every line $L$ in $F$. After tensoring with appropriate line bundles, we can assume $E|L=\mathcal{O}_{L}^{\oplus r}$ for every line $L$. So $E$ is trivial by Proposition \ref{y}.
\end{proof}
\section*{Acknowledgements}
All the authors would like to express their great appreciation to the anonymous reviewer for pointing out a fatal error in one of the propositions in the original version and for providing many useful suggestions.
\section{Introduction}
\subsection{Background and Motivation}
Activation functions (AFs) are necessary components of
neural networks that allow approximation of most types of functions
(universal approximation theorem).
Activation functions in current use consist of simple fixed functions
such as the sigmoid, softplus, and ReLU \cite{FengActFn2019,Ravanbakhsh,WangRelu,jin2016deep}.
There is motivation to find more complex AFs for machine learning,
such as the parametric ReLU,
to improve the ability of neural networks to approximate
complex functions or probability distributions \cite{he2015delving}.
Putting more complexity in activation functions
can increase the function approximation
capability of a network, similar to adding
network layers, but with far fewer parameters.
\subsection{Theoretical Justification}
Most approaches to selecting AFs focus on the end result, i.e.
performance of the network \cite{FengActFn2019}.
It may be more enlightening to ask what the AF says
about the input data. Any monotonically increasing
function can be seen as an estimator
of the input distribution \cite{BagKayInfo2022}.
This view of AFs as PDF estimators is best described mathematically
with the change of variables theorem. Let the AF be written $y=f(x)$ and let us assume that
$y$ is a random variable with distribution $p_y(y)$. Then,
the distribution of $x$ is given by
\begin{equation}
p_x(x)=\left| \frac{\partial y}{\partial x} \right| \; p_y(f(x)) = \left| f^\prime(x) \right| \; p_y(f(x)).
\label{pdx}
\end{equation}
If $y$ has the uniform distribution on $[0,\; 1]$, then
\begin{equation}
p_x(x)= \left| f^\prime(x) \right|.
\label{pdx2}
\end{equation}
The activation function $f(x)$ can be used as a probability density function
(PDF) estimator if it is adjusted (trained) until $y$ has a uniform output distribution,
so that (\ref{pdx2}) holds. Training is accomplished by maximum likelihood (ML)
estimation using
\begin{equation}
\max_\theta \left\{ \frac{1}{K} \sum_{k=1}^K \log f^\prime(x_k;\theta) \right\},
\label{mleq}
\end{equation}
where $k$ indexes over a set of training samples $x_k$,
and we have removed the absolute value operator because we assume $f(x;\theta)$ is monotonically increasing, so $f^\prime(x;\theta)>0$.
In accordance with (\ref{pdx2}), the trained AF will have increasing slope in regions where the input data $x$ is concentrated,
with the net result being that the output has a uniform distribution.
This concept is illustrated in Figure \ref{modal}.
A similar argument can be made for a Gaussian output distribution, where
$p_y(y)=\frac{1}{\sqrt{2\pi}}e^{-y^2/2}.$ Then,
\begin{equation}
p_x(x)= \left| f^\prime(x) \right| \frac{1}{\sqrt{2\pi}}e^{-f(x)^2/2}.
\label{pdx3}
\end{equation}
Training $f(x)$ will then result in $p_y(y)$ approaching the Gaussian distribution.
Some confusion may arise because we are discussing two different distributions of $y$: the true distribution
based on knowing $p_x(x)$, obtained by inverting (\ref{pdx}) and given by
$p_y(y)= \frac{p_x(x)}{\left| f^\prime(x) \right|}$ evaluated at $x=f^{-1}(y)$,
and the assumed distribution. The purpose of training $f(x)$ is to make the
true distribution of $y$ approach the assumed distribution.
In general, the slope of $f(x)$ will tend to
increase where the histogram of $x$ has peaks, since the activation function
approximates the cumulative distribution of the input data; this serves to
remove modality in the data, as illustrated in Figure \ref{modal}.
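As a concrete illustration of (\ref{mleq}), the following minimal sketch (not the code used in our experiments; the sum-of-sigmoids parameterization and all names are illustrative choices) fits a monotone activation to bimodal one-dimensional samples by gradient ascent on the mean log-derivative:
\begin{verbatim}
import torch

# Sketch: fit a monotone sum-of-sigmoids activation f(x) to samples x_k
# by maximizing (1/K) sum_k log f'(x_k), i.e. the ML objective above.
torch.manual_seed(0)
x = torch.cat([torch.randn(500) - 2.0, torch.randn(500) + 2.0])

M = 8                                   # number of components
a = torch.zeros(M, requires_grad=True)  # log-scales, so e^a > 0
b = torch.zeros(M, requires_grad=True)  # shifts

opt = torch.optim.Adam([a, b], lr=0.05)
for step in range(2000):
    s = torch.sigmoid(torch.exp(a) * x[:, None] + b)   # K x M
    # f(x) = mean_j sigmoid(e^{a_j} x + b_j), hence
    # f'(x) = mean_j e^{a_j} s (1 - s) > 0 (monotone by construction)
    fprime = (torch.exp(a) * s * (1.0 - s)).mean(dim=1)
    loss = -torch.log(fprime + 1e-12).mean()
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}
After training, $f^\prime$ tracks the bimodal histogram of the data and $y=f(x)$ is approximately uniform, which is exactly the behavior sketched in Figure \ref{modal}.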
\begin{figure}[h!]
\begin{center}
\includegraphics[width=2.3in,height=2.2in]{modal.eps}
\caption{Illustration of an activation function
removing modality in data when the derivative approximates
the histogram.}
\vspace{-.2in}
\label{modal}
\end{center}
\end{figure}
The view that $f(x)$ is a PDF estimator, together with the fact that data often has clusters,
leads us to the idea of creating AFs with multi-modal derivatives.
One of the simplest and earliest types of AFs is the sigmoid function,
whose derivative approximates a Gaussian distribution. Therefore,
the derivative of a sum of shifted sigmoid functions approximates a Gaussian mixture,
a popular model for PDF estimation \cite{Redner,McLachlan}. Different base activation functions lead to other
mixture distributions. This view leads us to the idea of the trained compound
activation function (TCA).
\subsection{Contributions and Goals of Paper}
In this paper, we propose TCA, a trainable activation function with
complex, but monotonic response. We argue that using a TCA in a neural
network, is a more efficient way to increase the effectiveness of a network
than adding layers. Furthermore, in generative networks, the TCA has an interpretation
as a mixture distribution and can remove modality in the data.
When the TCA is used in a restricted Boltzmann machine (RBM),
it creates a novel type of RBM based on stochastic units that are
mixtures. We show significant improvement of TCA-based RBMs, deep
belief network (DBN) and projected belief networks (PBNs) in experiments.
\section{Trained Compound Activation Function (TCA)}
Consider the compound activation function $f(x)$ given by
\begin{equation}
f(x) = \frac{1}{M} \sum_{j=1}^M \; f_0\left(e^{a_j} x+b_j\right),
\label{tcadef}
\end{equation}
where $f_0(x)$ is the {\it base} activation function, and ${\bf a}=\{a_j\}$ and ${\bf b}=\{b_j\}$
are scale and bias parameters. The exponential function $e^{a_j}$ is used to ensure positivity
of the scale factor. Note that if $f_0(x)$ is a monotonically increasing function
(which we always assume), then the TCA is monotonically increasing and equal to the base activation
function if ${\bf a}={\bf 0}$ and ${\bf b}={\bf 0}$.
For a dimension-$N$ input data vector ${\bf x}$, the TCA operates element-wise
on ${\bf x}$ as follows:
\begin{equation}
{\bf y} = f({\bf x}), \;\;\; y_i = \frac{1}{M} \sum_{j=1}^M \; f_0\left(e^{a(i,j)} x_i+b(i,j)\right), \;1\leq i \leq N,
\label{tcadefv}
\end{equation}
where ${\bf A}=\{a(i,j)\}$ and ${\bf B}=\{b(i,j)\}$ are $N\times M$ scale and bias parameters.
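For concreteness, here is a minimal NumPy sketch of this element-wise map (the function and array names are our own illustrative choices):
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tca_forward(x, A, B, f0=sigmoid):
    # x: (N,) input; A, B: (N, M) log-scale and bias parameters
    z = np.exp(A) * x[:, None] + B   # N x M pre-activations
    return f0(z).mean(axis=1)        # average the M components per neuron

# With A = B = 0 the TCA reduces to the base activation f0:
N, M = 4, 3
x = np.linspace(-2.0, 2.0, N)
assert np.allclose(tca_forward(x, np.zeros((N, M)), np.zeros((N, M))),
                   sigmoid(x))
\end{verbatim}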
A TCA can be implemented with an additional dense layer that expands the dimension
to $N\cdot M$ neurons, followed by a linear layer that averages over each group of $M$ neurons,
compressing back to dimension $N$. But a TCA not only uses a factor of $N$ fewer parameters;
it also has an interpretation as a mixture
distribution when used in generative models,
and results in a novel type of RBM, as we now show.
\section{TCA for Deep Belief Networks (DBN)}
A deep belief network is a layered network proposed by
Hinton \cite{HintonDeep06} based on restricted Boltzmann machines (RBMs).
\subsection{RBMs}
The RBM is a widely-used generative stochastic artificial neural network
that can learn a probability distribution over its set of inputs
\cite{Goodfellow2016}. The RBM is based on an elegant stochastic model,
the Gibbs distribution, and is the central idea in a deep belief
network (DBN) made popular by Hinton \cite{HintonDeep06}.
A cascaded series of layer-wise-trained RBMs can be used to initialize deep neural
networks. This method, in fact, played a key role in the birth of deep
learning because it provided a means to pre-train deep networks that
otherwise suffered from vanishing gradients.
\subsection{Review of RBMs}
The RBM estimates a joint distribution between an input (visible) data vector ${\bf x}\in\mathbb{R}^N$, and
a set of hidden variables ${\bf h}\in\mathbb{R}^M$.
The RBM consists of a pair of stochastic perceptrons, arranged back-to-back,
and is illustrated in Figure \ref{rbm}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=2.7in]{rbm.eps}
\caption{Illustration of an RBM.}
\label{rbm}
\end{center}
\vspace{-.1in}
\end{figure}
In a sampling procedure called ``Gibbs sampling'',
data is created by alternately sampling ${\bf x}$ and ${\bf h}$
using the conditional distributions $p_h({\bf h}|{\bf x})$ and
$p_x({\bf x}|{\bf h})$. To sample ${\bf h}$ from the distribution
$p_h({\bf h}|{\bf x})$, we first multiply ${\bf x}$ by the
transpose of the $N \times M$ weight matrix ${\bf W}$, and add a bias vector:
$\mbox{\boldmath $\alpha$} = {\bf W}^\prime {\bf x} + {\bf b}.$
The variable $\mbox{\boldmath $\alpha$}$ is then applied to a generating distribution
(GD) to create the stochastic variable ${\bf h}$ as $h_i \sim p(h;\alpha_i),$ $1\leq i \leq M$.
Note that conditioned on ${\bf x}$, ${\bf h}$ is a set of independent random
variables (RV). To sample ${\bf x}$ from the distribution
$p_x({\bf x}|{\bf h})$, we use the analog of the forward sampling process:
$\mbox{\boldmath $\beta$} = {\bf W} {\bf h} + {\bf a}.$ The variable $\mbox{\boldmath $\beta$}$ is then applied to a generating distribution
$x_j \sim p(x;\beta_j),$ $1\leq j \leq N$. Conditioned on ${\bf h}$,
${\bf x}$ is a set of independent random variables (RV).
After many alternating sampling operations, the joint distribution
between ${\bf x}$ and ${\bf h}$ converges to the Gibbs distribution
$p({\bf x},{\bf h}) = \frac{e^{-E({\bf x},{\bf h})}}{K},$
where the normalizing factor $K$ is generally unknown.
Training an RBM is done using contrastive divergence, which
is described in detail for exponential-class GDs in \cite{WellingHinton04}.
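For orientation, here is a bare-bones CD-1 update for a Bernoulli-Bernoulli RBM (a sketch in standard form only; it omits the exponential-class generality of \cite{WellingHinton04} and the TCA units introduced below):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(x, W, a, b, lr=0.01):
    # x: (batch, N) visible data; W: (N, M); a: (N,); b: (M,)
    ph = sigmoid(x @ W + b)                        # p(h=1|x)
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
    pv = sigmoid(h @ W.T + a)                      # re-sample visibles
    v = (rng.random(pv.shape) < pv).astype(float)
    ph2 = sigmoid(v @ W + b)
    # positive-phase minus negative-phase statistics
    W += lr * (x.T @ ph - v.T @ ph2) / x.shape[0]
    a += lr * (x - v).mean(axis=0)
    b += lr * (ph - ph2).mean(axis=0)
\end{verbatim}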
\subsection{Activation functions and RBMs}
Once an RBM is trained, it can be used as a layer
of a neural network to extract information-bearing features
${\bf h}$. This is done by replacing stochastic sampling with deterministic
sampling: the stochastic generating distributions
$p(x;\beta)$, $p(h;\alpha)$ are replaced with activation
functions that equal the expected value (mean) of the
generating distributions, $f(\alpha)=\mathbb{E}(x;\alpha)$.
Consider the Bernoulli distribution, whose AF is the sigmoid function;
the truncated exponential distribution (TED), whose AF is the
TED activation \cite{BagEusipcoRBM}; the truncated Gaussian (TG) distribution, whose
AF is the TG activation \cite{Bag2021ITG}; and the Gaussian distribution,
which has the linear AF $f(x)=x$.
\subsection{RBMs based on TCA}
If a simple activation function corresponds to the expected value
of the GD, then what distribution corresponds to a TCA?
It is known that any monotonically increasing function can be seen as a sum of shifted
stochastic generating distributions \cite{HintonRelu2010,Ravanbakhsh}.
But, we must look more carefully at this because it is not as simple as
adding random variables. When adding random variables, the probability densities combine by
convolution, not additively. To combine them properly, we need
a mixture distribution.
Let $p_0(x;\alpha)$ be a univariate generating distribution depending on parameter
$\alpha$, and let this distribution have mean $f_0(\alpha)=\mathbb{E}(x;\alpha)$, so $f_0(\alpha)$
is the AF corresponding to the probability distribution $p_0(x;\alpha)$.
Let $\Phi_0(x;\alpha)$ be the cumulative distribution function (CDF) of $p_0(x;\alpha)$, i.e.
$$\Phi_0(x;\alpha)=\int_{-\infty}^x \; p_0(x;\alpha) \; {\rm d} x.$$
Now, consider a mixture distribution given by
\begin{equation}
p(x;\alpha) = \frac{1}{M} \sum_{j=1}^M \; p_0\left(x;\,e^{a_j} \alpha+b_j\right).
\label{tcadefp}
\end{equation}
To draw a sample from mixture distribution (\ref{tcadefp}), we first draw a discrete random
variable $j$ uniformly in $[1, \; M]$, then draw $x$ from
the component distribution $p_0\left(x;\,e^{a_j} \alpha+b_j\right)$.
Mixture distribution (\ref{tcadefp}) has CDF
\begin{equation}
\Phi(x;\alpha) = \frac{1}{M} \sum_{j=1}^M \; \Phi_0\left(x;\,e^{a_j} \alpha+b_j\right).
\label{tcadefP}
\end{equation}
It is easily seen, by differentiating with respect to $x$,
that the distribution corresponding to the CDF (\ref{tcadefP}) is (\ref{tcadefp}).
And, since the expected value is a linear operation,
the mean of distribution (\ref{tcadefp}) is the TCA (\ref{tcadef}).
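Operationally, sampling such a unit is the two-stage draw described above; here is a sketch with a Bernoulli base unit, so that $p_0(h;\alpha)$ has mean equal to the sigmoid of $\alpha$ (all names are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_tca_unit(alpha, a, b, n=100000):
    j = rng.integers(0, len(a), size=n)          # uniform component index
    p = sigmoid(np.exp(a[j]) * alpha + b[j])     # base-unit parameter
    return (rng.random(n) < p).astype(float)

# Sanity check: the empirical mean matches the TCA value f(alpha).
a = np.array([0.0, 0.5, -0.5]); b = np.array([-1.0, 0.0, 1.0]); alpha = 0.3
f = sigmoid(np.exp(a) * alpha + b).mean()
print(abs(sample_tca_unit(alpha, a, b).mean() - f))  # O(1/sqrt(n))
\end{verbatim}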
Note that RBMs are implicitly infinite mixture distributions
over the hidden variables \cite{Roux2008}, but using
the discrete mixture (\ref{tcadefp}) as a generating distribution
creates an entirely novel type of RBM.
Different base AFs (i.e. different base stochastic units) can be used for the input
and output, producing a wide range of different types of RBMs \cite{Bag2021ITG}.
Figure \ref{rbm_nnl} illustrates an RBM constructed using a TCA unit in the forward path.
The activation functions and TCAs in the figure can be either stochastic
(random sampling from the corresponding GD) or deterministic
(applying the AF, i.e. the mean of the GD, in place of a random sample).
In the forward path, a weight matrix ${\bf W}$ multiplies the input data vector ${\bf x}$
in order to produce a linear feature vector, which is then passed
through the TCA to produce the hidden variables vector ${\bf h}$.
In the backward path, ${\bf h}$ is multiplied by the transposed weight matrix ${\bf W}^\prime$
and passed through an activation function to produce the re-sampled input vector ${\bf x}$.
In our approach, we use a TCA only in the forward path,
with a normal AF in the backward path.
The mathematical approach to train the parameters of
RBMs using the contrastive divergence (CD) algorithm
is well documented \cite{WellingHinton04} and can be
extended in order to obtain the update equations to train the
parameters of the TCAs. This is facilitated using the symbolic differentiation
available in software frameworks such as THEANO \cite{Theano2010}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.0in]{rbm_nnl.eps}
\caption{Illustration of an RBM with TCA units in the
forward path. The need for a separate bias in the forward path is eliminated due
to the existence of trainable biases (shifts).}
\label{rbm_nnl}
\end{center}
\end{figure}
\section{Stacked RBM and DBNs}
To create a ``stacked RBM'', an RBM is trained on the input data, and then the
forward path is used to create hidden variables, which are then used as
input data for the next layer.
The deep belief network (DBN) \cite{HintonDeep06} consists of a series
of stacked RBMs, plus a special ``top layer" RBM. The one-hot encoded class labels are injected at the input
of the top layer (concatenated with the hidden variables from the last stacked RBM).
Then, the Gibbs distribution of the top layer learns the joint distribution
of the class labels with the hidden variables out of the last stacked RBM.
The cleverness of Hinton's invention lies in the fact that although
the scale factor of the Gibbs distribution is not known,
it is not needed to compare the likelihood functions of the
competing class hypotheses.
Computing the Gibbs distribution for a given class
assumption has been called the ``free energy'' \cite{HintonDeep06,Bag2021ITG},
so we will call this a free energy classifier.
Computing the free energy classifier requires solving for terms
of the marginalized Gibbs distribution \cite{Bag2021ITG},
and these in turn require the CDF, which we have given in (\ref{tcadefP}).
We therefore have all the tools to create a DBN using TCA-based
stochastic units.
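As an illustration of the label-selection step, the following sketch assumes simple Bernoulli top-layer units, for which the free energy has a familiar closed form (the TCA case requires in addition the marginalization terms discussed above, which we do not reproduce here; all names are illustrative):
\begin{verbatim}
import numpy as np

def free_energy(v, W, a, b):
    # Bernoulli-RBM free energy: F(v) = -a.v - sum_j softplus((v W + b)_j)
    return -(v @ a) - np.logaddexp(0.0, v @ W + b).sum()

def classify(h, n_labels, W, a, b):
    # inject each candidate one-hot label next to the features h and
    # keep the label whose configuration has the lowest free energy
    scores = [free_energy(np.concatenate([h, y]), W, a, b)
              for y in np.eye(n_labels)]
    return int(np.argmin(scores))
\end{verbatim}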
\section{Experiments: TCA-based RBM and DBN}
\subsection{Data}
\label{ddesc}
For these experiments, we took a subset of the MNIST handwritten digit corpus:
just the three characters ``3'', ``8'', and ``9''.
The data consists of $28\times 28$ sample images, i.e. a data dimension of 784.
We used 500 training samples from each character.
Since MNIST pixel data is coarsely quantized in the range [0,1],
a dither was applied to the pixel values\footnote{For pixel values
above 0.5, a small exponential-distributed random value was subtracted,
but for pixel values below 0.5, a similar random value was added.}.
\subsection{Network}
The network was a 1-layer stacked RBM of 32 neurons, followed by a top-level (classifier) RBM
of 256 units. TCAs with 3 components were used in the
forward path. For the base activation and stochastic unit, we used the
truncated exponential distribution (TED), which is the continuous counterpart of the
Bernoulli distribution/sigmoid function \cite{BagEusipcoRBM,Bag2021ITG}.
\subsection{First Layer}
In the first experiment, we trained just the first layer RBM and measured
input data reconstruction error after one Gibbs sampling cycle.
We consider both the mean-square error and the conditional likelihood function (LF),
which is $\log p({\bf x}|\mbox{\boldmath $\beta$})$, where $\mbox{\boldmath $\beta$}$ is the input to the
activation functions in the reconstruction path (see Figure \ref{rbm_nnl}).
We trained in three phases, (a) first with no TCA (using
just the base AF), then (b) with TCA but with TCA update
disabled, then finally (c) with TCA enabled.
At initialization, the TCAs have a transfer function very similar to the base
non-linearity, so with TCA update disabled, we should expect the same performance as for the base AF.
Training was done using contrastive divergence \cite{WellingHinton04,HintonDeep06}.
For the first layer, we used deterministic Gibbs sampling
(using AF instead of stochastic units).
When switching from phase (b) to (c), we plotted the
MSE as a function of epochs. In Figure \ref{L1profile},
the plot begins where phase (b) has reached convergence;
then, at X-axis location -2.25, TCA training is enabled and
a drastic change is seen.
In Table \ref{tab1aa}, we list the final MSE and LF for the
three phases. A reduction in MSE by more than a factor of 4 is seen.
The improved reconstruction with the TCA can be seen in the bottom row.
\begin{figure}
\begin{floatrow}
\ffigbox{%
\includegraphics[width=1.5in]{L1profile.eps}
}{%
\caption{First layer mean square reconstruction error (MSE) as a function of training epoch
with log-time in X-axis. After convergence at X-axis location -2.15, the TCAs were allowed to change.}
\label{L1profile}
}
\capbtabbox{%
\begin{tabular}{|l|l|l|}
\hline
AF & MSE & LF\\
\hline
TED & .0135 & -7.0 \\
\hline
TCA-0 & .0134 & -7.0 \\
\hline
TCA & .0029 & -2.59\\
\hline
\end{tabular}
}{%
\caption{MSE and conditional LF for first layer only. TCA-0: initial (but not updated) TCA.}
\label{tab1aa}
}
\end{floatrow}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.0in]{L2updnprof.eps}
\caption{Training profile for the up-down algorithm
where it can be seen that when enabling TCA, both the reconstruction error
and the validation classifier errors decrease suddenly.
The X-axis is minus the log of the number of epochs in the past.
Errors are on 1500 validation samples.}
\label{L2updnprof}
\end{center}
\end{figure}
\subsection{DBN performance}
The output of the first layer (using TCA) was applied to the second layer, with
one-hot encoded labels injected, forming a DBN.
We then trained the second layer using contrastive divergence (CD)
with three Gibbs iterations and an added term of direct free energy (FE) cost function as proposed in
\cite{Bag2021ITG}.
Finally, the entire network was fine-tuned using the up-down
algorithm, which is an extension of CD to the entire
deep belief network \cite{HintonDeep06}.
The TCA was initialized so that it has a characteristic similar to the
base activation. Then at some point, we enabled TCA training.
In Figure \ref{L2updnprof}, it can be seen that at X-axis location -2.1,
TCA training was enabled, resulting in a sudden improvement
of both the reconstruction error and the number of classifier errors measured
on separate validation data.
\section{TCA for Projected Belief Network and Auto-Encoders}
\subsection{Description}
The projected belief network (PBN) is a generative network that is based on
PDF estimation, a direct extension of (\ref{pdx2}), (\ref{pdx3})
to dimension-reducing transformations \cite{BagKayInfo2022},
so it is the ideal paradigm to test the concepts of TCA.
The PBN is based on the idea of back-projection through
a given feed-forward neural network (FFNN),
a way to reconstruct or re-sample the input data based on the network output \cite{BagPBN}.
There are both stochastic and deterministic versions of the PBN \cite{BagPBNEUSIPCO2019}.
In the stochastic PBN, a tractable likelihood function (LF)
is computed for the FFNN, and inserting a TCA into the FFNN
applies a term to the LF corresponding to the derivative of the TCA,
which is a mixture distribution. The deterministic PBN (D-PBN) operates similarly, but is trained not
to maximize the LF, but to maximize the conditional LF
(given the network output), which is a probabilistic
measure of the ability to reconstruct the input data.
The D-PBN can be seen as an auto-encoder (AEC), so we will compare
it with standard auto-encoders.
We used the same data as in Section \ref{ddesc}.
\subsection{Network}
The network, which is illustrated in Figure \ref{pbn_nnl},
had two dense perceptron layers with 32 and 8 neurons,
respectively, and TCAs. The base non-linearity for the TCA was TED.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.5in]{pbn_nnl.eps}
\caption{Illustration of a two-layer feed-forward network based on TCAs.}
\label{pbn_nnl}
\end{center}
\end{figure}
\subsection{Results}
We trained the network as an AEC, a VAE, and as a D-PBN using PBN Toolkit \cite{PBNTk}.
Note that in the VAE, a TCA is not used in the output layer,
because the output layer in a VAE has a special form.
The output TCA is also not used in the D-PBN,
since back projection starts with the output of the last linear transformation.
In all cases, we trained to convergence with TCA training disabled,
that is with the equivalent of a simple AF, then again with
TCA training enabled. We report mean square error (MSE)
on training and test data in Table \ref{tab1a}.
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Algorithm & TCA & MSE(train) & MSE(test)\\
\hline
AEC & No & .02024 & .02273\\
\hline
AEC & Yes & .01884 & .02403\\
\hline
VAE & No & .02220 & .02509\\
\hline
VAE & Yes & .01835 & .02179\\
\hline
D-PBN & No & .01917 & .01955\\
\hline
D-PBN & Yes & .01790 & {\bf .01790}\\
\hline
\end{tabular}
\end{center}
\caption{Mean square reconstruction error for various auto-encoders.}
\label{tab1a}
\end{table}
We may make a number of conclusions from the table. First, using TCA
improves training performance in all cases, and test performance for the
VAE and D-PBN (compare the ``Yes'' rows to the ``No'' rows). Second,
the D-PBN not only has the best performance, but it also generalizes
much better than conventional auto-encoders,
a feature of the D-PBN that we have
reported previously \cite{BagPBNEUSIPCO2019}.
In this case, there was almost no measurable difference between
training and test data.
As we explained, both the VAE and D-PBN do not use the final TCA,
so the performance difference hinges only on the TCA at the output of the
first layer. Despite this, a significant improvement is seen.
\subsection{TCA vs Added Network Layers}
The performance improvements of TCA in a standard feed-forward or plain
auto-encoder can be attributed to the increased parameter count over standard
activation functions, but a TCA achieves this with far fewer parameters than adding layers.
Furthermore, in RBMs and DBNs, using TCAs creates novel generative models
with stochastic units based on finite mixture distributions, something that
cannot be achieved by adding network layers. Using TCAs, it is seen that
RBMs and DBNs have significantly better performance.
\section{Conclusions}
In this paper, we have introduced trained compound activations
(TCAs). We justified their use based on PDF estimation
and removal of modalities. We have derived novel restricted Boltzmann machines
(RBMs) based on TCAs, and have demonstrated convincing
improvements for TCAs in experiments using stacked RBMs, deep belief networks (DBNs),
auto-encoders, and deterministic projected belief networks (D-PBNs).
All experiments were implemented using PBN Toolkit \cite{PBNTk}.
All data, software, and instructions to repeat the results in this paper are archived at \cite{PBNTk}.
\bibliographystyle{ieeetr}
\section{Introduction}\label{sec1}
Reaction-diffusion systems satisfying a detailed or complex balance condition
provide interesting examples of evolution equations where the qualitative
behavior of the solutions can be studied using entropy methods. Such systems
typically describe reversible chemical reactions of the form
\begin{equation}\label{chemreact}
\alpha_1 \mathcal{A}_1 + \dots + \alpha_n \mathcal{A}_n \,\xrightleftharpoons[~k'~]{k}\,
\beta_1 \mathcal{A}_1 + \dots + \beta_n \mathcal{A}_n\,,
\end{equation}
where $\mathcal{A}_1, \dots, \mathcal{A}_n$ denote the reactant and product species, $k, k' > 0$
are the reaction rates, and the nonnegative integers $\alpha_i, \beta_i$
($i = 1,\dots,n$) are the stoichiometric coefficients. According to the
law of mass action, the concentration $c_i(x,t)$ of the species $\mathcal{A}_i$
satisfies the reaction-diffusion equation
\begin{equation}\label{RDgen}
\partial_t c_i \,=\, d_i \Delta c_i + (\beta_i - \alpha_i)
\biggl(k\prod_{j=1}^n c_j^{\alpha_j} - k' \prod_{j=1}^n c_j^{\beta_j}
\biggr)\,, \qquad i = 1,\dots,n\,,
\end{equation}
where $\Delta$ is the Laplace operator acting on the space variable $x$, and
$d_i > 0$ denotes the diffusion coefficient of species $\mathcal{A}_i$. We refer the
reader to \cite{HJ,SRJ,DFT,Mi2} for a more detailed mathematical modeling of
chemical reactions, including the realistic situation where several reactions
occur at the same time. For general kinetic systems, there is a notion of {\em
detailed balance}, which asserts that all reactions are reversible and
individually in balance at each equilibrium state, and a weaker notion of {\em
complex balance}, which only requires that each reactant or product complex is
globally at equilibrium if all reactions are taken into account. In the present
paper, we focus on a particular example of the single-reaction system
\eqref{RDgen}, for which the detailed balance condition is automatically
satisfied.
In recent years, many authors investigated the long-time behavior of solutions
to reaction-diffusion systems with complex or detailed balance, assuming that
the reaction takes place in a bounded domain $\Omega \subset \mathbb{R}^N$ and using an
entropy method that we briefly explain in the case of system \eqref{RDgen} with
$k = k'$. If $\mathbf{c}(t) = (c_1(t),\dots,c_n(t))$ is a solution of \eqref{RDgen} in
$\Omega$ satisfying no-flux boundary conditions on $\partial\Omega$, we have the
entropy dissipation law $\frac{{\rm d}}{{\rm d} t}\Phi(\mathbf{c}(t)) = -D(\mathbf{c}(t))$, where $\Phi$
is the entropy function defined by
\begin{equation}\label{Phidef}
\Phi(\mathbf{c}) \,=\, \sum_{i=1}^n \int_\Omega \phi\bigl(c_i(x)\bigr)\,{\rm d} x\,, \qquad
\phi(z) = z\log(z)-z+1\,,
\end{equation}
and $D$ is the entropy dissipation
\begin{equation}\label{Ddef}
D(\mathbf{c}) \,=\, \sum_{i=1}^n d_i \int_\Omega \frac{|\nabla c_i(x)|^2}{c_i(x)}
\,{\rm d} x + k \int_\Omega \log\biggl(\frac{B(x)}{A(x)}\biggr)\Bigl(B(x) - A(x)
\Bigr)\,{\rm d} x\,,
\end{equation}
where $A(x) = \prod c_j(x)^{\alpha_j}$, $B(x) = \prod c_j(x)^{\beta_j}$. It is
clear from \eqref{Ddef} that the entropy dissipation $D(\mathbf{c})$ is nonnegative
and vanishes if and only if the concentrations $c_i$ are spatially homogeneous
($\nabla c_i = 0$) and the system is at chemical equilibrium ($A = B$). The
entropy is therefore a Lyapunov function for \eqref{RDgen}, and using LaSalle's
invariance principle one deduces that all bounded solutions converge to
homogeneous chemical equilibria as $t \to +\infty$ \cite{Gr1,Ro}. In addition,
under appropriate assumptions, the entropy dissipation $D(\mathbf{c})$ can be bounded
from below by a multiple of the entropy $\Phi(\mathbf{c})$, or more precisely of the
{\em relative entropy} $\Phi(\mathbf{c}\,|\,\mathbf{c}_*)$ with respect to some equilibrium
$\mathbf{c}_*$. Such a lower bound can be established using a compactness argument
\cite{GGH,Gr2}, or invoking functional inequalities such as the logarithmic
Sobolev inequality \cite{DF1,DF2,DFT,FT1,FT2,Mi2,PSZ}. This leads to a first order
differential inequality for the relative entropy, which implies exponential
convergence in time to equilibria. In its constructive form, this
entropy-dissipation approach even provides explicit estimates of the convergence
rate and of the time needed to reach a neighborhood of the final equilibrium
\cite{DF1,DF2}. It is also worth mentioning that the reaction-diffusion system
\eqref{RDgen} is actually the {\em gradient flow} of the entropy function
\eqref{Phidef} with respect to an appropriate metric based on the Wasserstein
distance for the diffusion part of the system \cite{LM,Mi1,MHM}. Finally, we
observe that Lyapunov functions such as the entropy \eqref{Phidef} were also
useful to prove global existence of solutions to reaction-diffusion systems, see
\cite{CV,CGV,FMT,Fi,GV,Pi,PSY,So}.
Much less is known on the dynamics of the reaction-diffusion system
\eqref{RDgen} in an unbounded domain such as $\Omega = \mathbb{R}^N$. For bounded
solutions, the entropy \eqref{Phidef} is typically infinite, and it is known
that \eqref{RDgen} is no longer a gradient system. Solutions such as traveling
waves, which exist in many examples, do not converge to equilibria as
$t \to +\infty$, at least not in the topology of uniform convergence on
$\Omega$. In fact, the best we can hope for in general is {\em
quasiconvergence}, namely uniform convergence on compact subsets of $\Omega$
to the family of spatially homogeneous equilibria. That property is not
automatic at all, and has been established so far only for relatively simple
scalar equations where the maximum principle is applicable
\cite{DP,MP,PP1,PP2,P1,P2,P3}. On the other hand, it is important to mention
that entropy is still {\em locally} dissipated under the evolution defined by
\eqref{RDgen}, in the sense that the entropy density $e(x,t)$, the entropy flux
$\mathbf{f}(x,t)$ and the entropy dissipation $d(x,t)$ satisfy the local entropy
balance equation $\partial_t e = \div \mathbf{f} - d$. We have the explicit expressions
\begin{equation}\label{edfgen}
\begin{split}
e(x,t) \,&=\, \sum_{i=1}^n \phi\bigl(c_i(x,t)\bigr)\,, \qquad
\mathbf{f}(x,t) \,=\, \sum_{i=1}^n d_i \log(c_i(x,t))\nabla c_i(x,t)\,, \\
d(x,t) \,&=\, \sum_{i=1}^n d_i \frac{|\nabla c_i(x,t)|^2}{c_i(x,t)}
+ k \log\biggl(\frac{B(x,t)}{A(x,t)}\biggr)\Bigl(B(x,t) - A(x,t)
\Bigr)\,,
\end{split}
\end{equation}
from which we deduce the pointwise estimate $|\mathbf{f}|^2 \le Ced \log(2+e)$ for
some constant $C > 0$. This precisely means that the reaction-diffusion system
\eqref{RDgen} is an {\em extended dissipative system} in the sense of our
previous work \cite{GS1}. If $N \le 2$, the results of \cite{GS1} show that all
bounded solutions of \eqref{RDgen} in $\mathbb{R}^N$ converge uniformly on compact
subsets to the family of spatially homogeneous equilibria for ``almost all''
times, i.e. for all times outside a subset of $\mathbb{R}_+$ of zero density in the
limit where $t \to +\infty$. In particular, the $\omega$-limit set of any
bounded solution, with respect to the topology of uniform convergence on compact
sets, always contains an equilibrium. It should be mentioned, however, that
extended dissipative systems in the sense of \cite{GS1} may have
non-quasiconvergent solutions, even in one space dimension. A typical phenomenon
that prevents quasiconvergence is the coarsening dynamics that is observed, for
instance, in the one-dimensional Allen-Cahn equation \cite{ER,P1}.
In the present paper, we consider a very simple particular case of the
reaction-diffusion system \eqref{RDgen}, for which we can prove that all
positive solutions converge uniformly on compact sets to the family of spatially
homogeneous equilibria. In that example we only have two species $\mathcal{A}$, $\mathcal{B}$
which participate to the simplistic reaction
\begin{equation}\label{chem2}
\mathcal{A} \,\xrightleftharpoons[~k~]{k}\, 2 \mathcal{B}\,.
\end{equation}
Denoting by $u,v$ the concentrations of $\mathcal{A}, \mathcal{B}$, respectively,
we obtain the system
\begin{equation}\label{RD}
\begin{split}
u_t(x,t) \,&=\, a u_{xx}(x,t) + k\bigl(v(x,t)^2-u(x,t)\bigr)\,, \\
v_t(x,t) \,&=\, b v_{xx}(x,t) + 2k\bigl(u(x,t)-v(x,t)^2\bigr)\,,
\end{split}
\end{equation}
which is considered on the whole real line $\Omega = \mathbb{R}$. The parameters are the
diffusion coefficients $a,b > 0$ and the reaction rate $k > 0$, but scaling
arguments reveal that the ratio $a/b$ is the only relevant quantity. It is not
difficult to verify that, given bounded and nonnegative initial data $u_0, v_0$,
the system \eqref{RD} has a unique global solution that remains bounded and
nonnegative for all positive times, see Proposition~\ref{prop:exist} below for a
precise statement. Our goal is to investigate the long-time behavior of those
solutions, using the local form of the entropy dissipation and some additional
properties of the system.
As a warm-up we consider the case of equal diffusivities $a = b$, which is
considerably simpler because the function $w = 2u + v$ then satisfies the
one-dimensional heat equation $w_t = a w_{xx}$. Using that observation, it is
easy to prove the following result\:
\begin{prop}\label{main1}
If $a = b$ any bounded nonnegative solution of \eqref{RD} satisfies,
for all $t > 0$,
\begin{equation}\label{mainest1}
t\,\|u_x(t)\|_{L^\infty}^2 + t\,\|v_x(t)\|_{L^\infty}^2 + (1+t)
\|u(t) - v(t)^2\|_{L^\infty} \,\le\, C\,,
\end{equation}
where the constant only depends on the parameters $a,k$ and on
$\|u_0\|_{L^\infty},\|v_0\|_{L^\infty}$.
\end{prop}
Proposition~\ref{main1} implies that all nonnegative solutions converge,
uniformly on compact intervals $I \subset \mathbb{R}$, to the manifold of spatially
homogeneous equilibria defined by
\begin{equation}\label{cEdef}
\mathcal{E} \,=\, \Bigl\{(\bar u,\bar v) \in \mathbb{R}_+^2\,;\, \bar u =
\bar v^2\Bigr\}\,,
\end{equation}
see Corollary~\ref{cor:ulconv} below for a precise statement. In other words,
the $\omega$-limit set of any solution, with respect to the topology of uniform
convergence on compact sets, is entirely contained in $\mathcal{E}$. The proof shows
that the decay rates given by \eqref{mainest1} cannot be improved in general.
Moreover, it is clear that the $\omega$-limit set is not always reduced to a
single equilibrium, because examples of nonconvergent solutions can be
constructed even for the linear heat equation on $\mathbb{R}$, see \cite{CE}.
The proof of Proposition~\ref{main1} heavily relies on the simple evolution
equation satisfied by the auxiliary function $w = 2u+v$, which is specific to
the case of equal diffusivities. The analysis becomes much more challenging when
$a \neq b$, because system~\eqref{RD} does not reduce to a scalar equation. Our
result in the general case is slightly weaker, and can be stated as follows.
\begin{prop}\label{main2}
Any bounded nonnegative solution of \eqref{RD} satisfies, for all $t > 0$,
\begin{equation}\label{mainest2}
\|u_x(t)\|_{L^\infty} + \|v_x(t)\|_{L^\infty} \,\le\,
\frac{C}{t^{1/2}}\,\log(2+t)\,, \qquad \|u(t) - v(t)^2\|_{L^\infty}
\,\le\, \frac{C}{(1+t)^{1/2}}\,,
\end{equation}
where the constant only depends on the parameters $a,b,k$
and on $\|u_0\|_{L^\infty},\|v_0\|_{L^\infty}$.
\end{prop}
The decay rates of the derivatives $u_x, v_x$ in \eqref{mainest2} agree with
\eqref{mainest1} up to a logarithmic correction, but the estimate of the
difference $u - v^2$, which measures the distance to the local chemical
equilibrium, is substantially weaker in the general case. We conjecture that the
discrepancy between the conclusions of Propositions~\ref{main1} and \ref{main2}
is of technical nature, and that the optimal estimates \eqref{mainest1} remain
valid when $a \neq b$. At this point, it is worth mentioning that the bounds
\eqref{mainest2} are actually derived from a {\em uniformly local} estimate
which fully agrees with the decay rates given in \eqref{mainest1}. Indeed, we
shall prove in Section~\ref{sec4} that any bounded nonnegative solution to
\eqref{RD} satisfies, for any $t > 0$,
\begin{equation}\label{ulest}
\sup_{x_0 \in \mathbb{R}}\,\int_{x_0-\sqrt{t}}^{x_0+\sqrt{t}} \Bigl(|u_x(x,t)|^2 + |v_x(x,t)|^2 +
\bigl|u(x,t) - v(x,t)^2\bigr|\Bigr) \,{\rm d} x \,\le\, C t^{-1/2}\,,
\end{equation}
where the constant depends only on the parameters $a,b,k$ and on the initial
data. It is obvious that \eqref{mainest1} implies \eqref{ulest}, but the
converse is not quite true and the best we could obtain so far is the weaker
estimate \eqref{mainest2}.
As before, we can conclude that all solutions converge uniformly on compact sets
to the manifold $\mathcal{E}$ as $t \to +\infty$.
\begin{cor}\label{cor:ulconv}
Under the assumptions of Proposition~\ref{main2}, the solution of \eqref{RD}
satisfies, for any time $t > 0$ and any bounded interval $I \subset \mathbb{R}$,
\begin{equation}\label{ulconv}
\inf\Bigl\{\|u(t) - \bar u\|_{L^\infty(I)} + \|v(t) - \bar v\|_{L^\infty(I)}
\,;\, (\bar u,\bar v) \in \mathcal{E}\Bigr\} \,\le\, \frac{C |I|}{|I| + t^{1/2}}
\,\log(2+t)\,,
\end{equation}
where the constant only depends on the parameters $a,b,k$ and on
$\|u_0\|_{L^\infty},\|v_0\|_{L^\infty}$.
\end{cor}
\begin{rem}\label{rem:positive}
It is important to keep in mind that the conclusions of Propositions~\ref{main1}
and \ref{main2} are restricted to nonnegative solutions. As a matter of
fact, the dynamics of system~\eqref{RD} is completely different if we
consider solutions for which the second component $v$ may take negative
values. For instance, if $a = b = k = 1$, we can look for solutions of
the particular form
\[
u(x,t) \,=\, 1 - \frac{3 z(x,t)}{4}\,, \qquad
v(x,t) \,=\, - 1 + \frac{3 z(x,t)}{2}\,,
\]
in which case \eqref{RD} reduces to the Fisher-KPP equation $z_t = z_{xx}
+ 3z(1-z)$. That equation has a pulse-like stationary solution given
by the explicit formula
\[
\bar z(x) \,=\, 1 - \frac{3}{2}\,\frac{1}{\cosh^2(\sqrt{3}x/2)}\,, \qquad
x \in \mathbb{R}\,,
\]
which provides an example of a steady state $(\bar u,\bar v)$ for \eqref{RD}
that is not spatially homogeneous nor at chemical equilibrium, in the sense
that $\bar u \neq \bar v^2$. Moreover, for any speed $c > 0$, the Fisher-KPP
equation has traveling wave solutions of the form $z(x,t) = \varphi(x-ct)$
where the wave profile $\varphi$ satisfies $\varphi(-\infty) = 1$ and
$\varphi(+\infty) = 0$. For the corresponding solutions of \eqref{RD}, the
quantities $\|u_x(t)\|_{L^\infty}$, $\|v_x(t)\|_{L^\infty}$, and $\|u(t) -
v(t)^2\|_{L^\infty}$ are bounded away from zero for all times, in sharp
contrast with \eqref{mainest1}.
\end{rem}
\begin{rem}\label{rem:bounded}
Our results also apply to the situation where system~\eqref{RD}
is considered on a bounded interval $I = [0,L]$, with homogeneous Neumann
boundary conditions, because the solutions $u,v$ can then be extended to
even and $2L$-periodic functions on the whole real line. In that case
the total mass $M = \int_0^L \bigl(2u(x,t) + v(x,t)\bigr)\,{\rm d} x$ is a
conserved quantity, and the solution necessarily converges to the unique
equilibrium $(u_\infty,v_\infty) \in \mathcal{E}$ satisfying $2u_\infty + v_\infty = M/L$.
As in \eqref{ulconv} we have the bound
\[
\|u(t) - u_\infty\|_{L^\infty(I)} + \|v(t) - v_\infty\|_{L^\infty(I)}
\,\le\, \frac{C L}{L + t^{1/2}}\,\log(2+t)\,, \qquad t \ge 0\,,
\]
which is far from optimal because, in that particular case, it is known that
convergence occurs at exponential rate, see \cite{DF1} for accurate estimates
with explicitly computable constants. However, the conclusion of
Proposition~\ref{main2} remains interesting in that situation. In particular,
the second estimate in \eqref{mainest2} shows that the time needed for a
solution to enter a neighborhood of the manifold $\mathcal{E}$ depends on the $L^\infty$
norm of the initial data, but {\em not} on the length $L$ of the interval. In
contrast, all estimates obtained in \cite{DF1} and related works necessarily
involve the size of the spatial domain, because they use as a Lyapunov function
the total entropy which is an extensive quantity in the thermodynamical sense.
\end{rem}
The proof of our main result, Proposition~\ref{main2}, is based on localized
energy (or entropy) estimates in the spirit of our previous works
\cite{GS0,GS1,GS2}. It turns out that the Boltzmann-type entropy density
introduced in \eqref{edfgen} is not the only possibility. Quite on the contrary,
there exist a large family of nonnegative quantities that are locally dissipated
under the evolution defined by the two-component system \eqref{RD}, see
Section~\ref{sec3} below for a more precise discussion. For simplicity, we chose
to use the energy density $e(x,t)$, the energy flux $f(x,t)$, and the energy
dissipation $d(x,t)$ given by the following expressions:
\begin{equation}\label{edf2x2}
e \,=\, \frac{1}{2}\,u^2 + \frac{1}{6}\,v^3\,, \qquad
f \,=\, a u u_x + \frac{b}{2}\,v^2 v_x\,, \qquad
d \,=\, a u_x^2 + b v v_x^2 + k(u-v^2)^2\,.
\end{equation}
If $(u,v)$ is any nonnegative solution of \eqref{RD}, one readily verifies that
the local energy balance $\partial_t e = \partial_x f - d$ is satisfied, as well
as the estimate $f^2 \le C e d$ where $C = \max(2a,3b/2)$. Altogether, this
means that \eqref{RD} is an ``extended dissipative system'' in the sense of
\cite{GS1}. As was already mentioned, the results of \cite{GS1} provide useful
information on the gradient-like dynamics of \eqref{RD}, but this is far from
sufficient to prove Proposition~\ref{main2}. For instance, extended dissipative
systems may have traveling wave solutions which, obviously, do not satisfy
uniform decay estimates of the form \eqref{mainest2}.
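For the reader's convenience, we record the computation behind \eqref{edf2x2}: using \eqref{RD},
\begin{align*}
\partial_t e \,&=\, u\,u_t + \tfrac12\,v^2\,v_t \,=\, a\,u\,u_{xx}
+ \tfrac{b}{2}\,v^2\,v_{xx} - k\,(u-v^2)^2\,,\\
\partial_x f \,&=\, a\,u_x^2 + a\,u\,u_{xx} + b\,v\,v_x^2
+ \tfrac{b}{2}\,v^2\,v_{xx}\,,
\end{align*}
so that $\partial_t e - \partial_x f = -d$, while the Cauchy-Schwarz inequality
gives
\[
f^2 \,\le\, \Bigl(a\,u^2 + \tfrac{b}{4}\,v^3\Bigr)\Bigl(a\,u_x^2
+ b\,v\,v_x^2\Bigr) \,\le\, \max\Bigl(2a,\tfrac{3b}{2}\Bigr)\,e\,d\,.
\]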
To go beyond the general results established in \cite{GS1} we follow the same
approach as in our previous work \cite{GS2}, where energy methods were developed
to study the long-time behavior of solutions for the Navier-Stokes equations in
the infinite cylinder $\mathbb{R} \times \mathbb{T}$. The main idea is to show that the energy
dissipation in \eqref{edf2x2} is itself locally dissipated under the evolution
defined by \eqref{RD}. More precisely, we look for another triple $(\tilde e,
\tilde f, \tilde d)$ satisfying the local balance $\partial_t \tilde e =
\partial_x \tilde f - \tilde d$, and such that the flux $|\tilde f|$ can be
controlled in terms of $\tilde e$, $\tilde d$. We also require that
$\tilde d \ge 0$ and that $\tilde e \approx d$, where $d$ is as in
\eqref{edf2x2}. We can then use localized energy estimates as in \cite{GS1,GS2}
to prove that, on any compact interval $[x_0,x_0+L] \subset \mathbb{R}$, the dissipation
$d(x,t)$ becomes uniformly small for all times $t \gg L^2$. We even get an
explicit upper bound depending only on $L$ and on the initial data, so that
taking the supremum over $x_0 \in \mathbb{R}$ we arrive at estimate \eqref{ulest}, which
is the crucial step in the proof of Proposition~\ref{main2}. In contrast, we
emphasize that the bounds one can obtain using the dissipative structure
\eqref{edf2x2} alone only show that the supremum of $d(x,t)$ over $[x_0,x_0+L]$
becomes small for ``almost all'' (sufficiently large) times, thus leaving space
for non-gradient transient behaviors such as traveling wave propagation or
coarsening dynamics.
The existence of a second dissipative structure on top of \eqref{edf2x2} is
obviously an important property of system~\eqref{RD}, which we would like to
understand in greater depth. It should be related to some convexity property of
the energy density with respect to the metric that turns \eqref{RD} into a
gradient system, see \cite{LM} for a more detailed discussion of gradient
structures and convexity properties of reaction-diffusion systems. It would be
interesting to determine if that property still holds for other systems of the
form \eqref{RDgen}, such as those considered in Section~\ref{sec6} below, but so
far we have no general result in that direction. We mention that the idea of
studying the variation of the entropy dissipation, or equivalently the second
variation of the entropy, is quite common in kinetic theory, see \cite{DV},
as well as in fluid mechanics, see \cite{AB} for a recent review on the
subject.
The rest of this paper is organized as follows. In Section~\ref{sec2} we briefly
discuss the Cauchy problem for the reaction-diffusion system \eqref{RD}, and we
prove Proposition~\ref{main1} and Corollary~\ref{cor:ulconv}. After these
preliminaries, we investigate in Section~\ref{sec3} various dissipative
structures of the form \eqref{edf2x2}, which play a key role in our analysis.
The proof of Proposition~\ref{main2} is completed in Section~\ref{sec4}, where
we use localized energy estimates inspired from our previous works \cite{GS1,GS2}.
Section~\ref{sec5} is devoted to the stability analysis of spatially homogeneous
equilibria $(\bar u,\bar v) \in \mathcal{E}$, which provides useful insight on the
decay rates of the solutions. In the final Section~\ref{sec6}, we briefly
discuss the potential applicability of our method to more general
reaction-diffusion systems of the form \eqref{RDgen}, and we mention some
open problems.
\medskip\noindent{\bf Acknowledgments.}
The authors thank Alexander Mielke for enlightening discussions at the early
stage of this project. Th.G. is supported by the grant ISDEEC ANR-16-CE40-0013
of the French Ministry of Higher Education, Research and Innovation.
S.S. is supported by the Croatian Science Foundation under grant
IP-2018-01-7491.
\section{Preliminary results}\label{sec2}
We first prove that system~\eqref{RD} is globally well-posed for all initial data
$u_0$, $v_0$ that are bounded and nonnegative. This a rather classical
statement, which can be deduced from more general results on reaction-diffusion
systems with quadratic nonlinearities, see e.g. \cite{GV,Pi,So}. For the
reader's convenience, we give here a simple and self-contained proof.
Without loss of generality, we assume henceforth that $k = 1$. We denote by
$X = C_\mathrm{bu}(\mathbb{R})$ the Banach space of all bounded and uniformly continuous
functions $f : \mathbb{R} \to \mathbb{R}$, equipped with the uniform norm $\|f\|_{L^\infty}$.
Since we are interested in nonnegative solutions of \eqref{RD}, we also
define the positive cone $X_+ = \{f \in X\,;\, f(x) \ge 0~\forall x \in \mathbb{R}\}$.
\begin{prop}\label{prop:exist}
For all initial data $(u_0,v_0) \in X_+^2$, system~\eqref{RD} has a unique global
(mild) solution $(u,v) \in C^0([0,+\infty),X^2)$ such that $(u(0),v(0)) = (u_0,v_0)$.
Moreover $(u(t),v(t)) \in X_+^2$ for all $t \ge 0$, and the following estimates hold\:
\begin{equation}\label{uvbound}
\begin{split}
\max\bigl(\|u(t)\|_{L^\infty}\,,\,\|v(t)\|_{L^\infty}^2\bigr)
\,&\le\, \max\bigl(\|u_0\|_{L^\infty}\,,\,\|v_0\|_{L^\infty}^2\bigr)\,, \\
2\|u(t)\|_{L^\infty} + \|v(t)\|_{L^\infty} \,&\le\,
2\|u_0\|_{L^\infty} + \|v_0\|_{L^\infty}\,.
\end{split}
\end{equation}
\end{prop}
\begin{proof}
Local existence of solutions in $X^2$ can be established by applying a standard
fixed point argument to the integral equation
\[
\begin{pmatrix} u(t) \\ v(t)\end{pmatrix} \,=\,
\begin{pmatrix} S(at) & 0 \\ 0 & S(bt)\end{pmatrix}
\begin{pmatrix} u_0 \\ v_0\end{pmatrix} +
\int_0^t \begin{pmatrix} S(a(t{-}s)) & 0 \\ 0 & S(b(t{-}s))\end{pmatrix}
\begin{pmatrix} v(s)^2 - u(s) \\ 2\bigl(u(s) - v(s)^2\bigr)\end{pmatrix}\,{\rm d} s\,,
\]
where $S(t) = \exp(t\partial_x^2)$ is the one-dimensional heat semigroup, see
e.g. \cite[Chapter~3]{He}. Since the nonlinearity is a polynomial of degree two,
the local existence time $T > 0$ given by the fixed point argument is no smaller
than $T_0\bigl(1+\|u_0\|_{L^\infty} +\|v_0\|_{L^\infty}\bigr)^{-1}$ for some
constant $T_0 > 0$. This shows that any local solution can be extended to a
global one, unless the quantity $\|u(t)\|_{L^\infty} + \|v(t)\|_{L^\infty}$
blows up in finite time. It remains to show that nonnegative solutions
satisfy the estimates \eqref{uvbound}, so that blow-up cannot occur.
Assume that $(u_0,v_0) \in X_+^2$ and let $(u,v) \in C^0([0,T_*),X^2)$ be
the maximal solution of \eqref{RD} with initial data $(u_0,v_0)$. This
solution is smooth for positive times, and the first component satisfies
$u_t = a u_{xx} + v^2 - u \ge a u_{xx} - u$ for $t \in (0,T_*)$. Applying
the parabolic maximum principle \cite{PW}, we deduce that $u(t) \in X_+$
for all $t \in (0,T_*)$. The second component in turn satisfies $v_t =
b v_{xx} + 2(u - v^2) \ge b v_{xx} - 2v^2$, and another application of
the maximum principle shows that $v(t) \in X_+$ too. So the positive
cone $X_+^2$ is invariant under the evolution defined by \eqref{RD}.
Another important observation is that \eqref{RD} is a {\em cooperative}
reaction-diffusion system in $X_+^2$, in the sense that the reaction terms
in \eqref{RD} satisfy
\[
\frac{{\rm d}}{{\rm d} v} \bigl(v^2 - u\bigr) \,=\, 2v \,\ge\, 0\,, \qquad
\frac{{\rm d}}{{\rm d} u} \,2\bigl(u - v^2\bigr) \,=\, 2 \,\ge\, 0\,.
\]
As is well known, such a system obeys a (component-wise) comparison principle
\cite{VVV}. In our case, this means that, if $(u,v)$ and $(\bar u,\bar v)$ are
two solutions of \eqref{RD} in $X_+^2$, and if the initial data satisfy
$u_0 \le \bar u_0$ and $v_0 \le \bar v_0$, then $u(t) \le \bar u(t)$ and
$v(t) \le \bar v(t)$ as long as the solutions are defined. We use that
principle to compare our nonnegative solution $(u,v)$ to the solution
$(\bar u,\bar v)$ of the ODE system
\begin{equation}\label{ODE}
\frac{{\rm d}}{{\rm d} t}\,\bar u(t) \,=\, \bar v(t)^2 - \bar u(t)\,, \qquad
\frac{{\rm d}}{{\rm d} t}\,\bar v(t) \,=\, 2 \bigl(\bar u(t) - \bar v(t)^2\bigr)\,,
\end{equation}
with initial data $\bar u_0 = \|u_0\|_{L^\infty}$, $\bar v_0 = \|v_0\|_{L^\infty}$.
The dynamics of \eqref{ODE} in the positive quadrant is very simple\:
the solution stays on the line $L_0 = \bigl\{(\bar u,\bar v) \in \mathbb{R}_+^2 \,;\,
2\bar u + \bar v = 2\bar u_0 + \bar v_0\bigr\}$ for all times, and converges
to the unique equilibrium $(\bar u_*,\bar v_*) \in L_0 \cap \mathcal{E}$, where
$\mathcal{E}$ is defined in \eqref{cEdef}; see Figure~1. In particular, we have
$\max(\bar u(t),\bar v(t)^2) \le \max(\bar u_0,\bar v_0^2)$ for all
$t \ge 0$. Applying the comparison principle, we conclude that our
solution $(u,v) \in C^0([0,T_*),X^2)$ satisfies estimates \eqref{uvbound}
for all $t \in [0,T_*)$, which implies that $T_* = + \infty$.
\end{proof}
\begin{rem}\label{rem:ODE}
The equilibrium $(\bar u_*,\bar v_*)$ which attracts the solution of
\eqref{ODE} is given by
\begin{equation}\label{uvstar}
\bar u_* \,=\, \bar v_*^2\,, \qquad \hbox{and}\qquad
\bar v_* \,=\, \frac{1}{4}\Bigl(-1 + \sqrt{1 + 16 \bar u_0 + 8 \bar v_0}\Bigr)\,.
\end{equation}
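Indeed, $(\bar u_*,\bar v_*)$ is the intersection point of the line $L_0$ with
the parabola $\mathcal{E}$, so that $\bar v_*$ is the unique nonnegative root of the
quadratic equation
\[
2\bar v_*^2 + \bar v_* \,=\, 2\bar u_0 + \bar v_0\,,
\]
which gives the expression \eqref{uvstar}.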
As is clear from Figure~1, we have the optimal bounds
\begin{equation}\label{uvbdd}
\min\bigl(\bar u_0, \bar u_*\bigr) \,\le\, \bar u(t) \,\le\,
\max\bigl(\bar u_0, \bar u_*\bigr)\,, \qquad
\min\bigl(\bar v_0, \bar v_*\bigr) \,\le\, \bar v(t) \,\le\,
\max\bigl(\bar v_0, \bar v_*\bigr)\,,
\end{equation}
which can be used to improve somewhat \eqref{uvbound}.
\end{rem}
\begin{rem}\label{rem:lowerbd}
In a similar way, we can use the comparison principle to show that the
solution of \eqref{RD} given by Proposition~\ref{prop:exist} satisfies
$u(x,t) \ge \underline{u}(t)$ and $v(x,t) \ge \underline{v}(t)$, where
$(\underline{u}(t),\underline{v}(t))$ is the solution of the ODE system
\eqref{ODE} with initial data
\[
\underline{u}_0 \,=\, \inf_{x \in \mathbb{R}} u_0(x)\,, \qquad
\underline{v}_0 \,=\, \inf_{x \in \mathbb{R}} v_0(x)\,.
\]
Two interesting conclusions can be drawn using such lower bounds. First, if
$\underline{v}_0 \ge \delta > 0$ for some $\delta > 0$, then $v(x,t) \ge
2\delta\bigl(1+\sqrt{1+8\delta}\bigr)^{-1}$ for all $x \in \mathbb{R}$ and all $t \ge 0$.
This observation will be used in the proof of Proposition~\ref{main2}. Second,
any homogeneous equilibrium $(\bar u,\bar v) \in \mathcal{E}$ is stable (in the
sense of Lyapunov) in the uniform topology: for any $\epsilon > 0$,
there exists $\delta > 0$ such that, if $\|u_0 - u_*\|_{L^\infty} +
\|v_0 - v_*\|_{L^\infty} \le \delta$, then $\|u(t) - u_*\|_{L^\infty} +
\|v(t) - v_*\|_{L^\infty} \le \epsilon$ for all $t \ge 0$. An explicit
expression for $\delta$ in terms of $\epsilon$ and $u_*,v_*$ can be
deduced from \eqref{uvstar}, \eqref{uvbdd}.
\end{rem}
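For the reader's convenience, we justify the first claim above\: if
$\underline{u}_0 \ge 0$ and $\underline{v}_0 \ge \delta$, then \eqref{uvstar},
\eqref{uvbdd} imply that, for all $t \ge 0$,
\[
\underline{v}(t) \,\ge\, \min\bigl(\underline{v}_0\,,\,\underline{v}_*\bigr)
\,\ge\, \frac{1}{4}\Bigl(-1 + \sqrt{1+8\delta}\Bigr) \,=\,
\frac{2\delta}{1+\sqrt{1+8\delta}}\,,
\]
because $\underline{v}_*$ is increasing with respect to $\underline{u}_0$,
$\underline{v}_0$, and $\underline{v}_0 \ge \delta \ge 2\delta\bigl(1+
\sqrt{1+8\delta}\bigr)^{-1}$.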
\begin{rem}\label{rem:Linfty}
In Proposition~\ref{prop:exist} we assume for simplicity that the initial
data $u_0, v_0$ are bounded and uniformly continuous, but system~\eqref{RD}
remains globally well posed for all nonnegative data $(u_0,v_0) \in L^\infty(\mathbb{R})^2$.
The only difference in the proof is that, when $t \to 0$, the first term in the
integral equation does not converge to $(u_0,v_0)$ in the uniform norm, but only
in the weak-$*$ topology of $L^\infty(\mathbb{R})$.
\end{rem}
\setlength{\unitlength}{0.8cm}
\begin{center}
\begin{picture}(10,8)(-1,-1)
\thicklines
\put(-0.5,0){\vector(1,0){10.0}}
\put(0,-0.5){\vector(0,1){7.0}}
\qbezier(0,0)(0,3)(9,6)
\thinlines
\put(0.5,3){\vector(1,-1){0.5}}
\put(1.0,2.5){\line(1,-1){0.5}}
\put(3.0,0.5){\vector(-1,1){1.5}}
\put(0.5,5){\vector(1,-1){1.6}}
\put(2.1,3.4){\line(1,-1){0.6}}
\put(5.0,0.5){\vector(-1,1){2.3}}
\put(1.5,6){\vector(1,-1){1.8}}
\put(3.3,4.2){\line(1,-1){0.7}}
\put(7.0,0.5){\vector(-1,1){3.0}}
\put(3.5,6){\vector(1,-1){1.2}}
\put(4.7,4.8){\line(1,-1){0.6}}
\put(8.0,1.5){\vector(-1,1){2.7}}
\put(5.5,6){\vector(1,-1){0.6}}
\put(6.1,5.4){\line(1,-1){0.6}}
\put(8.0,3.5){\vector(-1,1){1.3}}
\put(9,0.3){$u$}
\put(0.3,6.1){$v$}
\put(8.6,5.4){$\mathcal{E}$}
\put(2.3,5){\footnotesize{$\bullet$}}
\put(3.57,3.73){\footnotesize{$\bullet$}}
\put(2.3,5.3){\footnotesize{$(u_0,v_0)$}}
\put(5.2,2.5){\footnotesize{$L_0$}}
\put(3.65,-0.2){\line(0,1){0.4}}
\put(-0.2,3.85){\line(1,0){0.4}}
\put(3.79,-0.4){$u_*$}
\put(-0.7,3.8){$v_*$}
\end{picture}
\end{center}
\begin{center}
\begin{minipage}[c]{0.8\textwidth}\footnotesize
{\bf Figure~1\:} A sketch of the dynamics of the ODE system $\dot u = v^2 - u$,
$\dot v = 2(u - v^2)$, which represents the kinetic part of \eqref{RD}. The
solution starting from the initial data $(u_0,v_0)$ stays on the line
$L_0 = \{(u,v)\,;\, 2u + v = 2u_0 + v_0\}$ and converges there to the
unique equilibrium $(u_*,v_*) \in L_0\cap \mathcal{E}$.
\end{minipage}
\end{center}
\medskip
\noindent{\bf Proof of Proposition~\ref{main1}.}
We assume here without loss of generality that $a = b = k = 1$. Given
$(u_0,v_0) \in X_+^2$, let $(u,v) \in C^0([0,+\infty),X^2)$ be the
unique global solution of \eqref{RD} with initial data $(u_0,v_0)$.
As was already mentioned, the quantity $w = 2u+v$ satisfies the linear
heat equation $w_t = w_{xx}$ on $\mathbb{R}$. In particular, we have the estimate
\begin{equation}\label{wder}
\|w_x(t)\| \,\le\, \frac{C\|w_0\|}{t^{1/2}} \,\le\, \frac{CR}{t^{1/2}}\,,
\qquad t > 0\,,
\end{equation}
where $R := 1 + \|u_0\| + \|v_0\|$. Here and in what follows, we denote
$\|\cdot\| = \|\cdot\|_{L^\infty}$, and the generic constant $C$ is always
independent of the initial data $(u_0,v_0)$.
We first estimate the derivatives $u_x(t), v_x(t)$ for $t \le t_0$, where
$t_0 := T_0/R$ is the local existence time appearing in the proof of
Proposition~\ref{prop:exist}. Differentiating the integral equation
and using the second inequality in \eqref{uvbound}, we easily obtain
\begin{equation}\label{uvder}
\|u_x(t)\| + \|v_x(t)\| \,\le\, \frac{CR}{t^{1/2}} + \int_0^t
\frac{CR^2}{(t-s)^{1/2}}\,{\rm d} s \,\le\, \frac{CR}{t^{1/2}}\,,
\qquad 0 < t \le t_0\,.
\end{equation}
In particular, we have $\|u_x(t_0)\| + \|v_x(t_0)\| \le C R^{3/2}$.
We next observe that the quantity $q = v_x$ satisfies the equation $q_t = q_{xx}
- (1+4v)q + w_x$. The corresponding integral equation reads
\[
q(t) \,=\, \Sigma(t,t_0)q(t_0) + \int_{t_0}^t \Sigma(t,s)
w_x(s) \,{\rm d} s\,, \qquad t > t_0\,,
\]
where $\Sigma(t,s)$ is the two-parameter semigroup associated with the
linear nonautonomous equation $q_t = q_{xx} - (1+4v)q$, assuming that
$v(x,t)$ is given. Since $v \ge 0$, the maximum principle implies
the pointwise estimate $\Sigma(t,s) \le e^{-(t-s)} S(t-s)$, where $S(t) =
\exp(t\partial_x^2)$ is the heat kernel. Using \eqref{wder}, \eqref{uvder},
we thus obtain
\begin{equation}\label{qest}
\begin{split}
\|q(t)\| \,&\le\, e^{-(t-t_0)}\|q(t_0)\| + \int_{t_0}^t e^{-(t-s)}\|w_x(s)\|\,{\rm d} s \\
\,&\le\, C R^{3/2}\,e^{-(t-t_0)} + \int_{t_0}^t e^{-(t-s)}\,\frac{CR}{s^{1/2}}\,{\rm d} s
\,\le\, \frac{CR^{3/2}}{t^{1/2}}\,, \qquad t > t_0\,.
\end{split}
\end{equation}
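Here and in what follows, we freely use the elementary estimate
\[
\int_{t_0}^t e^{-(t-s)}\,\frac{{\rm d} s}{s^{1/2}} \,\le\, \frac{C}{t^{1/2}}\,,
\qquad t > t_0 > 0\,,
\]
which is proved by splitting the integration interval at $s = t/2$, using
$e^{-(t-s)} \le e^{-t/2}$ when $s \le t/2$ and $s^{-1/2} \le \sqrt{2}\,t^{-1/2}$
when $s \ge t/2$.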
Note that \eqref{uvder}, \eqref{qest} imply that $\|q(t)\| \le C R^{3/2}t^{-1/2}$
for all $t > 0$.
Similarly, the quantity $p = u_x$ satisfies the equation $p_t = p_{xx}
- p + 2vq$, and we know from \eqref{uvbound} that $\|v(t)\| \le 2R$ for all
$t \ge 0$. It follows that
\begin{equation}\label{pest}
\begin{split}
\|p(t)\| \,&\le\, e^{-(t-t_0)}\|p(t_0)\| + 4R \int_{t_0}^t e^{-(t-s)}\|q(s)\|\,{\rm d} s \\
\,&\le\, C R^{3/2}\,e^{-(t-t_0)} + C \int_{t_0}^t e^{-(t-s)}\,\frac{R^{5/2}}{s^{1/2}}\,{\rm d} s
\,\le\, \frac{CR^{5/2}}{t^{1/2}}\,, \qquad t > t_0\,.
\end{split}
\end{equation}
Altogether we deduce from \eqref{uvder}, \eqref{qest}, \eqref{pest} that
$t \|u_x(t)\|^2 + t \|v_x(t)\|^2 \le C R^5$ for all $t > 0$, which proves
the first inequality in \eqref{mainest1}.
Finally, the quantity $\rho = u - v^2$ satisfies the equation $\rho_t =
\rho_{xx} - (1+4v)\rho + 2q^2$ as well as the a priori estimate $\|\rho(t)\|
\le R^2$ for all $t \ge 0$. Proceeding as above and using \eqref{qest}, we
find
\begin{equation}\label{rhoest}
\begin{split}
\|\rho(t)\| \,&\le\, e^{-(t-t_0)}\|\rho(t_0)\| + 2\int_{t_0}^t e^{-(t-s)}\|q(s)\|^2\,{\rm d} s \\
\,&\le\, R^2\,e^{-(t-t_0)} + C \int_{t_0}^t e^{-(t-s)}\,\frac{R^3}{s}\,{\rm d} s
\,\le\, \frac{CR^3}{t}\,\log(1{+}R)\,, \qquad t > t_0\,.
\end{split}
\end{equation}
Thus $(1+t)\|\rho(t)\| \le C R^3\log(1+R)$ for all $t \ge 0$, which
concludes the proof of \eqref{mainest1}. \mbox{}\hfill$\Box$
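We briefly comment on the logarithmic factor in \eqref{rhoest}. Splitting the
last integral at $s = t/2$ as before and recalling that $t_0 = T_0/R$, we obtain
\[
\int_{t_0}^t e^{-(t-s)}\,\frac{{\rm d} s}{s} \,\le\, e^{-t/2}\,\Bigl(\log\frac{t}{2t_0}
\Bigr)_{\!+} \,+\, \frac{2}{t} \,\le\, \frac{C}{t}\,\log(1{+}R)\,,
\]
and this is the only place in the proof where the factor $\log(1+R)$ appears.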
\begin{rem}\label{rem:higher}
Similarly, differentiating with respect to $x$ the evolution equations
for the quantities $q,p,\rho$ and using an induction argument, one can
show that the solution of \eqref{RD} with $a = b$ satisfies, for each
integer $m \in \mathbb{N}$, an estimate of the form
\begin{equation}\label{higherest}
\|\partial_x^m u(t)\|_{L^\infty} + \|\partial_x^m v(t)\|_{L^\infty} \,\le\,
\frac{C_m}{t^{m/2}}\,, \qquad \|\partial_x^m \rho(t)\|_{L^\infty}
\,\le\, \frac{C_m}{t^{m/2}(1{+}t)}\,, \qquad \forall\,t > 0\,.
\end{equation}
\end{rem}
\bigskip\noindent{\bf Proof of Corollary~\ref{cor:ulconv}.}
If $t \ge |I|^2$, we pick $x_0 \in I$ and define $\bar v = v(x_0,t)$,
$\bar u = \bar v^2$. Then $(\bar u,\bar v) \in \mathcal{E}$ and using the first
inequality in \eqref{mainest2} we find
\[
\|v(\cdot,t) - \bar v\|_{L^\infty(I)} \,=\, \|v(\cdot,t) - v(x_0,t)\|_{L^\infty(I)}
\,\le\, |I|\,\|v_x(\cdot,t)\|_{L^\infty(I)} \,\le\, \frac{C|I|}{t^{1/2}}\,
\log(2+t)\,.
\]
Similarly, using in addition the second inequality in \eqref{mainest2},
we obtain
\[
\|u(\cdot,t) - \bar u\|_{L^\infty(I)} \,\le\, \|u(\cdot,t) - u(x_0,t)\|_{L^\infty(I)}
+ |I|\,|u(x_0,t) - v(x_0,t)^2| \,\le\, \frac{C|I|}{t^{1/2}}\,
\log(2+t)\,.
\]
Combining these bounds and recalling that $t \ge |I|^2$, we arrive at
\eqref{ulconv}. If $t \le |I|^2$, we can take $\bar u = \bar v = 0$
and use the second bound in \eqref{uvbound} to arrive directly at
\eqref{ulconv} (without logarithmic correction in that case).
\mbox{}\hfill$\Box$
\section{An ordered pair of dissipative structures}\label{sec3}
We now relax the assumption that $a = b$ and return to the general
case where $a,b$ are arbitrary positive constants. Assuming without loss
of generality that $k = 1$, we write system \eqref{RD} in the equivalent
form
\begin{equation}\label{RD2}
u_t \,=\, a u_{xx} -\rho\,, \qquad v_t \,=\, b v_{xx} +2\rho\,,
\end{equation}
where the auxiliary quantity $\rho = u - v^2$ measures the distance to the
chemical equilibrium.
As was already mentioned, the proof of Proposition~\ref{main2} relies on local
energy estimates and follows the general approach described in \cite{GS1}.
For the nonnegative solutions of \eqref{RD2} given by Proposition~\ref{prop:exist},
it is convenient to use the energy density $e(x,t)$, the energy flux $f(x,t)$,
and the energy dissipation $d(x,t)$ given by \eqref{edf2x2}, namely
\begin{equation}\label{edf1}
e \,=\, \frac{1}{2}\,u^2 + \frac{1}{6}\,v^3\,, \qquad
f \,=\, a u u_x + \frac{b}{2}\,v^2 v_x\,, \qquad
d \,=\, a u_x^2 + b v v_x^2 + \rho^2\,.
\end{equation}
The local energy balance $\partial_t e = \partial_x f - d$ is easily verified
by a direct calculation. The main properties we shall use are the positivity of
the energy $e$ and the dissipation $d$, as well as the pointwise estimate
$f^2 \le C e d$, where $C > 0$ depends only on $a,b$. In \cite{GS1}, an evolution
equation equipped with a triple $(e,f,d)$ satisfying the above properties is
called an ``extended dissipative system''. According to that terminology, we
shall refer to the triple \eqref{edf1} as an ``EDS structure'' for system
\eqref{RD2}.
The essential step in the proof of Proposition~\ref{main2} is the construction
of a second EDS structure $(\tilde e,\tilde f,\tilde d)$ for \eqref{RD2},
where the new energy density $\tilde e$ is bounded from above by a multiple
of the energy dissipation $d$ in the first EDS structure. It is quite natural
to look for $\tilde e$ as a linear combination of the quantities $u_x^2$,
$v v_x^2$, and $\rho^2$ that appear in the expression of $d$ in \eqref{edf1}.
\begin{lem}\label{lem:edf2}
For all values of the parameters $\alpha,\beta > 0$ the quantities
$\tilde e$, $\tilde f$, $\tilde d$ defined by
\begin{equation}\label{edf2}
\begin{split}
\tilde e \,&=\, \frac{\alpha}{2}\,u_x^2 + \frac{\beta}{2}\,v v_x^2 +
\frac{1}{2}\,\rho^2\,, \\
\tilde f \,&=\, \alpha u_x u_t + \beta v v_x v_t - \frac{\beta b}{6}\,v_x^3
+ \frac{\beta}{2}\,\rho \rho_x\,, \\
\tilde d \,&=\, \alpha a u_{xx}^2 + \beta b v v_{xx}^2 + (1+4v)\rho^2
+ \frac{\beta}{2}\,\rho_x^2 -\bigl(a + \alpha - \beta/2\bigr)\rho u_{xx}
+ 2\bigl(b + \beta/2\bigr)\rho v v_{xx}\,,
\end{split}
\end{equation}
satisfy the local energy balance $\partial_t \tilde e = \partial_x \tilde f
- \tilde d$.
\end{lem}
\begin{proof}
Differentiating $\tilde e$ with respect to time and using \eqref{RD2},
we find by a direct calculation
\begin{equation}\label{prob}
\begin{split}
\partial_t \tilde e \,&=\, \alpha u_x u_{xt} + \beta v v_x v_{xt} +
\frac{\beta}{2}\,v_x^2 v_t + \rho \rho_t \\
\,&=\, \bigl(\alpha u_x u_t + \beta v v_x v_t\bigr)_x - \alpha u_{xx} u_t
- \beta v v_{xx} v_t - \frac{\beta}{2}\,v_x^2 v_t + \rho \rho_t \\
\,&=\, \Bigl(\alpha u_x u_t + \beta v v_x v_t - \frac{\beta b}{6}\,v_x^3\Bigr)_x
-\alpha a u_{xx}^2 - \beta b v v_{xx}^2 - (1+4v)\rho^2 \\[1mm]
& \hspace{20pt} - \beta \rho v_x^2 + (a + \alpha)\rho u_{xx} -2 (b + \beta)
\rho v v_{xx}\,.
\end{split}
\end{equation}
The last line collects the terms which have no definite sign and cannot be
incorporated in the flux $\tilde f$. Among them, the terms involving
$\rho u_{xx}$ and $\rho v v_{xx}$ can be controlled by the negative terms in
the previous line. This is not the case, however, for the term $-\beta \rho v_x^2$,
which is potentially problematic. The trick here is to use the identity
\begin{equation}\label{rhoid}
\rho v_x^2 \,=\, \frac{\rho}{2}\,\bigl(u_{xx} - 2v v_{xx} - \rho_{xx}\bigr)\,,
\end{equation}
which is easily obtained by differentiating twice the relation $\rho = u - v^2$
with respect to $x$. If we replace \eqref{rhoid} into \eqref{prob} and if we
observe in addition that $\rho \rho_{xx} = (\rho \rho_x)_x - \rho_x^2$, we
conclude that $\partial_t \tilde e = \partial_x \tilde f - \tilde d$, where
$\tilde e$, $\tilde f$, $\tilde d$ are defined in \eqref{edf2}.
\end{proof}
It remains to choose the free parameters $\alpha,\beta$ so that the dissipation
$\tilde d$ is positive. In the third line of \eqref{edf2}, the last two terms
involving $\rho u_{xx}$ and $\rho v v_{xx}$ have no definite sign, but (as
already mentioned) we can use Young's inequality to control them in terms of
the positive quantities $u_{xx}^2$, $v v_{xx}^2$, and $(1+4v)\rho^2$. This procedure
works if and only if
\begin{equation}\label{alphabet}
\bigl(a + \alpha - \beta/2\bigr)^2 \,<\, 4 a \alpha\,, \qquad \hbox{and}
\qquad \bigl(b + \beta/2\bigr)^2 \,<\, 4 b \beta\,.
\end{equation}
It is always possible to choose $\alpha,\beta > 0$ so that both inequalities
in \eqref{alphabet} are satisfied. A particularly simple solution is
$\alpha = a + b$, $\beta = 2b$, for which the two left-hand sides in
\eqref{alphabet} equal $4a^2 < 4a(a{+}b)$ and $4b^2 < 8b^2$, respectively;
we assume henceforth that this choice is made. We thus find\:
\begin{cor}\label{lem:dpos}
The quantities $\tilde e$, $\tilde f$, $\tilde d$ defined by
\begin{equation}\label{edf2bis}
\begin{split}
\tilde e \,&=\, \frac{a+b}{2}\,u_x^2 + b\,v v_x^2 + \frac{1}{2}\,\rho^2\,, \\[-0.5mm]
\tilde f \,&=\, (a+b) u_x u_t + 2b\,v v_x v_t - \frac{b^2}{3}\,v_x^3
+ b\,\rho \rho_x\,, \\[1mm]
\tilde d \,&=\, a(a+b) u_{xx}^2 + 2 b^2 v v_{xx}^2 + (1+4v)\rho^2
+ b\,\rho_x^2 -2a\rho u_{xx} + 4b\rho v v_{xx}\,,
\end{split}
\end{equation}
satisfy the local energy balance $\partial_t \tilde e = \partial_x \tilde f -
\tilde d$, and there exists a constant $\gamma > 0$ depending only on $a,b$ such
that
\begin{equation}\label{dpos}
\tilde d \,\ge\, \tilde d_0 \,:=\, \gamma\Bigl(u_{xx}^2 + v v_{xx}^2 +
(1+4v)\rho^2\Bigr) + b\rho_x^2\,.
\end{equation}
\end{cor}
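The lower bound \eqref{dpos} is obtained by absorbing the last two terms in
\eqref{edf2bis} with Young's inequality; one admissible choice is
\[
2a|\rho|\,|u_{xx}| \,\le\, \frac{a(2a{+}b)}{2}\,u_{xx}^2 + \frac{2a}{2a{+}b}\,\rho^2\,,
\qquad
4b\,v\,|\rho|\,|v_{xx}| \,\le\, \frac{3b^2}{2}\,v\,v_{xx}^2 + \frac{8}{3}\,v\,\rho^2\,,
\]
which shows that \eqref{dpos} holds with $\gamma = \min\bigl(\frac{ab}{2}\,,\,
\frac{b^2}{2}\,,\,\frac{b}{2a+b}\,,\,\frac{1}{3}\bigr)$.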
\begin{rem}\label{rem:flux}
Strictly speaking, the triple $(\tilde e,\tilde f,\tilde d)$ is not an EDS
structure in the sense of \cite{GS1}, because the flux bound $\tilde f^2
\le C \tilde e \tilde d$ does not hold. The problem comes from the term
involving $v_x^3$ in $\tilde f$\: it is clearly not possible to bound $v_x^6$
pointwise in terms of $v v_x^2$ and $v v_{xx}^2$. Nevertheless we shall see in
Section~\ref{sec4} that the contribution of that term to the localized
energy estimates can be estimated as if the pointwise bound was valid.
This suggests that our definition of ``extended dissipative system'' given
in \cite{GS1} may be too restrictive, and should perhaps be generalized so as
to include more general flux terms as in the present example.
\end{rem}
The EDS structures \eqref{edf1}, \eqref{edf2bis} provide a good control on the
quantities $v^2$, $v_x^2$, and $v_{xx}^2$ only if the function $v(x,t)$ is
bounded away from zero. As was observed in Remark~\ref{rem:lowerbd}, this is the
case in particular if $v_0(x) \ge \delta > 0$ for some $\delta > 0$. However, we
are also interested in initial data which do not have that property. In
particular we may want to consider the situation where $(u_0,v_0) = (1,0)$
when $x < 0$ and $(u_0,v_0) = (0,1)$ when $x > 0$. In that case, the evolution
describes the diffusive mixing of the initially separated species $\mathcal{A}$, $\mathcal{B}$.
To handle the case where the second component $v(x,t)$ is not bounded away
from zero, a possibility is to add to the energy density $e(x,t)$ a small
multiple of $w^2$, where $w = 2u + v$.
\begin{lem}\label{lem:edf34}
If $\theta > 0$ is sufficiently small, the quantities $e_1(x,t)$, $f_1(x,t)$,
$d_1(x,t)$ defined by
\begin{equation}\label{edf3}
\begin{split}
e_1 \,&=\, e \,+\, \theta w^2/2\,, \\
f_1 \,&=\, f \,+\, \theta b w w_x + 2\theta (a-b)w u_x\,, \\
d_1 \,&=\, d \,+\, \theta b w_x^2 + 2\theta (a-b)w_x u_x\,,
\end{split}
\end{equation}
satisfy the energy balance $\partial_t e_1 = \partial_x f_1 - d_1$, and there
exists a constant $c > 0$ such that
\begin{equation}\label{cbd1}
e_1 \,\ge\, c\bigl(u^2 + (1+v)v^2\bigr)\,, \qquad
d_1 \,\ge\, c\bigl(u_x^2 + (1+v)v_x^2 + \rho^2\bigr)\,.
\end{equation}
Similarly the quantities $\tilde e_1(x,t)$, $\tilde f_1(x,t)$, $\tilde d_1(x,t)$
defined by
\begin{equation}\label{edf4}
\begin{split}
\tilde e_1 \,=\, \tilde e \,+\, \theta w_x^2/2\,, \quad
\tilde f_1 \,=\, \tilde f \,+\, \theta w_x w_t\,, \quad
\tilde d_1 \,=\, \tilde d \,+\, \theta b w_{xx}^2 + 2\theta (a{-}b)w_{xx}
u_{xx}\,,
\end{split}
\end{equation}
satisfy the energy balance $\partial_t \tilde e_1 = \partial_x \tilde f_1
- \tilde d_1$ as well as the lower bounds
\begin{equation}\label{cbd2}
\tilde e_1 \,\ge\, c\bigl(u_x^2 + (1+v)v_x^2 + \rho^2\bigr)\,, \qquad
\tilde d_1 \,\ge\, c\bigl(u_{xx}^2 + (1+v)v_{xx}^2 + \rho_x^2 + (1+v)\rho^2
\bigr)\,.
\end{equation}
\end{lem}
\begin{proof}
Since $w_t = 2a u_{xx} + b v_{xx} = b w_{xx} + 2(a-b)u_{xx}$, it is straightforward
to verify that the additional terms involving the parameter $\theta$ in
\eqref{edf3}, \eqref{edf4} do not destroy the local energy balances. The
lower bounds \eqref{cbd1} are easily obtained using the definitions of
the quantities $e_1$, $d_1$ and applying Young's inequality, provided
$\theta > 0$ is sufficiently small (depending on $a,b$). Estimates
\eqref{cbd2} are obtained similarly, using in addition the lower bound
\eqref{dpos}.
\end{proof}
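For instance, the cross term in $d_1$ is absorbed, for small $\theta > 0$, via
\[
2\theta\,|a{-}b|\,|w_x|\,|u_x| \,\le\, \frac{\theta b}{2}\,w_x^2 +
\frac{2\theta (a{-}b)^2}{b}\,u_x^2\,,
\]
and since $v_x^2 \le 2w_x^2 + 8u_x^2$, the second inequality in \eqref{cbd1}
follows as soon as $2\theta(a{-}b)^2/b \le a/4$ and $2\theta b \le a/4$.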
\section{Uniformly local energy estimates}\label{sec4}
In this section we complete the proof of our main result,
Proposition~\ref{main2}, using the dissipative structures introduced in
Section~\ref{sec3}. We fix the parameters $a,b > 0$, and we consider a global
solution $(u,v) \in C^0([0,+\infty),X^2)$ of system \eqref{RD2} with initial
data $(u_0,v_0) \in X_+^2$, as given by Proposition~\ref{prop:exist}. Following
our previous works \cite{GS1,GS2}, our strategy is to control the behavior of
the solution $(u(t),v(t))$ for large times using localized energy estimates.
For convenience, we first prove Proposition~\ref{main2} under the additional
assumption that the solution of \eqref{RD2} satisfies
\begin{equation}\label{eq:lowv}
\inf_{t \ge 0}\, \inf_{x \in \mathbb{R}}\,v(x,t) \,\ge\, \delta\,, \qquad\hbox{for some}
~\delta > 0\,.
\end{equation}
As was observed in Remark~\ref{rem:lowerbd}, this is the case if the initial
function $v_0(x) = v(x,0)$ is bounded away from zero. Assumption~\eqref{eq:lowv}
allows us to use the relatively simple EDS structures \eqref{edf1},
\eqref{edf2bis} instead of the more complicated ones introduced in
Lemma~\ref{lem:edf34}, and this makes the argument somewhat easier to follow.
The proof is however completely similar in the general case, see
Section~\ref{ssec44} below.
Given $\epsilon > 0$ and $x_0 \in \mathbb{R}$, we define the localization function
\begin{equation}\label{chidef}
\chi(x) \,=\, \chi_{\epsilon,x_0}(x) \,:=\, \frac{1}{\cosh\bigl(
\epsilon(x-x_0)\bigr)}\,, \qquad x \in \mathbb{R}\,.
\end{equation}
This function is smooth and satisfies the bounds
\begin{equation}\label{chibd}
0 \,<\, \chi(x) \le 1\,, \qquad |\chi'(x)| \,\le\, \epsilon \chi(x)\,,
\qquad |\chi''(x)| \,\le\, \epsilon^2 \chi(x)\,, \qquad x \in \mathbb{R}\,.
\end{equation}
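Indeed, a direct computation gives
\[
\chi'(x) \,=\, -\epsilon\,\tanh\bigl(\epsilon(x-x_0)\bigr)\,\chi(x)\,, \qquad
\chi''(x) \,=\, \epsilon^2\,\bigl(1 - 2\chi(x)^2\bigr)\,\chi(x)\,,
\]
and both prefactors $\tanh\bigl(\epsilon(x-x_0)\bigr)$ and $1 - 2\chi(x)^2$ are
bounded by one in absolute value.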
Note also that $\int_\mathbb{R} \chi(x)\,{\rm d} x = \pi/\epsilon$. The translation parameter
$x_0$ plays no role in the subsequent calculations, but at the end we shall take
the supremum over $x_0 \in \mathbb{R}$ to obtain uniformly local estimates. In
contrast, the dilation parameter $\epsilon > 0$ is crucial, and will
be chosen in an appropriate time-dependent way.
\subsection{The localized energy and its dissipation}\label{ssec41}
We first exploit the EDS structure \eqref{edf1}. We fix some observation
time $T > 0$ and we consider the localized energy
\[
E(t) \,=\, \int_\mathbb{R} \chi(x)\,e(x,t)\,{\rm d} x \,=\, \int_\mathbb{R} \chi(x)
\Bigl(\frac12 u(x,t)^2 + \frac16 v(x,t)^3\Bigr)\,{\rm d} x\,, \qquad
t \in [0,T]\,.
\]
Note that this quantity is well defined thanks to the localization function
$\chi$, which is integrable. If $t > 0$, we also introduce the associated
energy dissipation
\[
D(t) \,=\, \int_\mathbb{R} \chi(x)\,d(x,t)\,{\rm d} x \,=\, \int_\mathbb{R} \chi(x)
\Bigl(a u_x(x,t)^2 + b v(x,t)v_x(x,t)^2 + \rho(x,t)^2\Bigr)\,{\rm d} x\,.
\]
Since the solution $(u,v)$ of the parabolic system \eqref{RD2} is
smooth for $t > 0$, it is straightforward to verify that $E \in
C^0([0,T]) \cap C^1((0,T))$ and that
\begin{equation}\label{Eder}
\begin{split}
E'(t) \,=\, \int_\mathbb{R} \chi(x)\,\partial_t e(x,t)\,{\rm d} x \,&=\,
\int_\mathbb{R} \chi(x) \Bigl(\partial_x f(x,t) - d(x,t)\Bigr)\,{\rm d} x \\
\,&=\, -\int_\mathbb{R} \chi'(x) f(x,t)\,{\rm d} x - D(t)\,.
\end{split}
\end{equation}
To bound the flux term, we use \eqref{chibd} and the pointwise estimate
$f^2 \le C_0\,e d$, where $C_0 > 0$ depends on the parameters $a,b$.
Applying Young's inequality, we obtain
\[
\bigg|\int_\mathbb{R} \chi'(x) f(x,t)\,{\rm d} x\bigg| \,\le\, \epsilon \int_\mathbb{R} \chi
\bigl(C_0\,e d\bigr)^{1/2}\,{\rm d} x \,\le\, \frac12\,D(t) + \frac{C_0
\epsilon^2}{2}\,E(t)\,.
\]
At this point, we choose the dilation parameter $\epsilon$ so that
\begin{equation}\label{epsdef}
C_0\,\epsilon^2 \,=\, \frac{1}{T}\,.
\end{equation}
We thus obtain the differential inequality $E'(t) \le -\frac12 D(t) +
\frac{1}{2T}E(t)$, which can be integrated using Gr\"onwall's lemma to
give the useful estimate
\begin{equation}\label{Eineq}
E(T) + \frac12 \int_0^T D(t)\,{\rm d} t \,\le\, e^{1/2}\, E(0)\,.
\end{equation}
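More precisely, the differential inequality implies that
\[
\frac{{\rm d}}{{\rm d} t}\,\Bigl(e^{-t/(2T)}\,E(t)\Bigr) \,\le\, -\frac12\,
e^{-t/(2T)}\,D(t)\,, \qquad 0 < t < T\,,
\]
and \eqref{Eineq} follows upon integrating over $[0,T]$ and using the
elementary bounds $e^{-1/2} \le e^{-t/(2T)} \le 1$.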
Next, we introduce integrated quantities related to the second EDS structure
\eqref{edf2bis}. For all $t \in (0,T)$ we define
\begin{align*}
\tilde E(t) \,&=\, \int_\mathbb{R} \chi(x)\,\tilde e(x,t)\,{\rm d} x \,=\,
\int_\mathbb{R} \chi(x) \Bigl(\frac{a{+}b}{2}\,u_x^2 + bv v_x^2 + \frac{1}{2}\,
\rho^2\Bigr)(x,t)\,{\rm d} x\,, \\
\tilde D(t) \,&=\, \int_\mathbb{R} \chi(x)\,\tilde d_0(x,t)\,{\rm d} x \,=\,
\int_\mathbb{R} \chi(x) \Bigl(\gamma u_{xx}^2 + \gamma v v_{xx}^2
+ \gamma (1{+}4v)\rho^2 + b \rho_x^2\Bigr)(x,t) \,{\rm d} x\,,
\end{align*}
where $\gamma > 0$ is as in Corollary~\ref{lem:dpos}. The same calculation as
in \eqref{Eder} leads to
\[
\tilde E'(t) \,=\, -\int_\mathbb{R} \chi'(x) \tilde f(x,t)\,{\rm d} x - \int_\mathbb{R} \chi(x)
\,\tilde d(x,t)\,{\rm d} x \,\le\, -\int_\mathbb{R} \chi'(x) \tilde f(x,t)\,{\rm d} x -
\tilde D(t)\,,
\]
where the inequality follows from \eqref{dpos}. The difficulty here is that
the flux term does not satisfy a pointwise estimate of the form $\tilde f^2
\le C\,\tilde e \tilde d_0$, see Remark~\ref{rem:flux}. However, we can
decompose
\[
\tilde f \,=\, \tilde f_0 - \frac{b^2}{3}\,v_x^3\,, \qquad
\hbox{where} \quad \tilde f_0 \,=\, (a{+}b)u_x u_t + 2b v v_x v_t
+ b\rho \rho_x\,,
\]
and it is easy to check that $\tilde f_0^2\le C_1\,\tilde e \tilde d_0$ for some
$C_1 > 0$. In particular, we find as before
\begin{equation}\label{flux1}
\bigg|\int_\mathbb{R} \chi'(x) \tilde f_0(x,t)\,{\rm d} x\bigg| \,\le\, \epsilon \int_\mathbb{R}
\chi \bigl(C_1\,\tilde e \tilde d_0\bigr)^{1/2}\,{\rm d} x \,\le\, \frac14\,\tilde D(t)
+ C_1 \epsilon^2\,\tilde E(t)\,.
\end{equation}
As for the term involving $v_x^3$, we integrate by parts to obtain the
identity
\[
\int_\mathbb{R} \chi' v_x^3 \,{\rm d} x \,=\, -\int_\mathbb{R} \chi'' v v_x^2 \,{\rm d} x -
2 \int_\mathbb{R} \chi' v v_x v_{xx} \,{\rm d} x\,.
\]
Using \eqref{chibd} and Young's inequality, we deduce
\begin{equation}\label{flux2}
\frac{b^2}{3}\,\bigg|\int_\mathbb{R} \chi'(x) v_x^3(x,t)\,{\rm d} x\bigg| \,\le\,
\frac{b \epsilon^2}{3}\,\tilde E(t) + \frac14\,\tilde D(t) +
\frac{4b^3 \epsilon^2}{9\gamma}\,\tilde E(t)\,.
\end{equation}
The combination of \eqref{flux1}, \eqref{flux2} gives the desired
estimate on the flux term\:
\[
\bigg|\int_\mathbb{R} \chi'(x) \tilde f(x,t)\,{\rm d} x\bigg| \,\le\, \frac12\,
\tilde D(t) + C_2 \epsilon^2\,\tilde E(t)\,, \qquad \hbox{where}\quad
C_2 \,=\, C_1 + \frac{b}{3} + \frac{4b^3}{9\gamma}\,.
\]
Integrating the differential inequality $\tilde E'(t) \le -\frac12 \tilde
D(t) + C_2 \epsilon^2 \tilde E(t)$ over the time interval $[t_0,T]$, where
$t_0 \in [0,T]$, we arrive at the estimate
\begin{equation}\label{tEineq}
\tilde E(T) + \frac12 \int_{t_0}^T \tilde D(t)\,{\rm d} t \,\le\, C_3 \tilde
E(t_0)\,, \qquad t_0 \in [0,T]\,,
\end{equation}
where $C_3 = \exp(C_2 \epsilon^2 T) = \exp(C_2/C_0)$.
Finally, we use the crucial fact that the EDS structures \eqref{edf1},
\eqref{edf2bis} are {\em ordered}, in the sense that
\[
\tilde e(x,t) \,\le\, C_4\,d(x,t)\,, \qquad \hbox{where}\qquad
C_4 \,=\, \max\Bigl(1,\frac{a+b}{2a}\Bigr)\,.
\]
In particular, the inequality $\tilde E(t) \le C_4 D(t)$ holds for all
$t \in (0,T)$. Thus, if we average \eqref{tEineq} over $t_0 \in [0,T]$ and use
\eqref{Eineq}, we obtain
\begin{equation}\label{tEineq2}
\tilde E(T) + \frac{1}{2T} \int_0^T t \tilde D(t)\,{\rm d} t \,\le\, \frac{C_3}{T}\,
\int_0^T \tilde E(t_0)\,{\rm d} t_0 \,\le\, \frac{C_3 C_4}{T}\,\int_0^T D(t)\,{\rm d} t
\,\le\, \frac{C_5}{T}\,E(0)\,,
\end{equation}
where the constant $C_5 = 2e^{1/2}\,C_3 C_4$ only depends on the parameters
$a,b$ in system~\eqref{RD2}, and is in particular independent of the observation
time $T > 0$ and of the solution $(u,v)$ under consideration. It is however
important to keep in mind that all integrated quantities $E, \tilde E$ and
$D, \tilde D$ depend implicitly on $T$ through the weight function \eqref{chidef}
and the choice \eqref{epsdef} of the parameter $\epsilon$.
The bound \eqref{tEineq2} summarizes the information we can obtain from the EDS
structures \eqref{edf1}, \eqref{edf2bis}. It serves as a basis for all estimates
we shall derive on the solutions of \eqref{RD2} for large times. A typical
application of \eqref{tEineq2} is\:
\begin{lem}\label{lem:first}
There exists a constant $C_6 > 0$ depending only on the parameters $a,b$
such that, for any solution $(u,v) \in C^0([0,+\infty),X^2)$ of
\eqref{RD2} with initial data $(u_0,v_0) \in X_+^2$ and any $T > 0$,
the following inequality holds\:
\begin{equation}\label{decay1}
\sup_{x_0 \in \mathbb{R}} \,\int_{I(x_0,T)} \Bigl(u_x^2 + vv_x^2 + \rho^2\Bigr)
(x,T)\,{\rm d} x \,\le\, C_6\,R^3\,T^{-1/2}\,,
\end{equation}
where $I(x_0,T) = \bigl\{x \in \mathbb{R}\,;\,|x-x_0| \le (C_0 T)^{1/2}\bigr\}$
and $R = 1 + \|u_0\|_{L^\infty} + \|v_0\|_{L^\infty}$.
\end{lem}
\begin{proof}
The initial energy density satisfies $e(x,0) \le \frac12 \|u_0\|_{L^\infty}^2 +
\frac16 \|v_0\|_{L^\infty}^3 \le R^3$, so that
\begin{equation}\label{E0bd}
E(0) \,=\, \int_\mathbb{R} \chi(x)\,e(x,0)\,{\rm d} x \,\le\, R^3 \int_\mathbb{R} \chi(x)\,{\rm d} x
\,=\, \frac{\pi R^3}{\epsilon} \,=\, \pi R^3(C_0 T)^{1/2}\,.
\end{equation}
On the other hand, we have the lower bound $\tilde e \ge \gamma_1 (u_x^2 +
v v_x^2 + \rho^2)$ for some constant $\gamma_1 > 0$, and it follows from
\eqref{chidef} that $\chi(x) \ge e^{-1}$ when $|x-x_0| \le \epsilon^{-1} =
(C_0 T)^{1/2}$. We thus find
\[
\tilde E(T) \,=\, \int_\mathbb{R} \chi(x)\,\tilde e(x,T)\,{\rm d} x \,\ge\,
\gamma_1 e^{-1} \int_{I(x_0,T)} \Bigl(u_x^2 + vv_x^2 + \rho^2\Bigr)
(x,T)\,{\rm d} x\,.
\]
Applying \eqref{tEineq2} we deduce that
\[
\int_{I(x_0,T)} \Bigl(u_x^2 + vv_x^2 + \rho^2\Bigr)
(x,T)\,{\rm d} x \,\le\, \frac{e\,C_5}{\gamma_1T}\, \pi R^3(C_0 T)^{1/2}\,,
\]
and taking the supremum over $x_0 \in \mathbb{R}$ in the left-hand side we
arrive at \eqref{decay1}.
\end{proof}
\begin{rem}\label{rem:decay1}
If $1 \le p < \infty$, the uniformly local space $L^p_\mathrm{ul}(\mathbb{R})$ is defined
as the set of all measurable functions $f : \mathbb{R} \to \mathbb{R}$ such that
\[
\|f\|_{L^p_\mathrm{ul}} \,:=\, \biggl(\,\sup_{x_0 \in \mathbb{R}}\,\int_{|x-x_0|\le 1}|f(x)|^p \,{\rm d} x
\biggr)^{1/p} \,<\, \infty\,,
\]
see \cite{ABCD} for a nice review article on uniformly local spaces. In view
of \eqref{eq:lowv}, the bound \eqref{decay1} implies that $\|u_x(t)\|_{L^2_\mathrm{ul}} +
\|v_x(t)\|_{L^2_\mathrm{ul}} + \|\rho(t)\|_{L^2_\mathrm{ul}} \le CR^{3/2}t^{-1/4}$ for all $t \ge 1$.
This estimate is far from optimal, but it already implies that the solution
$(u,v)$ converges uniformly on compact sets to the family $\mathcal{E}$ of spatially
homogeneous equilibria, which is a nontrivial result. Using the smoothing
properties of the parabolic system \eqref{RD2}, it is possible to deduce
analogous estimates in the uniform norm, in particular
\begin{equation}\label{uxvxbd}
\|u_x(t)\|_{L^\infty} + \|v_x(t)\|_{L^\infty} \,\le\, \frac{C R^{7/4}}{t^{1/4}}\,,
\qquad t \ge 2\,,
\end{equation}
see Section~\ref{ssec43} below. Note also that the optimal decay rates
for $u_x, v_x$ given by Proposition~\ref{main1} (in the particular case $a = b$)
indicate that the left-hand side of \eqref{decay1} indeed decays like $T^{-1/2}$
as $T \to +\infty$, so that \eqref{decay1} is not far from optimal.
\end{rem}
\subsection{Control of the second order derivatives}\label{ssec42}
So far we only used the first term $\tilde E(T)$ in the left-hand side of
inequality \eqref{tEineq2}, but the integral term involving $\tilde D(t)$ is
also valuable. In particular, the bounds \eqref{tEineq2}, \eqref{E0bd} together
imply that $\tilde D(t) \le CR^3T^{-3/2}$ for ``most'' times $t$ in the interval
$[0,T]$, but that information is difficult to exploit because the exceptional
times where such a bound possibly fails may depend on the translation
parameter $x_0 \in \mathbb{R}$. This difficulty is inherent to our approach, and to avoid
it we extract from \eqref{tEineq2} a somewhat weaker estimate which is valid for
all times.
To do that, we first study the linear parabolic system
\begin{equation}\label{UVsys}
U_t \,=\, a U_{xx} + 2v V - U\,, \qquad
V_t \,=\, b V_{xx} + 2U - 4v V\,,
\end{equation}
which is obtained by differentiating \eqref{RD2} (where $k = 1$) with respect
to the space coordinate $x$ or the time variable $t$. In the analysis of
\eqref{UVsys}, we consider the nonnegative function $v(x,t)$ as given,
independently of the solution $(U,V)$. The property we need is\:
\begin{lem}\label{lem:deriv}
There exists a constant $C_7 > 0$ depending only on the parameters $a,b$
such that, for any $v \in C^0([0,T],X_+)$ and any initial data $(U_1,V_1) \in X^2$
at time $t_1 \in [0,T]$, the solution $(U,V) \in C^0([t_1,T],X^2)$ of
\eqref{UVsys} satisfies
\begin{equation}\label{derivbd}
\int_\mathbb{R} \chi(x) \Bigl(2 |U(x,T)| + |V(x,T)|\Bigr)\,{\rm d} x \,\le\,
C_7 \int_\mathbb{R} \chi(x) \Bigl(2 |U_1(x)| + |V_1(x)|\Bigr)\,{\rm d} x\,,
\end{equation}
where $\chi$ is given by \eqref{chidef} with $\epsilon > 0$ as in
\eqref{epsdef}.
\end{lem}
\begin{proof}
Since the function $v(x,t)$ is nonnegative, the linear system \eqref{UVsys} is
cooperative, so that a (component-wise) comparison principle holds as for
the original system \eqref{RD}. In particular, the solution $(U,V)$ satisfies
the estimates $|U(x,t)| \le \tilde U(x,t)$ and $|V(x,t)| \le \tilde V(x,t)$,
where $(\tilde U,\tilde V)$ denotes the solution of \eqref{UVsys} with initial
data $(|U_1|,|V_1|)$ at time $t_1$. In other words, it is sufficient to prove
\eqref{derivbd} for nonnegative initial data $(U_1,V_1)$, in which case the
solution $(U,V)$ remains nonnegative by the maximum principle.
Fix $t_1 \in [0,T]$, $(U_1,V_1) \in X_+^2$, and let $(U,V) \in C^0([t_1,T],X^2)$
be the solution of \eqref{UVsys} such that $(U(t_1),V(t_1)) = (U_1,V_1)$.
Integrating by parts and using \eqref{chibd}, we easily find
\begin{align*}
\frac{{\rm d}}{{\rm d} t}\int_\mathbb{R} \chi \bigl(2U + V\bigr)\,{\rm d} x \,&=\, \int_\mathbb{R} \chi
\bigl(2a U_{xx} + b V_{xx}\bigr)\,{\rm d} x \\
\,&=\, \int_\mathbb{R} \chi'' \bigl(2a U + b V\bigr)\,{\rm d} x \,\le\,
\epsilon^2 c \int_\mathbb{R} \chi\bigl(2U + V\bigr)\,{\rm d} x\,,
\end{align*}
where $c = \max(a,b)$. This differential inequality is then integrated on the time
interval $[t_1,T]$ to give \eqref{derivbd} with $C_7 = \exp(c\epsilon^2 T) =
\exp(c/C_0)$.
\end{proof}
Returning to the nonlinear system~\eqref{RD2}, we apply Lemma~\ref{lem:deriv}
to estimate first the time derivatives $u_t, v_t$, and then the quantity
$\rho = u - v^2$.
\begin{lem}\label{lem:uvrho}
Under the assumption~\eqref{eq:lowv}, any solution $(u,v) \in C^0([0,+\infty),X^2)$ of
\eqref{RD2} with initial data $(u_0,v_0) \in X_+^2$ satisfies, for any $T > 0$,
\begin{equation}\label{decay4}
\sup_{x_0 \in \mathbb{R}} \,\int_{I(x_0,T)} \Bigl(|u_{xx}| + |v_{xx}| + |\rho|\Bigr)
(x,T)\,{\rm d} x \,\le\, C_8\,R^3\,T^{-1/2}\,,
\end{equation}
where $I(x_0,T)$ and $R$ are as in Lemma~\ref{lem:first}, and the constant
$C_8 > 0$ depends only on $a,b,\delta$.
\end{lem}
\begin{proof}
We start from inequality \eqref{tEineq2}, and we choose a time $t_1 \in [T/2,T]$
where the continuous function $t \mapsto t\tilde D(t)$ reaches its minimum
over the interval $[T/2,T]$. We then have
\begin{equation}\label{tD}
\frac{T}{8}\,\tilde D(t_1) \,\le\, \frac{1}{2T} \int_0^T t \tilde D(t)\,{\rm d} t
\,\le\, \frac{C_5}{T}\, E(0)\,, \qquad \hbox{hence} \quad
\tilde D(t_1) \,\le\, \frac{8C_5}{T^2}\, E(0)\,.
\end{equation}
We recall that $\tilde D = \int_\mathbb{R} \chi \tilde d_0 \,{\rm d} x$, where $\tilde d_0$
is defined in \eqref{dpos}. Under assumption \eqref{eq:lowv}, there exists
a constant $\gamma_2 > 0$ (depending only on $a,b,\delta$) such that
$\tilde d_0 \ge \gamma_2 (u_t^2 + v_t^2)$. Therefore, using H\"older's inequality
and estimate \eqref{tD}, we find
\begin{align*}
\int_\mathbb{R} \chi \bigl(2|u_t(t_1)| + |v_t(t_1)|\bigr)\,{\rm d} x \,&\le\,
\biggl(\int_\mathbb{R} \chi\,{\rm d} x\biggr)^{1/2} \biggl(5\int_\mathbb{R} \chi \bigl(u_t(t_1)^2
+ v_t(t_1)^2\bigr)\,{\rm d} x\biggr)^{1/2} \\
\,&\le\, \Bigl(\frac{5\pi}{\gamma_2 \epsilon}\Bigr)^{1/2}\,
\tilde D(t_1)^{1/2} \,\le\, \Bigl(\frac{5\pi}{\gamma_2 \epsilon}\Bigr)^{1/2}
\Bigl(\frac{8C_5}{T^2}\Bigr)^{1/2}\, E(0)^{1/2}\,.
\end{align*}
Note that the right-hand side is of the form $C T^{-3/4} E(0)^{1/2}$, where the
constant depends only on $a,b,\delta$. We now apply Lemma~\ref{lem:deriv}
to $(U,V) = (u_t,v_t)$, and we deduce from \eqref{E0bd}, \eqref{derivbd} that
\begin{equation}\label{decay2}
\int_\mathbb{R} \chi(x) \bigl(2|u_t(x,T)| + |v_t(x,T)|\bigr)\,{\rm d} x \,\le\,
C\,T^{-3/4}\,E(0)^{1/2} \,\le\, C_9\,R^{3/2}\,T^{-1/2}\,,
\end{equation}
where the constant $C_9 > 0$ only depends on $a,b,\delta$. Note that
estimate \eqref{decay2} holds at the observation time $T$, and not
at the intermediate time $t_1$ on which we have poor control.
To complete the proof of \eqref{decay4}, it remains to control the quantity
$\rho = u - v^2$, which measures the distance to the chemical equilibrium. It
is straightforward to verify that $\rho$ satisfies the evolution equation
\begin{equation}\label{rhoeq}
\rho_t \,=\, a \rho_{xx} - \bigl(1 + 4rv\bigr)\rho
+ 2 \bigl(r{-}1\bigr)v v_t + 2 a v_x^2\,,
\end{equation}
where $r = a/b$; indeed, $\rho_t = u_t - 2v v_t$, and substituting $u_{xx} =
\rho_{xx} + 2v_x^2 + 2v v_{xx}$ together with $b v_{xx} = v_t - 2\rho$ into
$u_t = a u_{xx} - \rho$ yields \eqref{rhoeq}. Since $v(x,t) \ge 0$, the
maximum principle implies that
$|\rho(x,t)| \le \bar \rho(x,t)$ for all $x \in \mathbb{R}$ and all $t \in
[0,T]$, where $\bar \rho$ is the solution of the simplified equation
\[
\bar \rho_t \,=\, a \bar \rho_{xx} - \bar \rho + 2 |r{-}1| |v v_t| + 2 a v_x^2\,,
\]
with initial data $\bar \rho(x,0) = |\rho(x,0)|$. If we denote $c =
2\max(|r{-}1|,a)$, we thus have
\begin{align*}
\frac{{\rm d}}{{\rm d} t}\int_\mathbb{R} \chi \bar \rho\,{\rm d} x \,&=\, a \int_\mathbb{R} \chi'' \bar \rho\,{\rm d} x
- \int_\mathbb{R} \chi \bar \rho\,{\rm d} x + c \int_\mathbb{R} \chi \bigl(|v v_t| + v_x^2\bigr)\,{\rm d} x \\
\,&\le\, \bigl(a\epsilon^2 - 1\bigr)\int_\mathbb{R} \chi \bar \rho\,{\rm d} x + c \int_\mathbb{R} \chi
\bigl(|v v_t| + v_x^2\bigr)\,{\rm d} x\,.
\end{align*}
Integrating that inequality over the time interval $[0,T]$ and using
\eqref{epsdef}, \eqref{decay1}, \eqref{decay2}, we obtain
\begin{equation}\label{decay3}
\begin{split}
\int_\mathbb{R} \chi|\rho(T)|\,{\rm d} x \,&\le\, C\,e^{-T} \int_\mathbb{R} \chi|\rho(0)|\,{\rm d} x
+ C \int_0^T e^{-(T-t)} \int_\mathbb{R} \chi\Bigl(|v(t)| |v_t(t)| + v_x(t)^2\Bigr)
\,{\rm d} x\,{\rm d} t \\
\,&\le\, C \,e^{-T} R^2 T^{1/2} + C \int_0^T e^{-(T-t)} \Bigl(R^{5/2}\,t^{-1/2} +
R^3\,t^{-1/2}\Bigr)\,{\rm d} t \\ \,&\le\, C_{10}\,R^3\,T^{-1/2}\,,
\end{split}
\end{equation}
where the constant $C_{10}$ only depends on $a,b,\delta$.
Finally, since $au_{xx} = u_t + \rho$ and $b v_{xx} = v_t - 2\rho$, estimate
\eqref{decay4} follows immediately from \eqref{decay2}, \eqref{decay3} after
taking the supremum over $x_0 \in \mathbb{R}$.
\end{proof}
\begin{rem}\label{rem:decay2}
Estimate \eqref{decay4} implies that $\|u_{xx}(t)\|_{L^1_\mathrm{ul}} + \|v_{xx}(t)\|_{L^1_\mathrm{ul}}
+ \|\rho(t)\|_{L^1_\mathrm{ul}} \le CR^3 t^{-1/2}$ for $t \ge 1$, and using parabolic
smoothing one deduces that
\begin{equation}\label{uvrho}
\|u_{xx}(t)\|_{L^\infty} + \|v_{xx}(t)\|_{L^\infty} + \|\rho(t)\|_{L^\infty}
\,\le\, \frac{C R^{7/2}}{t^{1/2}}\,, \qquad t \ge 2\,,
\end{equation}
see Section~\ref{ssec43}. However, in view of Remark~\ref{rem:higher},
we believe that these decay rates are suboptimal. Note that the optimal rates
conjectured in \eqref{higherest} suggest that the left-hand side of
\eqref{decay4} indeed decays like $T^{-1/2}$ as $T \to +\infty$,
so that \eqref{decay4} is not far from optimal.
\end{rem}
\subsection{From uniformly local to uniform estimates}\label{ssec43}
Lemmas~\ref{lem:first} and \ref{lem:uvrho} give apparently optimal
estimates on the quantities $u_x, v_x$ in some (time-dependent) uniformly
local $L^2$ norm, and on $u_{xx}, v_{xx}, \rho$ in some uniformly local $L^1$
norm. To conclude the proof of Proposition~\ref{main2}, it remains to convert
these estimates into ordinary $L^\infty$ bounds, as already announced in
Remarks~\ref{rem:decay1} and \ref{rem:decay2}. The starting point is the
following well-known estimate for the heat semigroup
$S(t) = \exp(t\partial_x^2)$ acting on uniformly local spaces. If
$f \in L^p_\mathrm{ul}(\mathbb{R})$ for some $p \in [1,+\infty)$, then
\begin{equation}\label{heatul}
\|S(t)f\|_{L^\infty(\mathbb{R})} \,\le\, C \min\bigl(1,t^{-1/(2p)}\bigr)
\|f\|_{L^p_\mathrm{ul}(\mathbb{R})}\,, \qquad t > 0\,,
\end{equation}
see \cite[Proposition~2.1]{ABCD}. In particular, for short times, we have
exactly the same parabolic smoothing effect for the solutions of the
heat equation as in the ordinary $L^p$ spaces. It is easy to establish a
similar result for the solutions of the linearized system \eqref{UVsys}.
\begin{lem}\label{UVsmoothing}
Assume that $(U,V)$ is a solution of \eqref{UVsys}, where $\|v(t)\|_{L^\infty}
\le R$ for some $R \ge 1$. Given $p \in [1,\infty)$, there exists a constant
$C_{11} \ge 1$ depending only on $a,b,p$ such that, for all $t_1 > t_0 \ge 0$
satisfying $C_{11} R (t_1 - t_0) \le 1$, the following estimate holds\:
\begin{equation}\label{UVestLp}
\|U(t)\|_{L^\infty} + \|V(t)\|_{L^\infty} \,\le\, \frac{C_{11}}{(t-t_0)^{1/(2p)}}
\Bigl(\|U(t_0)\|_{L^p_\mathrm{ul}} + \|V(t_0)\|_{L^p_\mathrm{ul}}\Bigr)\,, \qquad
t_0 < t \le t_1\,.
\end{equation}
\end{lem}
\begin{proof}
Without loss of generality, we can take $t_0 = 0$. We denote $W = (U,V)$
and we assume that the initial data $W_0 = (U_0,V_0)$ belong to $L^p_\mathrm{ul}(\mathbb{R})^2$ for
some $p \in [1,+\infty)$. If we write equation \eqref{UVsys} in integral
form and use estimate \eqref{heatul}, we easily obtain
\[
\|W(t)\|_{L^\infty} \,\le\, \frac{C}{t^{1/(2p)}}\,\|W_0\|_{L^p_\mathrm{ul}} +
C R \int_0^t \|W(s)\|_{L^\infty}\,{\rm d} s\,, \qquad t > 0\,.
\]
Setting $\|W\| = \sup\bigl\{t^{1/(2p)}\|W(t)\|_{L^\infty}\,;\, 0 < t \le t_1\bigr\}$,
we find $\|W\| \le C \|W_0\|_{L^p_\mathrm{ul}} + C' R t_1 \|W\|$, for some positive
constants $C,C'$. If we now choose $t_1 > 0$ so that $C'R t_1 \le 1/2$, we
conclude that $\|W\| \le 2C \|W_0\|_{L^p_\mathrm{ul}}$, which is the desired estimate.
\end{proof}
We first apply Lemma~\ref{UVsmoothing} to $(U,V) = (u_x,v_x)$, with $p = 2$.
As was observed in Remark~\ref{rem:decay1}, we know from \eqref{decay1} that
$\|u_x(t)\|_{L^2_\mathrm{ul}} + \|v_x(t)\|_{L^2_\mathrm{ul}} + \|\rho(t)\|_{L^2_\mathrm{ul}} \le CR^{3/2}t^{-1/4}$
for all $t \ge 1$. Thus, taking $t \ge 2$ and choosing $t_0 = t-1/(C_{11}R)
\ge t/2$, we see that \eqref{UVestLp} implies estimate \eqref{uxvxbd}.
Similarly, we can apply Lemma~\ref{UVsmoothing} to $(U,V) = (u_t,v_t)$, with
$p = 1$. Here we invoke estimate \eqref{decay2}, which implies that
$\|u_t(t)\|_{L^1_\mathrm{ul}} + \|v_t(t)\|_{L^1_\mathrm{ul}} \le C R^{3/2}t^{-1/2}$ for all $t \ge 1$,
and choosing $t, t_0$ as above we deduce from \eqref{UVestLp} that
\begin{equation}\label{utvtbd}
\|u_t(t)\|_{L^\infty} + \|v_t(t)\|_{L^\infty} \,\le\, \frac{C R^2}{t^{1/2}}\,,
\qquad t \ge 2\,.
\end{equation}
To control the quantity $\rho = u - v^2$ in $L^\infty(\mathbb{R})$, we can proceed
as in the proof of Lemma~\ref{lem:uvrho}. Integrating \eqref{rhoeq}
on the time interval $[2,t]$ and using estimates \eqref{uxvxbd}, \eqref{utvtbd},
we easily obtain
\begin{equation}\label{rhobd}
\begin{split}
\|\rho(t)\|_{L^\infty} \,&\le\, e^{-(t-2)} \|\rho(2)\|_{L^\infty} +
c \int_2^t e^{-(t-s)}\,\Bigl(\|v(s)\|_{L^\infty}\|v_t(s)\|_{L^\infty}
+ \|v_x(s)\|_{L^\infty}^2\Bigr)\,{\rm d} s \\
\,&\le\, R^2\,e^{-(t-2)} + C \int_2^t e^{-(t-s)}\,\Bigl(\frac{R^3}{s^{1/2}}
+ \frac{R^{7/2}}{s^{1/2}}\Bigr)\,{\rm d} s \,\le\, \frac{C R^{7/2}}{t^{1/2}}\,,
\quad t \ge 2\,.
\end{split}
\end{equation}
As $au_{xx} = u_t + \rho$ and $b v_{xx} = v_t - 2\rho$, we obtain \eqref{uvrho}
from estimates \eqref{utvtbd}, \eqref{rhobd}. In addition, since $|\rho(x,t)|
\le R^2$ for all $t \ge 0$ by \eqref{uvbound}, we deduce from \eqref{rhobd}
that $\|\rho(t)\|_{L^\infty} \le C R^{7/2}(1+t)^{-1/2}$ for all $t \ge 0$,
which is the second estimate in \eqref{mainest2}.
The only remaining step consists in improving the decay rates of the first-order
derivatives $u_x, v_x$, so as to obtain the first estimate in \eqref{mainest2}.
\begin{lem}\label{lem:uvbetter}
Under the assumption~\eqref{eq:lowv}, any solution $(u,v) \in C^0([0,+\infty),X^2)$ of
\eqref{RD2} with initial data $(u_0,v_0) \in X_+^2$ satisfies
\begin{equation}\label{uvbetter}
\|u_x(t)\|_{L^\infty} + \|v_x(t)\|_{L^\infty} \,\le\, \frac{C_{12} R^{7/2}}{t^{1/2}}
\,\log(2+t)\,, \quad t > 0\,,
\end{equation}
where $R = 1 + \|u_0\|_{L^\infty} + \|v_0\|_{L^\infty}$ and the constant $C_{12}$
depends only on $a,b,\delta$.
\end{lem}
\begin{proof}
Since $u_t = a u_{xx} - \rho$, we have the integral representation
\begin{equation}\label{uxint}
u_x(t) \,=\, \partial_x S(at/2) u(t/2) - \int_{t/2}^t \partial_x S(a(t{-}s))
\rho(s)\,{\rm d} s\,, \qquad t > 0\,,
\end{equation}
where $S(t) = \exp(t\partial_x^2)$ is the heat semigroup. The first term in
the right-hand side is easily estimated\:
\begin{equation}\label{uxsimple}
\|\partial_x S(at/2) u(t/2)\|_{L^\infty} \,\le\, \frac{C}{t^{1/2}}\,
\|u(t/2)\|_{L^\infty} \,\le\, \frac{CR}{t^{1/2}}\,.
\end{equation}
To bound the integral term in \eqref{uxint}, we distinguish two cases,
according to whether $s \ge t-1$ or $s < t-1$ (if $t \le 2$, the second
possibility is excluded).
\medskip\noindent
{\bf Case 1\:} $s \ge \max(t-1,t/2)$. Since $\|\partial_x S(t)f\|_{L^\infty} \le
C t^{-1/2}\|f\|_{L^\infty}$, we obtain using \eqref{rhobd}
\begin{equation}\label{case1est}
\bigl\|\partial_x S(a(t{-}s))\rho(s)\bigr\|_{L^\infty} \,\le\,
\frac{C}{(t-s)^{1/2}}\,\|\rho(s)\|_{L^\infty} \,\le\, \frac{C}{(t-s)^{1/2}}\,
\frac{R^{7/2}}{(1+s)^{1/2}}\,.
\end{equation}
\medskip\noindent
{\bf Case 2\:} $t \ge 2$ and $t/2 \le s \le t-1$. Here we observe that, for all
$x \in \mathbb{R}$,
\begin{align*}
\bigl|\partial_x S(a(t{-}s))\rho(s)\bigr|(x) \,&\le\,
\frac{C}{(t-s)^{1/2}}\int_\mathbb{R} \exp\biggl(-\frac{|x-y|^2}{4a(t{-}s)}\biggr)
\frac{|x-y|}{t-s}\,|\rho(y,s)|\,{\rm d} y \\
\,&\le\,
\frac{C}{t-s}\int_\mathbb{R} \exp\biggl(-\frac{|x-y|^2}{5a(t{-}s)}\biggr)
\,|\rho(y,s)|\,{\rm d} y \\
\,&\le\, \frac{C}{t-s}\int_\mathbb{R} \frac{|\rho(y,s)|}{\cosh\bigl(\epsilon(s)
|x-y|\bigr)}\,{\rm d} y\,,\quad \hbox{where}\quad \epsilon(s) \,=\,
\frac{1}{(C_0 s)^{1/2}}\,.
\end{align*}
In the last line, we used the assumption that $t-s \le s$ and the fact
that, for any $\gamma > 0$, there exists $C > 0$ such that $e^{-x^2}
\le C\cosh(\gamma x)^{-1}$ for all $x \in \mathbb{R}$. Now, we know from
\eqref{decay3} that
\[
\sup_{x \in \mathbb{R}}\,\int_\mathbb{R} \frac{|\rho(y,s)|}{\cosh\bigl(\epsilon(s)|x-y|\bigr)}\,{\rm d} y
\,\le\, C_{10} R^3 s^{-1/2} \,\le\, \frac{C R^3}{(1+s)^{1/2}}\,,
\]
and we conclude that
\begin{equation}\label{case2est}
\bigl\|\partial_x S(a(t{-}s))\rho(s)\bigr\|_{L^\infty} \,\le\,
\frac{C}{t-s}\,\frac{R^3}{(1+s)^{1/2}}\,.
\end{equation}
Combining \eqref{case1est}, \eqref{case2est} we can estimate the
integral term in \eqref{uxint} as follows\:
\begin{align*}
\int_{t/2}^t \bigl\|\partial_x S(a(t{-}s))\rho(s)\bigr\|_{L^\infty} \,{\rm d} s
\,&\le\, \frac{CR^{7/2}}{(1+t)^{1/2}}\int_{t/2}^t \min\biggl(\frac{1}{t-s}\,,\,
\frac{1}{(t-s)^{1/2}}\biggr)\,{\rm d} s \\
\,&\le\, \frac{C R^{7/2}}{(1+t)^{1/2}}\log(2+t)\,,
\end{align*}
and using in addition \eqref{uxsimple} we obtain the desired estimate for
$\|u_x(t)\|_{L^\infty}$. The bound on $\|v_x(t)\|_{L^\infty}$ is obtained by
a similar argument.
\end{proof}
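For the reader's convenience, we mention that the bound on $\|v_x(t)\|_{L^\infty}$
relies on the analogous representation
\[
v_x(t) \,=\, \partial_x S(bt/2)\,v(t/2) + 2\int_{t/2}^t \partial_x S(b(t{-}s))\,
\rho(s)\,{\rm d} s\,, \qquad t > 0\,,
\]
which follows from the relation $v_t = b v_{xx} + 2\rho$ in \eqref{RD2}; the two
terms in the right-hand side are estimated exactly as in Cases 1 and 2 above.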
\subsection{The case where $v$ is not bounded away from zero}\label{ssec44}
We briefly indicate here how the arguments of Sections~\ref{ssec41}--\ref{ssec43}
have to be adapted to establish Proposition~\ref{main2} without assuming that the
second component $v(x,t)$ of system~\eqref{RD2} is bounded away from zero.
As already mentioned, the idea is to use the modified EDS structures introduced
in Lemma~\ref{lem:edf34}, where the additional parameter $\theta > 0$ is chosen
sufficiently small, depending on $a,b$. It is straightforward to verify that the flux term
$f_1(x,t)$ in \eqref{edf3} still satisfies the bound $f_1^2 \le C_0 e_1 d_1$ for some
$C_0 > 0$, so that the proof of inequality \eqref{Eineq} is unchanged. Similarly,
the additional flux term $\theta w_x w_t$ in \eqref{edf4} is harmless, because
\[
\bigl(w_x w_t\bigr)^2 \,=\, w_x^2 \bigl(bw_{xx} + 2(a-b)u_{xx}\bigr)^2
\,\le\, C\, \tilde e_1 \tilde d_1\,,
\]
for some constant $C > 0$. As a consequence, the proof of the crucial inequality
\eqref{tEineq2} is not modified either. In view of the improved lower bounds
\eqref{cbd2}, the conclusion of Lemma~\ref{lem:first} is strengthened as follows\:
\[
\sup_{x_0 \in \mathbb{R}} \,\int_{I(x_0,T)} \Bigl(u_x^2 + (1+v)v_x^2 + \rho^2\Bigr)
(x,T)\,{\rm d} x \,\le\, C_6\,R^3\,T^{-1/2}\,,
\]
for some constant $C_6 > 0$ depending only on $a,b,\theta$. Similarly,
Lemma~\ref{lem:uvrho} holds without assuming \eqref{eq:lowv} and with
the stronger conclusion
\[
\sup_{x_0 \in \mathbb{R}} \,\int_{I(x_0,T)} \Bigl(|u_{xx}| + (1+v)|v_{xx}| + |\rho|\Bigr)
(x,T)\,{\rm d} x \,\le\, C_8\,R^3\,T^{-1/2}\,,
\]
where the constant $C_8$ only depends on $a,b,\theta$. The rest of the proof
of Proposition~\ref{main2} does not rely on assumption~\eqref{eq:lowv}, and
follows exactly the same lines as in Section~\ref{ssec43}.
\section{Stability analysis of spatially homogeneous equilibria}
\label{sec5}
In this section we study the solutions of system~\eqref{RD} in a neighborhood
of a spatially homogeneous equilibrium $(\bar u,\bar v)$ with $\bar u = \bar v^2$
and $\bar v > 0$. We look for solutions in the form
\[
u(x,t) \,=\, \bar u\bigl(1 + 4 \tilde u(x,t)\bigr)\,, \qquad
v(x,t) \,=\, \bar v\bigl(1 + 2 \tilde v(x,t)\bigr)\,,
\]
so that the perturbations $\tilde u, \tilde v$ satisfy the system
\begin{equation}\label{RD3}
\begin{split}
\tilde u_t(x,t) \,&=\, a \tilde u_{xx}(x,t) + k_1\bigl(\tilde v(x,t)
- \tilde u(x,t) + \tilde v(x,t)^2\bigr)\,, \\
\tilde v_t(x,t) \,&=\, b \tilde v_{xx}(x,t) + k_2\bigl(\tilde u(x,t)
- \tilde v(x,t) - \tilde v(x,t)^2\bigr)\,,
\end{split}
\end{equation}
where $k_1 = k$ and $k_2 = 4k\bar v$. We introduce the matrix notation
\[
W \,=\, \begin{pmatrix} W_1 \\ W_2\end{pmatrix} \,\equiv\,
\begin{pmatrix} \tilde u \\ \tilde v\end{pmatrix}\,,\qquad
\mathcal{N} \,=\, \begin{pmatrix} -1 \\ 1\end{pmatrix}\,, \qquad
\mathcal{M} \,=\, \begin{pmatrix} k_1 \\ -k_2\end{pmatrix}\,, \qquad
D \,=\, \begin{pmatrix} a & 0 \\ 0 & b\end{pmatrix}\,,
\]
so that \eqref{RD3} takes the simpler form
\begin{equation}\label{RD4}
W_t \,=\, D W_{xx} + \bigl(\mathcal{N} \cdot W + W_2^2\bigr)\mathcal{M}\,.
\end{equation}
Note that the reaction terms in \eqref{RD4} are always proportional
to the vector $\mathcal{M}$, which therefore spans the ``stoichiometric subspace''
of the chemical reaction. They vanish when $\mathcal{N}\cdot W + W_2^2 = 0$,
so that the tangent space to the manifold $\mathcal{E}$ of equilibria at
the origin $W = 0$ is orthogonal to the vector $\mathcal{N}$.
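At the linear level, neglecting the quadratic term $W_2^2$ in \eqref{RD4}
gives the constant-coefficient system
\[
W_t \,=\, D\,W_{xx} + \mathcal{M}\,\mathcal{N}^\top W\,,
\]
whose symbol in Fourier space is precisely the matrix $A(\xi)$ defined in
\eqref{Adef} below.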
The integral equation associated with \eqref{RD4} is
\begin{equation}\label{Duhamel}
W(t) \,=\, \mathcal{S}(t) * W_0 + \int_0^t \mathcal{S}(t-s)\mathcal{M} * W_2(s)^2 \,{\rm d} s\,,
\qquad t > 0\,,
\end{equation}
where $*$ denotes the convolution with respect to the space variable
$x \in \mathbb{R}$, and $\mathcal{S}(t) = \mathcal{S}(\cdot,t)$ is the matrix-valued function
defined by
\begin{equation}\label{Sdef}
\mathcal{S}(x,t) \,=\, \frac{1}{2\pi}\int_\mathbb{R} \exp\bigl(t A(\xi)\bigr)
\,e^{i\xi x}\,{\rm d} \xi\,, \qquad x \in \mathbb{R}\,,\quad t > 0\,,
\end{equation}
with
\begin{equation}\label{Adef}
A(\xi) \,=\, -D\xi^2 + \mathcal{M} \,\mathcal{N}^\top \,=\, \begin{pmatrix} -k_1 -a\xi^2
& k_1 \\ k_2 & -k_2 -b\xi^2\end{pmatrix}\,.
\end{equation}
The exponential in \eqref{Sdef} can be computed explicitly. For that
purpose, it is convenient to introduce the notation
\[
\mu \,=\, \frac{a+b}{2}\,, \qquad \nu \,=\, \frac{a-b}{2}\,, \qquad
\kappa \,=\, \frac{k_1+k_2}{2}\,, \qquad \ell \,=\, \frac{k_1-k_2}{2}\,,
\]
so that $a = \mu + \nu$, $b = \mu - \nu$, $k_1 = \kappa+\ell$, $k_2 =
\kappa - \ell$. We observe that
\[
A(\xi) \,=\, - (\kappa+\mu\xi^2)\mathbf{1} + B(\xi)\,, \qquad \hbox{where}\qquad
B(\xi) \,=\, \begin{pmatrix} -\ell-\nu\xi^2
& k_1 \\ k_2 & \ell + \nu \xi^2\end{pmatrix}\,.
\]
Moreover $B(\xi)^2 = \Delta(\xi)^2\mathbf{1}$, where $\mathbf{1}$ is the identity matrix and
\begin{equation}\label{Deltadef}
\Delta(\xi) \,=\, \sqrt{\kappa^2 + 2 \ell \nu \xi^2 + \nu^2 \xi^4}
\,=\, \sqrt{k_1 k_2 + (\ell + \nu \xi^2)^2}\,.
\end{equation}
In particular, the eigenvalues of $A(\xi)$ are real and equal to $\lambda_\pm(\xi)
= -(\kappa+\mu \xi^2) \pm \Delta(\xi)$. Using these observations, it is easy to
verify that
\begin{equation}\label{expA}
\exp\bigl(t A(\xi)\bigr) \,=\, e^{-(\kappa+\mu\xi^2)t}\Bigl(\cosh\bigl(\Delta(\xi)
t\bigr) \,\mathbf{1} \,+\, \frac{\sinh\bigl(\Delta(\xi) t\bigr)}{\Delta(\xi)}
\,B(\xi)\Bigr)\,, \qquad t \ge 0\,.
\end{equation}
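For the reader's convenience, we sketch the verification of \eqref{expA}\: as
$A(\xi)$ differs from $B(\xi)$ by a multiple of the identity, we have
$\exp(tA(\xi)) = e^{-(\kappa+\mu\xi^2)t}\exp(tB(\xi))$, and the relation
$B(\xi)^2 = \Delta(\xi)^2\mathbf{1}$ allows us to split the exponential series into
even and odd powers\:
\[
\exp\bigl(t B(\xi)\bigr) \,=\, \sum_{n\ge 0}\frac{\bigl(\Delta(\xi)t\bigr)^{2n}}{(2n)!}
\,\mathbf{1} \,+\, \sum_{n\ge 0}\frac{\Delta(\xi)^{2n}\,t^{2n+1}}{(2n+1)!}\,B(\xi)
\,=\, \cosh\bigl(\Delta(\xi)t\bigr)\,\mathbf{1} \,+\, \frac{\sinh\bigl(\Delta(\xi)t\bigr)}
{\Delta(\xi)}\,B(\xi)\,.
\]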
The following result specifies the decay rate of the kernel $\mathcal{S}(\cdot,t)$ in
$L^1(\mathbb{R})$ as $t \to +\infty$.
\begin{prop}\label{prop:Sest}
For any integer $m \in \mathbb{N}$, there exists a constant $C > 0$ such that the
matrix-valued function $\mathcal{S}(\cdot,t)$ defined by \eqref{Sdef} satisfies,
for all $t > 0$, the estimates
\begin{equation}\label{Sest}
\begin{split}
\|\partial_x^m \mathcal{S}(t)\|_{L^1(\mathbb{R})} \,&\le\, C t^{-m/2}\,, \\
\|\partial_x^m \mathcal{S}(t)\mathcal{M}\|_{L^1(\mathbb{R})} \,&\le\, C t^{-m/2}\bigl(
e^{-2\kappa t} + |\nu| t^{-1}\bigr)\,, \\
\|\partial_x^m \mathcal{N}^\top\mathcal{S}(t)\|_{L^1(\mathbb{R})} \,&\le\, C t^{-m/2}\bigl(
e^{-2\kappa t} + |\nu| t^{-1}\bigr)\,, \\
\|\partial_x^m \mathcal{N}^\top\mathcal{S}(t)\mathcal{M}\|_{L^1(\mathbb{R})} \,&\le\, C t^{-m/2}\bigl(
e^{-2\kappa t} + \nu^2 t^{-2}\bigr)\,.
\end{split}
\end{equation}
\end{prop}
\begin{proof}
The following interpolation estimate will be repeatedly used\: if $f : \mathbb{R}
\to \mathbb{C}$ is integrable and if the Fourier transform $\hat f$ belongs to the
Sobolev space $H^1(\mathbb{R})$, then
\begin{equation}\label{finterp}
\|f\|_{L^1}^2 \,\le\, C\|f\|_{L^2} \|x f\|_{L^2} \,\le\, C
\|\hat f\|_{L^2} \|\partial_\xi \hat f\|_{L^2}\,.
\end{equation}
Of course, inequality \eqref{finterp} remains valid if $f$ is vector-valued or
matrix-valued. Given any $t > 0$, we first apply \eqref{finterp} to $f(x) = \mathcal{S}(x,t)$,
recalling that $\hat f(\xi) = \hat \mathcal{S}(\xi,t) = \exp(tA(\xi))$ is given
by \eqref{expA}. Without loss of generality, we assume henceforth that $a \ge b$,
so that $\nu \ge 0$ (the case $a < b$ is completely similar). Using the
elementary bounds
\[
\max\bigl(\sqrt{k_1k_2}\,,\,|\ell + \nu\xi^2|\bigr) \,\le\, \Delta(\xi)
\,\le\, \kappa + \nu \xi^2\,,
\]
as well as $\cosh(z) \le e^z$ and $\sinh(z) \le \min(1,z)e^z$ for $z \ge 0$,
we easily deduce from \eqref{expA} the pointwise estimates
\[
|\hat \mathcal{S}(\xi,t)| \,\le\, C\,e^{-b\xi^2 t}\,, \qquad
|\partial_\xi\hat \mathcal{S}(\xi,t)| \,\le\, C|\xi| t\,e^{-b\xi^2 t}\,,
\]
which imply that $\|\hat\mathcal{S}(t)\|_{L^2} \le C t^{-1/4}$ and $\|\partial_\xi \hat\mathcal{S}(t)\|_{L^2}
\le C t^{1/4}$. It thus follows from \eqref{finterp} that the $L^1$ norm of
$\mathcal{S}(t)$ is uniformly bounded for all $t > 0$, and repeating the same argument
with $f(x) = \partial_x^m \mathcal{S}(x,t)$ for some $m \in \mathbb{N}$ we arrive at the first
inequality in \eqref{Sest}.
The other inequalities in \eqref{Sest} exploit cancellations that
occur when the matrix $\mathcal{S}(x,t)$ acts on the vector $\mathcal{M}$ (to the right)
or on the vector $\mathcal{N}^\top$ (to the left). We start from the identities
\begin{equation}\label{MNid}
B(\xi)\mathcal{M} \,=\, -\kappa \mathcal{M} - \nu \xi^2 \begin{pmatrix} k_1 \\ k_2
\end{pmatrix}\,, \qquad
\mathcal{N}^\top B(\xi)\,=\, -\kappa \mathcal{N}^\top + \nu \xi^2 \begin{pmatrix} 1 & 1
\end{pmatrix}\,,
\end{equation}
which follow immediately from the definitions. Writing $\cosh(\Delta t) =
e^{-\Delta t} + \sinh(\Delta t)$ in \eqref{expA}, we find
\begin{equation}\label{SMprelim}
\hat \mathcal{S}(\xi,t)\mathcal{M} \,=\, e^{-(\kappa+\mu\xi^2)t}\left\{e^{-\Delta t}\mathcal{M} +
\Bigl(1 - \frac{\kappa}{\Delta}\Bigr)\sinh(\Delta t)\mathcal{M} - \nu\xi^2\,
\frac{\sinh(\Delta t)}{\Delta}\begin{pmatrix} k_1 \\ k_2
\end{pmatrix}\right\}\,.
\end{equation}
In the particular case where $\nu = 0$, one has $\Delta = \kappa$, so that
$\hat \mathcal{S}(\xi,t)\mathcal{M} = e^{-(2\kappa +\mu \xi^2)t}\mathcal{M}$. In general, only the
first term in the right-hand side of \eqref{SMprelim} decays exponentially
in time, and can be estimated using the elementary bound $\mu \xi^2 +
\Delta(\xi) \ge \kappa + b\xi^2$. The remaining terms are treated as above,
and we arrive at pointwise estimates of the form
\begin{align*}
|\hat \mathcal{S}(\xi,t)\mathcal{M}| \,&\le\, e^{-(2\kappa +b\xi^2)t} + C\nu\xi^2\,e^{-b\xi^2 t}\,,\\
|\partial_\xi\hat \mathcal{S}(\xi,t)\mathcal{M}| \,&\le\, C|\xi|t\,e^{-(2\kappa +b\xi^2)t}
+ C\nu|\xi|(1+\xi^2t)\,e^{-b\xi^2 t}\,.
\end{align*}
Invoking \eqref{finterp}, we thus obtain the second inequality in \eqref{Sest}.
The third one is obtained similarly, starting from the second relation in
\eqref{MNid}.
Finally, a straightforward calculation shows that
\[
\mathcal{N}^\top \hat \mathcal{S}(\xi,t)\mathcal{M} \,=\, -2\,e^{-(\kappa+\mu\xi^2)t}\left\{
\kappa\,e^{-\Delta t} + \Bigl(\kappa - \frac{\kappa^2 + \ell\nu \xi^2}{\Delta}\Bigr)
\sinh(\Delta t)\right\}\,,
\]
and we deduce the pointwise estimates
\begin{align*}
|\mathcal{N}^\top \hat \mathcal{S}(\xi,t)\mathcal{M}| \,&\le\, C\,e^{-(2\kappa + b\xi^2)t} + C\nu^2\xi^4
\,e^{-b\xi^2 t}\,,\\
|\partial_\xi\mathcal{N}^\top \hat \mathcal{S}(\xi,t)\mathcal{M}| \,&\le\, C|\xi|t\,e^{-(2\kappa
+b \xi^2)t} + C\nu^2|\xi|^3(1+\xi^2t)\,e^{-b\xi^2 t}\,.
\end{align*}
Using again \eqref{finterp}, we obtain the last inequality in \eqref{Sest}.
\end{proof}
The conclusion of Proposition~\ref{prop:Sest} is interesting for at least two
reasons. First, if $a \neq b$ and if $W(t) = \mathcal{S}(t)*W_0$ is a solution of the
{\em linearized} equation \eqref{RD4} with initial data $W_0 \in X^2$, the first
inequality in \eqref{Sest} (with $m = 1$) and the third one (with $m = 0$)
imply that
\begin{equation}\label{linest}
\|\tilde u_x(t)\|_{L^\infty} + \|\tilde v_x(t)\|_{L^\infty} \,=\, \mathcal{O}(t^{-1/2})\,,
\qquad \|\tilde u(t) - \tilde v(t)\|_{L^\infty} \,=\, \mathcal{O}(t^{-1})\,,
\end{equation}
as $t \to +\infty$. We emphasize that, at the linear level, the difference
$\tilde u - \tilde v$ measures the distance to the manifold $\mathcal{E}$ of equilibria.
Because of \eqref{linest}, we conjecture that the decay rates in \eqref{mainest1}
are optimal for general solutions of \eqref{RD}, see the discussion after
Proposition~\ref{main2}. Note that Proposition~\ref{main1} assumes that
the diffusivities are equal, in which case Proposition~\ref{prop:Sest}
shows that the difference $\tilde u(t) - \tilde v(t)$ decays exponentially
fast as $t \to +\infty$ when $W = (\tilde u,\tilde v)$ solves the linearized
equation.
The second observation concerns the full, nonlinear equation \eqref{RD4}. Using
the first two estimates in \eqref{Sest}, it is easy to prove by a fixed point
argument that the Cauchy problem for \eqref{RD4} is globally well-posed for
small data $W_0 \in L^p(\mathbb{R})^2$ if $p < \infty$, and that the solutions satisfy
$\|W(t)\|_{L^\infty} = \mathcal{O}(t^{-1/(2p)})$ as $t \to +\infty$. However, the
critical case $p = \infty$, which is relevant in the context of the present
paper, cannot be treated by this approach. In fact, using the optimal decay
estimates listed in Proposition~\ref{prop:Sest}, we are not even able to show
that the solution $W(t)$ of \eqref{RD4} originating from small initial data
$W_0 \in X^2$ stays uniformly bounded for all times, except in the case of equal
diffusivities where the problem is much simpler. The reason is that, if
$a \neq b$, the quantity $\|\mathcal{S}(t)\mathcal{M}\|_{L^1(\mathbb{R})}$ decays like $t^{-1}$ as
$t \to +\infty$ and is therefore not integrable in time. This indicates that
the dynamics of system~\eqref{RD} in the space of bounded functions on $\mathbb{R}$
is not simple to analyze, even in a neighborhood of a spatially homogeneous
equilibrium.
\section{Conclusion and perspectives}\label{sec6}
The present work is only a modest incursion into the realm of extended
reaction-diffusion systems with a local gradient structure. Even for the very
simple example \eqref{RD}, which has many specific properties, our results are
incomplete and a global understanding of the dynamics is still missing. To
be more precise, assume that the decay rates \eqref{higherest} hold for all
bounded and nonnegative solutions of \eqref{RD}, which is a reasonable conjecture
(although we are not able to prove that when $a \neq b$). The quantity
$\rho = u - v^2$, which measures the distance to the manifold $\mathcal{E}$ of
equilibria, satisfies the equation
\begin{equation}\label{rhoequation}
\rho_t \,=\, a \rho_{xx} - k(1{+}4v)\rho + 2(a{-}b)v v_{xx} + 2a v_x^2\,.
\end{equation}
According to \eqref{higherest}, the last three terms in \eqref{rhoequation}
decay like $t^{-1}$ when $t \to +\infty$, whereas $\rho_{xx} = \mathcal{O}(t^{-2})$.
It is therefore reasonable to expect that
\begin{equation}\label{slaving}
\rho \,=\, \frac{1}{k(1{+}4v)}\Bigl(2(a{-}b)v v_{xx} + 2a v_x^2\Bigr)
+ \mathcal{O}(t^{-2})\,, \qquad t \to +\infty\,.
\end{equation}
Inserting this ansatz into the $v$-equation $v_t = bv_{xx} + 2k\rho$ and
neglecting the higher-order terms, we obtain the following quasilinear
diffusion equation
\begin{equation}\label{diffeq1}
v_t \,=\, \frac{b+4av}{1+4v}\,v_{xx} + \frac{4a}{1+4v}\,v_x^2\,,
\qquad x \in \mathbb{R}\,, \quad t > 0\,.
\end{equation}
Alternatively, setting $w = v + 2v^2$, we can write \eqref{diffeq1} in
the more elegant form
\begin{equation}\label{diffeq2}
w_t \,=\, \bigl(D(w)w_x\bigr)_x\,, \qquad \hbox{where}\qquad
D(w) \,=\, a + \frac{b-a}{\sqrt{1+8w}}\,.
\end{equation}
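Indeed, if $w = v + 2v^2$, then $1 + 8w = (1{+}4v)^2$, so that $D(w) = a +
\frac{b-a}{1+4v} = \frac{b+4av}{1+4v}$, while $w_x = (1{+}4v)v_x$ and $w_t =
(1{+}4v)v_t$. Hence
\[
\bigl(D(w)w_x\bigr)_x \,=\, \bigl((b+4av)\,v_x\bigr)_x \,=\, (b+4av)\,v_{xx}
+ 4a\,v_x^2 \,=\, (1{+}4v)\,v_t \,=\, w_t\,,
\]
whenever $v$ solves \eqref{diffeq1}.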
We conjecture that the long-time asymptotics of any solution of \eqref{RD} in
$X_+^2$ corresponds to a slow motion along the manifold $\mathcal{E}$ of chemical
equilibria, which is described to leading order by the diffusion equation
\eqref{diffeq1} or \eqref{diffeq2}. Note that the effective diffusion $D(w)$ in
\eqref{diffeq2} depends on the solution $w$ in a nontrivial way, except in the
particular case $a = b$ where \eqref{diffeq2} reduces to the linear heat
equation. Given two positive constants $w_\pm$, one can solve the Cauchy problem
for \eqref{diffeq2} with Riemann-like initial data
\[
w_0(x) \,=\, \begin{cases} w_- & \hbox{if } x < 0\,, \\
w_+ & \hbox{if } x > 0\,,\end{cases}
\]
and this produces a self-similar solution of \eqref{diffeq2} which should
describe the {\em diffusive mixing} of two chemical equilibria under the
dynamics of \eqref{RD}, see \cite{GM} for a similar result in the context
of the Ginzburg-Landau equation. A rigorous justification of the slaving
ansatz \eqref{slaving} and of the relevance of the diffusion equation
\eqref{diffeq2} for the long-time asymptotics of the original system
\eqref{RD} is left to a future work.
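Although a rigorous analysis is left open, the diffusive mixing scenario is easy
to explore numerically. The following Python sketch integrates \eqref{diffeq2}
with an explicit, conservative finite-difference scheme; the values of $a$, $b$,
$w_\pm$, and the discretization parameters are illustrative only.
\begin{verbatim}
# Explicit finite-difference sketch for w_t = (D(w) w_x)_x with Riemann data.
# All parameter values below are illustrative, not taken from the paper.
import numpy as np

a, b = 1.0, 0.2                          # diffusivities (assumed values)
w_minus, w_plus = 0.5, 2.0               # Riemann states (assumed values)

def D(w):
    # effective diffusion D(w) = a + (b - a) / sqrt(1 + 8 w)
    return a + (b - a) / np.sqrt(1.0 + 8.0 * w)

L, N = 200.0, 2000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
w = np.where(x < 0.0, w_minus, w_plus)   # Riemann-like initial data

dt = 0.4 * dx**2 / max(a, b)             # explicit-scheme stability restriction
for _ in range(20000):
    Dm = 0.5 * (D(w[1:]) + D(w[:-1]))    # D(w) at cell interfaces
    flux = Dm * (w[1:] - w[:-1]) / dx    # flux D(w) w_x at interfaces
    w[1:-1] += dt * (flux[1:] - flux[:-1]) / dx

# The profile w(x,t) approaches a self-similar function of x / sqrt(t),
# describing the diffusive mixing of the two equilibria.
\end{verbatim}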
On the other hand, the model we consider is just a simple example in a broad
class of systems, and it is natural to ask to which extent our analysis
relies on specific features of \eqref{RD}. In a first step towards greater
generality, we consider the reaction $n\mathcal{A} \xrightleftharpoons[]{} m\mathcal{B}$,
where $n,m$ are positive integers such that $n + m \ge 3$. The corresponding system
\begin{equation}\label{RDnm}
u_t \,=\, a u_{xx} + nk\bigl(v^m-u^n\bigr)\,, \qquad
v_t \,=\, b v_{xx} + mk\bigl(u^n-v^m\bigr)\,,
\end{equation}
is still cooperative, and the analogue of Proposition~\ref{prop:exist} holds.
It is also possible to find a polynomial EDS structure of the form \eqref{edf1},
which reads
\begin{align*}
e \,&=\, \frac{1}{n(n{+}1)}\,u^{n+1} + \frac{1}{m(m{+}1)}\,v^{m+1}\,, \\
f \,&=\, \frac{a}{n} u^n u_x + \frac{b}{m}\,v^m v_x\,, \\[2mm]
d \,&=\, a u^{n-1} u_x^2 + b v^{m-1} v_x^2 + k(u^n-v^m)^2\,.
\end{align*}
However, there is apparently less flexibility for constructing a second EDS
structure in the sense of Section~\ref{sec3}, and at the moment we can do
that only if the ratio $a/b$ is not too different from $1$. Except for that
limitation in the choice of the parameters $a,b$, the analogue of
Proposition~\ref{main2} holds with a similar proof.
The situation changes significantly when we turn our attention to more realistic
chemical reactions such as $\mathcal{A}_1 \xrightleftharpoons[]{} \mathcal{A}_2 + \mathcal{A}_3$.
The associated system is still relatively simple
\begin{equation}\label{RD3x3}
u_t \,=\, a u_{xx} - u + vw\,, \qquad
v_t \,=\, b v_{xx} + u - vw\,, \qquad
w_t \,=\, c w_{xx} + u - vw\,,
\end{equation}
but new difficulties arise that make the analysis substantially more
difficult. First, system~\eqref{RD3x3} is not cooperative, and does not satisfy
any comparison principle we know of. As a consequence, new arguments are needed
to show that the solutions of \eqref{RD3x3} stay uniformly bounded for all
nonnegative initial data in $L^\infty(\mathbb{R})$. For the same reason, it is not
obvious that a solution starting close (in the $L^\infty$ sense) to a chemical
equilibrium will stay in a neighborhood of that equilibrium for all times. Next,
the only EDS structure we are aware of is given by the general formulas
\eqref{edfgen}, and we are not able to construct a second EDS structure that
controls the entropy dissipation, as we did in Section~\ref{sec3} for the
simpler system \eqref{RD}. At the moment, we are thus unable to prove the
analogue of Proposition~\ref{main2} for system~\eqref{RD3x3}, and a fortiori for
more complex reaction-diffusion systems of the form \eqref{RDgen}. We hope to be
able to elucidate some of these questions in the future.
\section{Introduction}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\linewidth]{./GCN-LSTM.png}
\caption{The structure of one AGC-LSTM layer. Different from traditional LSTM, the graph convolutional operator within AGC-LSTM causes the input, hidden state, and cell memory of AGC-LSTM to be graph-structured data.
}
\label{GCN-LSTM}
\end{figure}
In the computer vision field, human action recognition plays a fundamental and important role, with the purpose of predicting the action classes from videos. It has been studied for decades and is still very popular due to its extensive potential applications, \eg, video surveillance, human-computer interaction, sports analysis and so on \cite{Poppe2010survey, Weinland2011survey, Aggarwal2011Human}.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.8\linewidth, height=0.3\linewidth]{./pipeline.png}
\caption{The architecture of the proposed attention enhanced graph convolutional LSTM network (AGC-LSTM). Feature augmentation (FA) computes feature differences with position features and concatenates both position features and feature differences. LSTM is used to dispel scale variance between feature differences and position features. Three AGC-LSTM layers can model discriminative spatial-temporal features. Temporal average pooling is the implementation of average pooling in the temporal domain. We use the global feature of all joints and the local feature of focused joints from the last AGC-LSTM layer to predict the class of human action.
}
\label{pipeline}
\end{figure*}
Action recognition is a challenging task in the computer vision community. There are various attempts at human action recognition based on RGB video and 3D skeleton data. The RGB video based action recognition methods \cite{Simonyan2014Two-stream, Limin2016Temporal, Tran_2015_ICCV, human2018Pichao} mainly focus on modeling spatial and temporal representations from RGB frames and temporal optical flow. Although RGB video based methods have achieved promising results, there still exist some limitations, \eg, background clutter, illumination changes, appearance variation, and so on. 3D skeleton data represents the body structure with a set of 3D coordinate positions of key joints. Since skeleton sequences do not contain color information, they are not affected by the limitations of RGB video. Such a robust representation allows modeling of more discriminative temporal characteristics of human actions. Moreover, Johansson \etal~\cite{johansson1973visual} have given an empirical and theoretical basis showing that key joints can provide highly effective information about human motion. Besides, the Microsoft Kinect \cite{zhang2012microsoft} and advanced human pose estimation algorithms \cite{cao2017realtime} make it easier to obtain skeleton data.
For skeleton based action recognition, the existing methods explore different models to learn spatial and temporal features.
Song \etal \cite{Song2017Attention} employ a spatial-temporal attention model based on LSTM to select discriminative spatial and temporal features. The Convolutional Neural Networks (CNNs) are used to learn spatial-temporal features from skeletons in \cite{Yong2015Skeleton, Chao2018Co-occurrence, Ke2017A}. \cite{Yan2018Spatial, Kalpit2018Part} employ graph convolutional networks (GCN) for action recognition.
Compared with \cite{Yan2018Spatial, Kalpit2018Part}, Si \etal~\cite{Chenyang2018Skeleton} propose to utilize the graph neural network and LSTM to represent spatial and temporal information, respectively.
In short, all these methods are trying to design an effective model that can identify spatial and temporal features of skeleton sequence. Nevertheless, how to effectively extract discriminative spatial and temporal features is still a challenging problem.
Generally, there are three notable characteristics of human skeleton sequences: 1) There are strong correlations between each node and its adjacent nodes, so skeleton frames contain abundant body structural information. 2) Temporal continuity exists not only within the same joints (\eg, hand, wrist and elbow), but also in the overall body structure. 3) There is a co-occurrence relationship between the spatial and temporal domains. In this paper, we propose a novel and general framework called the attention enhanced graph convolutional LSTM network (AGC-LSTM) for skeleton-based action recognition, which improves the skeleton representation by synchronously learning the spatiotemporal characteristics mentioned above.
The architecture of the proposed AGC-LSTM network is shown in Fig.\ref{pipeline}. Firstly, the coordinate of each joint is transformed into a spatial feature with a linear layer. Then we concatenate the spatial feature and the feature difference between two consecutive frames to compose an augmented feature. In order to dispel the scale variance between both features, a shared LSTM is adopted to process each joint sequence. Next, we apply three AGC-LSTM layers to model spatial-temporal features. As shown in Fig.\ref{GCN-LSTM}, due to the graph convolutional operator within AGC-LSTM, it can not only effectively capture discriminative features in spatial configuration and temporal dynamics but also explore the co-occurrence relationship between the spatial and temporal domains. More specifically, the attention mechanism is employed to enhance the features of key nodes at each time step, which encourages AGC-LSTM to learn more discriminative features. For example, the features of ``elbow'', ``wrist'' and ``hand'' are very important for the action ``handshaking'' and should be enhanced in the process of identifying the behavior. Inspired by spatial pooling in CNNs, we present a temporal hierarchical architecture with temporal average pooling to increase the temporal receptive fields of the top AGC-LSTM layers, which boosts the ability to learn high-level spatiotemporal semantic features and significantly reduces the computational cost. Finally, we use the global feature of all joints and the local feature of focused joints from the last AGC-LSTM layer to predict the class of human actions. Although the joint-based model achieves the state-of-the-art results, we also explore the performance of the proposed model on the part level. For the part-based model, the concatenation of the joints of each part serves as a node to construct the graph. Furthermore, the two-stream model based on joints and parts can lead to further performance improvement.
The main contributions of this work are summarized as follows:
\begin{itemize}
\item We propose a novel and general AGC-LSTM network for skeleton-based action recognition, which is the first attempt to apply graph convolutional LSTM to this task.
\item The proposed AGC-LSTM is able to effectively capture discriminative spatiotemporal features. More specifically, the attention mechanism is employed to enhance the features of key nodes, which helps improve the spatiotemporal representations.
\item A temporal hierarchical architecture is proposed to boost the ability to learn high-level spatiotemporal semantic features and significantly reduce the computational cost.
\item The proposed model achieves the state-of-the-art results on both NTU RGB+D dataset and Northwestern-UCLA dataset. We perform extensive experiments to demonstrate the effectiveness of our model.
\end{itemize}
\section{Related Work}
\textbf{\emph{Neural networks with graph}} \hspace{3mm}
Recently, graph-based models have attracted a lot of attention due to their effective representation of graph-structured data \cite{Keyulu2018How}. Existing graph models mainly fall into two architectures. One framework, called graph neural network (GNN), is the combination of a graph and a recurrent neural network. Through multiple iterations of message passing and state updating of nodes, each node captures the semantic relation and structural information within its neighbor nodes. Qi \etal~\cite{Qi_2018_ECCV} apply GNN to address the task of detecting and recognizing human-object interactions in images and videos. Li \etal~\cite{Li_2017_ICCV} exploit the GNNs to model dependencies between roles and predict a consistent structured output for situation recognition. The other framework is the graph convolutional network (GCN), which generalizes convolutional neural networks to graphs. There are two types of GCNs: spectral GCNs and spatial GCNs. Spectral GCNs transform graph signals on graph spectral domains and then apply spectral filters on spectral domains. For example, the CNNs are utilized in the spectral domain relying on the graph Laplacian \cite{NIPS2015_5954, henaff2015deep}. Kipf \etal~\cite{Thomas2017Semi} introduce Spectral GCNs for semi-supervised classification on graph-structured data. For spatial GCNs, the convolution operation is applied to compute a new feature vector for each node using its neighborhood information. Simonovsky \etal~\cite{Simonovsky_2017_CVPR} formulate a convolution-like operation on graph signals performed in the spatial domain and are the first to apply graph convolutions to point cloud classification. In order to capture the spatial-temporal features of graph sequences, a graph convolutional LSTM is first proposed in \cite{Youngjoo2016Structured}, which is an extension of GCNs with a recurrent architecture. Inspired by \cite{Youngjoo2016Structured}, we exploit a novel AGC-LSTM network to learn inherent spatiotemporal representations from skeleton sequences.
\textbf{\emph{Skeleton-based action recognition}} \hspace{3mm}
Human action recognition based on skeleton data has received a lot of attention, due to its effective representation of motion dynamics. Traditional skeleton-based action recognition methods mainly focus on designing hand-crafted features \cite{Raviteja2014Human, Wang2012Mining, Hussein2013Human}. Vemulapalli \etal~\cite{Raviteja2016Rolling} represent each skeleton using the relative 3D rotations between various body parts. The relative 3D geometry between all pairs of body parts is applied to represent the 3D human skeleton in \cite{Raviteja2014Human}.
Recent works mainly learn human action representations with deep learning networks\cite{Action2018Zhengyuan, Chunyu2018Memory, Human2017Fabien}. Du \etal~\cite{Du2015Hierarchical} divide human skeleton into five parts according to the human physical structure, and then separately feed them into a hierarchical recurrent neural network to recognize actions. A spatial-temporal attention network learns to selectively focus on discriminative spatial and temporal features in \cite{Song2017Attention}. Zhang \etal~\cite{Zhang2017View} present a view adaptive model for skeleton sequence, which is capable of regulating the observation viewpoints to the suitable ones by itself. The works in \cite{Yan2018Spatial, Kalpit2018Part, Chao2018Co-occurrence, Chenyang2018Skeleton} further show that learning discriminative spatial and temporal features is the key element for human action recognition. A hierarchical CNN model is presented in \cite{Chao2018Co-occurrence} to learn representations for joint co-occurrences and temporal evolutions. A spatial-temporal graph convolutional network (ST-GCN) is proposed for action recognition in \cite{Yan2018Spatial}. Each spatial-temporal graph convolutional layer constructs spatial characteristics with a graph convolutional operator, and models temporal dynamic with a convolutional operator. In addition, a part-based graph convolutional network (PB-GCN) is proposed to learn the relations between parts in \cite{Kalpit2018Part}. Compared with ST-GCN \cite{Yan2018Spatial} and PB-GCN \cite{Kalpit2018Part}, Si \etal~\cite{Chenyang2018Skeleton} apply graph neural networks to capture spatial structural information and then use LSTM to model temporal dynamics. Despite the significant performance improvement in \cite{Chenyang2018Skeleton}, it ignores the co-occurrence relationship between spatial and temporal features. In this paper, we propose a novel attention enhanced graph convolutional LSTM network that can not only effectively extract discriminative spatial and temporal features but also explore the co-occurrence relationship between spatial and temporal domains.
\section{Model Architecture}
\subsection{Graph Convolutional Neural Network}
Graph convolutional neural network (GCN) is a general and effective framework for learning representations of graph-structured data. Various GCN variants have achieved the state-of-the-art results on many tasks. For skeleton-based action recognition, let $\mathcal{G}_t$ = \{$\mathcal{V}_t, \mathcal{E}_t$\} denote a graph of the human skeleton on a single frame at time $t$, where $\mathcal{V}_t$ is the set of $N$ joint nodes and $\mathcal{E}_t$ is the set of skeleton edges. The neighbor set of a node $v_{ti}$ is defined as $\mathcal{N}(v_{ti}) = \{v_{tj}|d(v_{ti}, v_{tj}) \leq D\}$, where $d(v_{ti}, v_{tj})$ is the minimum path length from $v_{tj}$ to $v_{ti}$. A graph labeling function $\ell: \mathcal{V}_t \to \{1,2,...,K\}$ is designed to assign the labels $\{1,2,...,K\}$ to each graph node $v_{ti} \in \mathcal{V}_t$, which can partition the neighbor set $\mathcal{N}(v_{ti})$ of node $v_{ti}$ into a fixed number of $K$ subsets. The graph convolution is generally computed as:
\begin{align}
\label{GCN1}
\textbf{Y}_{out}(v_{ti}) = \sum_{v_{tj}\in\mathcal{N}(v_{ti})} \frac{1}{Z_{ti}(v_{tj})} \textbf{X}(v_{tj}) \textbf{W}(\ell(v_{tj}))
\end{align}
where $\textbf{X}(v_{tj})$ is the feature of node $v_{tj}$. $\textbf{W}(\cdot)$ is a weight function that allocates a weight indexed by the label $\ell(v_{tj})$ from $K$ weights. $Z_{ti}(v_{tj})$ is the cardinality of the corresponding subset, which normalizes the feature representations. $\textbf{Y}_{out}(v_{ti})$ denotes the output of the graph convolution at node $v_{ti}$. More specifically, with the adjacency matrix, Eqn. \ref{GCN1} can be represented as:
\begin{align}
\label{GCN2}
\textbf{Y}_{out} = \sum_{k=1}^{K} \boldsymbol{\Lambda}_k^{-\frac{1}{2}} \textbf{A}_k \boldsymbol{\Lambda}_k^{-\frac{1}{2}} \textbf{X} \textbf{W}_k
\end{align}
where $\textbf{A}_k$ is the adjacency matrix in spatial configuration of the label $k \in \{1,2,...,K\}$. $\boldsymbol{\Lambda}_k^{ii} = \sum_j \textbf{A}_k^{ij}$ is a degree matrix.
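For concreteness, the following minimal Python sketch implements the graph
convolution of Eqn.~\eqref{GCN2}; the array shapes and variable names are our own
illustration and are not taken from any released implementation.
\begin{verbatim}
# Minimal sketch of the graph convolution in Eqn. (2).
# X: (N, C_in) node features; A_list: K adjacency matrices of shape (N, N),
# one per label subset; W_list: K weight matrices of shape (C_in, C_out).
import numpy as np

def graph_conv(X, A_list, W_list, eps=1e-6):
    Y = np.zeros((X.shape[0], W_list[0].shape[1]))
    for A_k, W_k in zip(A_list, W_list):
        deg = A_k.sum(axis=1)                  # Lambda_k diagonal (node degrees)
        inv_sqrt = 1.0 / np.sqrt(deg + eps)    # Lambda_k^{-1/2}; eps avoids /0
        A_norm = inv_sqrt[:, None] * A_k * inv_sqrt[None, :]
        Y += A_norm @ X @ W_k                  # normalized propagation per subset
    return Y
\end{verbatim}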
\subsection{Attention Enhanced Graph Convolutional LSTM}
For sequence modeling, many studies have demonstrated that LSTM, as a variant of RNN, has a remarkable ability to model long-term temporal dependencies. Various LSTM-based models are employed to learn the temporal dynamics of skeleton sequences. However, due to the fully connected operator within LSTM, spatial correlations are ignored, which is a limitation for skeleton-based action recognition. Compared with LSTM, AGC-LSTM can not only capture discriminative features in spatial configuration and temporal dynamics, but also explore the co-occurrence relationship between the spatial and temporal domains.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth, height=0.5\linewidth]{./GCN-LSTM-cell.png}
\caption{The structure of the AGC-LSTM unit. Compared with LSTM, the inner operator of AGC-LSTM is the graph convolution. To highlight more discriminative information, the attention mechanism is employed to enhance the features of key nodes.
}
\label{GCN-LSTM-cell}
\end{figure}
Like LSTM, AGC-LSTM also contains three gates: an input gate $\textbf{i}_t$, a forget gate $\textbf{f}_t$, and an output gate $\textbf{o}_t$. However, these gates are obtained with the graph convolution operator. The input $\textbf{X}_t$, hidden state $\textbf{H}_t$, and cell memory $\textbf{C}_t$ of AGC-LSTM are graph-structured data. Fig.\ref{GCN-LSTM-cell} shows the structure of the AGC-LSTM unit. Due to the graph convolutional operator within AGC-LSTM, the cell memory $\textbf{C}_t$ and hidden state $\textbf{H}_t$ are able to exhibit temporal dynamics, as well as contain spatial structural information. The functions of the AGC-LSTM unit are defined as follows:
\begin{align}
\label{gc-lstm_neuron}
\textbf{i}_t & = \sigma(\textbf{W}_{xi} *_{\mathcal{G}} \textbf{X}_t + \textbf{W}_{hi} *_{\mathcal{G}} \textbf{H}_{t-1} + \textbf{b}_i) \nonumber\\
\textbf{f}_t & = \sigma(\textbf{W}_{xf} *_{\mathcal{G}} \textbf{X}_t + \textbf{W}_{hf} *_{\mathcal{G}} \textbf{H}_{t-1} + \textbf{b}_f) \nonumber\\
\textbf{o}_t & = \sigma(\textbf{W}_{xo} *_{\mathcal{G}} \textbf{X}_t + \textbf{W}_{ho} *_{\mathcal{G}} \textbf{H}_{t-1} + \textbf{b}_o) \nonumber\\
\textbf{u}_t & = tanh(\textbf{W}_{xc} *_{\mathcal{G}} \textbf{X}_t + \textbf{W}_{hc} *_{\mathcal{G}} \textbf{H}_{t-1} + \textbf{b}_c) \\
\textbf{C}_t & = \textbf{f}_t \odot \textbf{C}_{t-1} + \textbf{i}_t \odot \textbf{u}_t \nonumber\\
\widehat{\textbf{H}}_t & = \textbf{o}_t \odot tanh(\textbf{C}_t) \nonumber\\
\textbf{H}_t & = f_{att} \left( \widehat{\textbf{H}}_t \right) + \widehat{\textbf{H}}_t \nonumber
\end{align}
where $*_{\mathcal{G}}$ denotes the graph convolution operator and $\odot$ denotes the Hadamard product. $\sigma \left( \cdot \right)$ is the sigmoid activation function. $\textbf{u}_t$ is the modulated input. $\widehat{\textbf{H}}_t$ is an intermediate hidden state. $\textbf{W}_{xi} *_{\mathcal{G}} \textbf{X}_t$ denotes a graph convolution of $\textbf{X}_t $ with $\textbf{W}_{xi} $, which can be written as Eqn.\ref{GCN1}. $f_{att}(\cdot)$ is an attention network that can select discriminative information of key nodes. The sum of $f_{att} \left( \widehat{\textbf{H}}_t \right)$ and $\widehat{\textbf{H}}_t$ as the output aims to strengthen information of key nodes without weakening information of non-focused nodes, which can maintain the integrity of spatial information.
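A single AGC-LSTM step of Eqn.~\eqref{gc-lstm_neuron} can then be sketched in
Python as follows, reusing the \texttt{graph\_conv} sketch above; the weight
containers and names are our own assumptions.
\begin{verbatim}
# Sketch of one AGC-LSTM step (Eqn. (3)).
# X_t, H_prev, C_prev: (N, C) graph-structured tensors.
# Wx, Wh: dicts mapping 'i','f','o','u' to lists of K weight matrices;
# b: dict of bias vectors; f_att: the attention network of Eqns. (4)-(5).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def agc_lstm_step(X_t, H_prev, C_prev, Wx, Wh, b, A_list, f_att):
    gc = lambda X, W: graph_conv(X, A_list, W)      # graph convolution *_G
    i = sigmoid(gc(X_t, Wx['i']) + gc(H_prev, Wh['i']) + b['i'])
    f = sigmoid(gc(X_t, Wx['f']) + gc(H_prev, Wh['f']) + b['f'])
    o = sigmoid(gc(X_t, Wx['o']) + gc(H_prev, Wh['o']) + b['o'])
    u = np.tanh(gc(X_t, Wx['u']) + gc(H_prev, Wh['u']) + b['u'])
    C_t = f * C_prev + i * u                        # Hadamard cell update
    H_hat = o * np.tanh(C_t)                        # intermediate hidden state
    H_t = f_att(H_hat) + H_hat                      # attention-enhanced output
    return H_t, C_t, H_hat
\end{verbatim}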
The attention network is employed to adaptively focus on key joints with a soft attention mechanism that can automatically measure the importance of joints. The illustration of the spatial attention network is shown in Fig.\ref{attention}. The intermediate hidden state $\widehat{\textbf{H}}_t$ of AGC-LSTM contains rich spatial structural information and temporal dynamics that are beneficial in guiding the selection of key joints. So we first aggregate the information of all nodes as a query feature:
\begin{align}
\label{aggregate_node}
\textbf{q}_t & = ReLU \left( \sum_{i=1}^{N}\textbf{W}\widehat{\textbf{H}}_{ti} \right)
\end{align}
where $\textbf{W}$ is the learnable parameter matrix. Then the attention scores of all nodes can be calculated as:
\begin{align}
\label{alpha_compute}
\boldsymbol{\alpha}_t = Sigmoid \left(\textbf{U}_s tanh \left(\textbf{W}_h \widehat{\textbf{H}}_t + \textbf{W}_q \textbf{q}_t + \textbf{b}_s \right) + \textbf{b}_u \right)
\end{align}
where $\boldsymbol{\alpha}_t = (\alpha_{t1}, \alpha_{t2}, ..., \alpha_{tN})$, and $\textbf{U}_s, \textbf{W}_h, \textbf{W}_q$ are the learnable parameter matrices. $\textbf{b}_s, \textbf{b}_u$ are the biases. We use the non-linear \emph{Sigmoid} function because multiple key joints may exist. The hidden state $\textbf{H}_{ti}$ of node $v_{ti}$ can also be represented as $(1+\alpha_{ti}) \cdot \widehat{\textbf{H}}_{ti}$. The attention enhanced hidden state $\textbf{H}_t$ will be fed into the next AGC-LSTM layer. Note that, at the last AGC-LSTM layer, the aggregation of all node features will serve as a global feature $\textbf{F}_t^{g}$, and the weighted sum of focused nodes will serve as a local feature $\textbf{F}_t^{l}$:
\begin{align}
\label{global_feature}
\textbf{F}_t^{g} & = \sum_{i=1}^{N} \textbf{H}_{ti} \\
\textbf{F}_t^{l} & = \sum_{i=1}^{N} \alpha_{ti} \cdot \widehat{\textbf{H}}_{ti}
\label{local_feature}
\end{align}
The global feature $\textbf{F}_t^{g}$ and local feature $\textbf{F}_t^{l}$ are used to predict the class of human action.
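A minimal Python sketch of the attention computation in
Eqns.~\eqref{aggregate_node}--\eqref{local_feature} is given below; all parameter
shapes are our own assumptions.
\begin{verbatim}
# Sketch of the spatial attention and the global/local features.
# H_hat: (N, C) intermediate hidden states of the N joints.
# W: (d_q, C), W_h: (d_s, C), W_q: (d_s, d_q), U_s: (1, d_s); b_s, b_u: biases.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(H_hat, W, W_h, W_q, U_s, b_s, b_u):
    q = np.maximum(0.0, (H_hat @ W.T).sum(axis=0))      # query feature, Eqn. (4)
    z = np.tanh(H_hat @ W_h.T + q @ W_q.T + b_s)        # (N, d_s)
    alpha = sigmoid(z @ U_s.T + b_u).squeeze(-1)        # (N,) scores, Eqn. (5)
    H = (1.0 + alpha)[:, None] * H_hat                  # enhanced hidden states
    F_g = H.sum(axis=0)                                 # global feature, Eqn. (6)
    F_l = (alpha[:, None] * H_hat).sum(axis=0)          # local feature, Eqn. (7)
    return alpha, F_g, F_l
\end{verbatim}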
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{./attention.png}
\caption{ Illustration of the spatial attention network.}
\label{attention}
\end{figure}
\subsection{AGC-LSTM Network}
We propose an end-to-end attention enhanced graph convolutional LSTM network (AGC-LSTM) for skeleton-based human action recognition. The overall pipeline of our model is shown in Fig.\ref{pipeline}. In the following, we discuss the rationale behind the proposed framework in detail.
\textbf{Joints Feature Representation.} For the skeleton sequence, we first map the 3D coordinates of each joint into a high-dimensional feature space using a linear layer and an LSTM layer. The first linear layer encodes the coordinates of joints into a 256-dim vector as position features $\textbf{P}_t \in \mathbb{R}^{N \times 256}$, where $\textbf{P}_{ti} \in \mathbb{R}^{1 \times 256}$ denotes the position representation of joint $i$. Since it contains only position information, the position feature $\textbf{P}_{ti}$ is beneficial for learning spatially structured characteristics in the graph model. Frame difference features $\textbf{V}_{ti}$ between two consecutive frames can facilitate the acquisition of dynamic information for AGC-LSTM. In order to take advantage of both, the concatenation of the two features serves as an augmented feature to enrich the feature information. However, the concatenation of the position feature $\textbf{P}_{ti}$ and the frame difference feature $\textbf{V}_{ti}$ suffers from a scale variance between the feature vectors. Therefore, we adopt an LSTM layer to dispel the scale variance between both features:
\begin{align}
\label{feature_flatten}
\textbf{E}_{ti} & = f_{lstm} \left( concat \left(\textbf{P}_{ti}, \textbf{V}_{ti} \right) \right) \nonumber\\
& = f_{lstm} \left( concat \left(\textbf{P}_{ti}, \left( \textbf{P}_{ti} - \textbf{P}_{(t-1)i} \right) \right) \right)
\end{align}
where $\textbf{E}_{ti}$ is the augmented feature of joint $i$ at time $t$. Note that the linear layer and LSTM are shared among different joints.
\textbf{Temporal Hierarchical Architecture.} After the LSTM layer, the sequence $\{\textbf{E}_1,\textbf{E}_2,...,\textbf{E}_{T}\}$ of augmented features is fed into the following AGC-LSTM layers as the node features, where $\textbf{E}_t \in \mathbb{R}^{N \times d_e}$. The proposed model stacks three AGC-LSTM layers to learn the spatial configuration and temporal dynamics. Inspired by spatial pooling in CNNs, we present a temporal hierarchical architecture of AGC-LSTM with average pooling in the temporal domain to increase the temporal receptive field of the top AGC-LSTM layers. Through the temporal hierarchical architecture, the temporal receptive field of each time step at the top AGC-LSTM layer grows from a single frame to a short-term clip, which makes the top layer more sensitive to temporal dynamics. In addition, it significantly reduces the computational cost while improving performance.
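The temporal average pooling between AGC-LSTM layers can be sketched as follows;
the window size below is an illustrative choice.
\begin{verbatim}
# Sketch of temporal average pooling applied between AGC-LSTM layers.
# H: (T, N, C) hidden-state sequence over T time steps.
import numpy as np

def temporal_avg_pool(H, window=2):
    T = (H.shape[0] // window) * window       # drop a possible remainder
    return H[:T].reshape(-1, window, *H.shape[1:]).mean(axis=1)

# Example: T = 100 steps become 50, then 25 across two pooled layers, so each
# top-layer step summarizes a short clip rather than a single frame.
\end{verbatim}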
\textbf{Learning AGC-LSTM.} Finally, the global feature $\textbf{F}_t^g$ and local feature $\textbf{F}_t^l$ of each time step are transformed into the scores $\textbf{o}_t^g$ and $\textbf{o}_t^l$ for $C$ classes, where $\textbf{o}_t = (o_{t1}, o_{t2},...,o_{tC})$. The predicted probability of the $i^{th}$ class is then obtained as:
\begin{align}
\label{score_compute}
{\hat y}_{ti} & = { {e^{o_{ti}}} \over { \sum_{j=1}^C e^{o_{tj}} }}, i = 1,...,C
\end{align}
During training, considering that the hidden state of each time step on the top AGC-LSTM contains a short-term dynamics, we supervise our model with the following loss:
\begin{small}
\begin{align}
\label{loss}
\displaystyle \mathcal{L} & = - \sum_{t=1}^{T_3} \sum_{i=1}^{C} y_i log {\hat y}_{ti}^g - \sum_{t=1}^{T_3} \sum_{i=1}^{C} y_i log {\hat y}_{ti}^l \\
\displaystyle & + \lambda \sum_{j=1}^{3} \sum_{n=1}^{N} {\left( 1 - {{\sum_{t=1}^{T_j} \alpha_{tnj}} \over {T_j}} \right)}^{2} + \beta \sum_{j=1}^{3} {1 \over {T_j}} \sum_{t=1}^{T_j} {\left( \sum_{n=1}^{N} \alpha_{tnj} \right)}^{2} \nonumber
\end{align}
\end{small}
where $\emph{\textbf{y}} = \left( y_1,...,y_C \right)$ is the groundtruth label. $T_j$ denotes the number of time steps on the $j^{th}$ AGC-LSTM layer. The third term aims to pay equal attention to different joints. The last term limits the number of nodes of interest. $\lambda$ and $\beta$ are weight decay coefficients. Note that only the sum of the probabilities ${\hat{\emph{\textbf{y}}}}_{T_3}^g$ and ${\hat{\emph{\textbf{y}}}}_{T_3}^l$ at the last time step is used to predict the class of the human action.
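For clarity, the loss of Eqn.~\eqref{loss} can be sketched in Python as follows;
the array shapes are our own assumptions.
\begin{verbatim}
# Sketch of the training loss in Eqn. (9).
# y: (C,) one-hot label; y_hat_g, y_hat_l: (T3, C) per-step probabilities;
# alphas: list of 3 arrays, alphas[j] of shape (T_j, N); lam, beta follow
# the paper's lambda and beta.
import numpy as np

def agc_lstm_loss(y, y_hat_g, y_hat_l, alphas, lam=0.01, beta=0.001):
    eps = 1e-8
    ce = -np.sum(y * np.log(y_hat_g + eps)) - np.sum(y * np.log(y_hat_l + eps))
    # third term: encourage equal attention to all joints over time
    reg_eq = sum(np.sum((1.0 - a.mean(axis=0)) ** 2) for a in alphas)
    # fourth term: limit the total amount of attention per time step
    reg_num = sum(np.mean(a.sum(axis=1) ** 2) for a in alphas)
    return ce + lam * reg_eq + beta * reg_num
\end{verbatim}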
Although the joint-based AGC-LSTM network has achieved the state-of-the-art results, we also explore the performance of the proposed model on the part level. According to the human physical structure, the body can be divided into several parts. Similar to the joint-based AGC-LSTM network, we first capture part features with a linear layer and a shared LSTM layer. Then the part features, as node representations, are fed into three AGC-LSTM layers to model spatial-temporal characteristics. The results illustrate that our model can also achieve superior performance on the part level. Furthermore, the hybrid model (shown in Fig.\ref{two-stream}) based on joints and parts can lead to further performance improvement.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\linewidth, height=0.4\linewidth]{./two-stream.png}
\caption{ Illustration of the hybrid model based on joints and parts.}
\label{two-stream}
\end{figure}
\section{Experiments}
\subsection{Datasets}
\textbf{NTU RGB+D dataset} \cite{Shahroudy2016NTU}.
This dataset contains 60 different human action classes that are divided into three major groups: daily actions, mutual actions, and health-related actions. There are 56,880 action samples in total which are performed by 40 distinct subjects. Each action sample contains RGB video, depth map sequence, 3D skeleton data, and infrared video captured by three Microsoft Kinect v2 cameras concurrently. The 3D skeleton data that we focus on consists of 3D positions of 25 body joints per frame. There are two evaluation protocols for this dataset: Cross-Subject (CS) and Cross-View (CV) \cite{Shahroudy2016NTU}. Under the Cross-Subject protocol, actions performed by 20 subjects constitute the training set and the rest of actions performed by the other 20 subjects are used for testing. For Cross-View evaluation, samples captured by the first two cameras are used for training and the rest are for testing.
\textbf{Northwestern-UCLA dataset} \cite{Jiang2014Cross}.
This dataset contains 1494 video clips covering 10 categories. It is captured by three Kinect cameras simultaneously from a variety of viewpoints. Each action sample contains RGBD and human skeleton data performed by 10 different subjects. The evaluation protocol is the same as in \cite{Jiang2014Cross}. Samples from the first two cameras constitute the training set and samples from the other camera constitute the testing dataset.
\subsection{Implementation Details}
In our experiments, we sample a fixed length $T$ from each skeleton sequence as the input. We set the length $T = 100$ and 50 for NTU dataset and Northwestern-UCLA dataset, respectively. In the proposed AGC-LSTM, the neighbor set of each node contains only nodes directly connected with itself, so $D = 1$. In order to compare fairly with ST-GCN \cite{Yan2018Spatial}, the graph labeling function in AGC-LSTM will partition the neighbor set into $K = 3$ subsets according to \cite{Yan2018Spatial}: the root node itself, centripetal group, and centrifugal group. The channels of three AGC-LSTM layers are set to 512. During training, we use the Adam optimizer \cite{kingma2015adam} to optimize the network. Dropout with a probability of 0.5 is adopted to avoid over-fitting on these two datasets. We set $\lambda$ and $\beta$ to 0.01 and 0.001, respectively. The initial learning rate is set to 0.0005 and reduced by multiplying it by 0.1 every 20 epochs. The batch sizes for the NTU dataset and Northwestern-UCLA dataset are 64 and 30, respectively.
\subsection{Results and Comparisons}
In this section, we compare our proposed attention enhanced graph convolutional LSTM network (AGC-LSTM) with several state-of-the-art methods on the used two datasets.
\subsubsection{NTU RGB+D Dataset}
\begin{table}[b]
\begin{center}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
Methods&Year&CV&CS\\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
HBRNN-L \cite{Du2015Hierarchical} &2015 &64.0&59.1\\
Part-aware LSTM \cite{Shahroudy2016NTU} &2016 &70.3&62.9\\
Trust Gate ST-LSTM \cite{Liu2016Spatio-temporal} &2016 &77.7&69.2\\
Two-stream RNN \cite{Wang2017Modeling} &2017 &79.5&71.3\\
STA-LSTM \cite{Song2017Attention} &2017 &81.2&73.4\\
Ensemble TS-LSTM \cite{Inwoong2017Ensemble} &2017 &81.3&74.6\\
Visualization CNN \cite{liu2017enhanced} &2017 &82.6&76.0\\
VA-LSTM \cite{Zhang2017View} &2017 &87.6&79.4\\
ST-GCN \cite{Yan2018Spatial} &2018 &88.3&81.5\\
SR-TSL \cite{Chenyang2018Skeleton} &2018 &92.4&84.8\\
HCN \cite{Chao2018Co-occurrence} &2018 &91.1&86.5\\
PB-GCN \cite{Kalpit2018Part} &2018 &93.2&87.5\\
\hline
AGC-LSTM (Joint) &- &93.5&87.5\\
AGC-LSTM (Part) &- &93.8&87.5\\
AGC-LSTM (Joint\&Part) &- &\textbf{95.0}&\textbf{89.2}\\
\hline
\end{tabular}
\end{center}
\caption{Comparison with the state-of-the-art methods on the NTU RGB+D dataset for Cross-View (CV) and Cross-Subject (CS) evaluation in accuracy.}
\label{ntu_result}
\end{table}
From Table \ref{ntu_result}, we can see that our proposed method achieves the best performance of 95.0\% and 89.2\% under the two protocols on the NTU dataset. To demonstrate the effectiveness of our method, we choose the following related methods to compare and analyze the results:
\textbf{\emph{AGC-LSTM vs HCN}}.\hspace{1mm}
HCN \cite{Chao2018Co-occurrence} employs a CNN model for learning global co-occurrences from skeleton data. It treats each joint of a skeleton as a channel, then uses the convolution layer to learn the global co-occurrence features from all joints. We can see that our method significantly outperforms HCN \cite{Chao2018Co-occurrence} by about 3.9\% and 2.7\% for cross-view evaluation and cross-subject evaluation, respectively.
\textbf{\emph{AGC-LSTM vs GCN models}}.\hspace{1mm}
In order to compare fairly with \cite{Yan2018Spatial}, we use the same GCN operator in the proposed AGC-LSTM layer as in ST-GCN.
On the joint-level evaluation, AGC-LSTM achieves 93.5\% and 87.5\%, outperforming ST-GCN by 5.2\% and 6.0\%, respectively. Moreover, our model outperforms PB-GCN \cite{Kalpit2018Part} by 1.8\% and 1.7\% on the two evaluations. These comparison results show that AGC-LSTM is more effective for skeleton-based action recognition than ST-GCN.
\textbf{\emph{Co-occurrence relationship between spatial and temporal domains}}.\hspace{1mm}
Although Si \etal~\cite{Chenyang2018Skeleton} propose a spatial reasoning and temporal stack learning network with a graph neural network (GNN) and LSTM, they ignore the co-occurrence relationship between the spatial and temporal domains. Due to its ability to explore this co-occurrence relationship, our AGC-LSTM outperforms \cite{Chenyang2018Skeleton} by 2.6\% and 4.4\%.
\textbf{\emph{The performances on joint level and part level}}.\hspace{1mm}
Recent methods can be grouped into two categories: joint-based \cite{Yan2018Spatial, Zhang2017View, Inwoong2017Ensemble, Wang2017Modeling, Chao2018Co-occurrence} and part-based methods \cite{Chenyang2018Skeleton, Wang2017Modeling, Du2015Hierarchical}. Our method achieves the state-of-the-art results on both the joint level and the part level, which illustrates the good generalization of our model to joint-level and part-level inputs.
\begin{table}[b]
\begin{center}
\begin{tabular}{lcc}
\hline\noalign{\smallskip}
Methods&Year&Accuracy (\%)\\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
Lie group \cite{Raviteja2014Human} &2014 &74.2\\
Actionlet ensemble \cite{Jiang2014Learning} &2014 &76.0\\
HBRNN-L \cite{Du2015Hierarchical} &2015 &78.5\\
Visualization CNN \cite{liu2017enhanced} &2017 &86.1\\
Ensemble TS-LSTM \cite{Inwoong2017Ensemble} &2017 &89.2\\
\hline
AGC-LSTM (Joint) &- &92.2\\
AGC-LSTM (Part) &- &90.1\\
AGC-LSTM (Joint\&Part) &- &\textbf{93.3}\\
\hline
\end{tabular}
\end{center}
\caption{Comparison with the state-of-the-art methods on the Northwestern-UCLA dataset in accuracy.}
\label{ucla_result}
\end{table}
\subsubsection{Northwestern-UCLA Dataset}
As shown in Table \ref{ucla_result}, the proposed AGC-LSTM again achieves the best accuracy of 93.3\% on the Northwestern-UCLA dataset. The previous state-of-the-art model \cite{Inwoong2017Ensemble} employs multiple Temporal Sliding LSTM (TS-LSTM) to extract short-term, medium-term and long-term temporal dynamics respectively, which has similar functionality to our temporal hierarchical architecture. However, our model outperforms TS-LSTM \cite{Inwoong2017Ensemble} by 4.1\%. Compared with the CNN-based method \cite{liu2017enhanced}, our method also obtains much better performance.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|l|cc}
\hline\noalign{\smallskip}
\multicolumn{2}{c|}{Methods}&CV&CS\\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
\multirow{5}{*}{Joint} &LSTM &89.4&80.3\\
~ &GC-LSTM &92.4&85.6\\
~ &LSTM+TH &90.4&81.4\\
~ &GC-LSTM+TH &92.9&86.3\\
~ &AGC-LSTM+TH (AGC-LSTM) &93.5&87.5\\
\hline
Part &AGC-LSTM+TH (AGC-LSTM) &93.8&87.5\\
\hline
\multicolumn{2}{c|}{AGC-LSTM (Joint\&Part)} &\textbf{95.0}&\textbf{89.2}\\
\hline
\end{tabular}
\end{center}
\caption{The comparison results between several baselines and our AGC-LSTM on the NTU RGB+D dataset.}
\label{ablation_ntu}
\end{table}
\subsection{Model Analysis}
\subsubsection{Architecture Analysis}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|l|c}
\hline\noalign{\smallskip}
\multicolumn{2}{c|}{Methods}&Accuracy (\%)\\
\noalign{\smallskip}
\hline
\hline
\noalign{\smallskip}
\multirow{5}{*}{Joint} &LSTM &70.0\\
~ &GC-LSTM &87.5\\
~ &LSTM+TH &78.5\\
~ &GC-LSTM+TH &89.4\\
~ &AGC-LSTM+TH (AGC-LSTM) &92.2\\
\hline
Part &AGC-LSTM+TH (AGC-LSTM) &90.1\\
\hline
\multicolumn{2}{c|}{AGC-LSTM (Joint\&Part)} &\textbf{93.3}\\
\hline
\end{tabular}
\end{center}
\caption{The comparison results between several baselines and our AGC-LSTM on the Northwestern-UCLA dataset.}
\label{ablation_ucla}
\end{table}
Tables \ref{ablation_ntu} and \ref{ablation_ucla} show experimental results of several baselines on the NTU RGB+D dataset and the Northwestern-UCLA dataset, respectively. TH denotes the temporal hierarchical architecture. Compared with LSTM and GC-LSTM, LSTM+TH and GC-LSTM+TH increase the temporal receptive fields of each time step on the top layer. The improved performances prove that the temporal hierarchical architecture can boost the ability to represent temporal dynamics. Replacing LSTM with GC-LSTM, GC-LSTM+TH increases the accuracies over LSTM+TH by 2.5\% and 4.9\% on the NTU dataset and by 10.9\% on the Northwestern-UCLA dataset. These substantial performance improvements verify the effectiveness of GC-LSTM, which can capture more discriminative spatial-temporal features from skeleton data. Compared with GC-LSTM, AGC-LSTM employs the spatial attention mechanism to select the spatial information of key joints, which improves the ability of feature representation. In addition, the fusion of the part-based and joint-based AGC-LSTM can further improve the performance.
\begin{figure}[t]
\centering
\subfigure[\fontsize{7}{7}\selectfont ]{
\label{attention-results:a}
\includegraphics[width=1.12in]{./attention1.png}}
\subfigure[\fontsize{7}{7}\selectfont ]{
\label{attention-results:b}
\includegraphics[width=0.90in]{./attention2.png}}
\subfigure[\fontsize{7}{7}\selectfont ]{
\label{attention-results:c}
\includegraphics[width=1.07in]{./attention3.png}}
\caption{Visualizations of the attention weights of three AGC-LSTM layers on one actor of the action ``handshaking''. Vertical axis denotes the joints. Horizontal axis denotes the frames. (a), (b), (c) are the attention results of the first, second and third AGC-LSTM layer, respectively. }
\label{attention-results}
\end{figure}
We also visualize the attention weights of three AGC-LSTM layers in Fig.\ref{attention-results}. For the ``handshaking'' action, the results show our method can gradually enhance the attention of ``right elbow'', ``right wrist'', and ``right hand''. Meanwhile, ``tip of the right hand'' and ``right thumb'' have some degree of attention.
Furthermore, we analyze the experimental results with a confusion matrix on the Northwestern-UCLA dataset. As shown in Fig.\ref{NUCLA-acc-results:a}, it is very confusing for LSTM to recognize similar actions. For example, the actions ``pick up with one hand'' and ``pick up with two hands'' have very similar skeleton sequences. Nevertheless, we can see that the proposed AGC-LSTM can significantly improve the ability to classify these similar actions (shown in Fig.\ref{NUCLA-acc-results:b}). The above results illustrate that the proposed AGC-LSTM is an effective method for skeleton-based action recognition.
\begin{figure}[b]
\centering
\subfigure[\fontsize{7}{7}\selectfont LSTM]{
\label{NUCLA-acc-results:a}
\includegraphics[width=1.45in]{./NUCLA-acc-lstm.png}}
\subfigure[\fontsize{7}{7}\selectfont AGC-LSTM]{
\label{NUCLA-acc-results:b}
\includegraphics[width=1.45in]{./NUCLA-acc.png}}
\caption{Confusion matrix comparison on the Northwestern-UCLA dataset. (a) LSTM. (b) AGC-LSTM.}
\label{NUCLA-acc-results}
\end{figure}
\subsubsection{Failure Case}
Finally, we analyze the misclassification results with a confusion matrix on the NTU dataset. Fig.\ref{NTU-acc} shows part of the confusion matrix for the actions (``eat meal/snack'', ``reading'', ``writing'', ``playing with phone/tablet'', ``typing on a keyboard'', ``pointing to something with finger'', ``sneeze/cough'', ``pat on back of other person'') with accuracies less than 80\% for the cross-subject setting on the NTU dataset. We can see that the misclassified actions are mainly very similar movements. For example, 20\% of the samples of ``reading'' are misclassified as ``writing'', and 19\% of the sequences of ``writing'' are misclassified as ``typing on a keyboard''. For the NTU dataset, only two joints are marked on the fingers (``tip of the hand'' and ``thumb''), so it is very challenging to capture such subtle movements of the hands.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{./NTU-acc.png}
\caption{ Confusion matrix comparison on the NTU dataset. It shows part of the confusion matrix for the actions (``eat meal/snack'', ``reading'', ``writing'', ``playing with phone/tablet'', ``typing on a keyboard'', ``pointing to something with finger'', ``sneeze/cough'', ``pat on back of other person'') with accuracies less than 80\% on the NTU dataset.}
\label{NTU-acc}
\end{figure}
\section{Conclusion and Future Work}
In this paper, we propose an attention enhanced graph convolutional LSTM network (AGC-LSTM) for skeleton-based action recognition, which is the first attempt to apply graph convolutional LSTM to this task. The proposed AGC-LSTM can not only capture discriminative features in spatial configuration and temporal dynamics, but also explore the co-occurrence relationship between the spatial and temporal domains. Furthermore, the attention network is employed to enhance the information of key joints in each AGC-LSTM layer. In addition, we also propose a temporal hierarchical architecture to capture high-level spatiotemporal semantic features. On two challenging benchmarks, the proposed AGC-LSTM achieves the state-of-the-art results. Learning the pose-object relation would be helpful to overcome the limitations mentioned in the failure case. In the future, we will try the combination of skeleton sequences and object appearance to promote the performance of human action recognition.
\section{Acknowledgements}\label{sect:acknowledgements}
This work is jointly supported by National Key Research and Development Program of China (2016YFB1001000), National Natural Science Foundation of China (61525306, 61633021, 61721004, 61420106015, 61572504), Capital Science and Technology Leading Talent Training Project (Z181100006318030), and Beijing Science and Technology Project (Z181100008918010).
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Recent advancements in Artificial Intelligence and Deep Learning \cite{lecun2015deep} have generated fast-growing interest in applying advanced machine learning/deep learning techniques to education. In particular, there has been substantial interest in multimodal teaching and learning analytics (MUTLA), a research methodology that aims to bring together Educational Data Mining and Learning Analytics in multimodal learning environments by directly working on data from multiple modalities \cite{worsley2018multimodal,worsley2016situating}. Over the last ten years, researchers have explored applying machine learning/data mining techniques to multimodal data for various tasks, including communicative interaction \cite{macwhinney2004talkbank}, online education \cite{thomas2018multimodal}, student uncertainty modeling \cite{jraidi2013student}, and emotional response recognition in children \cite{nojavanasghari2016emoreact}.
Despite some recent progress in collecting multimodal data and utilizing it in learning science \cite{macwhinney2004talkbank,antoniadou2017collecting,oviatt2013multimodal,nojavanasghari2016emoreact}, teaching and learning analytics is largely limited by the quality and quantity of multimodal data that is publicly accessible. In this paper, we bridge this gap between enthusiastic AI researchers and challenging teaching and learning analytics problems by presenting a large-scale, real-world MUTLA dataset that is publicly accessible.
Our dataset includes time-synchronized multimodal data recordings (learning logs, videos, EEG brainwaves) of students as they work on various Squirrel AI Learning products to solve problems that vary in subject and difficulty level. The dataset resources include user records from the learner record store of the Squirrel AI Learning system, brainwave data collected by an EEG headset device, and video data captured by a web camera while students work on the learning system. The primary aim is to analyze student learning activities, facial expressions, body movements, and brainwave patterns to predict student engagement. This can then be used to improve adaptive learning selection and student learning outcomes. An additional goal is to provide a dataset gathered from truly real-world educational activities versus those from a controlled lab environment. Our dataset can be publicly accessed via the following link. \footnote{\url{https://tinyurl.com/SAILdata}}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[scale=0.55]{2.png}
\caption{Integration of multimodal data capture using camera and EEG headband. During the question answering process, we collect the student's learning record, EEG brainwave signals, and a set of video data.}
\end{center}
\end{figure*}
\section{Method}
\subsection{Participants}
For this dataset, the participants were students from 2 Squirrel AI Learning (SAIL) after-school learning centers in China, with 33 students coming from school G and 123 students coming from school N. Students worked on 6 subjects in total. For Chinese and English (Grammar), students came from primary school and middle school. For Math, Physics, Chemistry and English reading, students came from middle school only.
\subsection{Tasks and data collection procedure}
Note that the students in the study go to the SAIL schools after their regular school day. In school G, after-school tutoring sessions were approximately 1.5 hours long, and in school N they were approximately 2 hours long. In each session, students focus on a particular set of knowledge points, which can be small units of facts, concepts, or skills (called tag\_code in the dataset) for a specific subject. They typically start with a pretest or review, then view instructional videos and work on learning and practice questions until the knowledge points have been “mastered”. Pretest questions do not have immediate corrective feedback, but students can view the answer and the explanations of the answer after they complete the pretest. For learning and practice, students can activate varying levels of hints and corrective feedback before submitting their response, and they can view the answer explanation for each question after submitting their response.
After each learning and practice question, SAIL computes and updates each student’s proficiency level on each knowledge point to determine the next question to present. When the proficiency estimates reach a certain threshold, that knowledge point is considered “learned” and will be queued for review at a later time.
For our study, students wore “brainwave” headsets (manufactured by BrainCo \footnote{\url{ https://www.brainco.tech/}}) while they were studying in the after-school learning session, and they were also asked to have their webcam cameras turned on. Thus, there are three main data sources collected for each student: (i) user records from the SAIL learning record store; (ii) students' brainwave data while learning, captured by the BrainCo headband and stored by the FocusEDU platform; (iii) video data captured by the webcam installed on each computer and stored by the Debut video capture software\footnote{\url{ https://www.nchsoftware.com/capture/index.html}}. The whole procedure and data source flows are shown in Figure 1.
\subsubsection{User record}
The user records are collected from Squirrel AI Learning's learner record store. They contain all the question-level logs of student responses while students are working in the SAIL tutoring sessions for each subject. Each item is a question. Note that students were generally focused on answering questions throughout the session except when they were watching videos to help them learn a particular knowledge component. Students' interactions with the instructional videos were also captured. These include actions such as video play, fast forward, play back, etc.
\subsubsection{Brainwaves}
When students work in each learning session, the headsets they are wearing generate three data files: attention, EEG and events.
The attention data contains a Unix timestamp followed by its attention value. This attention value ranges from 0 to 100 and was generated by the BrainCo devices. Since the data collection occurred in China, this timestamp was then converted to Beijing time so that it can be synchronized with the other data sources.
The EEG file contains the raw EEG data. Each row has the timestamp, sequence number, battery level, logging label and EEG array. Each point represents the difference in potential between the EEG reference point and the acquisition point. There are 160 such points in a minute, in $\mu A$.
The vectors in square brackets [ ] are the electrical signal output values of the sensors. These electrical signals can be transformed into frequency domain signals or wave forms (alpha, beta, gamma, etc.) by Fourier transform, and the average energy of each wave band can be calculated.
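As an illustration of this step, the following Python sketch computes average
band energies from a raw EEG segment via the fast Fourier transform; the sampling
rate and band edges are assumed placeholders rather than values taken from the
device documentation.
\begin{verbatim}
# Sketch: raw EEG segment -> average energy per frequency band via FFT.
# fs and the band edges are assumed placeholders.
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_energies(eeg, fs=160.0):
    eeg = np.asarray(eeg, dtype=float)
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}
\end{verbatim}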
The event file contains the raw events data. Each data point has a Unix timestamp followed by the device stage. It indicates whether the device is connected or not in the corresponding time.
\subsubsection{Webcam videos}
Each student’s entire session was also recorded by the webcam on the computer, focusing on the student’s upper body, including the face. These video sessions are time-stamped so they can be properly synchronized.
\subsection{Multimodal data synchronization and processing}
\subsubsection{User records}
Since we recorded students' facial movements with the camera and captured brainwaves with the headset while they were working in the after-school session using the SAIL system, we synchronize these three data sources by time.
\subsubsection{Brainwave data and synchronization}
There are two steps in brainwave data synchronization. The first step is to match each student with his or her brainwave data, using the student user id, session time, and headset number. The second step is to synchronize each student's brainwave data with that student's user record. The algorithm for syncing these two data sources is shown in Algorithm 1. In this algorithm, ctime represents the answer submission time of the corresponding question. The main task is to find the brainwave triplets recorded while the student was working on each question.
\begin{algorithm}[tb]
\caption{Syncing Brainwave with User Record}
\label{alg:algorithm}
\textbf{Input}: Table of all User record, Table of brainwave data\\
\textbf{Parameter}: user\_id, subject, end\_time\_with\_date, ctime, class\_start\_time, class\_end\_time, brainwave\_file\_path\\
\textbf{Output}: brainwave triplet file paths
\begin{algorithmic}[1]
\WHILE{there is an unprocessed (user\_id, subject) session in table Brainwave}
\STATE find the end\_time\_with\_date and class\_start\_time
\WHILE{there is an unprocessed item for this user\_id in the subject's User record}
\STATE find ctime
\IF {ctime is between class\_start\_time and end\_time\_with\_date}
\STATE assign the brainwave\_file\_path to this item in User record
\ENDIF
\ENDWHILE
\ENDWHILE
\STATE \textbf{return} User record with brainwave file path appended
\end{algorithmic}
\end{algorithm}
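The same matching can be phrased as a table join. The pandas sketch below mirrors Algorithm 1; the column names follow the parameters listed there, while the merge formulation is ours:
\begin{verbatim}
import pandas as pd

def attach_brainwave_paths(user_records, brainwave):
    # Join on student and subject, then keep a brainwave file path
    # only when the answer time ctime falls inside the session window.
    merged = user_records.merge(
        brainwave[["user_id", "subject", "class_start_time",
                   "end_time_with_date", "brainwave_file_path"]],
        on=["user_id", "subject"], how="left")
    in_window = merged["ctime"].between(
        merged["class_start_time"], merged["end_time_with_date"])
    merged.loc[~in_window, "brainwave_file_path"] = None
    return merged
\end{verbatim}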
\subsubsection{Video processing, synchronization, and segmentation}
The webcam video data has the same fields as the brainwave data. We used the same algorithm to sync each student's webcam video data with that student's user record. After the synchronization, each question item in the user record is matched with the original video file path.
After we synchronized the video with the user record, the data were segmented from each session into time phases representing the start and end of each question the student worked on. Each question in the user record has a stime, the time the student was given the question, and a ctime, the time the answer was submitted. With this encoding, the total time the student spent on each question is simply ctime - stime, and the video piece starting at stime and ending at ctime gives the video data for when the student was working on that specific question.
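A sketch of this cutting step, assuming stime and ctime are datetime values, the start time of the session recording is known, and ffmpeg is available (the function and variable names are ours):
\begin{verbatim}
import subprocess

def cut_question_clip(video_path, session_start, stime, ctime, out_path):
    offset = (stime - session_start).total_seconds()
    duration = (ctime - stime).total_seconds()  # time on the question
    # Copy the [stime, ctime] window of the session video, no re-encoding.
    subprocess.run(
        ["ffmpeg", "-ss", str(offset), "-i", video_path,
         "-t", str(duration), "-c", "copy", out_path],
        check=True)
\end{verbatim}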
The extracted question segments have two files associated with each segment: a json file with the metadata of the question and an npy (Python NumPy) file containing the tracking metadata; both are easy to load in Python.
1. Question segment filenames are of the form ``school name\_video id\_segment number''.
2. The json file holds the metadata of the question, retrieved from the user records.
3. A file named ``npy\_key.md'' describes the meaning of the various data points included in the numpy file.
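Loading one segment is then straightforward; the stem below is a made-up example following the naming convention:
\begin{verbatim}
import json
import numpy as np

stem = "N_07_3"   # "school name_video id_segment number" (example)
with open(stem + ".json") as f:
    meta = json.load(f)          # question metadata from user records
tracking = np.load(stem + ".npy", allow_pickle=True)  # tracking data
\end{verbatim}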
After the webcam videos were matched with user records, the total video length for each subject was computed; the results are shown in Table 1.
After filtering, the dataset contains 2170 segments. Table 2 shows the percentage of segments with valid tracking at different thresholds, where valid tracking is defined as a fully visible student face. ``At least 50\%'' means that at least half of the frames in a segment have valid tracking information.
\begin{table}
\centering
\begin{tabular}{lr}
\hline
Subject & Total time (seconds)\\
\hline
Chemistry & 3809 \\
Chinese & 3835 \\
English & 25991 \\
Math & 17012 \\
Physics & 50790\\
\hline
\end{tabular}
\caption{Total video time for each subject}
\label{tab:subjects}
\end{table}
\begin{table}
\centering
\begin{tabular}{lr}
\hline
Valid tracking threshold & Percentage of segments (of 2170)\\
\hline
At least 50\% & 61.54\% \\
At least 70\% & 49.88\% \\
100\% & 18.30\% \\
\hline
\end{tabular}
\caption{Percentage of segments with valid tracking at each threshold}
\label{tab:tracking}
\end{table}
\section{Conclusion and Future Work}
In this paper, we have presented MUTLA, a large-scale multimodal dataset including learning logs, videos, and EEG brainwaves. Because this dataset is collected in a real complex learning environment, we hope that it will greatly facilitate research on measuring and improving student engagement, which can then be used to improve the effectiveness of adaptive learning systems.
Going forward, we will try to apply machine learning and deep learning techniques ourselves to this multimodal data in the hope of better understanding how these complex data sources are related.
\section{Acknowledgement}
Special thanks to the staff and students of the two after-school learning centers who participated in and supported this data collection, and to the Squirrel AI Learning staff and interns who aided in the data collection, cleaning, and aggregation. The data described and shared here were collected by Squirrel AI Learning, which also funded this research. SRI International also aided in the preparation of this dataset.
\bibliographystyle{named}
This paper is concerned with two different ways of transferring Riesz projection to the infinite-dimensional setting of Dirichlet series: first, by lifting it in a multiplicative way to the infinite-dimensional torus $\mathbb{T}^{\infty}$ and second, by using one-dimensional Riesz projection to study the partial sum operator acting on Dirichlet series. In either case, we will be interested in studying the action of the operator in question on functions in $L^p$ or $H^p$ spaces.
By Fefferman's duality theorem \cite{F}, Riesz projection $P_{1}^{+}$ on the unit circle $\mathbb{T}$, formally defined as
\[
P_{1}^{+}\big(\sum_{k\in \mathbb{Z}}c_k z^k\big):=\sum_{k\geq 0}c_k z^k ,
\]
maps $L^{\infty}(\mathbb{T})$ into and onto $\operatorname{BMOA}(\mathbb{T})$, i.e., the space of analytic functions of bounded mean oscillation. We may thus think of the image of $L^{\infty}(\mathbb{T}^{\infty})$ under Riesz projection on $\mathbb{T}^{\infty}$
(or equivalently, in view of the Hahn--Banach theorem, the dual space $H^1(\mathbb{T}^\infty)^*$) as a possible infinite-dimensional counterpart to $\operatorname{BMOA}(\mathbb{T})$.
This brings us to the second main topic of this paper which is to describe some of the main properties of this space.
Our main result, given in Section~\ref{sec:Riesz}, verifies that Riesz projection does not map
$L^{\infty}(\mathbb{T}^{\infty})$ into $H^p(\mathbb{T}^{\infty})$ for any $p>2$, whence $H^1(\mathbb{T}^\infty)^*$ is not embedded in $H^p(\mathbb{T}^{\infty})$ for any $p>2$. This result solves a problem posed in \cite{MS} and contrasts with the familiar inclusion of $\operatorname{BMOA}(\mathbb{T})$ in $H^p(\mathbb{T})$ for every $p<\infty$.
The key idea of the proof is to first show that the norm of a Fourier multiplier
$M_{\chi_A}:L^p(\mathbb{T}^n)\to L^q(\mathbb{T}^n)$ corresponding to a bounded convex domain $A$
in $\mathbb{R}^n$ is dominated by the norm of the Riesz projection on $\mathbb{T}^{n+m}$ for $m$ sufficiently large, depending on $A$. Another crucial ingredient is Babenko's well-known lower estimate for spherical Lebesgue constants.
We then proceed to view $H^1(\mathbb{T}^\infty)^*$ as a space of Dirichlet series, employing as usual the Bohr lift. This leads us in Section \ref{sec:BMOA} to a distinguished subspace of $H^1(\mathbb{T}^\infty)^*$ which is indeed a ``true'' $\operatorname{BMO}$ space, namely the family of Dirichlet series that belong to $\operatorname{BMOA}$ of the right half-plane. By analogy with classical results on $\mathbb{T}$, we give several conditions for membership in this space, also for randomized Dirichlet series, and we describe how this $\operatorname{BMOA}$ space relates to some other function spaces of Dirichlet series.
In Section \ref{sec:compare}, we study Dirichlet polynomials of fixed length $N$ and compare the size of their norms in $H^p$, $\operatorname{BMOA}$, and the Bloch space. One of these results is then applied in the final Section \ref{sec:partial}, where we turn to our second usage of Riesz projection. Here we present an explicit device for expressing the $N$th partial sum of a Dirichlet series in terms of one-dimensional Riesz projection and give some $L^p$ estimates for the associated partial sum operator.
We refer the reader to \cite{HLS} and \cite{MAHE} (see especially \cite[Section 6]{MAHE}) for definitions and basics on Hardy spaces of Dirichlet series and Hardy spaces on $\mathbb{T}^\infty$.
\subsection*{Notation} We will use the notation $f(x)\ll g(x)$ if there is some constant $C>0$ such that $|f(x)|\le C|g(x)|$ for all (appropriate) $x$. If we have both $f(x)\ll g(x)$ and $g(x)\ll f(x)$, then we will write $f(x)\asymp g(x)$. If $\lim_{x\to \infty} f(x)/g(x)=1$, then we write $f(x)\sim g(x)$.
\subsection*{Acknowledgements} We thank Ole Fredrik Brevig for allowing us to include an unpublished argument of his in this paper. We are also grateful to the referees for a number of valuable comments that helped improve the presentation.
\section{The norm of the Riesz projection from $L^\infty(\Bbb{T}^n)$ to $L^p(\Bbb{T}^n)$}\label{sec:Riesz}
The norm $\|f\|_p$ of a function $f$ in $L^p(\mathbb{T}^\infty)$ is computed with respect to
Haar measure $m_\infty$ on $\mathbb{T}^\infty$, which is the countable product of one-dimensional normalized Lebesgue measures on $\mathbb{T}$. We denote by $m_n$ the measure on $\mathbb{T}^n$ that is the $n$-fold product of the normalized one-dimensional measures, and $L^p(\mathbb{T}^n)$ is defined with respect to this measure.
We write the Fourier series of a function $f$ in $L^1(\mathbb{T}^n)$ on the $n$-torus $\mathbb{T}^n$ as
\begin{equation}
\label{def_expan}
f(\zeta) = \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha) \zeta^{\alpha}.
\end{equation}
For a function $f$ in $L^1(\mathbb{T}^\infty)$ the Fourier series takes the form
$f(\zeta) = \sum_{\alpha\in\mathbb{Z}^\infty_{\mathrm{fin}}} \hat f(\alpha) \zeta^{\alpha}$, where
$\mathbb{Z}^\infty_{\mathrm{fin}}$ stands for infinite multi-indices such that all but finitely many
indices are zero. We also set $\mathbb{Z}_+:=\{0,1,\dots\}$ so that $\mathbb{Z}_+^n$ (respectively $\mathbb{Z}_+^\infty$)
is the positive cone in $\mathbb{Z}^n$ (respectively $\mathbb{Z}^\infty$).
The operator
\[ P_{n}^+ f(\zeta) := \sum_{\alpha\in\mathbb{Z}_+^n} \hat f(\alpha) \zeta^{\alpha} \]
is the Riesz projection on $\mathbb{T}^n$, and, as an operator on $L^2(\mathbb{T}^n)$, it has norm $1$.
If we instead view $P_{n}^+$ as an operator on $L^p(\mathbb{T}^n)$ for $1<p<\infty$, then
a theorem of Hollenbeck and Verbitsky \cite{HV} asserts that its norm equals
$(\sin(\pi/p))^{-n}$. In an analogous way we denote by $P^+_\infty$ the Riesz projection on $\mathbb{T}^\infty,$ and obviously $P^+_\infty$ is bounded on $L^p(\mathbb{T}^\infty)$ only for $p=2$, when its norm equals 1.
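For instance, for $n=1$ and $p=4$, the norm of $P_1^+$ on $L^4(\mathbb{T})$ equals $(\sin(\pi/4))^{-1}=\sqrt{2}$.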
Using this normalization, we let $\|P_n^+\|_{q,p}$ denote the norm of
the operator $P_n^+:\,L^q(\mathbb{T}^n)\to L^p(\mathbb{T}^n)$ for $q\ge p$. By H\"older's
inequality, $p\to \|P_n^+\|_{\infty,p}$ is a continuous and nondecreasing function,
and obviously
$\|P^+_n\|_{\infty,p}\le(\sin(\pi/p))^{-n}$. Consider the quantity
\[ p_n := \sup\left\{p\ge2:\,\|P_n^+\|_{\infty,p}\leq1\right\},\] which, following \cite{FIP}, we
call the critical exponent of $P_n^+$.
The critical exponent is well-defined since clearly
$\|P_n^+\|_{\infty,2}=1$. We also set
\[p_\infty := \sup\left\{p\ge2:\,\|P_\infty^+\|_{\infty,p}\leq1\right\}. \] Defining $A_m f(z_1,z_2,\ldots):=f(z_1,\ldots, z_m,0,0,\ldots)$ and using that $\|A_m f\|_p\to \| f\|_p$ as $m\to \infty$ for every $f$ in $L^p(\mathbb{T}^\infty)$ and $1\leq p\leq \infty$, we see that in fact
\[
p_\infty= \lim_{n\to\infty} p_n.
\]
This also follows from the proof of Theorem \ref{n_to_infty} below.
Marzo and Seip \cite{MS} proved that the critical exponent of $P_1^+$ is $4$ and moreover that
\[ 2+2/(2^n-1)\le p_n< 3.67632\] for $n>1$. Recently, Brevig \cite{B} showed that $\lim_{n\to \infty} p_n \le 3.31138$. The following theorem
settles the asymptotic behavior of the critical exponent of $P_n^+$ when $n\to\infty$.
\begin{theorem}\label{n_to_infty} We have $p_\infty=\lim_{n\to\infty} p_n=2$.
\end{theorem}
By considering a product of functions in disjoint variables, we obtain the following immediate consequence concerning
the Riesz projection $P_{\infty}^{+}$ on the infinite-dimensional torus, formally defined as
\[ P_{\infty}^{+}\Big(\sum_{\alpha\in \mathbb{Z}^{(\infty)}}c_\alpha z^\alpha\Big):=\sum_{\alpha\in \mathbb{Z}_+^{(\infty)}}c_\alpha z^\alpha. \]
\begin{corollary}\label{inf_dim} The Riesz projection $P_{\infty}^{+}$ is not bounded from $L^q$ to $L^p$ when
$2<p<q\le \infty$.
\end{corollary}
In turn, since the ``analytic'' dual of $H^1$ obviously equals $P^+_\infty(L^\infty(\mathbb{T}^\infty))$, we obtain a further interesting consequence.
\begin{corollary}\label{cor:dual_non_embedding}
The dual space $H^1(\mathbb{T}^\infty)^*$ is not contained in $H^p(\mathbb{T}^\infty)$ for any $p>2.$
\end{corollary}
The latter result has an immediate translation in terms of Hardy spaces of Dirichlet series, as will be recorded in
Corollary \ref{cor:dual_non_embedding2} below.
The proof of Theorem~\ref{n_to_infty} deals with the (pre)dual operator $P^+_n:L^q(\mathbb{T}^n)\to L^1(\mathbb{T}^n)$, where $q<2$. The idea is to prove first that
for the characteristic function $\chi_A$ of a bounded convex domain $A$ in $\mathbb{R}^n$, the norm of the Fourier multiplier $M_{\chi_{A}}$ on $\mathbb{T}^n$ is actually bounded by that of $P^+_{n+m}$ for large enough $m$, depending on $A$. This key observation will be applied when $A$ is a large ball $B(0,R)$ in $\mathbb{R}^n$, and the desired result is deduced by invoking the following result of Ilyin \cite{I1}.
\begin{theorem}\label{thm:babenko}
The circular Dirichlet kernel
\[
D_{R,n}(\zeta) := \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \zeta^{\alpha}
\]
on $\mathbb{T}^n$ satisfies $\| D_{R,n}\|_{L^1(\mathbb{T}^n)}\geq cR^{(n-1)/2},$ where $c=c(n)>0$ and $\|\cdot\|$ stands for the standard Euclidean norm.
\end{theorem}
Babenko's famous 1971 preprint (see \cite{BA,Li2}) gives another proof. Moreover, it establishes a comparable upper bound, which can also be found in Ilyin and Alimov's paper \cite{I2}. We refer to Liflyand's review \cite{Li1} for further information on the related literature and for a simple proof of Theorem \ref{thm:babenko}.
\begin{proof}[Proof of Theorem \ref{n_to_infty}.]
Fix $n\geq 2$, write $\alpha =(\alpha_1,\dots,\alpha_n)$ for a generic element of $\mathbb{Z}^n$, and fix $\beta^j\in\mathbb{Z}^n$ and $b_j\in\mathbb{Z}$ for $j=1,\dots,m$, where $m\in\mathbb{N}$ is also fixed. We consider
$n+m$ linear functions $\phi_j:\,\mathbb{Z}^n\to\mathbb{Z}$, with $j=1,\dots,n+m$, where
\begin{align*}
\phi_j(\alpha) & := \alpha_j, \quad j=1,\dots,n, \\
\phi_{n+j}(\alpha) &:= (\alpha,\beta^j) + b_j,\quad j=1,\dots,m. \end{align*}
We associate with any trigonometric polynomial $f$ as in \eqref{def_expan} (that is, any $f$ of the form \eqref{def_expan} with finitely many non-zero terms)
the function
\[ g(\eta) := \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha) \prod_{j=1}^{n+m}
\eta_j^{\phi_j(\alpha)},\]
where $\eta = (\eta_1,\dots,\eta_{n+m})\in\mathbb{T}^{n+m}$.
\begin{lemma}\label{equal_norms} We have $\|g\|_p = \|f\|_p$ for $0<p\le \infty$.
\end{lemma}
\begin{proof} Set
\[ \eta' :=(\eta_1,\dots,\eta_n),\quad \eta'' :=(\eta_{n+1},\dots,\eta_{n+m}).\]
We have
\[ g(\eta) = \psi_0(\eta'') \sum_{\alpha\in\mathbb{Z}^n} \hat f(\alpha)\prod_{j=1}^n (\psi_j(\eta'')\eta_j)^{\alpha_j},\]
where
\[
\psi_0(\eta''):=\prod_{k=1}^m\eta_{n+k}^{b_k}\; ,\quad\textrm{and}\quad \psi_j(\eta''):=\prod_{k=1}^m\eta_{n+k}^{\beta^{k}_j}\;\;\textrm{for}\;\; j=1,\ldots, n.
\]
We clearly have $\psi_j(\eta'')\in\mathbb{T}$
for $j=0,\dots,n$. For a fixed $\eta''$ in $\mathbb{T}^m$
consider $g$ as a function of $\eta'$:
\[ g(\eta) =g_{\eta''}(\eta').\]
Set $\tilde\eta'=(\tilde\eta_1,\dots,\tilde\eta_n)$, where
$\tilde\eta_j = \psi_j(\eta'')\eta_j$ for $j=1,\dots,n$. We see that
\[ g_{\eta''}(\eta') = \psi_0(\eta'') f(\tilde\eta').\]
Since $|\psi_0(\eta'')|=1$ and the map $\eta'\mapsto\tilde\eta'$ is a rotation of $\mathbb{T}^n$ preserving $m_n$, this gives $\|g_{\eta''}\|_p = \|f\|_p$ for every fixed $\eta''$. Integrating in $\eta''$ (with the obvious modification for $p=\infty$), we obtain the asserted isometry:
\[ \|g\|_p = \|f\|_p. \]
\end{proof}
By duality, for any positive integer $N$ and $p>2$, we have
$\|P_N^+\|_{\infty,p} = \|P_N^+\|_{p',1}$ where
$p'=p/(p-1)$. Hence, to prove Theorem \ref{n_to_infty},
we have to show that for any $q$ in $(1,2)$ there exist a positive integer $N$
and $g$ in $L^q(\mathbb{T}^N)$ such that
\begin{equation}
\label{refor_Theorem}
\|g\|_q=1,\quad \|P_N^+ g\|_1 >1.
\end{equation}
Indeed, by duality, this will imply the existence of a function $h$ in $L^\infty(\mathbb{T}^N)$
such that
\[ \|h\|_\infty = 1,\quad \|P_N^+(h)\|_{q'} >1,\]
where $q' = q/(q-1)$. Since $q<2$ is arbitrary, Theorem~\ref{n_to_infty} then follows.
For a bounded set $E$ in $\mathbb{R}^n$ and a function $f$ in $L^{1}(\mathbb{T}^n)$, we consider
a partial sum of the Fourier series of $f$:
\[ \big(S_Ef\big)(\zeta) := \sum_{\alpha\in E\cap\mathbb{Z}^n} \hat f(\alpha) \zeta^{\alpha}.\]
Note that as an operator, $S_E$ coincides with the Fourier multiplier $M_{\chi_E}$.
We say that a polytope $E$ in $\mathbb{R}^n$ is non-degenerate if it is not contained in
a hyperplane.
\begin{lemma}\label{reduc_polyt} Let $1<q<2$. Assume that there is a
non-degenerate convex polytope $E$ in $\mathbb{R}^n$ with integral vertices such
that, for some $f$ in $L^q(\mathbb{T}^n)$ with a finite set of non-zero Fourier
coefficients $\hat f(\alpha)$, we have
\[ \|f\|_q=1,\quad \|S_E(f)\|_1 >1.\]
Then there are a positive integer $N\in\mathbb{N}$ and a function
$g$ in $L^q(\mathbb{T}^N)$ satisfying \eqref{refor_Theorem}.
\end{lemma}
\begin{proof} Let $e:=(1,1,\ldots, 1)\in\mathbb{Z}_+^n$. By considering instead $E+Me$ and $(\eta_1\cdots \eta_n)^M f(\eta)$ with large enough $M\in \mathbb{N}$, if necessary, we may assume that $E$ and the Fourier coefficients of $f$ satisfy
\begin{equation}
\label{assum_posit}
E\subset\mathbb{Z}_+^n\quad\textrm{and}\quad \hat f(\alpha)\not=0 \;\;\Rightarrow\;\; \alpha_j\geq 0 \;\,\textrm{for all}\;\;j=1,\ldots , n.
\end{equation}
It is known that $E$ is the intersection of closed half-spaces,
bounded by the hyperplanes containing the faces of $E$ of dimension $n-1$
(see \cite[Ch. 1, Thm. 5.6]{L}). All hyperplanes are determined by
their intersections with the set of the vertices of $E$. Since the
vertices are integral, the half-spaces can be defined by inequalities
\[ (\alpha,\beta^j) + b_j\ge0,\quad j=1,\dots,m,\]
where $\beta^j\in\mathbb{Z}^n, b_j\in\mathbb{Z}$ for $j=1,\dots,m$.
Thus
\[ E=\bigcap_{j=n+1}^{n+m} \{\alpha\in\mathbb{R}^n:\,\phi_{j}(\alpha)\ge0\},\]
where $\phi_{j}(\alpha) = (\alpha,\beta^{j-n}) + b_{j-n},\quad j=n+1,\dots,n+m$.
We set $N:=n+m$ and construct the function $g$ from $f$ as in Lemma~\ref{equal_norms}.
Using that lemma, we get
\[ \|g\|_q = \|f\|_q =1,\quad \|P_N^+ g\|_1 = \|S_E(f)\|_1 >1, \]
and Lemma \ref{reduc_polyt} follows. \end{proof}
To construct an integer $n$, a polytope $E$, and a function $f$
satisfying the assumptions of Lemma \ref{reduc_polyt}, we first take $n$ satisfying
the inequality
\begin{equation}
\label{choice_n}
n>q/(2-q).
\end{equation}
For sufficiently large $R$, let $E$ be the convex hull of the integral
points contained in the Euclidean ball $\{\alpha\in\mathbb{R}^n:\,\|\alpha\|\le R\}$. Hence for any function $f$ in $L^{1}(\mathbb{T}^n)$,
we have
\[ (S_Ef)(\zeta) = \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \hat f(\alpha) \zeta^{\alpha}.\]
Recall the circular Dirichlet kernel from Theorem \ref{thm:babenko}:
\[ D_{R,n}(\zeta) = \sum_{\alpha\in \mathbb{Z}^n:\,\|\alpha\|\le R} \zeta^{\alpha}.\]
Define the function
$\displaystyle \widetilde f (\zeta):=\sum_{|\alpha_1|\le R}\dots \sum_{|\alpha_n|\le R}\zeta^\alpha$
so that
$S_E \widetilde f = D_{R,n}.$
It is easy to see that
\[ \|\widetilde f\|_q = \left\|\sum_{|\alpha_1|\le R} \zeta_1^{\alpha_1}\right\|_q^n
\le C R^{n(1-1/q)},\]
where $C= C(q,n) >0$. In view of \eqref{choice_n}, which amounts to $\frac{n-1}{2}>n(1-\frac{1}{q}),$ and by recalling Theorem \ref {thm:babenko}, we obtain
\[ \|S_E(\widetilde f)\|_1 > \|\widetilde f\|_q \]
for sufficiently large $R$. Taking
$f := \widetilde f\big/\|\widetilde f\|_q,$
we get a function $f$ satisfying the conditions of Lemma \ref{reduc_polyt}, and
this completes the proof of Theorem \ref{n_to_infty}.
\end{proof}
\section{The space of Dirichlet series in $\operatorname{BMOA}$}\label{sec:BMOA}
The result of the preceding section is purely multiplicative in the sense that it only involves analysis on the product space $\mathbb{T}^n$. Function spaces
on $\mathbb{T}^{n}$ or on $\mathbb{T}^{\infty}$ may however, by a device known as the Bohr lift (see below for details), also be viewed as spaces of Dirichlet series. From an abstract point of view (see for example \cite[Ch. 8]{R}), this means that we equip our function spaces with an additive structure that reflects the additive order of the multiplicative group of positive rational numbers $\mathbb{Q}_+$. This results in interesting interaction between function theory in polydiscs and half-planes that sometimes involves nontrivial number theory.
As we will see in the next subsection, this point of view leads us naturally from $H^1(\mathbb{T}^{\infty})^*$ to the space of ordinary Dirichlet series $\sum_{n=1}^{\infty} a_n n^{-s}$ that belong to $\operatorname{BMOA}$, i.e., the space of analytic functions
$f(s)$ in the right half-plane $\operatorname{Re} s> 0$ satisfying
\begin{equation} \label{eq:int1} \sup_{\sigma>0} \int_{-\infty}^{\infty} \frac{|f(\sigma+it)|^2}{1+\sigma^2+t^2} dt < \infty \end{equation} and
\[\|f\|_{\operatorname{BMO}} := \sup_{I \subset \mathbb{R}} \frac{1}{|I|}\int_{I}\left|f(it)-\frac{1}{|I|}\int_I f(i\tau)\,d\tau\right|\,dt < \infty.\]
Here the supremum is taken over all finite intervals $I$; \eqref{eq:int1} means that $g(s):=f(s)/(s+1)$ belongs to the Hardy space $H^{2}(\mathbb{C}_0)$ of the right half-plane $\mathbb{C}_0$, and then
$f(it):=\lim_{\sigma\to 0^+} f(\sigma+it)$ exists for almost all real $t$ by Fatou's theorem applied to $g$. We will use the notation $\operatorname{BMOA} \cap \mathcal{D}$ for this $\operatorname{BMOA}$ space, where $\mathcal{D}$ is the class of functions expressible as a convergent Dirichlet series in some half-plane $\operatorname{Re} s > \sigma_0$.
The space $\operatorname{BMOA} \cap \mathcal{D}$ arose naturally in a recent study of multiplicative Volterra operators \cite{BPS}. We refer to that paper for a complementary discussion of bounded mean oscillation in the context of Dirichlet series. By combining \cite[Cor. 6.4]{BPS} and \cite[Thm. 5.3]{BPS}, we may conclude that $\operatorname{BMOA} \cap \mathcal{D}$ can be viewed, via the Bohr lift, as a subspace of $H^{1}(\mathbb{T}^{\infty})^\ast$. This inclusion may however be proved in a direct way by an argument that we will present in the next subsection.
\subsection{The Bohr lift and the inclusion $\operatorname{BMOA} \cap \mathcal{D}\subset (\mathcal{H}^1)^*$}
We begin by considering an ordinary Dirichlet series of the form
\begin{equation} \label{eq:f} f(s)=\sum_{n=1}^\infty a_n n^{-s}. \end{equation}
By the transformation $z_j=p_j^{-s}$ (here $p_j$ is the $j$th prime number) and the fundamental theorem of arithmetic, we have the Bohr correspondence,
\begin{equation}\label{eq:bohr}
f(s):= \sum_{n=1}^\infty a_{n} n^{-s}\quad\longleftrightarrow\quad \mathcal{B}f(z):=\sum_{n=1}^{\infty} a_n z^{\kappa(n)},
\end{equation}
where $\kappa(n)=(\kappa_1,\ldots,\kappa_j,0,0,\ldots)$ is the multi-index such that $n = p_1^{\kappa_1} \cdots p_j^{\kappa_j}$. The transformation $\mathcal{B}$ is known as the Bohr lift. For $0 < p < \infty$, we define $\mathcal{H}^p$ as the space of Dirichlet series $f$ such that $\mathcal{B}f$ is in $H^p(\mathbb{T}^\infty)$, and we set
\[\|f\|_{\mathcal{H}^p} := \|\mathcal{B}f\|_{H^p(\mathbb{T}^\infty)} = \left(\int_{\mathbb{T}^\infty} |\mathcal{B}f(z)|^p\,dm_\infty(z)\right)^\frac{1}{p}.\]
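For example, since $12=2^2\cdot 3$, we have $\kappa(12)=(2,1,0,0,\ldots)$, so that $\mathcal{B}\big(2^{-s}+3^{-s}+12^{-s}\big)=z_1+z_2+z_1^2z_2$.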
Note that for $p=2$, we have
\[\|f\|_{\mathcal{H}^2} = \left(\sum_{n=1}^\infty |a_n|^2\right)^\frac{1}{2}.\]
In terms of the spaces ${\mathcal{H}^p}$, Corollary \ref{cor:dual_non_embedding} takes the form
\begin{corollary}\label{cor:dual_non_embedding2}
The dual space $(\mathcal{H}^1)^*$ is not contained
in $\mathcal{H}^p$ for any $p>2.$
\end{corollary}
We will now use the notation $\mathbb{C}_{\theta}:=\{s=\sigma+it: \sigma>\theta\}$. The conformally invariant Hardy space $H_{\operatorname{i}}^p(\mathbb{C}_\theta)$ consists of functions $f$ that are analytic on $\mathbb{C}_\theta$ and satisfy
\[\|f\|_{H_{\operatorname{i}}^p(\mathbb{C}_\theta)} := \sup_{\sigma>\theta} \left(\frac{1}{\pi}\int_{\mathbb{R}} |f(\sigma+it)|^p\,\frac{dt}{1+t^2}\right)^\frac{1}{p} <\infty.\]
These spaces show up naturally in our discussion in the following two ways. First, we will repeatedly use that a function $g$ analytic on $\mathbb{C}_0$ is in $\operatorname{BMOA}$ if and only if the measure
\[ d\mu(s):=|g'(\sigma+it)|^2 \sigma d\sigma \frac{ dt}{1+t^2} \]
is a Carleson measure for $H_{\operatorname{i}}^1(\mathbb{C}_0)$, which means that there is a constant $C$ such that
\[ \int_{\mathbb{C}_0} |f(s)| d\mu(s) \le C \| f \|_{H_{\operatorname{i}}^1(\mathbb{C}_0)} \]
for all $f$ in $H_{\operatorname{i}}^1(\mathbb{C}_0)$. The smallest such constant $C$ is called the Carleson norm of the measure.
Second, by Fubini's theorem, we have the following connection between $\mathcal{H}^p$ and $H_{{\operatorname{i}}}^p(\mathbb{C}_0)$:
\begin{equation}
\label{eq:avgrotemb} \big\|f\big\|_{\mathcal{H}^p}^p = \int_{\mathbb{T}^\infty} \|f_\chi\|^p_{H^p_{\operatorname{i}}(\mathbb{C}_0)} \, dm_\infty(\chi),
\end{equation}
where $\chi$ is a character on $\mathbb{Q}^+$, i.e., a completely multiplicative function taking only unimodular values, and
\[ f_{\chi}(s):=\sum_{n=1}^{\infty} \chi(n) a_n n^{-s}. \] Here we recall that an arithmetic function $g:\mathbb{N}\to\mathbb{C}$ is completely multiplicative if it satisfies $g(nm)=g(n)g(m)$ for all integers $m,n\geq 1$. A completely multiplicative function $g$ satisfies $g(1)=1$ unless $g$ vanishes identically, and it is completely determined by its values at the primes.
Note that we identify via the Bohr lift $\alpha\mapsto p^{\alpha}$ the group $\mathbb{Z}^{(\infty)}$ with the group $\mathbb{Q}^+$, and by duality the group $\mathbb{T}^\infty$ with the group of completely multiplicative functions $\chi:\mathbb{N}\to \mathbb{T}$. Accordingly, we identify the Haar measures $dm_{\infty}(z)$ and $dm_{\infty}(\chi)$ of both groups. We also used in \eqref{eq:avgrotemb} the fact that, for every $\sum_{n=1}^\infty a_n n^{-s}$ in $\mathcal{H}^p$ and $m_{\infty}$-almost every character $\chi$, the series $\sum_{n=1}^\infty a_n \chi(n) n^{-s}$ converges in $\mathbb{C}_0$ and defines an element of $H_{{\operatorname{i}}}^p(\mathbb{C}_0)$. For these facts, we refer e.g. to \cite[Section 4.2]{HLS} and \cite[Thm. 5]{Ba}.
From \eqref{eq:avgrotemb} we may deduce Littlewood--Paley type expressions for the norms of $\mathcal{H}^p$. This was first done for $p=2$ in \cite[Prop.~4]{Ba}, and later for $0<p < \infty$ in \cite[Thm.~5.1]{BQS}, where the formula
\begin{equation}
\label{eq:LPp} \|f\|_{\mathcal{H}^p}^p \asymp |f(+\infty)|^p + \frac{4}{\pi}\int_{\mathbb{T}^\infty} \int_{\mathbb{R}}\int_0^\infty |f_\chi(\sigma+it)|^{p-2}|f_\chi'(\sigma+it)|^2 \sigma d\sigma\frac{dt}{1+t^2}dm_\infty(\chi)
\end{equation}
was obtained. When $p=2$, we have equality between the two sides of \eqref{eq:LPp}.
The Littlewood--Paley formula \eqref{eq:LPp} for $p=2$ may be polarized, so that we have
\[
\langle f, g \rangle_{\mathcal{H}^2} = f(+\infty)\overline{g(+\infty)} + \frac{4}{\pi} \int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty f_\chi'(\sigma+it) \overline{g_\chi'(\sigma+it)} \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi). \label{eq:LPpolar}
\]
Hence, by the Cauchy--Schwarz inequality and \eqref{eq:LPp}, we have for $f$ in $\mathcal{H}^1$ and $g$ in $\operatorname{BMOA}\cap \mathcal{D}$,
\begin{align*}
\big|\langle f, g \rangle_{\mathcal{H}^2} - f(+\infty)\overline{g(+\infty)}\big|^2 & \le \frac{4}{\pi}\int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty |f_\chi(\sigma+it)|^{-1} |f_\chi'(\sigma+it)|^2 \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi) \\
& \times
\int_{\mathbb{T}^\infty} \int_\mathbb{R} \int_0^\infty |f_\chi(\sigma+it)| |g_\chi'(\sigma+it)|^2 \sigma\,d\sigma\,\frac{dt}{1+t^2}\,dm_\infty(\chi) \\
& \ll \| f \|_{\mathcal{H}^1} \int_{\mathbb{T}^{\infty}}\|f_\chi\|_{H^1_{\operatorname{i}}(\mathbb{C}_0)} \, dm_\infty(\chi) = \| f \|_{\mathcal{H}^1}^2,
\end{align*}
where we in the second step used the Littlewood--Paley formula for $p=1$ and that
\[ |g'_{\chi}(\sigma+it)|^2 \sigma d\sigma \frac{ dt}{1+t^2} \]
is a Carleson measure for $H_{\operatorname{i}}^1(\mathbb{C}_0)$, with Carleson constant uniformly bounded in $\chi$, as follows from \cite[Lem. 2.1 (ii) and Lem. 2.2]{BPS}.
Hence we conclude that a Dirichlet series $g$ in $\operatorname{BMOA}\cap \mathcal{D}$ belongs to
$(\mathcal{H}^1)^*$.
The ``reverse'' problem\label{brevig} of finding an embedding of $(\mathcal{H}^1)^*$ into a ``natural'' space of functions analytic in $\mathbb{C}_{1/2}$ appears challenging. (This is a reverse question only in a rather loose sense as we are now considering functions defined in $\mathbb{C}_{1/2}$.) It was mentioned in \cite[Quest. 4]{SaSe} that $(\mathcal{H}^1)^*$ is not contained in $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$
for any $q>4$. Since no argument for this assertion was given in \cite{SaSe}, we take this opportunity to offer a proof\footnote{We thank Ole Fredrik Brevig for showing us this argument and allowing us to include it in this paper.}. To begin with, let us consider the interval from $1/2-i$ to $1/2+i$ and let $E$ denote the corresponding local embedding of $\mathcal{H}^2$ into $L^2(-1,1)$, given by $Ef(t) := f(1/2+it)$, so that
\[\|E f\|_{L^2(-1,1)}^2 = \int_{-1}^1 |f(1/2+it)|^2 \,dt.\]
Then the adjoint $E^\ast \colon L^2(-1,1) \to \mathcal{H}^2$ is
\[E^{\ast}g(s) := \sum_{n=1}^\infty \frac{\widehat{g}(\log{n})}{\sqrt{n}} n^{-s},\]
where $\widehat{g}(\xi) = \int_{-1}^1 e^{-i \xi t} g(t)\,dt$.
Fix $0<\beta<1$ and set $g_\beta(t):=|t|^{\beta-1}$. Plainly, $g_\beta$ is in $L^q(-1,1)$ if and only if $\beta>1-1/q$. Moreover, if $\xi\geq \delta>0$, then
$\widehat g_\beta(\xi) \asymp \xi^{-\beta}$,
where the implied constants depend only on $\delta$ and $\beta$. We now invoke Helson's inequality \cite[p. 89]{He2}
\[ \Big\| \sum_{n=1}^{\infty} a_n n^{-s} \Big\|_1 \ge \left(\sum_{n=1}^{\infty} \frac{|a_n|^2}{d(n)} \right)^{1/2}, \]
where $d(n)$ is the divisor function. We then use the classical fact that $\sum_{n\le x} 1/d(n)$ is of size $x (\log x)^{-1/2}$; the precise asymptotics of this summatory function were first computed by Wilson \cite[Formula (3.10)]{W} and may now be obtained as a simple consequence of a general formula of Selberg \cite{Se}. Taking $\beta=1/4$, we may therefore infer by partial summation that $E^\ast$ is unbounded from $L^q(-1,1)$ to $\mathcal{H}^1$ whenever $q < 4/3$.
By duality we conclude that for any $q>4$, there are $\varphi$ in $(\mathcal{H}^1)^\ast$ that are not locally embedded in $L^q(-1,1)$ and hence do not belong to $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$. Note that here $(\mathcal{H}^1)^\ast$ is identified as a subspace of $\mathcal{H}^2$ (with respect to the natural pairings of $L^2(-1,1)$ and $L^2(\mathbb{T}^\infty)$), whence $E^{**}g=Eg$ for $g$ in $(\mathcal{H}^1)^\ast$.
In view of Corollary~\ref{cor:dual_non_embedding}, it is natural to ask if the situation is even worse, namely that $(\mathcal{H}^1)^*$ fails to be contained in $H_{\operatorname{i}}^q(\mathbb{C}_{1/2})$ for any $q>2$.
We conclude from the preceding argument that there is no simple relation between $(\mathcal{H}^1)^*$ and $\operatorname{BMOA}(\mathbb{C}_{1/2})$. We may further illustrate this point by the following example. The Dirichlet series
\[ h(s):=\sum_{n=2}^{\infty} \frac{1}{\log n} n^{-s-1/2} \]
belongs to $\operatorname{BMOA}(\mathbb{C}_{1/2})$ (see \eqref{eq:hilbert} below), but it is unknown whether it is in $(\mathcal{H}^1)^*$. It would be interesting to settle this question about membership in $(\mathcal{H}^1)^*$, as $h$ is both a primitive of $\zeta(s+1/2)-1$ and the analytic symbol of the multiplicative Hilbert matrix \cite{BPSSV}.
\subsection{Fefferman's condition for membership in $\operatorname{BMOA}\cap \mathcal{D}$}
The following theorem gives interesting information about Dirichlet series in $\operatorname{BMOA}$. It is an immediate consequence of existing results, as will be explained in the subsequent discussion.
\begin{theorem} \label{thm:fefferman}
\begin{itemize}
\item[(i)]
Suppose that $a_n\ge 0$ for every $n\ge 1$. Then $f(s):=\sum_{n=1}^{\infty} a_n n^{-s}$ is in $\operatorname{BMOA}$ if and only if
\begin{equation} \label{eq:feff} S^2:=\sup_{x\ge e } \sum_{k=1}^{\infty} \Big(\sum_{x^k\le n <x^{k+1}} a_{n} \Big)^2 < \infty, \end{equation}
and we have $S\asymp \Vert f\Vert_{\operatorname{BMOA}}$.
\item[(ii)]
If \ $\sum_{n=1}^{\infty} |a_n| n^{-s}$ is in $\operatorname{BMOA}$, then $\sum_{n=1}^{\infty} a_n n^{-s}$ is in $\operatorname{BMOA}$.
\end{itemize}
\end{theorem}
It is immediate from (i) that
\begin{equation} \label{eq:hilbert} \sum_{n=2}^{\infty} \frac{1}{\log n} n^{-s-1} \end{equation}
is in $\operatorname{BMOA}$ (see \cite[Thm. 2.5]{BPS}). By Mertens's formula
\begin{equation} \label{eq:mertens} \sum_{p\le x} \frac{1}{p} = \log\log x + M +O\left((\log x)^{-1}\right), \end{equation}
where the sum is over the primes $p$, part (i) also implies that
$ \sum_p p^{-1-s} $
is in $\operatorname{BMOA}$, and consequently $\log \zeta(s+1)$ is a function in $\operatorname{BMOA}$, where $\zeta(s)$ is now the Riemann zeta function. Then part (ii) of Theorem \ref{thm:fefferman} implies also that
$\sum_p \chi(p) p^{-1-s}$ is in $\operatorname{BMOA}$ for any sequence of unimodular numbers $\chi(p)$. In fact, we have more generally:
\begin{corollary}\label{cor:prime}
A Dirichlet series $\sum_{p} a_p p^{-s}$ over the primes $p$ is in $\operatorname{BMOA}$ if and only if
\begin{equation} \label{eq:bmo} \sup_{x\ge e} \sum_{k=1}^{\infty} \Big(\sum_{x^k\le p < x^{k+1}} |a_{p}| \Big)^2 < \infty.\end{equation}
\end{corollary}
\noindent Corollary~\ref{cor:prime} is a consequence of part (i) of Theorem~\ref{thm:fefferman} and the fact (see \cite[Lem. 2.1]{BPS}) that $\sum_p a_p p^{-s}$ is in $\operatorname{BMOA}$ if and only if $\sum_p a_p \chi(p) p^{-s}$ is in $\operatorname{BMOA}$ for every sequence of unimodular numbers $\chi(p)$.
The sufficiency of condition \eqref{eq:feff} in Theorem~\ref{thm:fefferman}(i) follows as a corollary to an $H^1$ multiplier theorem of Sledd and Stegenga \cite[Thm. 1]{SS} via Fefferman's duality theorem \cite{F, FS} and Parseval's theorem. The necessity also follows from \cite[Thm. 1]{SS} if we first note that for any $f$ in $H^1(\mathbb{C}_0)$, using the standard $H^2$ factorization of $H^1$, we may construct $g$ in $H^1(\mathbb{C}_0)$ with $\| g\|_{H^1(\mathbb{C}_0)}=\| f\|_{H^1(\mathbb{C}_0)}$ and $\widehat g(\xi)\geq |\widehat f(\xi)|\geq 0$ for all $\xi\in\mathbb{R}$. Here $\widehat f,\widehat g$ refer to the Fourier transforms of the boundary values on the imaginary axis. A corresponding result for $\operatorname{BMO}$ in the unit disc is stated in \cite[Cor. 2]{SS}: The Taylor series $\sum_{m=0}^{\infty} c_m z^m$ with $c_m\ge 0$ belongs to $\operatorname{BMO}$ of the unit circle $\mathbb{T}$ if and only if
\[ \sup_{m\ge 1} \sum_{j=0}^{\infty} \left(\sum_{r=0}^{m-1} c_{mj+r}\right)^2 < \infty. \]
Other proofs of this result, relying more directly on Hankel operators, can be found in \cite{Bon, HW}. This result is commonly known to have appeared in unpublished work of Fefferman.
To establish part (ii) of Theorem~\ref{thm:fefferman}, we use the following Carleson measure characterization of $\operatorname{BMOA}\cap \mathcal{D}$ which could be used to give an alternative proof of part (i) of Theorem~\ref{thm:fefferman}.
\begin{lemma}\label{basaux} Suppose that $f$ is in $H_{\operatorname{i}}^{2}(\mathbb{C}_0)\cap \mathcal{D}$. Then $f$ is in $\operatorname{BMOA}\cap\mathcal{D}$ if and only if there exists a positive constant $C$ such that
\begin{equation} \label{eq:carl} \sup_{t\in \mathbb{R}} \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le C h \end{equation}
for $0\le h \le 1$.
Moreover, the best constant $C$ in \eqref{eq:carl} and $\Vert f\Vert_{\operatorname{BMO}}^{2}$ are equivalent.
\end{lemma}
\begin{proof} We first observe that \eqref{eq:carl} and the assumption that $f$ is in $H_{\operatorname{i}}^{2}(\mathbb{C}_0)$ imply, by the maximum modulus principle, that $f'(\sigma+it)$ is uniformly bounded by $O(\sqrt{C})$ for $\sigma\geq 1$. Then, if $h>1$ and $t\in \mathbb{R}$ are given and
\begin{align*} I & :=\int_{0}^h \int_{t}^{t+h} |f'(\sigma+i\tau)|^2 \sigma d\tau d\sigma \\ &\ =\int_{0}^{1}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma+ \int_{1}^{h}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma=:I_1+I_2,\end{align*}
we have $I_1\ll Ch$ by \eqref{eq:carl}, while
\[ I_2 \ll \int_{1}^{\infty}\Big[\int_{t}^{t+h}|f'(\sigma+i\tau)|^2 d\tau\Big]\sigma d\sigma\ll \int_{t}^{t+h}\Big[\int_{1}^{\infty} \sigma C4^{-\sigma}d\sigma\Big]d\tau \ll Ch.\]
To obtain the final estimate above, we used that $f'(\sigma+it)=O(\sqrt{C} 2^{-\sigma})$, which holds uniformly in $t$ when $\sigma\geq 1$ because $f$ is a Dirichlet series.
\end{proof}
Part (ii) of Theorem~\ref{thm:fefferman} is immediate from this lemma along with a property of almost periodic functions established by Montgomery \cite[p. 131]{MO} (see also \cite[p. 4]{M}) which asserts that if
$|a_n|\leq b_n$, then for sums with a finite number of non-zero terms \[ \int_{T_1-T}^{T_1+T} \big|\sum a_n e^{i\lambda_n t}\big|^2dt\leq 3\int_{-T}^T \big|\sum b_n e^{i\lambda_n t}\big|^2dt.\]
Here $T>0$, $T_1$ is a real number, $a_n$ and $b_n$ are respectively complex and nonnegative coefficients,
and the $\lambda_n$ are distinct real frequencies.
We will now apply Theorem~\ref{thm:fefferman} to see how our $\operatorname{BMOA}$ space of Dirichlet series relates to Hardy spaces and the Bloch space. We denote as usual $H^{\infty}(\mathbb{C}_0) \cap \mathcal{D}$ by $\mathcal{H}^\infty$, and we say that a function $f(s)$ analytic in $\operatorname{Re} s >0$ is in the Bloch space $\mathfrak{B}$ if
\[ \| f \|_{\mathfrak{B}}:=\sup_{\sigma+it: \sigma>0} \sigma |f'(\sigma+i t)| < \infty . \]
We have
\[ \mathcal{H}^{\infty} \subset \operatorname{BMOA} \cap \mathcal{D} \subset \bigcap_{0<q<\infty} \mathcal{H}^q, \]
where the inclusion to the left is trivial and that to the right was established in \cite[Lem. 2.1]{BPS}. Hence, in contrast to $(\mathcal{H}^1)^*$ itself, the subspace $\operatorname{BMOA}\cap \mathcal{D}$ is included in $ \bigcap_{0<q<\infty} \mathcal{H}^q$.
Moreover, it is a classical fact, and easy to see, that $\operatorname{BMOA} \subset \mathfrak{B}$.
The following consequence of Corollary~\ref{cor:prime} is a Dirichlet series counterpart to a result of Campbell, Cima, and Stephenson \cite{CCS} that further enunciates the relation between the spaces in question. Our proof is close to that found in \cite{HT}.
\begin{corollary}\label{cor:bloch}There exist Dirichlet series that belong to $\mathfrak{B}$ and
$\bigcap_{0<q<\infty} \mathcal{H}^q$ but not to $\operatorname{BMOA}$.
\end{corollary}
\begin{proof}
It is an easy consequence of the definition of the Bloch space that $\sum_{n=1}^{\infty} a_n n^{-s}$ with $a_n\ge 0$ is in $\mathfrak{B}$ if and only if
\begin{equation}\label{eq:Bloch} \sup_{x\ge 2} \sum_{x\le n <x^2} a_n < \infty. \end{equation}
Indeed, if \eqref{eq:Bloch} holds, then we use it with $x_j=\exp(2^{j}/\sigma),\ x_{j+1}=x_{j}^{2}$, to show that for $\sigma>0$,
\[ \sum_{n\geq 2} a_n\, \sigma \log n\,e^{-\sigma\log n}\le \sum_{j} 2^{-j} \big(\sum_{x_j\leq n<x_{j+1}}a_n\big)\ll \sum_{j} 2^{-j}.\]
Conversely, if $\sum_{n\geq 2} a_n\, \sigma \log ne^{-\sigma\log n}\leq C$ for all $\sigma>0$, then choosing $\sigma=1/\log x$, we see that the sum on the left-hand side of \eqref{eq:Bloch}
is bounded by $C e^2/2$.
Let $\mathbb{P}_j$ be the primes in the interval $[e^{2^j}, e^{2^j+1}]$. Then
$|\mathbb{P}_j|\sim (e-1) e^{2^j} 2^{-j}$ by the prime number theorem. Setting $a_p:=e^{-2^j} 2^j$ if $p$ is in $\mathbb{P}_j$ and $a_p=0$ otherwise, we see from \eqref{eq:Bloch} that $\sum_p a_p p^{-s}$ is in the Bloch space, but from part (i) of Theorem~\ref{thm:fefferman} that it fails to be in $\operatorname{BMOA}$.
We next recall Khinchin's inequality for the Steinhaus variables $Z_p$ (that are i.i.d. random variables with uniform distribution on $\mathbb{T}$):
\[
\mathbb{E} \big| \sum_pa_p Z_p\big|^q\asymp \big(\sum_p|a_p|^2 \big)^{q/2},
\]
with the implied constants only depending on $q>0$ (see \cite[Thm. 1]{K}). Since in the Bohr correspondence $p_k^{-s}$ corresponds to the independent variable $z_k$, we see that they form a sequence of Steinhaus variables with respect to the Haar measure on $\mathbb{T}^\infty$.
Thus, in view of the bound
\[ \sum_p a_p^2 \ll \sum_{j=0}^{\infty} e^{-2^j} 2^j < \infty, \]
Khinchin's inequality implies that
$\sum_p a_p p^{-s}$ belongs to $\mathcal{H}^q$.
\end{proof}
\subsection{The relation between Dirichlet series in $H^{\infty}$, $\operatorname{BMOA}$, and $\mathfrak{B}$}
\label{sec:relation}
We turn to some further comparisons between the three spaces $\mathcal{H}^{\infty}$, $\operatorname{BMOA}\cap \mathcal{D}$, and $\mathfrak{B}\cap \mathcal{D}$. We begin with a discussion of uniform and absolute convergence of Dirichlet series in $\mathfrak{B}\cap \mathcal{D}$. The following lemma will be useful in this discussion. Here we use the notation $\log_+ x:=\max(0,\log x)$ for $x>0$, and we will also write $(T_c f)(s):=f(s+c)$ in what follows.
\begin{lemma}\label{vari}Suppose that $f(s)=\sum_{n=1}^\infty a_n n^{-s}$ is in $\mathfrak{B}\cap \mathcal{D}$. Then
\begin{align}
\label{eq:coeff} |a_n| & \le e \| f \|_{\mathfrak{B}}, \quad n\ge 2, \\ \label{eq:point}
|f(\sigma+it)-a_1| & \le \left(\log_+\frac{1}{\sigma} + C 2^{-\sigma} \right) \| f \|_{\mathfrak{B}}, \quad \sigma>0, \end{align}
for some absolute constant $C$. Up to the precise value of $C$, these bounds are both optimal.
\end{lemma}
\begin{proof} To prove \eqref{eq:coeff}, we use that $T_{\varepsilon}f'$ is in $\mathcal{H}^\infty$ for every $\varepsilon>0$. By either viewing the coefficients of a Dirichlet series as Fourier coefficients or using that $\| f \|_{\mathcal{H}^2}\le \| f \|_{\mathcal{H}^{\infty}}$, we see that the coefficients are dominated by the $\mathcal{H}^\infty$ norm. We therefore have \[ |a_n|(\log n) n^{-\varepsilon}\leq \Vert T_{\varepsilon}f' \Vert_\infty \leq \frac{\Vert f\Vert_{\mathfrak{B}}}{\varepsilon} \]
and hence
\[ |a_n|\leq \frac{n^{\varepsilon}\Vert f\Vert_{\mathfrak{B}}}{\varepsilon \log n}.\]
We conclude by taking $\varepsilon=1/\log n$. In addition, we notice that the bound is optimal because $\| n^{-s} \|_{\mathfrak{B}}=1/e$.
To prove \eqref{eq:point}, we begin by noticing that \eqref{eq:coeff} implies that
\begin{equation} \label{eq:large} |f(\sigma+it)-a_1| \le \sum_{n=2}^{\infty} |a_n| n^{-\sigma} \le e (\zeta(\sigma)-1) \| f \|_{\mathfrak{B}} \end{equation}
holds for $\sigma\ge 2$.
For $\sigma\le 2$, we use that
\[ |f(\sigma+it)-a_1| \le |f(2+it)-a_1|+ \int_{\sigma}^2 \| f \|_{\mathfrak{B}} \frac{d\alpha}{\alpha} \le \left(\log \frac{1}{\sigma} + C\right)\| f \|_{\mathfrak{B}}, \]
where we in the final step used \eqref{eq:large} with $\sigma=2$. The example $\sum_{n=2}^{\infty} n^{-1-s}/\log n$ shows that the inequality is optimal, up to the precise value of $C$.
\end{proof}
The pointwise bound \eqref{eq:point} implies that what is known about uniform and absolute convergence of Dirichlet series in
$\mathcal{H}^{\infty}$ carries over in a painless way to $\mathfrak{B}\cap \mathcal{D}$. In fact, a rather weak bound of the form
\begin{equation} \label{eq:gen} |f(\sigma+it)|\le C(\sigma), \quad \sigma>0, \end{equation}
suffices to draw such a conclusion, as will now be explained. To begin with we will assume that $C(\sigma)$ is an arbitrary positive function and later specify its required behavior as $\sigma\to 0^+$.
First, by a classical theorem of Bohr \cite[p. 145]{MAHE}, a bound like \eqref{eq:gen} implies that
the Dirichlet series of $f(s)$ converges uniformly in every half-plane $\operatorname{Re} s \ge \sigma_0>0.$ Following Bohr, we then see that
$\sigma_{u}(f)\leq 0$, where $\sigma_u(f)$ is the abscissa of uniform convergence, defined as the infimum over those $\sigma_0$ such that the Dirichlet series of $f(s)$ converges uniformly in $\operatorname{Re} s \ge \sigma_0$.
Second, as observed by Bohr, it is immediate that $\sigma_{u}(f)\leq 0$ implies $\sigma_{a}(f)\leq 1/2$, where $\sigma_{a}(f)$ is the abscissa of absolute convergence of $f$, i.e., the infimum over those $\sigma_0$ such that the Dirichlet series of $f(s)$ converges absolutely in $\operatorname{Re} s \ge \sigma_0$. Thanks to more recent work originating in \cite{BCQ}, an interesting refinement of this result holds when $C(\sigma)$ does not grow too fast as $\sigma\searrow 0$. To arrive at that refinement, we set $(S_N f)(s):=\sum_{n=1}^{N} a_n n^{-s} $ and recall that
\begin{equation}\label{improv2}\sum_{n=1}^N|a_n| \leq \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}}\Vert S_N f\Vert_\infty \end{equation}
with $c_N\to 1/\sqrt{2}$ when $N\to \infty$. This ``Sidon constant'' estimate was proved in \cite{KQ} with a smaller value of $c_N$. The proof from \cite{KQ}, using at one point the hypercontractive Bohnenblust--Hille inequality from \cite{DFOOS}, yields \eqref{improv2} with
$c_N\to 1/\sqrt{2}$, which is stated as Theorem 3 in \cite{DFOOS}. This is optimal by \cite{dB}.
It was proved in \cite{BCQ} that there exists an absolute constant $C$ such that if $f(s):=\sum_{n=1}^{\infty}{a_nn^{-s}}$ is in $\mathcal{H}^{\infty}$, then
$ \| S_N f\|_{\infty}\le C \log N \| f\|_{\infty} $. See also Section~\ref{sec:partial}, where an alternate proof of this bound will be given. Using this fact, we obtain from \eqref{improv2} that
\begin{equation}\label{improv3}\sum_{n=1}^N|a_n| \leq \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}}\Vert f \Vert_\infty , \end{equation}
still with $c_N\to 1/\sqrt{2}$ when $N\to \infty$. Now applying \eqref{improv3} to $T_{\varepsilon} f $ with $\varepsilon=1/\log N$ and taking into account
\eqref{eq:gen}, we get
\[ \sum_{n=1}^N |a_n| \le e \sum_{n=1}^N |a_n| n^{-\varepsilon} \le \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}} C(1/\log N). \]
We now see that if $\log C(\sigma)=o(\sqrt{|\log \sigma |/\sigma }) $ when $\sigma\searrow 0$, then
\begin{equation} \label{eq:imp}\sum_{n=1}^N |a_n| \le \sqrt{N} e^{-c_N \sqrt{\log N \log\log N}} \end{equation}
with $c_N\to 1/\sqrt{2}$. When $f$ is in $\mathfrak{B}$, we have $C(\sigma)=O(|\log \sigma|)$ and hence \eqref{eq:imp} clearly holds. Summing by parts and using \eqref{eq:imp}, we get
\begin{equation} \label{eq:sum} \sum_{n=3}^{\infty} \frac{|a_n|}{\sqrt{n}} e^{{c \sqrt{\log n \log\log n}}} < \infty \end{equation}
for every $c<1/\sqrt{2}$. This is a bound previously known to hold for functions $f$ in $\mathcal{H}^{\infty}$ (see \cite{BCQ, DFOOS}). As shown in \cite{DFOOS}, the result is optimal in the sense that there exist functions $f$ in $\mathcal{H}^{\infty}$ for which the series in \eqref{eq:sum} diverges when $c>1/\sqrt{2}$.
In Section~\ref{sec:poly}, we will establish ``reverse'' inequalities to $\| f \|_{\mathfrak{B}} \le \| f \|_{\infty} $ and $\| f \|_{\mathfrak{B}} \ll \| f\|_{\operatorname{BMOA}}$ when $f(s)=\sum_{n=1}^N a_n n^{-s}$ and $N$ is fixed.
\subsection{A condition for random membership in $\operatorname{BMOA} \cap \mathcal{D}$}
In the sequel, if $f(s)=\sum_{n=1}^\infty a_n n^{-s}$ is a Dirichlet series, we denote by $f_\omega$ the corresponding randomized Dirichlet series, namely $f_{\omega}(s):=\sum_{n=1}^\infty \varepsilon_{n}(\omega)a_n n^{-s}$ where $(\varepsilon_n)$ is a standard Rademacher sequence.
We are interested in extending the following result of Sledd \cite{S} (see also \cite{DUR}) to the setting of ordinary Dirichlet series:
\begin{theorem}\label{dur}Suppose $\sum_{n=1}^\infty |a_n|^2 \log n<\infty$. Then, the power series $\sum \varepsilon_n a_n z^n$ is almost surely in
$\operatorname{BMOA}$. \end{theorem}
This result is optimal in a rather strong sense as shown in \cite{ACP}: If one replaces $\log n$ by any sequence growing at a slower rate, then the condition does not guarantee membership even in the Bloch space.
We see from Theorem~\ref{dur} that if we require slightly more than $\ell^2$ decay of the coefficients, then we may expect that a ``generic'' analytic function in the unit disc will be in $\operatorname{BMOA}$. The results of the preceding sections show in two respects that a similarly strong result can not hold in the context of Hardy spaces of Dirichlet series.
First, we know that $f(s)=\sum_{p} a_p p^{-s}$ is in $\operatorname{BMOA}\cap \mathcal{D}$ if and only if \eqref{eq:bmo} of Corollary~\ref{cor:prime} holds, and by the Cauchy--Schwarz inequality, this implies in particular that the abscissa of absolute convergence is $0$. Hence
\[ \sum_{p} \pm p^{-\alpha-s} \]
can not be in $\operatorname{BMOA}\cap \mathcal{D} $ for any choice of the signs $\pm$ when $1/2<\alpha<1$, although, from an $\ell^2$ point of view, the coefficients decay fast when $\alpha$ is close to $1$. Second, in view of \eqref{eq:sum}, none of the Dirichlet series
\[f(s):=\sum_{n=2}^\infty \pm \frac{1}{\sqrt {n}} \exp\Big(-c \sqrt{\log n\log\log n}\Big)\, n^{-s},\quad 0<c<1/\sqrt 2, \]
with random signs $\pm$ can be in $\operatorname{BMOA}\cap \mathcal{D}$, again in spite of fairly good $\ell^2$ decay of the coefficients.
These observations indicate that we should impose an extra condition to obtain a result of the same strength as that of Theorem~\ref{dur}. In fact, they suggest that a possible remedy could be to consider integers generated by a very thin sequence of primes. We will therefore assume that we are in this situation with a fixed set $\mathcal{P}_0$ (finite or not) of prime numbers. We will measure the thinness of this set in terms of its distribution function
\[ \pi_{0}(x):=\sum_{\substack{p\in \mathcal{P}_0 \\ p\leq x}} 1. \]
We will say that $\mathcal{P}_0$ is an ultra-thin set of primes if
\begin{equation} \label{eq:ultra}
\int_{3}^{\infty} \frac{\pi_{0}(x)\log\log x}{x\log^{3}x}dx<\infty ,
\end{equation}
and we declare the numbers $w_1=w_2=1$,
\[ w_n:=\int_{n}^{\infty} \frac{\pi_{0}(x)\log\log x}{x\log^{3}x}dx , \quad n\ge 3,\]
to constitute the weight sequence of $\mathcal{P}_0$. We denote by $\mathcal{N}_0$ the set of all $\mathcal{P}_0$-smooth integers, i.e., the set of positive integers with all their prime divisors belonging to $\mathcal{P}_0$. Our extension of Theorem~\ref{dur} now reads as follows.
\begin{theorem}\label{proba} Let $\mathcal{P}_0$ be an ultra-thin set of primes with weight sequence $(w_n)$. If
\begin{equation} \label{eq:durendir} \sum_{n\in \mathcal{N}_0} |a_n|^2 w_n \log^{2} n <\infty, \end{equation}
then the Dirichlet series $ f_{\omega}(s)=\sum_{n\in \mathcal{N}_0} \varepsilon_n a_n n^{-s}$ is almost surely in $\operatorname{BMOA}\cap\mathcal{D}$.
\end{theorem}
Let us first note that this is in fact a true extension of Theorem~\ref{dur}, i.e., it reduces to Theorem~\ref{dur} when $\mathcal{P}_0$ consists of a single prime. To see this, we first observe that if
$\pi_0(x) \ll \log^{\delta} x$ for some $\delta$, $0\le \delta < 2$, then $\mathcal{P}_0$ is ultra-thin and
$w_n \ll (\log\log n)/\log^{2-\delta} n$. In particular, in the special case when
$\mathcal{P}_0$ is a finite set, we find that $w_n\asymp (\log\log n)/\log^2 n$ and hence
the series in \eqref{eq:durendir} becomes $\sum_{n\in \mathcal{N}_0} |a_n|^2 \log\log n $.
If $\mathcal{P}_0$ consists of a single prime $p$, then the Dirichlet series over $\mathcal{N}_0$
becomes a Taylor series in the variable $z:=p^{-s}$ and $\log\log n=\log k + \log\log p\sim \log k$
for $n=p^k$, and hence \eqref{eq:durendir} becomes the condition of Theorem~\ref{dur}. Finally,
we note that, plainly, the Dirichlet series over the numbers $p^k$ will be in $\operatorname{BMOA}(\mathbb{C}_0)$
if and only if the corresponding Taylor series in the variable $z$ is in $\operatorname{BMOA}(\mathbb{T})$.
In view of this relation between Theorem~\ref{dur} and Theorem~\ref{proba}, we see by again appealing to \cite{ACP} that we cannot replace $\log^2 n$ by any sequence growing at a slower rate.
For the proof of Theorem~\ref{proba}, we begin by observing that for fixed $\sigma>0$, we have
\[ \mathbb{E}\Big(\int_{-\infty}^\infty \frac{|f_{\omega}(\sigma+it)|^2}{t^2+1}dt\Big)=\pi \sum_{n=1}^\infty |a_n|^2 n^{-2\sigma}\leq \pi \sum_{n=1}^\infty |a_n|^2, \]
and hence $f_{\omega}$ is almost surely in $H_{\operatorname{i}}^{2}(\mathbb{C}_0)$. This means that we may base our proof on Lemma~\ref{basaux}.
The rest of the proof of Theorem~\ref{proba} relies on a lemma from \cite{BCQ} (see also \cite[Thm. 5.3.4]{MAHE}) which is deduced, via the Bohr lift, from a multivariate analogue of a classical inequality of Salem and Zygmund due to Kahane \cite[Thm. 3, Sect. 6]{KAH2}.
\begin{lemma}\label{trois}There exists an absolute constant $C$ such that if $P(s)=\sum_{k=1}^n a_k k^{-s}$ is a $\mathcal{P}_0$-smooth Dirichlet polynomial of length $n \ge 3$ and $P_\omega$ the corresponding randomized polynomial, then
\[ \mathbb{E}(\Vert P_\omega\Vert_\infty) \le C \big(\sum_{k=1}^n |a_k|^2\big)^{1/2}\sqrt{\pi_{0}(n)}\sqrt{\log\log n}.\]
\end{lemma}
Here the price we pay for estimating the uniform norm on the whole of $\mathbb{R}$ is this additional factor $\sqrt{\pi_{0}(n)}$. By considering the randomization (i.e. adding random signs) of the Dirichlet polynomial $\sum_{1\leq k\leq N}p_k^{-s}$ (or randomizing more complicated polynomials of the form $\sum_{1\leq k\leq N}p_k^{-s}g(p_{N+k}^{-s})$), with a fixed standard polynomial $g$, we see that this extra factor is more or less mandatory.
\begin{proof}[Proof of Theorem~\ref{proba}] We may for convenience assume that $a_2=0$. Let $X$ be the random variable defined by
\begin{equation}\label{rava}X(\omega):=\int_{0}^1 \sigma\, \Vert T_{\sigma} f'_{\omega} \Vert_{\infty}^{2}d\sigma.\end{equation}
We will prove that $\mathbb{E}(X)<\infty$. This will imply that $X(\omega)<\infty$ a.s., hence that $f_\omega$ is in $\operatorname{BMOA}\cap \mathcal{D}$ a.s. in view of Lemma \ref{basaux}.
We fix $\sigma>0$ and set
\[ S(x,t):=-\sum_{3\le j\le x} \varepsilon_{j} a_j (\log j) j^{-it} \quad \text{and} \quad B(x):=\Big(\sum_{3\le j\le x} |a_j|^2 \log^{2}j\Big)^{1/2}. \]
Since $(T_{\sigma} f'_{\omega})(it)=-\sum_{n=3}^\infty \varepsilon_n\,a_n (\log n)\, n^{-it} n^{-\sigma}$, we find by partial summation that
\[ \big|(T_{\sigma} f'_{\omega})(it)\big|\le \int_3^\infty \sigma x^{-\sigma-1}|S(x,t)| dx.\]
Now using the $L^1-L^2$ Khintchin--Kahane inequality and Lemma~\ref{trois}, we find that
\begin{equation}\label{khka}\mathbb{E}\big(\big\Vert T_{\sigma} f'_{\omega}\big\Vert_{\infty}^{2}\big)\ll \big(\mathbb{E}\big\Vert T_{\sigma} f'_{\omega}\big\Vert_{\infty}\big)^{2}\ll \Big(\int_{3}^\infty \sigma x^{-\sigma-1} B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x} \, dx\Big)^{2} ,\end{equation}
whence
\begin{equation} \label{eq:wh} \mathbb{E}(X) \ll \int_0^1 \sigma \Big(\int_{3}^\infty \sigma x^{-\sigma-1} B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x} \, dx\Big)^{2} d\sigma. \end{equation}
Setting for convenience $h(x):=B(x) \sqrt{\pi_{0}(x)} \sqrt{\log\log x}$ and using that for $x,y>1$
\[ \int_{0}^{1} \sigma^3 (xy)^{-\sigma}d\sigma\leq \int_{0}^{\infty} \sigma^3 (xy)^{-\sigma}d\sigma=\frac{6}{\log^{4}(xy)}, \]
we find by Fubini's theorem that
\begin{align*} \int_0^1 \sigma^3 \Big(\int_{3}^\infty x^{-\sigma-1} h(x) \, dx\Big)^{2} d\sigma & \le 6 \int_{3}^{\infty}\int_{3}^{\infty} \frac{h(x) h(y)}{xy \log^4 (xy)} dxdy \\
&\le \frac{3}{4} \int_{3}^{\infty}\int_{3}^{\infty} \frac{h(x) h(y)}{(\log x \log y)^{3/2}} \frac{dxdy}{xy \log (xy)} \le \frac{3 \pi}{4} \int_{3}^{\infty} \frac{h(x)^2}{x\log^3 x} dx.
\end{align*}
Here we used in the last step that
\[ \int_{1}^\infty \int_{1}^\infty\psi(x)\psi(y)\frac{dxdy}{xy (\log xy)}\leq \pi \int_{1}^\infty\psi^{2}(x) \frac{dx}{x}\]
holds for a nonnegative function $\psi$, which we recognize as
Hilbert's inequality \cite[Thm. 316]{HLP}
\[ \int_{0}^\infty \int_{0}^\infty\varphi(u)\varphi(v)\frac{dudv}{u+v}\leq \pi \int_{0}^\infty \varphi^{2}(u)du\]
for $\varphi(u):=\psi(e^{u})$, after the change of variables $u=\log x$, $v=\log y$.
Hence, returning to \eqref{eq:wh}, we see that
\begin{equation}\label{eq:change} \mathbb{E}(X) \ll \int_3^{\infty} \frac{B^2(x) \pi_0(x) \log\log x}{x \log^3 x } dx. \end{equation}
Now using the definition of $B^2(x)$ as a finite sum and changing the order of integration and summation, we observe that the right-hand side of \eqref{eq:change} equals the series in
\eqref{eq:durendir}, and hence we conclude that $\mathbb{E}(X)<\infty$.
\end{proof}
\section{Comparison of norms for Dirichlet polynomials}\label{sec:poly}\label{sec:compare}
We will now establish some relations between the various norms considered so far, when computed for Dirichlet polynomials of fixed length. Throughout this section, our Dirichlet polynomials will be denoted by $f$ and not $P$ as before. Our results complement the main result of \cite{DP} which shows that the supremum of the
ratio $\| f \|_q/\| f \|_{q'}$ for nonzero Dirichlet polynomials $f$ of length $N$ is
\begin{equation} \label{eq:comp} \exp\left((1+o(1))\frac{\log N}{\log\log N}\log \sqrt{q/q'} \right) \end{equation}
when $1\le q'<q < \infty$.
We begin with comparisons involving
$\operatorname{BMOA}$ and $\mathfrak{B}$. For the purpose of this discussion, it will be convenient to agree that
\[ \| f \|_{\operatorname{BMOA}}^2:= \sup_{h>0} \frac{1}{h} \sup_{t\in \mathbb{R}} \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma,\]
in accordance with the Carleson measure condition of Lemma~\ref{basaux}. We denote by $\mathcal{D}_N$ the space of Dirichlet polynomials of length $N$ vanishing at $+\infty$. The respective ratios $\| f \|_{\infty}/\|f\|_{\mathfrak{B}}$ and $\| f \|_{\operatorname{BMOA}}/\| f\|_{\mathfrak{B}}$ are quite modest compared to \eqref{eq:comp}:
\begin{theorem}\label{unus}
When $N\to \infty$, we have
\begin{align} \label{eq:asymp} \sup_{f\in \mathcal{D}_N\setminus \{0\}} \frac{\| f \|_{\infty}}{\| f \|_{\mathfrak{B}}} & \sim \log\log N, \\ \label{eq:asymp1}
\sup_{f\in \mathcal{D}_N\setminus \{0\}} \frac{\| f \|_{\operatorname{BMOA}}}{\| f \|_{\mathfrak{B}}} & \asymp \sqrt{\log\log N},\\
\label{eq:asymp2}\sup_{f\in \mathcal{D}_N\setminus\{0\}} \frac{\Vert f\Vert_\infty}{\Vert f\Vert_{\operatorname{BMOA}}} & \asymp \log \log N .
\end{align}
\end{theorem}
We require two new lemmas. The first contains two versions of Bernstein's inequality.
\begin{lemma}[Bernstein inequalities] \label{es} We have
\begin{equation} \label{eq:bern} \| f' \|_\infty \le \log N \| f \|_{\infty} \quad \text{and} \quad \Vert f'\Vert_\infty \leq 4\log N \Vert f\Vert_{\mathfrak{B}}\end{equation}
for every $f$ in $\mathcal{D}_N$.
\end{lemma}
The first inequality in \eqref{eq:bern} is a special case of a general version of Bernstein's inequality for finite sums of purely imaginary exponentials (see \cite[p. 30]{KAH}). We will find that the second inequality is a consequence of the next lemma.
\begin{lemma}\label{mardi} We have
\[ \| f \|_{\infty} \le \frac{1}{(1-c)} \| T_{c/\log N}f \|_{\infty} \]
for every Dirichlet polynomial $f$ in $\mathcal{D}_N$, when $0<c<1$ and $N\ge 2$.
\end{lemma}
\begin{proof}
The first inequality in \eqref{eq:bern} and the maximum modulus principle give for any fixed $\sigma>0$
\[ |f(it)-f(\sigma+it)|\leq \sigma \| f'\Vert_{\infty}\leq \sigma \log N\Vert f\Vert_{\infty}. \]
Hence, setting $\sigma=c/\log N$, we see that
\[ |f(it)|\leq \big|\big(T_{c/\log N}f\big)(it)\big|+c \| f \|_{\infty} \]
from which the result follows.
\end{proof}
\begin{proof}[Proof of the second inequality in \eqref{eq:bern}]
Using the definition of the Bloch norm, we see for any fixed $\sigma>0$ that
\[ \| f\|_{\mathfrak{B}}\geq \sup_{t\in \mathbb{R}} \sigma |f'(\sigma+it)|. \]
Setting $\sigma=c/\log N$ and applying Lemma~\ref{mardi} to $f'$, we then get
\[ \| f\|_{\mathfrak{B}} \ge \frac{c(1-c)}{\log N} \| f' \|_{\infty} .\] Choosing $c=1/2$, we obtain the asserted result.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{unus}]
Combining \eqref{eq:point} and Lemma~\ref{mardi}, we find that if $f(+\infty)=0$, then
\begin{equation} \label{eq:bloch} \|f\|_{\infty} \le \frac{\log\log N + \log (1/c) + C}{(1-c)} \| f \|_{\mathfrak{B}}. \end{equation}
Choosing $c=1/\log\log N$, we obtain
\[ \frac{\|f\|_{\infty}}{\| f\|_{\mathfrak{B}}}\le \log\log N +O(\log\log\log N), \]
assuming that $f\neq 0$. On the other hand, the polynomial $f(s)=\sum_{n=2}^N \frac{1}{n\log n} n^{-s}$ satisfies $\| f \|_{\infty} = \log\log N + O(1)$, while
\[ |f'(s)| \le \sum_{n=2}^{\infty} n^{-\sigma-1}\leq \zeta(\sigma+1)-1\leq \frac{1}{\sigma}, \]
so that $\| f \|_{\mathfrak{B}} \le 1$. Hence we have shown that the supremum over $f$ of the left-hand side of \eqref{eq:bloch} exceeds $\log\log N+O(1)$. We conclude that \eqref{eq:asymp} holds.
We now use Lemma \ref{basaux} to estimate $\| f \|_{\operatorname{BMOA}}$ under the assumption that $f$ is in $\mathcal{D}_N$ and $\| f \|_{\mathfrak{B}}=1$. We first observe that if $h\le 1/\log N$, then by the second Bernstein inequality of Lemma~\ref{es},
\[ \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le 16 (\log N)^2 h \int_{0}^h \sigma d\sigma \le 8h.\]
On the other hand, if $1/\log N<h \le 1$, then we obtain by the same argument
\[ \int_{0}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma \le 8h + \int_{1/\log N}^h \int_{t}^{t+h} |f'(\sigma+i \tau)|^2 \sigma d\tau d\sigma .\]
Using the bound $|f'(\sigma+i \tau)| \le 1/\sigma$ in the integral term, where $1/\log N \le \sigma \le h\le1$, we infer from this that
\[ \| f \|_{\operatorname{BMOA}}^2 \le \log\log N+O(1). \]
The optimality of the latter bound is seen by considering the function
\[ g(s):=\sum_{k\le \log\log N} \left[e^{e^k}\right]^{-s} , \]
that satisfies $\| g \|_{\mathfrak{B}} \asymp 1$ and $\| g \|_{\operatorname{BMOA}}^2 \asymp \log \log N$. Here the first relation is trivial, and the second follows from \eqref{eq:feff} of Theorem~\ref{thm:fefferman}. Hence \eqref{eq:asymp1} has been established.
Finally, to prove \eqref{eq:asymp2}, we first infer from \eqref{eq:asymp} that
\[ \Vert f\Vert_\infty\ll \log\log N\,\Vert f\Vert_{\mathfrak{B}}\ll \log\log N\, \Vert f\Vert_{\operatorname{BMOA}}. \]
The example
$f(s)=\sum_{2\leq n\leq N} \frac{1}{n\log n} n^{-s}$ used above satisfies $\Vert f\Vert_{\operatorname{BMOA}}\asymp 1$ by \eqref{eq:hilbert} and trivially $\Vert f\Vert_{\infty}\asymp \log\log N$. This establishes the reverse inequality in \eqref{eq:asymp2}.
\end{proof}
We close this section by establishing a lemma that will be used in two different ways in the next section. In contrast to the preceding comparison results, as well as those of \cite{DP}, Lemma~\ref{lem:finite} is a purely multiplicative result, and we therefore state it for polynomials in several complex variables.
\begin{lemma}\label{lem:finite}
There exists an absolute constant $C$ such that if $F$ is a holomorphic polynomial of degree $d\ge 2$ in $n\ge 1$ complex variables, then
\begin{equation} \label{eq:sub} \| F \|_{\infty} \le C \| F\|_{n \log d}. \end{equation}
\end{lemma}
\begin{proof}
We now apply a multi-dimensional version of Bernstein's inequality, namely
\[ |F(z)-F(w)|\leq \frac{\pi}{2} d \Vert z-w\Vert_\infty \Vert F\Vert_\infty ,\]
which holds for holomorphic polynomials $F$ in $n$ complex variables and all points $z=(z_j)$ and $ w=(w_j)$ on $\mathbb{T}^n$ (see \cite[pp. 125--126]{MAHE}). This implies that if $w$ is a point on $\mathbb{T}^n$ at which $|F(w)|=\| F\|_\infty $, then $|F(z)|\ge \| F \|_\infty/2$ whenever we have
$|w_j-z_j|\le \frac{c}{d}$ for $1\le j\le n$, with $c:=1/\pi$. It follows that
\[ \| F \|_q\ge \frac{1}{2} (2c)^{n/q} d^{-n/q} \| F\|_\infty \]
and hence, choosing $q=n\log d$, for which $d^{n/q}=e$ and $(2c)^{-n/q}=(\pi/2)^{1/\log d}\le (\pi/2)^{1/\log 2}$ since $d\ge 2$, we get
\[ \| F\|_\infty \le 2e \Big(\frac{\pi}{2}\Big)^{1/\log 2} \| F\|_{n \log d} = 2 \pi^{1/\log 2} \| F\|_{n \log d}, \]
where the last equality uses $2^{1/\log 2}=e$.
\end{proof}
\section{The partial sum operator for Dirichlet series and Riesz projection on $\mathbb{T}$}\label{sec:partial}
We will now make some remarks about the partial sum operator $S_N$ which is defined by the formula
\[ (S_N f)(s):=\sum_{n\le N} a_n n^{-s} \]
for $f(s)=\sum_{n=1}^{\infty} a_n n^{-s}$. We are interested in computing the norm of $S_N$ when it acts on $\mathcal{H}^q$. In what follows, we denote this norm by $\| S_N \|_q$. Most of what is known about $\|S_N\|_q$ for different values of $q$ and $N$ can be deduced from an idea that goes back to Helson \cite{He}, by which we may effectively rewrite $S_N$ as a one-dimensional Riesz projection. We will now state and prove a theorem in this vein that can be obtained almost immediately by combining \cite[Thm. 8.7.2]{R} with the optimal bounds of Hollenbeck and Verbitsky \cite{HV} for Riesz projection on $\mathbb{T}$. We choose to offer a detailed proof, however, because it makes the transference to one-dimensional Riesz projection explicit and leads to nontrivial quantitative estimates.
We will consider a somewhat more general situation to emphasize the main idea of the transference to the unit circle. To this end, we fix a completely multiplicative function $g(n)\ge 1$ such that $g(n)\to \infty$ when $n\to \infty$. By considering $g(p^k)$ for $k\geq 1$, we see that this means that
$g(p)>1$ for all primes $p$ and that $\lim_{p\to\infty} g(p)=\infty.$ We then introduce the projection
\[ P_{g,x} \left(\sum_{n=1}^\infty a_n n^{-s}\right) := \sum_{g(n)\le x} a_n n^{-s}. \]
We see that $S_N=P_{g,N}$ in the special case when $g(n)=n$.
\begin{theorem}\label{show} Suppose that $g$ is a completely multiplicative function taking only positive values and that $g(n)\to\infty$ when $n\to\infty$. Then
\begin{equation}\label{key} \sup_{x\ge 1} \| P_{g,x} \|_{\mathcal{H}^q}= \frac{1}{\sin(\pi/q)} \end{equation}
for $1<q<\infty$.
\end{theorem}
\begin{proof}
We consider first the easy direction, namely that $\sup_{x\geq 1} \| P_{g,x} \|_q\geq \frac{1}{\sin(\pi/q)}$. It is classical and straightforward to check that the norm of the Riesz projection equals $\sup_{N\geq 1} \| \widetilde S_N \|_q,$ where $\widetilde S_N$ is the 1-dimensional partial sum operator acting on $H^q(\mathbb{T})$. On the other hand, clearly $\| P_{g,g(2^N)} \|_q\ge \| \widetilde S_{N} \|_q$, so the claim follows from the fact that the bound of Hollenbeck and Verbitsky is optimal.
In order to treat the more interesting direction, we begin by fixing a positive integer $Q$ that will be specified later, depending on $x$. Then for every prime $p$, we choose a positive integer $m_p$ such that
\[ \left| Q \log g(p) -m_p\right| \le \frac{1}{2}. \]
This is possible because $g(p)>1$ by the assumption that $g(n)\to \infty$. Now let $z$ be a point on the unit circle. Write $n$ in multi-index notation as $n=p^{\alpha(n)}=\prod_{p} p^{\alpha_p(n)}$, set accordingly $\beta(n)=\sum_{p} \alpha_{p}(n) m_p$ and consider the transformation
\[ T_{g,Q,z} \left(\sum_{n=1}^\infty a_n n^{-s}\right)=\sum_{n=1}^\infty a_n z^{\beta(n)} n^{-s}. \]
Taking the Bohr lift, we see that the effect of $T_{g,Q,z}$ acting on $f$ is that each variable is multiplied by a unimodular number. This shows that $T_{g,Q,z}$ acts isometrically on $\mathcal{H}^q$ for every $q>0$.
Note that by construction
\[ \left| \beta(n) - Q \log g(n) \right|\le \frac{1}{2}\big|\alpha(n)\big|=\frac{1}{2}\Omega(n), \]
where $\Omega(n)$ is the number of prime factors of $n$ counting with multiplicity.
We now choose the parameter $Q$ so large that
\begin{equation}\label{eq:req}
\max_{g(n)\leq x }\beta(n)< \inf_{g(n)>x}\beta(n).
\end{equation}
This is obtained if
\begin{equation} \label{eq:sep} \inf_{g(n)>x} \big(Q\log g(n)- \frac{1}{2} \Omega(n) \big) > \max_{g(n)\le x} \big(Q \log g(n) + \frac{1}{2} \Omega(n) \big). \end{equation}
We may achieve \eqref{eq:sep} because the assumptions on $g$
imply that $\log g(n) \ge c \Omega(n) $ for some $c>0$. Namely, this inequality clearly yields that
\[ \inf_{g(n)>x} \big(Q\log g(n)- \frac{1}{2} \Omega(n) \big)\ge (Q-c^{-1}/2) \log (x+1) \]
while on the other hand
\[ \max_{g(n)\le x} \big(Q \log g(n) + \frac{1}{2} \Omega(n) \big) \le (Q+c^{-1}/2) \log x.\]
Having made this choice of $Q$, we see that \eqref{eq:req} ensures that we may write
\[ (T_{g,Q,z} P_{g,x} f)(s)=\sum_{\beta(n)\le x'} a_n z^{\beta(n)} n^{-s}\]
for a suitable $x'$.
Hence,
using the Bohr lift $B$, the translation invariance of $m_\infty$ under $T_z$ with $ T_{z}(w)=(w_p z^{m_p})$, Fubini's theorem, and Hollenbeck and Verbitsky's theorem \cite{HV} on the $L^q$ norm of the Riesz projection on $\mathbb{T}$, we get successively:
\begin{align*}
\Vert P_{g,x} f\Vert_{q}^{q}& =\int_{\mathbb{T}^\infty} \big|B(P_{g,x} f)(w)\big|^{q}dm_{\infty}(w)\\
&=\int_{\mathbb{T}}\Big(\int_{\mathbb{T}^\infty} \big|B(P_{g,x} f)(T_{z} w)\big|^{q}dm_{\infty}(w)\Big)dm(z)\\
&=\int_{\mathbb{T}^\infty}\Big(\int_{\mathbb{T}} \big|\sum_{\beta(n)\le x'} a_n w^{\alpha(n)}z^{\beta(n)}
\big|^{q}dm(z)\Big)dm_{\infty}(w)\\
&\leq \Big(\frac{1}{\sin (\pi/q)}\Big)^q \int_{\mathbb{T}^\infty}\Big(\int_{\mathbb{T}} \big|\sum_{n=1}^\infty a_n w^{\alpha(n)}z^{\beta(n)}\big|^{q}dm(z)\Big)dm_{\infty}(w)\\
&= \Big(\frac{1}{\sin (\pi/q)}\Big)^q \Vert f\Vert_{q}^{q}.\end{align*}
\end{proof}
If we specialize to the case when $g(n)=n$ and $x= N$, it is of interest to see how large the intermediate parameter $Q$ has to be to ensure that \eqref{eq:sep} holds.
We see that
this happens if
\begin{equation} \label{eq:special} Q\log(N+j)-\frac{\log (N+j)}{2\log 2}>Q\log N+\frac{\log N}{2\log 2}\end{equation}
for $j=1,2,\dots$. We may assume that $Q>1/(2\log 2)$ so that
\[ Q\log(N+j)-\frac{\log (N+j)}{2\log 2} \ge Q \log N - \frac{\log N}{2\log 2} + \Big(Q- \frac{1}{2\log 2}\Big) \frac{1}{2N}. \]
This shows that \eqref{eq:special} holds if we choose
\begin{equation} \label{eq:require} Q\ge c N \log N \end{equation}
with $c>0$ large enough. Since $T_{g,Q,z} S_N f$ will be a polynomial of degree at most $Q \log N+\log N/(2\log 2)$ in the dummy variable $z$, we may now, following again the reasoning of the above proof, use Lemma~\ref{lem:finite} with $n=1$ and $d= Q \log N+O(\log N)$ to deduce that $\| S_N \|_{\infty} \ll \log N $. We thus recapture a result that was first established in \cite{BCQ} by use of Perron's formula and contour integration.
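In detail (our reconstruction of this step): for fixed $w\in \mathbb{T}^\infty$, the polynomial $z\mapsto \sum_{n\le N} a_n w^{\alpha(n)} z^{\beta(n)}$ is the one-dimensional partial sum, up to $z$-degree $x'$, of $G_w(z):=\sum_{n\ge 1} a_n w^{\alpha(n)} z^{\beta(n)}$, which satisfies $|G_w(z)|\le \Vert f\Vert_\infty$. Applying Lemma~\ref{lem:finite} with $n=1$ and $d=D:=Q\log N+O(\log N)$, and then the Riesz projection bound in $L^{\log D}(\mathbb{T})$ together with $1/\sin(\pi/q)\ll q$, we get
\[ \Vert S_N f\Vert_{\infty} \le \frac{C}{\sin (\pi/\log D)}\, \Vert f \Vert_\infty \ll \log D\, \Vert f\Vert_\infty \ll \log N\, \Vert f\Vert_\infty , \]
since $\log D \ll \log N$ when $Q\asymp N\log N$.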
The bound just obtained remains the best known upper bound for $\| S_N\|_{\infty} $. On the other hand, it is known that $\| S_N \|_{\infty}\gg \log \log N$ (obtained for Dirichlet series over powers of a single prime). We are thus far from knowing the right order of magnitude of $\| S_N\|_{\infty}$. A similar situation persists when $q=1$, in which case we have $\log\log N \ll \| S_N\|_1 \ll \log N/\log\log N$ by a result of \cite{BBSS}.
We will now show that if we specialize to Dirichlet series over $\mathcal{P}_0$-smooth numbers, then the estimate in the case $q=\infty$ can be improved for certain ultra-thin sets of primes $\mathcal{P}_0$. To this end, we denote by $\mathcal{H}^q(\mathcal{P}_0)$ the subspace of $\mathcal{H}^{q}$ consisting of Dirichlet series over the sequence $\mathcal{N}_0$ of $\mathcal{P}_0$-smooth numbers, and we let $\| S_{N}\|_{\mathcal{H}^q(\mathcal{P}_0)}$ be the norm of $S_N$ when restricted to $\mathcal{H}^q(\mathcal{P}_0)$.
The crucial observation is that it may now be profitable to apply Lemma~\ref{lem:finite} \emph{before} we make the transference to one-dimensional Riesz projection. Indeed, we observe that the Bohr lift of a Dirichlet polynomial of length $N$ over $\mathcal{P}_0$-smooth numbers will be a polynomial of degree at most $\log N/\log 2$ in $\pi_0(N)$ complex variables. Hence the norm on the right-hand side of \eqref{eq:sub} can be taken to be the $\pi_0(N) \log \log N$-norm. Combining this observation with Theorem~\ref{show}, we then get the following result which yields an improvement when $\pi_0(x)=o(\log x/\log\log x)$.
\begin{theorem}\label{thm:reis} There exists an absolute constant $C$ such that
\begin{equation}\label{d}\Vert S_N\Vert_{\mathcal{H}^{\infty}(\mathcal{P}_0)} \leq C \pi_0(N) \log\log N \end{equation}
when $\pi_0(N)\ge 1$ and $\log\log N\ge 2$.
\end{theorem}
Following the proof of \cite[Thm. 5.2]{BBSS} word for word, we may obtain a similar result for $\| S_N \|_{\mathcal{H}^{1}(\mathcal{P}_0)}$ with $\pi_0(N)\log\log N$ replaced by the logarithm of the maximal order of the divisor function at $N$ when restricted to $\mathcal{N}_0$. In contrast to \eqref{d}, this bound is nontrivial for all sets of primes $\mathcal{P}_0$. In particular, it yields $\| S_N\|_1 \ll \log\log N$ when $\mathcal{P}_0$ is a finite set and $\| S_N\|_1\ll \log N/\log\log N$ when $\mathcal{P}_0$ is the set of all primes, since then the logarithm of the maximal order of the divisor function at $N$ is $O(\log N/\log\log N)$.
\section{Introduction:}
\label{intro.sec}
The history and present status of the atmospheric Cherenkov imaging
technique has been reviewed by Ong (1998). Its great contribution to
ground-based VHE astronomy has led to burgeoning designs for ``next
generation'' instruments of increased collection area and complexity,
one of which is the Very Energetic Radiation Imaging Telescope Array
System (VERITAS), first proposed to the Smithsonian Institution in
1996. Our design study has culminated in a detailed proposal by Weekes
et al. (1999) to build an array of 10m aperture Cherenkov telescopes
in Montosa Canyon in southern Arizona. This is a topographically flat,
dark site, at 1390 m a.s.l., close to the Whipple Observatory which
will provide the necessary infrastructure.
VERITAS will consist of six telescopes located at the corners of a
hexagon of side 80\,m with a seventh at the centre. The telescopes'
structure will be similar to the design of the Whipple 10m reflector,
which has withstood mountain conditions for over thirty years. By
employing largely existing technology in the first instance and
stereoscopic imaging, the power of which has recently been
demonstrated by HEGRA (Daum et al., 1997), we expect VERITAS to achieve the
following:
\begin{verse}
1) {\it Effective area}: $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}}$0.1\,km$^2$ at 1\,TeV.\\
2) {\it Effective energy threshold}: $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$100\,GeV with significant
sensitivity at 50\,GeV.\\
3) {\it Energy resolution}: 10\% - 15\% for events in the range 0.2 to 10\,TeV.\\
4) {\it Angular Resolution}: $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$0.05$^\circ$ for
individual photons; source location to better than 0.005$^\circ$.\\
\end{verse}
\begin{figwindow}[1,r,%
{\mbox{\epsfig{file=relsens.ps,width=4in}}}
VERITAS' sensitivity to point-like sources as compared to those of Whipple,
MAGIC, CELESTE/STACEE, GLAST and MILAGRO (Weekes et al. (1999) and references
therein).]
The performance of VERITAS is perhaps best summarised by its flux
sensitivity versus energy, shown in Figure 1 for an object of spectrum
dN/dE $\propto$ E$^{-2.5}$. Here we define a minimum detectable flux
for VERITAS as that giving a 5$\sigma$ excess of $\gamma$-rays above
background (or 10 photons where the statistics become Poissonian). We
expect to detect sources which emit at levels of 0.5\% of the Crab
Nebula at energies of 200\,GeV in 50 hours of observation. VERITAS,
together with the southern hemisphere Cherenkov telescope arrays HESS
and CANGAROO \mbox{-III}, will obtain high sensitivity in the 100\,GeV to
10\,TeV range between space-borne instruments and air shower
arrays. If sources of UHE cosmic rays are discovered Cherenkov
telescopes may further localise and identify them. Also, the
MILAGRO wide-field water Cherenkov detector will be sensitive to
transient sources, which, once detected can be studied in more detail
by VERITAS.
This report highlights the physics goals and some technical aspects of
VERITAS emerging from the core proposal/design study. Monte Carlo
simulations of the array's performance are presented by Vassiliev et
al. (1999).
\end{figwindow}
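The 5$\sigma$ criterion above translates into required observation time through the familiar $S/\sqrt{B}$ scaling. The following snippet illustrates the arithmetic (a back-of-the-envelope sketch; the rates used are hypothetical placeholders, not VERITAS performance figures):
\begin{verbatim}
# Exposure needed for S/sqrt(B) = n_sigma with constant signal and
# background rates: S = r_s*t, B = r_b*t  =>  t = n_sigma^2 * r_b / r_s^2.
def hours_to_detection(r_sig_per_min, r_bkg_per_min, n_sigma=5.0):
    minutes = n_sigma**2 * r_bkg_per_min / r_sig_per_min**2
    return minutes / 60.0

# e.g. 0.25 gamma/min on a background of 2 events/min -> ~13 h
print(hours_to_detection(0.25, 2.0))
\end{verbatim}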
\section{Physics Highlights:}
\label{Phys.sec}
At a capital cost of $\sim$\$16\,M, less than 10\% of that of the
Gamma-ray Large Area Space Telescope, VERITAS will be an excellent
investment in terms of scientific return.
The large effective area of VERITAS ($\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$>$}}} 2.5\times 10^4$\,m$^2$ at
200\,GeV) will allow accurate measurements of extremely short
variations in $\gamma$-ray flux. We illustrate this in Figure 2. For
dense temporal coverage, VERITAS can be divided into dedicated
sub-arrays, with one sub-array observing a single object, e.g. throughout a
multi-wavelength campaign. Thus, we minimise the impact on other
scientific programmes.
\begin{figure}[t]
\centerline{\epsfig{file=main_m4_960515_col.ps,width=4.5in}}
\caption{{\it Left:} Observation of a VHE flare from Mrk 421
(Gaidos et al. 1996). The dashed curve is a possible intrinsic flux
variation consistent with this data. {\it Right:} Simulated
response of VERITAS to such a flare above 200\,GeV (GLAST would detect
$\sim$3 photons above 1\,GeV from a flare of this duration and power).
\label{main-960515-fig}
}
\end{figure}
\subsection{Extragalactic Astrophysics:}
\label{Exgal.sec}
We estimate that VERITAS will detect $\sim$11 of the active galactic
nuclei identified at EGRET energies, or more if they are observed in
high emission states. VERITAS should also detect $\ge$ 30 X-ray
selected BL Lac objects (based on the spectra of Mrk 421 and Mrk 501)
and possibly the ``Extreme BL Lacs'' hypothesised by Ghisellini
(1999). We hope to distinguish intrinsic spectral features from those
due to pair-production on the IR background. A large sample of energy
spectra for a single class of blazar will dramatically improve our
estimate of the background IR density.
Gamma-ray bursts should be visible out to z$\approx$1 or more at
energies $\le$100 GeV. VERITAS' rapid slew speed, good angular
resolution and field of view (up to 10$^\circ$ with offset pointing of
individual telescopes to cover the position error box) make it
excellent for counterpart searches. Attenuation at high
energies from interaction with background IR fields could provide
distance bounds, if source energetics are known.
\subsection{Galactic Astrophysics:}
\label{Gal.sec}
For a typical supernova remnant (SNR) luminosity and angular extent,
VERITAS should be able to detect such objects within 4\,kpc of Earth
according to the Drury, Aharonian \& V\"olk (1994) model of
$\gamma$-ray production by hadronic interactions ($\sim$ 20 shell-type
SNRs are known to lie within that range). As regards plerions and
pulsars, VERITAS should be sufficiently sensitive to detect Crab-like
objects anywhere within the Galaxy if their declination is
$>-28^\circ$. The detection of pulsed $\gamma$-rays above 50\,GeV
might be decisive in favor of the outer gap pulsar model over the
polar cap model, as the latter predicts a sharp spectral cut-off at
low energies.
For an 80-night survey of the galactic plane region $0^\circ < l <
85^\circ$, VERITAS would be sensitive to fluxes down to $\sim$0.02
Crab above 300\,GeV. This part of the sky includes 19 young, energetic
pulsars and 12 hitherto unidentified EGRET sources. In addition,
estimates of the annihilation line flux for neutralinos at the
galactic center by Bergstr\"om, Ullio \& Buckley (1998) predict a
signal potentially detectable by VERITAS.
\section{Design:}
\label{design.sec}
\subsection{Telescope Structure:}
The telescopes will be constructed following the Davies-Cotton
reflector design with spherical and identical facet mirrors of pyrex
glass (slumped and polished, aluminised and anodised) to provide the
optimum combination of optical quality and cost effectiveness. A
study is underway to design a stress-free mounting scheme for the
hexagonal (60\,cm flat to flat) mirrors incorporating remotely
controlled motorized alignment. The time spread in light across the
proposed $f$/1.2 reflector is only 3-4\,ns and 100\% of the light from
a point source is captured by a 0.125$^\circ$ pixel out to 1$^\circ$
from the optical axis, decreasing to 72\% at 2$^\circ$. It is then
possible to match the inherent angular fluctuations in the shower images with
a camera that has a reasonable number of pixels (499) and a field of
view (3.5$^\circ$) which is large enough to conduct surveys and
observe extended ($\le1^\circ$) sources efficiently. For the optical support
structure of trussed steel on a commercially available pedestal,
effects due to gravitational slumping during slewing will be less than
2.2 mm on the camera face. Slew speed can be as high as 1$^\circ$ per
second on both axes.
\subsection{Electronics:}
\label{elec.sec}
At present, the need for a low-noise, high-gain ($>$ 10$^6$),
photon-counting detector with risetimes of less than a few ns is satisfied
only by photomultipliers (PMTs). The Hamamatsu R7056, with a bialkali
photocathode, UV glass window and 25\,mm active diameter, meets our
requirements. The collection efficiency will be increased by Winston cones.
The spacing between the PMTs will correspond to a focal plane angular
distance of $0.15^\circ$. We plan to use a modular high voltage supply
(e.g. LeCroy 1458) where each PMT has a separately programmable high
voltage, supplied from a system crate located at the base of the telescope.
Each PMT signal will be taken via a linear amplifier in the focus box
to a custom CFD/scaler module. The targeted gain of the PMT plus
amplifier (based on a standard 1GHz bandwidth integrated circuit chip,
AD8009) provides a signal level of $\sim$2\,mV/photoelectron. Optical
fibre signal lines may be an attractive alternative to RG58 coaxial
cabling allowing the CFD and downstream electronics to be located in
the central control building. A prototype multi-channel analog
optical fiber system is now being installed on the Whipple 10\,m
telescope.
The CFD/scaler board, incorporating an adjustable channel by channel
delay, will provide an analog fanout of the PMT signal to an FADC
system. A prototype 500\,MHz FADC system, each channel using a
commercially available 8-bit FADC integrated circuit, has been
developed and tested successfully on the Whipple telescope (Buckley et
al. 1999).
VERITAS will operate at a minimal threshold by requiring a time
coincidence of adjacent pixel signals to form a single telescope
trigger and $>$ n coincident telescope triggers to initiate data
recording. For example, at a threshold of 5 photoelectrons the CFD
trigger rate of a single pixel will be $\sim$300\,kHz. A telescope
trigger will then require a coincidence of $\ge$ 3 {\it neighbouring}
pixel signals. This topological trigger will be similar to that used
on the Whipple telescope (e.g. Bradbury et al. 1999). Telescope
triggers will be received at a central station where they can be used
to immediately initiate a telescope readout, or delayed to account for
orientation of the shower front (e.g. by a CAEN V486 digital delay)
and combined in a more complex trigger requirement. For example, if an
array trigger requires that 3 of 7 telescopes trigger within a 40\,ns
coincidence window then the {\it accidental} array trigger rate is $<$
1Hz at the 5 photoelectron trigger threshold.
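The quoted accidental rate can be checked with the standard $k$-fold chance-coincidence approximation (a sketch; the per-telescope random trigger rate used below is an assumed placeholder, not a design number):
\begin{verbatim}
# Chance rate for >= k of n detectors firing within a window tau:
# R ~ C(n,k) * k * r^k * tau^(k-1), valid for r*tau << 1.
from math import comb

def accidental_rate(r_hz, n, k, tau_s):
    return comb(n, k) * k * r_hz**k * tau_s**(k - 1)

# assumed 10 kHz random rate per telescope, 3-of-7, 40 ns window
print(accidental_rate(1.0e4, n=7, k=3, tau_s=40e-9))  # ~0.2 Hz, i.e. < 1 Hz
\end{verbatim}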
The acquisition system architecture will be based largely on a fast
VME backplane and distributed computation by local CPUs running a
real-time operating system. For an array trigger rate of $\sim$1\,kHz
each telescope is expected to have an average data flow rate of
$\sim$800\,kbyte/s, resulting in a 200\,kbyte/s rate on any VME
backplane. Each crate controller will be connected to a local
workstation which in turn will communicate with a central CPU. The
central CPU will perform control and quicklook functions, data
integration and compression. Telescope guidance and high voltage
control will be performed by inexpensive Pentium PCs.
\section{Conclusion:}
\label{Con.sec}
Our aim is to commission the first VERITAS telescope in 2001. The 10m
Whipple telescope nearby will operate throughout the construction
phase and serve as a test-bed for innovative technologies. By staged
construction of the array over a four year period, we expect to
maintain an astronomical facility at all times. VERITAS is expected to
be in routine operation prior to the launch of GLAST in 2005.
\vspace{1ex}
\begin{center}
{\Large\bf References}
\end{center}
Bergstr\"om, L., Ullio, P. \& Buckley, J.H. 1998, Astrop. Phys., 9, 137\\
Bradbury, S.M. et al. 1999, Proc. 26th ICRC (Salt Lake City, 1999), OG 4.3.21\\
Buckley, J.H. et al. 1999, Proc. 26th ICRC (Salt Lake City, 1999), OG 4.3.22\\
Daum, A. et al. 1997, Astrop. Phys., 8, 1\\
Drury, L.O'C., Aharonian, F.A. \& V\"olk, H.J. 1994, A\&A, 287, 959\\
Gaidos, J.A. et al. 1996, Nature, 383, 319\\
Ghisellini, G. 1999, in ``TeV Astrophysics of Extragalactic Sources'',
Astrop. Phys., eds. M. Catanese \& T.C. Weekes, in press\\
Hurley, K. 1994, Nature, 372, 652\\
Ong, R.A. 1998, Physics Reports, 305, 93\\
Vassiliev, V.V. et al. 1999, Proc. 26th ICRC (Salt Lake City, 1999), OG 4.3.35\\
Weekes, T.C. et al. 1999, VERITAS, proposal to SAGENAP\\
\end{document}
\section{Introduction and notation}
The expectation value of the vector current
between two pion fields may be written as
$$
\langle \pi^\pm(p') \arrowvert j^\mu \arrowvert \pi^\pm(p) \rangle = (p+p')^\mu
F_\pi^V(q^2 )\ .
$$
Since the only form factor that we discuss is the charged pion form
factor, we will denote it simply as $F(q^2)$, where $q=p-p'$. It is conventionally
normalized as $F(0)=1$.
An expansion around zero momentum transfer allows for a physical
interpretation of the form factor in terms of the pion's rest frame
charge density $\rho(r)$, given by
\begin{equation}
\label{Fexpansion}F(t)= 1+
\frac{1}{3!} \langle r^2\rangle_\rho t + \frac{1}{5!} \langle
r^4\rangle_\rho t^2+ {\mathcal O}(t^3) \ . \end{equation}
Here we used the notation of
Ref.~\cite{hofstaedter}. Alternatively, in Ref.~\cite{Gasser:1990bv}
the curvature of the form factor was introduced via
\begin{equation}\label{Fexpansion2}%
F(t)=1+\frac{1}{6}\langle r^2 \rangle t + c_V^\pi t^2 + {\mathcal O}(t^3) \ . \end{equation}%
Comparing with Eq. (\ref{Fexpansion}) we see that
$$
c_V^\pi = \frac{1}{5!} \langle r^4 \rangle \ .
$$
The first term in the form--factor expansion is the conventional
charge normalization $\int d^3r \rho(r)=\langle 1 \rangle_\rho = 1$,
and the derivative at the origin provides the (vector or charge)
pion radius $\langle r^2 \rangle =6 (dF/dq^2)(0)$. At the next order, the
non-relativistic interpretation should be modified as effects of
boosting the pion wave function should begin to appear. In this
article we will ignore this subtlety, and simply use equivalently
the term pion's mean quartic radius or form factor curvature. We
will mainly focus on this quantity. Using both chiral perturbation
theory and dispersion relations we find a reliable value for the
curvature.
Assuming that the $\rho\pi\pi$ coupling is largely independent of
the quark masses for a given quark mass dependence of $m_\rho$, we
can also predict the quark mass dependence of the curvature.
The quark mass dependence of the $\rho$ properties
was studied in various recent lattice
simulations~\cite{Aoki:1999ff,Gockeler:2008kc} as well as
using unitarized chiral perturbation theory~\cite{MadridJuelich}.
Abundant data on the pion form factor exists, that can be obtained
from the Durham reaction database. For the timelike form factor we
use contemporary sets from the CMD2, KLOE, and SND experiments
\cite{data:novosibirsk,Aloisio:2004bu,achasov}.
In addition there is
higher energy data from Babar\cite{Solodov:2002xu} that
shows the $\rho(1700)$ and a shoulder that could correspond to the
$\rho(1400)$. However, in our study we employ only the $\rho(770)$.
We will therefore not extend our study beyond 1.2 GeV, where in
addition $K{\bar K}$ and other inelastic channels start to
contribute significantly. It is this condition that prevents us from
studying the radius instead of the curvature, as will be explained
below.
In the case of the spacelike pion form factor the data is taken from
the CERN NA7 collaboration \cite{Amendolia:1986wj}. The more recent
data from JLAB~\cite{Volmer:2000ek} was taken at values of $Q^2$ too
large for our study. For a recent review on the status of the
spacelike form factor see Ref.~\cite{Huber:2007zzb}.
The two-loop chiral perturbation theory ($\chi$PT) analysis of
\cite{Bijnens:2002hp} yielded a mean quadratic radius of
\begin{equation}%
\langle r^2 \rangle = 0.452(13)\ {\rm
fm}^2 \ ,
\end{equation}%
which is the currently accepted value~\cite{Yao:2006px}. In order
to get a feeling for what to expect for the quartic radius, we start
with some simple classical examples. For this discussion we will
divide it by the mean square radius squared, the resulting ratio
$R\equiv\langle r^4\rangle/\langle r^2\rangle^2$ quantifies the radial spread of the
charge distribution. For a charge conducting sphere the spread is
minimal with $R = 1$ (all the charge is at the surface), and for a
uniformly charged dielectric sphere $ R = 25/21$. On the other hand,
the ratio is as large as 5/2 for a charge distribution with an
exponential dependence on the radius $e^{-mr}$. A vector-meson pole
form factor
$$
F(t) = \frac{1}{1-t/m_\rho^2}
$$
gives an even higher value of 10/3~\cite{Gasser:1990bv}.
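The last value follows from a one-line expansion, recorded here for convenience:
\[ \frac{1}{1-t/m_\rho^2}=\sum_{n\ge 0}\Big(\frac{t}{m_\rho^2}\Big)^{n} \quad\Longrightarrow\quad \langle r^2\rangle=\frac{6}{m_\rho^2}\,,\qquad \langle r^4\rangle=\frac{120}{m_\rho^4}\,,\qquad R=\frac{120}{36}=\frac{10}{3}\,, \]
upon comparing with Eq.~(\ref{Fexpansion}).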
Some results for the pion's quartic radius, including this work,
are collected in Table~\ref{tablequartic}.
\begin{table}[t]
\caption{Current theoretical estimates of the pion's quartic radius
\label{tablequartic}. The lattice results are our estimate based on
the form factor data from \cite{Boyle:2008yd}.}
\begin{center}
\begin{tabular}{|c|c|}
\hline
$\langle r^4\rangle/\langle r^2\rangle^2$ & Method \\
\hline
$3.3$ & VMD~\cite{Gasser:1990bv} \\
$2\pm 2.5$ & Lattice $m_\pi=0.33$~GeV~\cite{Boyle:2008yd}\\
$2 \pm 2.3$ & Lattice extrapolated to $m_\pi=0.139$~GeV~\cite{Boyle:2008yd}\\
$4 \pm 2$ & NNLO $\chi$PT~\cite{Bijnens:1998fm} \\
$3.1\pm0.4$ & Pad\'{e} approximants~\cite{Masjuan:2008fv} \\
$3.6 \pm 0.6$ & Eq. (\ref{curvatureq}) this work \\\hline
\end{tabular}
\end{center}
\end{table}
Furthermore, analyses of the pion electromagnetic form factor data
based on analyticity have constrained the curvature to the range
$[0.25~{\rm GeV}^{-4},7.57~{\rm GeV}^{-4}]$ in
Ref.~\cite{Caprini:1999ws} and to $[2.3~{\rm GeV}^{-4},5.4~{\rm
GeV}^{-4}]$ in a very recent analysis~\cite{Ananthanarayan:2008mg}.
Of particular interest for us, and for a lattice determination, is
the quark mass dependence, or $m_\pi$ dependence, of the quartic
radius. A study of this within chiral perturbation theory would
require control of the N$^3$LO Lagrangian, since the quartic radius
is NNLO itself, and this seems out of today's reach.
We examine the problem with the simplifying assumption that
$g_{\rho\pi\pi}$ is $m_\pi$--independent while the $m_\pi$
dependence of $m_\rho$ is taken from other sources. To control the
model dependence, we employ the Omn\`es representation of the form
factor, sketched in Subsection \ref{Omnesintro} below. In the
absence of form factor zeroes, and neglecting inelastic channels,
this only requires knowledge of the elastic pion--pion scattering
phase shifts. We parametrize them, with a simple Breit--Wigner model
described in Subsection \ref{BWsubsection}. Since this model
contains $g_{\rho\pi\pi}$ and $m_\rho$ as the only parameters, the
above assumption can be employed in a straightforward way. In
addition we also use the predictions of unitarized chiral
perturbation theory for both rho mass and coupling. The resulting
quark mass dependence of the rho properties were investigated in
Ref.~\cite{MadridJuelich}. Using this alternative parametrization we
get almost identical results. The pion mass dependence of the
curvature turns out to be similar to that of the square radius.
\section{Omn\`es representation of the form factor}
\subsection{Basics}\label{Omnesintro}
The Omn\`es equation \cite{Omnes:1958hv} encodes the analyticity
properties of the pion form factor $F(s)$, that has an elastic
unitarity cut on the positive $s$-axis for $s\in(4m_\pi^2,\infty)$,
and is otherwise analytic. Further superimposed cuts due to
inelastic channels are neglected in its derivation, and the form
factor is assumed to have no zeroes (which, as we know today, is
phenomenologically correct). We have explored the possibility of
zeroes in the complex plane by analytically continuing the
experimental data with the help of the Cauchy--Riemann equations
\cite{GimenoSegovia:2008sx}. For a small band around the real axis,
they can be excluded. Some remarks on inelastic channels can be
found in \cite{Leutwyler:2002hm}.
The starting point is the well known relation ${\rm
Im}(F)=\tan\delta_{11} {\rm Re}(F)$, which relates the discontinuity
in the vector form factor to the elastic scattering phase shift in
the vector--isovector channel. From this relation the Watson theorem
follows straightforwardly. Since the large-$q^2$ asymptotic
behavior of the form factor is known from QCD counting
rules~\cite{brodskyfarrar}, $F(q^2)\to c/q^2$, as a matter of
principle one may write an unsubtracted dispersion relation, which
reads for arbitrary $t$ \begin{equation} \label{omnes} F(t)=\frac{1}{\pi}
\int_{4m_\pi^2}^\infty ds \tan \delta_{11}(s)
\frac{{\rm Re}(F(s))}{s-t-i\epsilon}\ .
\end{equation}%
We wrote ``as a matter of principle'' since the QCD counting
rules apply only at energies where elastic scattering has been rendered
irrelevant by the numerous open inelastic channels. However, in this work we only
want to use low-energy input (up to 1.2 GeV) and we will therefore
use a subtracted dispersion relation below and cut the high-energy
contributions with a cut--off.
The
variation of the results with this cut--off provides a
systematic uncertainty in our work, which, as a consequence of
the subtraction, turns out to be moderate.
If there are no bound-state poles, as is the case for $\pi\pi$
scattering at physical quark masses, and the form factor does not vanish
anywhere in the complex plane, as we presume for $F(t)$, the
celebrated solution family of this equation provides a
representation of the form-factor in terms of the scattering phase,
known as the Omn\`es representation. The standard treatment proceeds
by deriving an integral equation for $\log F(t)$ instead of
$F(t)$ itself, \begin{eqnarray} \label{logomnes} \log F(t) =
\frac{1}{2\pi i} \int_{4m_\pi^2}^\infty \frac{ds}{s-t} \left( \log
F(s+i\epsilon) - \log F(s-i\epsilon) \right)
= \frac{1}{\pi} \int_{4m_\pi^2}^\infty \frac{ds}{s-t}
\delta_{11}(s) \ . \end{eqnarray}
Instead of this relation we may use a subtracted
version. This will allow us to effectively suppress
the high energy behaviour of the phase shifts. In particular we will use a twice subtracted version
which reads after exponentiation
\begin{equation}\label{omnesrep2} %
F(t) =\exp\left(P_1 t+\frac{t^2}{\pi} \int_{4m_\pi^2}^\infty ds
\frac{\delta_{11}(s)}{s^2(s-t-i\epsilon)} \right) \ .
\end{equation}%
Note, the normalization condition of the form factor
prohibits a constant term in the exponent.
The constant $P_1$ can be identified with the
square radius of the pion
$$
P_1 = \langle r^2\rangle /6 \ .
$$
This
representation of the form factor has been used in
the literature, see for example \cite{guerrero}.
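For later use we record the Taylor expansion of Eq.~(\ref{omnesrep2}) (a one-line check, added here for clarity):
\[ F(t)=1+P_1 t+\Big(\frac{P_1^2}{2}+\frac{1}{\pi}\int_{4m_\pi^2}^\infty \frac{\delta_{11}(s)}{s^{3}}\,ds\Big)t^2+{\mathcal O}(t^3)\ , \]
so that comparison with Eq.~(\ref{Fexpansion2}) identifies $P_1=\langle r^2\rangle/6$ and directly yields the curvature formula derived in the next subsection.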
\subsection{Phase--shift representation of the quartic radius}
Recalling the definition of the curvature of the pion form factor
(c.f. Eq.~(\ref{Fexpansion2})) we may read off an expression for
$c_V^\pi$ directly from Eq.~(\ref{omnesrep2}):
\begin{equation} %
\label{curvatureq} c_V^\pi = \frac{\langle r^4 \rangle}{120} =
\frac{1}{72} \langle r^2 \rangle^2 + \frac{1}{\pi}
\int_{4m_\pi^2}^\infty ds \frac{\delta_{11}(s)}{s^{3}}
\end{equation} %
which is quite a beautiful formula, since it allows for a third,
independent extraction of the curvature $c_V^\pi$, besides
NNLO $\chi$PT and fits to spacelike data beyond
the linear fall-off in $t$, where the uncertainties grow large.
Here, instead, only
the elastic phase shift and the square radius enter.
In addition, since the quantity
\begin{equation}%
\label{curvaturefromphase} \tilde{c}_V^\pi\equiv c_V^\pi -
\frac{1}{72} \langle r^2 \rangle^2
\end{equation}%
is described solely in terms of the $\pi\pi$ $p$--wave phase shifts,
its quark mass dependence is closely linked to that of the
$\rho$--meson properties. This relation is analogous to others
existing for the mean--square radius~\cite{Gasser:1990bv,Oller:2007xd}.
\subsection{A simple Breit--Wigner model} \label{BWsubsection}
To provide an estimate of the form factor based on the Omn\`es
representation, we employ a simple relativistic Breit--Wigner model
of the scattering amplitude, in which an $s$--channel resonance
dominates the scattering \begin{equation} a_{11}(s) =
\frac{c}{s-m_\rho^2-im_\rho\Gamma_{\rm tot}(s)} \end{equation} where
$\Gamma_{\rm tot}$ is the total width of the $\rho$ resonance with
\begin{equation} \Gamma_{\rm tot} = \frac{g_{\rho\pi\pi}^2 p^3}{6\pi m_\rho^2} =
\frac{g_{\rho\pi\pi}^2 (\frac{s}{4}-m_\pi^2)^{3/2}}{6\pi m_\rho^2}\
. \end{equation} We neglected terms of order $\Gamma_{\rm tot}^2$.
Here $c$ is
an irrelevant constant that may be expressed in terms of $m_\rho$
and $\Gamma_{\rm tot}$. We may write
\begin{equation}%
\delta_{11}(s) = \arctan \frac{{\rm Im} a_{11}(s)}{{\rm Re}
a_{11}(s)} = \arctan \frac{m_\rho \Gamma_{\rm
tot}(s)}{m_\rho^2-s} \ .
\end{equation}%
With this phase variation the integral representation converges
without additional subtractions (although the high-energy tail is
ad hoc); however, even values as large as $s=7$ GeV$^2$ contribute
to the integral.
A good fit can be seen in Fig.~\ref{fig:phase} for the phase shift
and square form--factor modulus. To produce the figures we use
$m_\pi^{\rm phys}=139$~MeV, $m_\rho=775$~MeV, $\Gamma_\rho^{\rm
phys}=150$~MeV (to determine $g_{\rho\pi\pi}$).
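As an illustration, the dispersive integral of Eq.~(\ref{curvatureq}) with this Breit--Wigner phase can be evaluated in a few lines (a numerical sketch, not the analysis code used for the numbers below; only the quoted masses, width and $\langle r^2\rangle$ enter):
\begin{verbatim}
# c_V from Eq. (curvatureq) with the Breit-Wigner phase of this subsection.
import numpy as np
from scipy.integrate import quad

m_pi, m_rho, gam = 0.139, 0.775, 0.150      # GeV
r2 = 0.452 / 0.1973**2                      # <r^2> = 0.452 fm^2 in GeV^-2

p3 = lambda s: (s/4.0 - m_pi**2)**1.5       # pion momentum cubed
g2 = 6*np.pi*m_rho**2*gam / p3(m_rho**2)    # g_rho_pi_pi^2; g ~ 6.0

def delta11(s):                             # phase chosen in (0, pi)
    width = g2 * p3(s) / (6*np.pi*m_rho**2)
    return np.arctan2(m_rho*width, m_rho**2 - s)

for cut in (1.0, 2.0, 4.0):                 # cut-off in GeV
    I = quad(lambda s: delta11(s)/s**3, 4*m_pi**2, cut**2, limit=200)[0]
    print("cut-off %.0f GeV: c_V ~ %.2f GeV^-4" % (cut, r2**2/72 + I/np.pi))
\end{verbatim}
Varying the cut-off as above reproduces the value quoted in Eq.~(\ref{eq:cvno}) below within its uncertainty.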
\begin{center}
\begin{figure}[t]
\parbox{7cm}{\includegraphics[width=7cm,angle=0]{Rhophase_notitle.eps}
\vspace{0.01cm}}
\parbox{7cm}{\includegraphics[width=7.4cm,angle=-0]{Formfactortimespace_notitle.eps}
\vspace{0.02cm}}
\caption{The scattering phase in the vector channel (left) for the
Breit--Wigner model (dashed line) and the Inverse Amplitude Method
(solid line). We also plot the square form factor modulus (right).
To be able to plot the spacelike and timelike data together, the
first is plotted against the unphysical variable $-\sqrt{-q^2}$ with
$q^2$ the (negative) spacelike momentum transfer. Here we use
$\langle r^2\rangle$ as input as described in the text.
\label{fig:phase}}
\end{figure}
\end{center}
Using the formula given above we may now extract
the curvature directly from the elastic pion
phase shifts, using the square radius as input.
We find
\begin{equation}%
\label{eq:cvno}
c_V^\pi = 3.75\pm0.33 \ \mbox{GeV}^{-4} \
\end{equation}%
where the uncertainty contains both the uncertainty in $\langle
r^2\rangle$ and the systematic uncertainty introduced by evaluating
the integral only up to finite values (we allow a large range from
1~GeV to 16~GeV for the variation of the cut--off, although the
integral has essentially converged for a cut--off of 2 GeV). It agrees with
the vector meson dominance estimate of about
3.5~GeV$^{-4}$~\cite{Gasser:1990bv} and it is consistent with the
constraint $[2.3~{\rm GeV}^{-4},5.4~{\rm GeV}^{-4}]$ from analyzing
the form factor data using analyticity~\cite{Ananthanarayan:2008mg}.
The advantage of our analysis is that it allows in addition for a
controlled estimate of the uncertainty. Equivalently, the result in
term of quartic radius is
\begin{equation}%
\langle r^4 \rangle = 0.68\pm0.06~{\rm fm}^4 \ .
\end{equation}%
As mentioned above we will investigate the quark mass dependence of
the pion form factor based on the assumption that $g_{\rho \pi\pi}$
is independent of the quark mass with the $m_\pi$ dependence of
$m_\rho$ taken from other sources. Since both parameters are
explicit in the parametrization given above, we may study the
resulting quark mass dependence of $c_V^\pi$, once that of $\langle
r^2\rangle$ is fixed.
\section{Chiral perturbation theory}
\subsection{General considerations\label{subsec:defNNLO}}
In order to determine the quark mass dependence of the square
radius, which is the input needed for the formalism described above,
we will use the results of $\chi$PT. Clearly, the curvature
$c_V^\pi$ as well as its quark mass dependence, could also be
determined in $\chi$PT directly. Depending on the fit and
systematics chosen in Ref.~\cite{Bijnens:1998fm}, which is a
two--flavor ${\mathcal O}(p^6)$ $\chi$PT calculation, its value could vary
between $2$ and $6$ GeV$^{-4}$, although the authors quote a value around
4 GeV$^{-4}$, in agreement with a previous estimate
\cite{Gasser:1990bv} (by fitting to form factor data, they obtain
$3.85$~GeV$^{-4}$). An ${\mathcal O}(p^6)$ fit in three--flavor $\chi$PT leads
to $4.49\pm0.28$~GeV$^{-4}$~\cite{Bijnens:2002hp}. Adopting
$c_V^\pi=4\pm 2$~GeV$^{-4}$ as the NNLO $\chi$PT result, we obtain
$\langle r^4 \rangle/ \langle r^2 \rangle^2= 4\pm2$. This value is
copied into Table~\ref{tablequartic}.
\subsection{Matching the Omn\`es representation}
We start by giving the chiral expansion of the vector form factor
\cite{gasserleutwyler} valid to NLO in $\chi$PT,
\begin{equation} \label{ffinchipt}%
F(t) = 1 + \frac{1}{6f_\pi^2}(t-4m_\pi^2)\bar{J}(t)
+\frac{t}{96\pi^2f_\pi^2}(\bar{l}_6-\frac{1}{3}) \ .
\end{equation}%
(To this order we are free to change $M,\ F$ to
the physical $m_\pi, f_\pi$, since the difference is of NNLO). In
this expression,
\begin{eqnarray}%
\bar{J}(t) = \frac{1}{16\pi^2} \left[ \sigma \log \left(
\frac{\sigma-1}{\sigma+1}\right)+2 \right]
\end{eqnarray}%
with $\sigma = \sqrt{1-4m_\pi^2/t}$. A common strategy is to fix the
$\bar{l}_6$ constant from the square charge radius
\cite{gasserleutwyler}
\begin{equation} \label{radio}%
\langle r^2 \rangle = \frac{1}{16\pi^2f_\pi^2} (\bar{l}_6-1)
\end{equation}%
which is correct up to ${\mathcal O}(m_\pi^2)$ in $\chi$PT. Higher orders in
the chiral expansion cannot bring in powers of $t$ since, by
definition, the charge squared radius is proportional to the
coefficient of the term linear in $t$ in the form factor. However,
they can bring additional constants to the right hand side (each of
a natural order suppressed by additional factors of $1/(4\pi
f_\pi)^2$), and, more important for our purposes, a polynomial of
$m_\pi^2$. To make sure we are not overlooking a critical $m_\pi$
dependence, we will compare the right-hand side of Eq.~(\ref{radio})
with the NNLO correction in chiral perturbation theory
\cite{Bijnens:1998fm}. The NLO result, Eq.~(\ref{radio}), which
depends only logarithmically on the pion mass (see
Eq.~(\ref{eq:lbarmpi}) below), is then extended to
\begin{eqnarray} %
\langle r^2 \rangle=
\frac{1}{16\pi^2f_\pi^2} \left[ \left( 1 +
\frac{m_\pi^2}{8\pi^2f_\pi^2}\bar{l}_4\right) (\tilde{l}_6-1) +
\frac{m_\pi^2}{16\pi^2f_\pi^2}\left(
16\pi^2\frac{13}{192}-\frac{181}{48}\right) \right] \end{eqnarray} %
with
\begin{equation}%
\label{tildeseis} \tilde{l}_6=\bar{l}_6 + 6 \frac{m_\pi^2}{f_\pi^2}
\left[ 16\pi^2 r^r_{V1}(\mu^2)+ \frac{1}{48\pi^2}\log \left(
\frac{m_\pi^2}{\mu^2} \right) \left(
\frac{19}{12}-\bar{l}_1+\bar{l}_2\right) \right]
\end{equation} %
where
$r^r_{V1}$ is a counterterm to be determined empirically, and we
will use the simple VMD estimate from the same work, at the $\rho$
scale,
$$
r^r_{V1}(m_\rho^2) \simeq -0.25 \times 10^{-3} \ .
$$
With this estimate, those authors find
$$
\tilde{l}_6 = \bar{l}_6 -1.44
$$
(the scale--dependence of this number cancels in
Eq.~(\ref{tildeseis}); the estimate is taken with constants
corresponding to set I, which we copy in Table
\ref{tabla:constantes}).
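As a rough numerical cross-check of Eq.~(\ref{radio}) (our arithmetic, with the standard $f_\pi=92.4$~MeV and $\bar{l}_6=16.6$ from Table~\ref{tabla:constantes}):
\[ \langle r^2\rangle=\frac{\bar{l}_6-1}{16\pi^2 f_\pi^2}\approx \frac{15.6}{1.35~{\rm GeV}^2}\approx 11.6~{\rm GeV}^{-2}\approx 0.45~{\rm fm}^2\ , \]
using $(\hbar c)^2\approx 0.0389$~GeV$^2\,{\rm fm}^2$, close to the accepted value quoted in the Introduction.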
Here we have to recall the pion mass dependence of the $\bar{l}$'s.
The $l_i$, as coefficients of the expansion in powers of $m_\pi^2$
of the Lagrangian density, are by definition pion--mass independent,
and so are their renormalized counterterms $l_i^r$. However, the
barred quantities are related to them by absorbing a chiral
logarithm
\begin{equation}%
l^r_i = \frac{\gamma_i}{32\pi^2} \left[ \bar{l}_i+\log\left(
\frac{m_\pi^2}{\mu^2}\right) \right]
\end{equation}%
that makes the $\bar{l}$'s scale--independent, but in exchange,
pion--mass dependent. This dependence needs to be kept track of in
the calculation. It becomes crucial in the chiral limit when the
pion radius diverges due to the virtual pion cloud becoming
long--ranged as the pion mass vanishes. This effect appears through
$\bar{l}_6$.
Therefore, we denote by $\bar{l}_i^{\rm phys}$ the value that the
low energy constants take by fitting to physical--world data. From
here on, when varying the quark (or pion) mass, one needs to change
the constant according to
\begin{equation}%
\label{eq:lbarmpi} \bar{l}_i = \bar{l}_i^{\rm phys} - \log \left(
\frac{m_\pi^2}{(m_\pi^{\rm phys})^2} \right)
\end{equation}%
With this we have all the input ready to use Eq.~(\ref{curvatureq})
also to establish the quark mass dependence of the curvature using
the Breit--Wigner representation of the phase shifts and the given
assumptions on the quark mass dependence of both the $\rho$ mass and
coupling.
However, before we proceed we introduce a method that allows one to
estimate the quark mass dependence of the $\rho$ properties directly
from the $\chi$PT amplitudes evaluated up to a given order, namely
unitarized chiral perturbation theory, also known as the inverse
amplitude method (IAM). The representation we are going to use is consistent
with NLO chiral perturbation theory at low momentum, and satisfies
exact elastic unitarity, fitting the pion scattering data up to
1.2~GeV well. The formalism will be introduced in the next
subsection.
\begin{figure}[t]
\begin{center}
\hglue-0.8cm\includegraphics[width=0.4\textwidth]{phaseofmBW_notitle.eps}
\hglue-0.0cm\includegraphics[width=0.4\textwidth]{phaseofmIAM_notitle.eps}
\end{center}
\vglue-0.2cm \caption{Variation of the elastic $\pi\pi$ phase
$\delta_{11}$ with the pion mass. Left: Breit--Wigner model. Note
that for $m_\pi= 3m_\pi^{\rm phys}$ the $\rho$ (held at constant
mass) has already crossed below the $\pi\pi$ threshold and is a
bound state. Right: Inverse Amplitude Method. The resonance stays
above the $\pi\pi$ threshold, its mass having a slight dependence on
$m_\pi$, until rather high pion masses. \label{fig:phaseBW}}
\end{figure}
\subsection{P-wave $\pi\pi$ scattering}
To derive the expression for the IAM one starts with the on--shell
$\pi\pi$ scattering amplitude in NLO $\chi$PT that, for $I=1$, is
\begin{equation} A_1(s,t,u)= A(t,s,u)-A(u,t,s) \end{equation} with
\begin{eqnarray} \label{pipiamplitude} \nonumber
A(s,t,u)&=& \frac{s-m_\pi^2}{F^2} + \frac{1}{6F^4}\left[3\bar{J}(s)
\left(s^2-m_\pi^4 \right)+
\bar{J}(t)\left(t(t-u)-2m_\pi^2t+4m_\pi^2u-2m_\pi^4\right)\right.
\\ \nonumber
&+&\left.
\bar{J}(u)\left(u(u-t)-2m_\pi^2u+4m_\pi^2t-2m_\pi^4\right)
\right] + \frac{1}{96\pi^2f_\pi^4} \left[ 2\left(\bar{l}_1-\frac{4}{3}\right)
\left(s-2m_\pi^2\right)^2
\right. \\ &+&\left.
\left(\bar{l}_2-\frac{5}{6}\right)\left(s^2+(t-u)^2\right) -3
m_\pi^4\bar{l}_3 -12m_\pi^2s +15m_\pi^4 \right] .
\end{eqnarray}
The first term can be identified as the leading order, low--energy
theorem \cite{weinberg}, but we express it in terms of the physical
$m_\pi$, instead of its leading order value $M$ used in the original expression
\cite{gasserleutwyler}. At the same time, we keep the
$m_\pi$--independent pion decay constant $F$. The quantities $F$
and $M$ are related to the physical ones via
$$
F=f_\pi\left(1-\frac{m_\pi^2}{16\pi^2f_\pi^2} \bar{l}_4 \right), \ \
\ M^2=m_\pi^2\left(1+\frac{m_\pi^2}{32\pi^2f_\pi^2} \bar{l}_3
\right).
$$
The latter expression introduces $\bar{l}_3$ into the last line of
Eq.~(\ref{pipiamplitude}).
The projection to the spatial $p$--wave has the usual factor of
$1/2$ to avoid double--counting quantum states by counting all
angular configurations with exchanged identical particles
\begin{equation}%
a_{11}(s) = \frac{1}{32\pi} \frac{1}{2} \int_{-1}^1 d\cos \theta
(\cos \theta) A_1(s,t(s,\cos \theta),u(s,\cos \theta)) \ .
\end{equation}%
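At leading order the projection can be carried out in closed form, which provides a quick check of the normalization (using $t-u=(s-4m_\pi^2)\cos\theta$ in the physical region):
\[ a_{11}^{\rm LO}(s)=\frac{1}{64\pi}\int_{-1}^{1}d\cos\theta\,\cos\theta\,\frac{t-u}{F^2}=\frac{s-4m_\pi^2}{96\pi F^2}\ , \]
which indeed vanishes at threshold, as a $p$--wave must.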
One can organize the chiral expansion as
\begin{equation}%
a_{11}(s) = a_{11}^{\rm LO}(s) + a_{11}^{\rm NLO}(s) +\ \dots
\end{equation}%
but the series truncated at any order only satisfies elastic unitarity
perturbatively.
This is solved, with the first two expansion terms, by
the Inverse Amplitude Method \cite{truong} that reads (suppressing
the spin and isospin subindices)
\begin{equation}%
a^{\rm IAM}(s) = \frac{a^2_{\rm LO}(s)}{a^{\rm LO}(s)-a^{\rm
NLO}(s)} \ .
\end{equation}%
A Taylor expansion of this amplitude returns NLO $\chi$PT as usual
for a Pad\'e approximant. However elastic unitarity is now exact,
and the possibility of a zero of the denominator allows for
resonances to appear.
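To make the exact unitarity explicit (a standard one-line check, spelled out here): on the elastic cut one has ${\rm Im}\,a^{\rm NLO}(s)=\sigma(s)\,[a^{\rm LO}(s)]^2$ with $\sigma(s)=\sqrt{1-4m_\pi^2/s}$, while $a^{\rm LO}(s)$ is real there, so
\[ {\rm Im}\,\frac{1}{a^{\rm IAM}(s)}={\rm Im}\,\frac{a^{\rm LO}(s)-a^{\rm NLO}(s)}{[a^{\rm LO}(s)]^2}=-\frac{{\rm Im}\,a^{\rm NLO}(s)}{[a^{\rm LO}(s)]^2}=-\sigma(s)\ , \]
which is precisely the elastic unitarity condition ${\rm Im}\,a(s)=\sigma(s)|a(s)|^2$.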
The associated phase shift
$$\delta_{11}^{\rm IAM}(s)= \arctan \left( \frac{{\rm Im} a_{11}^{\rm IAM}(s)}{ {\rm
Re} a_{11}^{\rm IAM}(s)} \right)
$$
may be directly employed for the time--like form factor through the
Omn\`es representation. A similar procedure was taken to calculate
the scalar and vector form factors of the
pion~\cite{Truong:1988zp,Guerrero:1998ei}.
The pion mass dependence of the $\rho$ meson properties were studied
in Ref.~\cite{MadridJuelich} and it was found that $g_{\rho \pi\pi}$
depends only very mildly on the quark mass. In the next section we
will investigate the consequences of this finding on the pion vector
form factor.
The low energy constants necessary to complete the calculation are
fitted to the phase shift data and given in
Table~\ref{tabla:constantes}, where they are compared to well--known
determinations. Note that with the phase shift data one can only
determine the difference ${\bar l}_2-{\bar l}_1$ which is about 6 as
a result of the fitting~\cite{Dobado:1996ps}. Using
Eq.~(\ref{curvatureq}), the curvature can then be obtained as
\begin{equation}%
c_V^\pi = 4.00\pm0.50~{\rm GeV}^{-4},
\end{equation}%
or equivalently,
\begin{equation}%
\langle r^4\rangle = 0.73\pm0.09~{\rm fm}^{4}.
\end{equation}%
The quantity depending solely on the phase shift is
\begin{equation}%
\tilde{c}_V^\pi = 2.13\pm0.42~{\rm GeV}^{-4}.
\end{equation}%
These values are to be considered as our results at the physical
pion mass.
\begin{table}[t]
\caption{ Values of the low energy constants of the NLO SU(2) Chiral
Lagrangian. We employ the last row in the calculation. For
comparison we give several well--known sets. The error refers to the
last significant figure. These are determinations based on data
alone. Several phenomenological and theoretical predictions based on
semi-analytical approaches (large $N_c$, Dyson--Schwinger, resonance
saturation, etc.) can be found in the literature \cite{various}.
\label{tabla:constantes} }
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline%
LEC &$\bar{l}_1$&$\bar{l}_2$
&$\bar{l}_3$ &$\bar{l}_4$ & $\bar{l}_6$
\\ \hline
Gasser-Leutwyler~\cite{gasserleutwyler} &$-2 \pm 4$ & $6\pm 1.3$ & $2.9\pm 2.4$ & $4.3\pm 0.9$ &
$16\pm 1$ \\
Dobado {\it et al.}~\cite{dgnmp} &$-0.6\pm 0.9$ & $6.3 \pm 0.5$ & $2.9\pm 2.4$ & $4.3\pm 0.9$ &
$16\pm 1$ \\
Bijnens {\it et al.} set I~\cite{Bijnens:1998fm} & $-1.7$ & $6.1$ & $2.4$ & $4.4\pm 0.3$
&
$16\pm 1$ \\
Bijnens {\it et al.} set II~\cite{Bijnens:1998fm} & $-1.5$ & $4.5$ & $2.9$ &
$4.3$ &
\\
This work & $0.1\pm 1.5$ & $6\pm 1.3$ & $2.9$(fix)
& $4.3\pm 0.9$ & $16.6\pm 0.4$ \\\hline
\end{tabular}
\end{center}
\end{table}
\section{Pion-mass dependence needed for lattice extrapolation}
\subsection{Mass dependence of the phase shift} \label{subsec:mpidep}
In this section we study the pion mass dependence of the pion vector
form factor based on both the Breit--Wigner model as well as the
amplitudes from the IAM. In the left panel of Fig.~\ref{fig:phaseBW}
we plot the variation in the isospin--1 $p$--wave elastic $\pi\pi$
phase shift $\delta_{11}$ with the pion mass in the Breit--Wigner
model, where the physical pion mass is denoted by $m_\pi^{\rm phys}$. For
small increases in the pion mass, with $m_\rho$ being held fixed for
illustration, we see how the resonance becomes narrower as the pion
threshold approaches. Finally, for $2m_\pi>m_\rho$, the $\rho$
becomes bound and the phase shift starts at 180 degrees in agreement
with Levinson's theorem with one bound state.
Next we consider the IAM. Here it is the
renormalized constants in the chiral Lagrangian ($l_i^r(\mu)$) that are held fixed,
since, as discussed, they are by definition independent of the pion
mass. The scale--independent $\bar{l}'s$ run logarithmically with
the pion mass. This dependence and the explicit pion masses in the
chiral series bring about a small $m_\pi$--dependence of the $\rho$
mass that puts it just above threshold for $m_\pi=3m_\pi^{\rm
phys}$. We plot in the right panel of Fig.~\ref{fig:phaseBW} the
resulting phase as a function of the $\pi\pi$ invariant mass for
different values of the pion mass.
The prediction of the pion mass dependence of the $\rho$ mass resulting from the IAM
is shown as the solid curve in Fig.~\ref{fig:mrho}. The
parameters used are ${\bar l}_1=-0.08,{\bar l}_2=5.78,{\bar
l}_3=2.9$ and ${\bar l}_4=4.3$.
In this figure we also show the results of
a recent lattice study~\cite{Gockeler:2008kc}.
For our comparison we choose this one, for it
is the simulation where the lowest pion masses
are used.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.58\textwidth]{mrho.eps}
\caption{Dependence of the $\rho$ mass on the pion mass. Here the $\rho$ mass is
defined as the value of $\sqrt{s}$, where
the $\pi\pi$ $p$--wave phase shifts cross
90 degrees. Shown are the result from the IAM
(solid line) and a fit using Eq.~(\ref{brunsfit}) (dashed line) to the lattice
data, shown as solid dots~\cite{Gockeler:2008kc}. The lowest point is
the physical $\rho$ mass. \label{fig:mrho}}
\end{center}
\end{figure}
To allow for a comparison with recent lattice data, here the $\rho$--mass is defined as the value
of $\sqrt{s}$, where the $\pi\pi$ phase shift
is 90$^\circ$. The resulting numbers
differ somewhat from those corresponding to
the real part of the pole position in
the second Riemann sheet --- the latter definition
of the mass was used in Ref.~\cite{MadridJuelich}. For comparison, we also show the very
recent lattice data~\cite{Gockeler:2008kc} as the
filled circles
with error bars. The agreement of the IAM with the lattice data is
rather satisfying. For later use, the lattice data are fitted with
an expression derived from an extended version of $\chi$PT~\cite{Bruns:2004tj}
\begin{equation}%
m_\rho = m_\rho^0 + c_1m_\pi^2 + c_2m_\pi^3 +
c_3m_\pi^4\log\left({m_\pi^2\over m_\rho^2}\right) \ .
\label{brunsfit}
\end{equation}%
The parameter $m_\rho^0$ is not included in the fit. It is fixed by the condition
that $m_\rho=0.77$~GeV at the physical pion mass.
We find for the $\rho$ mass in the chiral limit $m_\rho^0=0.77\pm 0.1$
GeV.
The
resulting parameters are
\begin{equation}%
c_1 = -0.53\pm0.44~{\rm GeV}^{-1}, \quad c_2 = 2\pm1~{\rm
GeV}^{-2}, \quad c_3 = -1\pm 3~{\rm GeV}^{-3}.
\end{equation}
Using the central values, we get the dashed curve as shown in
Fig.~\ref{fig:mrho}.
Note that the uncertainties in the parameters show some correlation;
however, since we are here mainly interested in a parametrization
of the lattice data, we may ignore this.
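For readers who wish to redo such a fit, the following Python sketch
illustrates how $m_\rho^0$ is eliminated by the physical--point condition
before $c_{1,2,3}$ are fitted. It is illustrative only: the lattice points
below are placeholders (the actual values must be taken from
Ref.~\cite{Gockeler:2008kc}), and the $m_\rho$ inside the logarithm of
Eq.~(\ref{brunsfit}) is approximated by its physical value.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

mpi_phys, mrho_phys = 0.139, 0.770          # GeV

def model(mpi, c1, c2, c3):
    def f(m):
        return (c1*m**2 + c2*m**3
                + c3*m**4*np.log(m**2/mrho_phys**2))
    mrho0 = mrho_phys - f(mpi_phys)         # physical-point condition
    return mrho0 + f(mpi)

# placeholder lattice points (GeV) -- not the real data
mpi  = np.array([0.30, 0.45, 0.60, 0.75])
mrho = np.array([0.81, 0.87, 0.97, 1.10])
err  = np.array([0.02, 0.02, 0.03, 0.03])

popt, pcov = curve_fit(model, mpi, mrho, sigma=err, p0=[-0.5, 2.0, -1.0])
print(popt, np.sqrt(np.diag(pcov)))
\end{verbatim}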
Since the pion mass grows faster with the quark
mass than the $\rho$ mass does, eventually the $\rho$ becomes bound (just as the $J/\psi$ lies
below the $D\bar{D}$ threshold), but this happens for yet larger
pion masses.
\subsection{Extrapolation in NLO and NNLO chiral perturbation theory}
Space--like form factors are in principle accessible on a lattice.
Since these studies usually employ heavier-than-real quarks, the
pion mass obtained is also higher than the physical pion mass, and
an extrapolation is necessary. Another extrapolation to low momentum
(due to the finite volume enclosed by the lattice) is necessary if
the mean square and quartic radii are to be extracted. The mean
square radius has indeed been studied before
\cite{Boyle:2008yd,Bunton:2006va}, with the extrapolation to physical pion
masses taken from chiral perturbation theory. It would be
interesting to have lattice data at several quark masses to test it.
Momentum extrapolations to $q^2=0$ are, in view of the mean quartic
radius, non-linear. In the extraction of the mean square radius, the
authors of \cite{Bunton:2006va} quote a 10\% systematic error in
the lattice extraction due to $m_\pi^2/(1~{\rm GeV}^2)$ $\chi$PT
errors, and 20\% due to $q^2_{\rm min}/(1~{\rm GeV}^2)$ momentum
extrapolation errors.
The momentum extrapolation, however, seems to be avoidable with twisted
boundary conditions for the fermion fields \cite{Boyle:2008yd}, and
indeed
those authors find
$$
\langle r^2\rangle_{330\ \rm MeV} = 0.35(3) \ {\rm fm}^2,\ \ \ \langle
r^2\rangle_{139\ \rm MeV} = 0.42(3) \ {\rm fm}^2\,,
$$
where the value at the physical pion mass is obtained with the help
of the NLO $SU(2)$ chiral Lagrangian.
\subsection{Chiral extrapolation assisted by the Omn\`es representation}
We have achieved a representation of the form factor based on the
Omn\`es representation, matched to low energy $\chi$PT. Since we
have relatively good theoretical control of the entire construction,
we can now extrapolate to unphysical quark (pion) masses.
The parametrization in Eq. (\ref{curvatureq}) requires two pieces of
input: the pion scattering $p$--wave phase shift and the mean square
radius. For the former we may either use the Breit--Wigner model --
together with additional assumptions on the $\rho$ properties -- or
the IAM, where the quark mass dependence is predicted from NLO
$\chi$PT -- higher order pion mass dependencies as they arise from
NNLO $\chi$PT are not yet studied in this framework.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7cm]{rsqofm_notitle.eps}
\caption{Pion mass dependence of the mean square radius. We show
results based on NLO and NNLO Chiral Perturbation theory. The
$r_{V1}^r$ parameter is fixed at its VMD value (its pion-mass
dependence contributing at NNNLO is neglected). Likewise we plot the
mass dependence resulting from a once-subtracted Omn\`es
representation. The data is normalized to the radius for physical
pion mass. \label{fig:r2mass}}
\end{center}
\end{figure}
The square radius has a NLO pion mass dependence caused by the
chiral logarithm in $\bar{l}_6$. This is a major effect for pion
masses smaller than physical, towards the chiral limit, but for pion
masses higher than physical (say the 330~MeV where the lattice data
is taken), the $m_\pi^2$ term from the NNLO Lagrangian density might
come to dominate, so we employ this too. Finally, we have an
order-of-magnitude countercheck at our disposal. By employing a
once--subtracted instead of a twice--subtracted Omn\`es
representation, we obtain a closed form for the mean square radius
in terms of the phase shift
\begin{equation} \label{radiusfromshift} %
\langle r^2 \rangle = \frac{6}{\pi} \int_{4m_\pi^2}^\infty ds
\frac{\delta_{11}(s)} {s^2} \ .
\end{equation} %
All three methods are plotted in Fig.~\ref{fig:r2mass}.
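To give an impression of the size of the once--subtracted result,
Eq.~(\ref{radiusfromshift}) can be evaluated numerically; the Python sketch
below uses a simple $p$--wave Breit--Wigner phase with indicative $\rho$
parameters and cut--off as stand--ins for the parametrizations employed in
the text:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbarc2 = 0.197327**2                     # GeV^2 fm^2
mpi, mrho, grho = 0.139, 0.770, 0.150    # GeV

def delta11(s):
    # p-wave Breit-Wigner phase, rising from 0 at threshold to pi
    p  = np.sqrt(s/4.0 - mpi**2)
    pr = np.sqrt(mrho**2/4.0 - mpi**2)
    gam = grho * (p/pr)**3 * (mrho/np.sqrt(s))
    return np.arctan2(mrho*gam, mrho**2 - s)

val, _ = quad(lambda s: delta11(s)/s**2, 4*mpi**2, 16.0, limit=200)
print(6.0/np.pi * val * hbarc2)          # <r^2> in fm^2
\end{verbatim}
With these inputs the sketch returns roughly $0.4~{\rm fm}^2$, in the right
ballpark of the measured mean square radius.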
\begin{figure}[t]
\hglue-8mm\includegraphics[width=0.6\textwidth]{r4band_iam.eps}
\hglue-1.5cm\includegraphics[width=0.6\textwidth]{ctildeband_iam.eps}
\vglue-5mm
\hglue-8mm\includegraphics[width=0.6\textwidth]{r4band_BW.eps}
\hglue-1.5cm\includegraphics[width=0.6\textwidth]{ctildeband_BW.eps}
\caption{Dependence on the pion mass of the mean quartic radius of the
pion (left panels) and $\tilde{c}_V^\pi$ (right panels). We show
results based on the Breit--Wigner model and the Inverse Amplitude
Method. The bands correspond to the uncertainties from the
parameters used (the ${\bar l}_i$'s for the IAM and the $c_i$ (c.f.
Eq.~(\ref{brunsfit})) for the Breit-Wigner) as well as from the
variation of the cut--off.\label{fig:r4mass}}
\end{figure}
The results for the pion mass dependence of the quartic radius using
the IAM and the Breit--Wigner model are plotted in
Fig.~\ref{fig:r4mass}(a) and (c), respectively, with the $m_\pi$
dependence of the square radius coming from that of ${\bar l}_6$ as
dictated by Eq.~(\ref{eq:lbarmpi}). In the IAM, the pion mass
dependence of $m_\rho$ is included intrinsically, while in the
Breit--Wigner method, it is input from fitting to the recent lattice
data~\cite{Gockeler:2008kc} as described at the end of
Subsection~\ref{subsec:mpidep}. The bands include the uncertainty
from varying the parameters within one sigma and that from varying
the integration cut--off from 1 to 16~GeV. The uncertainty from the
cut--off is the dominant one. For comparison, we also plot the
result of the Breit--Wigner model with fixed $\rho$ mass as the
dashed curve in Fig.~\ref{fig:r4mass}(c). The dependence is smooth
up to the point where the $\rho$ becomes stable, at which the curve ends.
Imposing the $m_{\pi}$--dependent $\rho$ mass given by the
IAM, the result for the quartic radius in the Breit--Wigner model is
shown as the solid curve in Fig.~\ref{fig:r4mass}(c).
We can dispose altogether of the explicit pion mass dependence in
$\langle r^2\rangle$ by studying the quantity
$$
\tilde{c}_V^\pi = c_V^\pi - \frac{1}{72} \langle r^2\rangle^2 .
$$
This constant $\tilde{c}_V^\pi$ can of course also be studied on a
lattice by itself, although its physical interpretation is not
transparent. But its mass dependence comes from the phase shift
alone (c.f. Eq.~(\ref{curvaturefromphase})), and is not compounded
with the mass dependence of the square radius. It is therefore this
quantity that allows the most direct access to the pion mass
dependence of the $\rho$ properties. Our results for this quantity
are shown in Fig.~\ref{fig:r4mass}(b) and (d) using the IAM and the
Breit--Wigner model, respectively.
\section{Summary}
Using the Omn\`es representation for the pion vector form factor, in
this paper we improved the existing value for the corresponding
curvature, using as input only the well-known $\pi\pi$ phase shifts
in the $p$--wave as well as the pion radius.
We find $\langle r^4 \rangle = 0.73 \pm 0.09$~fm$^4$
or equivalently $c_V=4.00\pm 0.50$ GeV$^{-4}$ which are consistent
with the results from NNLO
$\chi$PT~\cite{Bijnens:1998fm,Bijnens:2002hp} and recent analysis
using analyticity~\cite{Ananthanarayan:2008mg}.
In addition we studied the pion mass dependence of the curvature. A
modification of the curvature, called $\tilde{c}_V^\pi$ in the paper,
can be represented solely by the $\pi\pi$ $p$--wave phase shift. We
argued that this quantity allows for a clean and model--independent
alternative access to the pion mass dependence of the $\rho$ properties
and would therefore provide a consistency check of the methods to
extract physical parameters from lattice simulations. A lattice QCD
study of the pion curvature would therefore be of high theoretical
interest. We also argued that the pion square radius is not well suited for
this kind of investigation, since additional, not so well
controlled, theoretical input would be needed in the analysis.
Quantities similar to $\tilde{c}_V^\pi$ exist also for other
form factors, and a study of them from both theoretical and lattice sides
would be interesting.
\vspace{0.5cm}
{\emph{ We would like to thank Stephan D\"urr
and Jose Pelaez for useful discussions.
This work is supported in part by grants FPA 2004-02602, 2005-02327,
FPA2007-29115-E (Spain), and by the Helmholtz Association through
funds provided to the virtual institute ``Spin and strong QCD''
(VH-VI-231). FJLE thanks the members of the IKP (Theorie) at
Forschungszentrum J\"ulich for their hospitality during the
preparation of this work and the Fundacion Flores Valles for
economical support. }}
\section{Introduction}
Recently, the semileptonic decays of the charmed baryons~\cite{Cheng:2021qpd} have raised both theoretical and experimental
interests, as they provide clean theoretical backgrounds to examine the standard model as well as various specific quark models~\cite{Lc,Lc2,Lc3,Lc4,Lc5,Xic,RELA,LFQM,LFQMGENG,LFQMKE,Xic2}.
In 2021,
the Belle and ALICE collaborations reported the branching fractions~\cite{Belle:2021crz,ALICE:2021bli},
\begin{eqnarray}\label{1}
&&{\cal B}_{\text{Belle}}(\Xi_c^0 \to \Xi^ - e^+ \nu _e)= (1.31 \pm 0.04\pm 0.07\pm 0.38) \%\,,\nonumber\\
&&{\cal B}_{\text{ALICE}}(\Xi_c^0 \to \Xi^ - e^+ \nu_e )=
( 2.43 \pm 0.25 \pm 0.35 \pm 0.72) \% \,,
\end{eqnarray}
where the first and second uncertainties are statistical and systematic, respectively, and the third ones come from the normalizing channel of $\Xi_c^0 \to \Xi^- \pi ^+$~\cite{abs}. This decay has also been calculated in lattice QCD~(LQCD), with the branching fraction found to be~\cite{Lattice}
\begin{equation}\label{2}
{\cal B}_{\text{LQCD}}(\Xi_c^0 \to \Xi^ - e^+ \nu_e )= (2.38 \pm 0.44) \%\,,
\end{equation}
which is consistent with ${\cal B}_{\text{ALICE}}$ but nearly twice as large as ${\cal B}_{\text{Belle}}$.
Collecting Eqs.~\eqref{1} and \eqref{2}, we arrive at the average values\footnote{We adopt the same averaging method as the Particle Data Group~\cite{pdg}.
}
\begin{equation}\label{bav}
{\cal B}'_{av} = ( 1.85 \pm 0.28)\%\,,~~~
{\cal B}_{av} = (2.39 \pm 0.40)\%\,.
\end{equation}
Here, ${\cal B}'_{av}$ is the average of (${\cal B}_{\text{Belle}}$, ${\cal B}_{\text{ALICE}}$, ${\cal B}_{\text{LQCD}}$), whereas ${\cal B}_{\text{Belle}}$ has not been included in
${\cal B}_{av}$ due to its tension with the others.
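For transparency, these error--weighted averages can be reproduced with a
few lines of Python, combining the statistical, systematic, and
normalization uncertainties in quadrature (no scale factor is applied in
this sketch; the quoted $\pm 0.40$ reflects the rounding of the inputs):
\begin{verbatim}
import numpy as np

def average(vals, errs):
    w = 1.0/np.asarray(errs)**2
    return np.sum(w*vals)/np.sum(w), np.sum(w)**-0.5

belle = (1.31, np.hypot(np.hypot(0.04, 0.07), 0.38))
alice = (2.43, np.hypot(np.hypot(0.25, 0.35), 0.72))
lqcd  = (2.38, 0.44)

print(average(*zip(belle, alice, lqcd)))   # B'_av -> (1.85, 0.28)
print(average(*zip(alice, lqcd)))          # B_av  -> (2.39, 0.39)
\end{verbatim}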
On the other hand, very recently, the BES\MakeUppercase{\romannumeral 3} collaboration has reported the branching fraction~\cite{BESIII:2022ysa}
\begin{equation}\label{ExpLC}
\begin{aligned}
& {\cal B} (\Lambda_c^+ \to \Lambda e^+ \nu_e) = ( 3.56 \pm 0.11 \pm 0.07) \%\,,
\end{aligned}
\end{equation}
leading to the ratios
\begin{equation}\label{eq4}
\begin{aligned}
R(\text{Belle}) = 0.33 \pm 0.10 \,,~~R(\text{ALICE}) = 0.60\pm 0.21\,,~~R(\text{LQCD}) = 0.59 \pm 0.11 \,,
\end{aligned}
\end{equation}
where
\begin{equation}\label{Rdefinition}
R(\text{method})=
\frac{2 \tau_{\Lambda^+_c}}{3 \tau_{\Xi^0_c}}
\frac{{\cal B}_{\text{method}}(\Xi _c^0 \to \Xi^- e^+ \nu_e )}{{\cal B}(\Lambda_c \to \Lambda e^+ \nu_e )} \,,
\end{equation}
with $\tau_{\Lambda_c^+}$ and $\tau_{\Xi_c^0}$ the baryon lifetimes. The averages of the ratios in Eq.~\eqref{eq4} are then given as
\begin{equation}
R_{av}'
= 0.46\pm 0.07\,,~~~
R_{av}
= 0.59\pm 0.10\,,
\end{equation}
which deviate from $R(SU(3)_F)=1 $~\cite{SU(3),SU(3)1,SU(3)0} based on the $SU(3)$ flavor~($SU(3)_F$) symmetry by $54\%$ and $41\%$, respectively.
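Numerically, the ratios follow from Eq.~\eqref{Rdefinition} as in the short
sketch below, where $\tau_{\Lambda_c^+}\simeq 202.4$~fs and
$\tau_{\Xi_c^0}\simeq 151.9$~fs are indicative PDG--type lifetimes quoted
here only for illustration; small differences with respect to the values in
Eq.~\eqref{eq4} reflect the rounding of the inputs:
\begin{verbatim}
tauLc, tauXc = 202.4, 151.9      # fs, indicative lifetimes
BLc = 3.56                       # B(Lambda_c+ -> Lambda e+ nu_e) in %
pref = 2*tauLc/(3*tauXc)
for name, B in [("Belle", 1.31), ("ALICE", 2.43), ("LQCD", 2.38)]:
    print(name, round(pref*B/BLc, 2))   # -> 0.33, 0.61, 0.59
\end{verbatim}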
By using $\Gamma(\Lambda_c^+\to \Lambda e^+\nu_e)$ as an input, the $SU(3)_F$ symmetry may overestimate $\Gamma (\Xi_c^0 \to \Xi^- e^+ \nu_e)$ by more than a factor of two compared with the experiments~\cite{SU(3),SU(3)1,SU(3)0}, against the common belief that the $SU(3)_F$ breaking effects shall be less than $20\%$.
For instance, the
meson versions of the ratios are given by~\cite{pdg}
\begin{eqnarray}
&&\frac{ \Gamma(D_s^+ \to \phi e ^+ \nu_e)}{\Gamma(D^+ \to \overline{K}^{\ast 0 } e ^+ \nu_e) }
= 0.91 \pm 0.06\,,~~~\frac{1}{2}\frac{ \Gamma(D_s^+ \to K^0 e^+ \nu_e)}{ \Gamma(D^+ \to \pi ^0 e ^+ \nu_e) }
=0.94\pm 0.10\,,
\end{eqnarray}
which are expected to be unity in the exact $SU(3)_F$ symmetry and lie nicely within the $20\%$ errors. Furthermore,
even if the strange quark mass is considered, it is found that $R=0.86$ in the relativistic quark model~\cite{RELA}, and
$R=0.87$ and
$R=0.99$ in the light-front quark model from Refs.~\cite{LFQM} and \cite{LFQMGENG}, respectively.
Certainly, there must exist an additional breaking mechanism in $\Xi_c^{ 0} \to \Xi^- e^+ \nu_e$, which is somehow negligible
in other decays.
In this work, we point out that the mechanism is precisely the $\Xi_c - \Xi_c'$ mixing, which is naturally absent in other decays~\cite{SU(3)1}.
This paper is organized as follows. In Sec.~\MakeUppercase{\romannumeral 2}, we extract the
mixing angle of $\Xi_c - \Xi_c'$ from the mass spectra. In Sec.~\MakeUppercase{\romannumeral 3}, we discuss the mixing effects in $\Xi_c \to \Xi \ell ^+ \nu_\ell $ with $\Xi_c =\Xi_c^0 (\Xi_c^+)$ for $\Xi = \Xi^-( \Xi^0)$ and $\ell= ( e, \mu )$.
In Sec.~\MakeUppercase{\romannumeral 4}, we examine some of the decay channels to further confirm the mixing in the experiments. We conclude our study in Sec.~\MakeUppercase{\romannumeral 5}.
\section{Mixing angles of $\Xi_Q - \Xi_Q'$}
In addition to their masses,
hadrons are categorized according to conserved quantities. Since the rotational symmetry is respected by all interactions, we can always label hadrons with their spins.
For this reason, the octet baryons do not mix with the decuplet ones.
Furthermore, to an excellent approximation, the weak interaction can be safely neglected in hadronizations. Accordingly, baryons are in eigenstates of parity as well as flavorness~({\it i.e.} strangeness, charmness and bottomness). With these quantum numbers, we can distinguish most of the low-lying baryons.
However, there are some cases in which two baryons are degenerate in flavorness, spin, and parity~\cite{Mixing}, bringing up the issue of the $\Xi_Q-\Xi_Q'$ mixings with $Q= (b,c)$.
Both $\Xi_Q$ and $\Xi_Q'$ have the same quantum numbers in angular momentum, parity and flavorness. To distinguish them, a further approximation is required. Traditionally, it is done by the $SU(3)_F$ symmetry as the strong interaction treats the light quarks of ($u,d,s$)
approximately the same.
Therefore, the physical baryon states shall have definite $SU(3)_F$ representations.
The $\overline{{\bf 3}}$ representation suggests that the light quark pair forms a spin-0 state, while the ${\bf 6}$ representation a spin-1 one, given as
\begin{eqnarray}\label{spinflavor}
|\Xi_Q^{ \overline{{\bf 3} } } \rangle &=& \frac{1}{\sqrt{2}} \left(
\uparrow\downarrow- \downarrow\uparrow
\right) \uparrow \otimes\frac{1}{\sqrt{2}} \left(
qsQ - sqQ
\right)\,, \nonumber\\
|\Xi_Q ^{ {\bf 6} } \rangle &=& \frac{1}{\sqrt{6}} \left(
2 \uparrow\uparrow \downarrow - \downarrow\uparrow\uparrow - \uparrow \downarrow \uparrow
\right) \otimes \frac{ 1}{ \sqrt{2} } \left(
qsQ + sqQ
\right) \,,
\end{eqnarray}
with $q = (u,d)$. If the $SU(3)_F$ symmetry is exact, then $\Xi_Q = \Xi_Q ^{ \overline{{\bf 3} }} $ and $\Xi_Q' = \Xi_Q ^{ {\bf 6} } $.
However, the symmetry is broken by the strange quark mass. As a result,
the physical baryons are made of linear combinations of $\overline{{\bf 3}}$ and ${\bf 6}$ instead, given as
\begin{eqnarray}\label{ph}
|\Xi_Q \rangle &=& \cos\theta _Q | \Xi_Q^{ \overline{{\bf 3}}} \rangle + \exp\left( i\phi_Q\right) \sin \theta_Q | \Xi_Q^{ {\bf 6}} \rangle \,, \nonumber\\
|\Xi_Q ' \rangle &=& \cos \theta_Q | \Xi_Q^ {{\bf 6}} \rangle - \exp\left(- i\phi_Q\right) \sin \theta_Q | \Xi_Q^{ \overline{{\bf 3}}} \rangle \,,
\end{eqnarray}
where $|\Xi_Q \rangle $ and $|\Xi_Q' \rangle$ diagonalize the QCD Hamiltonian,
\begin{equation}
H_{\text{QCD}} | \Xi_Q \rangle = M_{\Xi_Q} | \Xi_Q \rangle \,,~~~H_{\text{QCD}} | \Xi_Q' \rangle = M_{\Xi_Q'} | \Xi_Q ' \rangle \,,
\end{equation}
with $M_{\Xi_Q^{(\prime)}}$ the baryon masses.
Without loss of generality, by taking $M_{\Xi_Q'}>M_{\Xi_Q}$, we obtain~\cite{pdg}
\begin{eqnarray}\label{experiment}
&&M_{\Xi_c} = \frac{1}{2}\left( M_{\Xi_c^0} +M_{\Xi_c^+} \right)= 2.4691(2)\,, ~~~~M_{\Xi_c'} =\frac{1}{2} \left( M_{\Xi_c^{\prime 0 }} +M_{\Xi_c^{\prime +}} \right)= 2.5784(4)\,, \nonumber\\
&&M_{\Xi_b} = \frac{1}{2}\left( M_{\Xi_b^0} +M_{\Xi_b^-} \right) = 5.7945(4)\,, \quad M_{\Xi_b'} =
M_{\Xi_b^{\prime -} } =
5.93502(5)\,,
\end{eqnarray}
in units of GeV.
Here, we use the average masses of the isospin doublets to lower the uncertainties whenever available.
The mass matrices of $\Xi_Q$ are given as
\begin{equation}
M_Q = \left(
\begin{array}{cc}
M_Q^{ \overline{ { \bf 3 }} \overline{ { \bf 3 }} } & M_Q^{ \overline{ { \bf 3 }}{\bf 6} } \\
M_Q^{ \overline{ { \bf 3 }}{\bf 6} } & M_Q^{ {\bf 6} {\bf 6} }
\end{array}
\right)
=\left(
\begin{array}{cc}
- \cos 2 \theta_Q & - e^{ - i\phi_Q} \sin 2 \theta _Q \\
-e^{ i\phi_Q } \sin 2 \theta_Q & \cos 2 \theta _Q
\end{array}
\right) M_Q^\Delta + M_Q^0 \,,
\end{equation}
with
\begin{equation}
M^{{\bf ij}}_Q \equiv \frac{1}{\sqrt{
\langle \Xi_Q ^ {\bf i} | \Xi_Q^ {\bf i} \rangle
\langle \Xi_Q ^ {\bf j} | \Xi_Q^ {\bf j} \rangle
}}
\langle \Xi_Q ^ {\bf i} | H_{\text{QCD}} | \Xi_Q ^ {\bf j}\rangle\,,~~~~\text{for}~{\bf i},{\bf j} = \overline{{\bf 3}}, {\bf 6}\,.
\end{equation}
Note that the physical states in Eq.~\eqref{ph} diagonalize the matrices.
From the time-reversal symmetry, we have $\phi_Q=0$.
It is straightforward to show that
\begin{eqnarray}\label{thetamaster}
&& M_Q^0 = \frac{1}{2}(M_{\Xi_Q} + M_{\Xi_Q'} ) \,,~~~M_Q^\Delta = \frac{1}{2}(M_{\Xi_Q'}- M_{\Xi_Q} ) \,, \nonumber\\
&&\theta_Q = \pm \frac{1}{2} \cos ^{-1} \left(
\frac{M_Q^{{\bf 6}{\bf 6}} - M^0 _Q }{M_Q^\Delta }
\right)\,.
\end{eqnarray}
Consequently, we can obtain $|\theta_Q|$ once $M^{{\bf 6}{\bf 6}}_Q$ is known.
To obtain $M^{{\bf 6}{\bf 6}}_c $, we utilize the model-independent mass relation\footnote{
The mass relations in Ref.~\cite{Jenkins:1996rr} read as
$(M_{\Sigma_Q^*}-M_{\Sigma_Q})-2(M_{\Xi_Q^*}-M_{\Xi_Q^{\prime}})+(M_{\Omega_Q^*}-M_{\Omega_Q}) = 0$
without considering the $\Xi_Q-\Xi_Q'$ mixing. Thus, we replace $M_{\Xi_Q'}$ by $M_Q^{{\bf 6}{\bf6}}$ in the relations. We do not need to worry about the $\Lambda_Q-\Sigma_Q$ mixing here, since the isospin symmetry is well protected in comparison. In addition, the $\Sigma_Q^{(\prime)} -\Sigma_Q^*$, $\Xi_Q^{(\prime)} -\Xi_Q^*$ and $\Omega_Q' - \Omega_Q^*$ mixings are forbidden by angular momentum conservation. We conclude that only the $\Xi_Q- \Xi_Q'$ mixing needs to be considered. A similar argument applies to Eq.~\eqref{eq15} as well.
}~\cite{Jenkins:1996rr}
\begin{eqnarray}
&&M^{{\bf 6}{\bf 6}}_c = M_{\Xi_c^ *} - \frac{1}{2} \left(
M_{\Sigma_c^*} - M_{\Sigma_c} + M_{\Omega_c^*} - M_{\Omega_c}
\right) \pm 0.46~\text{MeV} \,,
\end{eqnarray}
where
the asterisks denote the low-lying baryons with $J=3/2$, and $\pm 0.46$~MeV correspond to the expected errors.
On the other hand, the mass of $\Omega^*_b$ is not yet available experimentally. Thus, we use the improved equal spacing rule
\begin{equation}\label{eq15}
M^{{\bf 6}{\bf 6}}_b = \frac{1}{2}\left(
M_{\Sigma_b} + M_{\Omega_b}
\right) - \frac{1}{2}
\left(
M_{\Sigma^\ast_c} - 2M_{\Xi_c^*} + M_{\Omega_c^*}
\right) \pm 0.6~\text{MeV}
\,,
\end{equation}
from the heavy quark symmetry to fix $M_b^{{\bf 6}{\bf 6}}$ instead~\cite{Savage:1995dw,Jenkins:1996rr}.
With the baryon masses from the experiments~\cite{pdg},
we arrive at
\begin{eqnarray}\label{M66}
&&M_c^{{\bf 6}{\bf 6}} = 2.5600(11)~\text{GeV}\,, ~~~ M_b^{{\bf 6}{\bf 6}} = 5.9315(18)~\text{GeV}\,,
\end{eqnarray}
respectively.
Comparing with $M_{\Xi_Q'}$, it is clear that the mixings are required.
Plugging Eq.~\eqref{M66} in Eq.~\eqref{thetamaster}\,, we find that
\begin{eqnarray} \label{mixingangle}
&&|\theta_c| = 0.137 (5 ) \pi \,, ~~~|\theta_b| = 0.049 (13) \pi\,.
\end{eqnarray}
This indicates that about $20\%$ of $\Xi_c$ and $\Xi_c'$ are made of
$\Xi_c^{{\bf 6}}$ and $\Xi_c^{\overline{{\bf 3}}}$, respectively, while the
corresponding admixtures of $\Xi_b^{{\bf 6}}$ and $\Xi_b^{\overline{{\bf 3}}}$
in $\Xi_b$ and $\Xi_b'$ are only about $2\%$. The ratio of $\theta_Q$ is consistent with the expectation from the heavy quark expansion, which states that $\theta_{b}/\theta_c \propto m_c/m_{b}$~\cite{Savage:1995dw}, with $m_c$ and $m_b$ the charm and bottom quark masses, respectively.
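The evaluation of Eq.~\eqref{thetamaster} with the central inputs of
Eqs.~\eqref{experiment} and \eqref{M66} can be checked with the minimal
Python sketch below, which reproduces the central values of
Eq.~\eqref{mixingangle} up to the rounding of the inputs:
\begin{verbatim}
import numpy as np

def theta(MXi, MXip, M66):               # Eq. (thetamaster), units of pi
    M0, MD = (MXi + MXip)/2, (MXip - MXi)/2
    return 0.5*np.arccos((M66 - M0)/MD)/np.pi

print(theta(2.4691, 2.5784, 2.5600))     # |theta_c| -> about 0.135
print(theta(5.7945, 5.93502, 5.9315))    # |theta_b| -> about 0.050
\end{verbatim}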
In the following, we will concentrate on the charmed baryon decays.
For simplicity, we take the spatial wave functions~(quark distributions) of $\Xi_c^{\overline{\bf 3}}$ and $\Xi_c ^{{\bf 6}}$ to be the same in this work, so the only difference between the two states is the spin-flavor part as shown in Eq.~\eqref{spinflavor}.
We start with the helicity amplitudes $H_{\lambda_1 \lambda_W}$ with $\lambda_1$ and $\lambda_W$ the helicities of $\Xi$ and off-shell $W^*$ boson, given as
\begin{eqnarray}\label{15}
H_{\pm \frac{1}{2}\pm 1} = \varepsilon^{*\mu}_\pm\left( N_{\text{flip}}^{eff} {\cal V}_\mu^{\uparrow \downarrow} \mp N_{\text{flip}}^{eff} {\cal A}_{\mu}^{\uparrow \downarrow} \right) \,, \quad
H_{\pm \frac{1}{2} z }= \varepsilon^{*\mu}_z \left( N_{\text{unflip}}^{eff} {\cal V}_\mu^{\uparrow \uparrow} \mp N_{\text{flip}}^{eff} {\cal A}_{\mu}^{\uparrow \uparrow} \right) \,,
\end{eqnarray}
where $\varepsilon^\mu$ is the polarization vector of $W^*$~\cite{Timereversal},
$z\in\{ 0,t \}$, $N^{eff}$ are the spin-flavor overlappings, given by
\begin{eqnarray}\label{19}
N_{\text{flip}} ^{eff} \equiv
\left( N_{\text{flip}}^{\overline{\bf 3}} \cos\theta_c
+ N_{\text{flip}}^{\bf 6} \sin \theta_c
\right)\,,\quad
N_{\text{unflip}} ^{eff} \equiv
\left( N_{\text{unflip}}^{\overline{\bf 3}} \cos\theta_c
+ N_{\text{unflip}}^{\bf 6} \sin \theta_c
\right)\,,
\end{eqnarray}
and ${\cal V}_{\mu }^{\lambda_1 \lambda_2} $ and ${\cal A}_\mu ^{\lambda_1\lambda_2}$ are the matrix elements of the current operators
\begin{eqnarray}\label{spatio}
{\cal V}_\mu^{\lambda_1 \lambda_2 } & \equiv &
\left\langle q ss, J_z^3= \lambda_1 , \vec{p}=|\vec{p}| \hat{z} \right| \overline{s}_3 \gamma_\mu c_3 \left| qsc, J_z^3 = \lambda_2 \right \rangle \,, \nonumber\\
{\cal A}_{\mu}^{\lambda_1 \lambda_2 } &\equiv&
\left\langle qss, J^3_z= \lambda_1, \vec{p}=|\vec{p}|\hat{z} \right| \overline{s}_3 \gamma_\mu \gamma_5 c_3 \left| qsc, J^3_z = \lambda_2 \right \rangle \,,
\end{eqnarray}
with $\vec{p}$ the three-momentum of $\Xi$ in the
rest frame of $\Xi _c$.
Here,
the subscripts in $\overline{s}_3$ and $c_3$ indicate that they only act on the third quarks, $J^3$ stands for the angular momentum of the third quarks, and the states are normalized as
\begin{equation}
\langle \Xi_{(Q)} , \vec{p}\, |\Xi_{(Q)} , \vec{p}\,' \rangle = 2
u^\dagger_{(Q)} u_{(Q)}
(2\pi )^3 \delta^3 (\vec{p} -\vec{p}\,' )\,,
\end{equation}
with $u_{(Q)}$ the Dirac spinor of $\Xi_{(Q)}$.
In turn, $N^{{\bf i}}_{\text{(un)flip}}$ are defined as
\begin{eqnarray}\label{const}
&&N^{\bf i}_{\text{flip}} = \Big
\langle \Xi ,J_z =\frac{1}{2} \Big | s^\dagger \sigma_z c \Big| \Xi ^ {\bf i}_c ,J_z =\frac{1}{2} \Big \rangle \,,~~N_{\text{unflip}} ^ {\bf i} =
\Big \langle \Xi ,J_z =\frac{1}{2} \Big | s^\dagger c \Big| \Xi _c ^ {\bf i} ,J_z =\frac{1}{2} \Big \rangle\,,\nonumber \\
& & s^\dagger \sigma_z c = s^\dagger_\uparrow c_\uparrow - s^\dagger_\downarrow c_\downarrow \,, \qquad\qquad\qquad\qquad\quad
s^\dagger c = s^\dagger_\uparrow c_\uparrow + s^\dagger_\downarrow c_\downarrow \,,
\end{eqnarray}
where
the baryon states in Eq.~\eqref{const} are in the nonrelativistic constituent quark limit.
The values of $N^{{\bf i}}_{\text{(un)flip}}$ can be calculated once the spin-flavor parts of the wave functions are given~(see Appendix A of Ref.~\cite{Cheng:2018hwl} for instance), which are the same for all quark models up to an overall constant.
They are collected in TABLE~\ref{tableNflip}, where we also include the spin-flavor overlapping of $\Lambda^+_c \to \Lambda$ for later convenience.
\begin{table}
\caption{The spin-flavor overlappings defined in Eq.~\eqref{const}. }\label{tableNflip}
\vskip 0.2cm
\begin{tabular}{c|ccccccc}
\hline
\hline
& $ \Xi_c^{\overline{ {\bf 3}}}\to \Xi $& $ \Xi_c^{{\bf 6}} \to \Xi $ &$\Lambda_c \to \Lambda $\\
\hline
$N_{\text{unflip}}$&$\sqrt{ \frac{3}{2} } $&$-\frac{1}{\sqrt{2}}$ &1\\
$N_{\text{flip}}$ &$\sqrt{ \frac{3}{2} } $&$\frac{1}{3\sqrt{2}}$ &1\\
\hline
\hline
\end{tabular}
\end{table}
Finally,
the total decay widths are given as
\begin{eqnarray}\label{number4}
&&\Gamma
= \frac{G_F^2 }{24 \pi^3} |V_{cs}| ^2 \int^{(M_{\Xi_c} -M_\Xi )^2}_{M_\ell^2} \frac{(q^2-M_{\ell}^2)^2|\vec{p}| }{8M_{\Xi_c }^2q^2} \Big[
(\delta_\ell +1 ) \\
&&
\times \left( H_{ \frac{1}{2} 1 }^2 + H_{ -\frac{1}{2} -1 }^2 + H_{ \frac{1}{2} 0 }^2 + H_{ - \frac{1}{2} 0 }^2 \right)
+ 3 \delta_\ell \left( H_{\frac{1}{2}t} ^2 + H_{-\frac{1}{2}t} ^2 \right)
\Big]
dq^2 \appropto (N_{\text{unflip}}^{eff} )^2 + (N_{\text{flip}}^{eff} )^2 \,,\nonumber
\end{eqnarray}
where $G_F$ is the Fermi constant, $V_{cs}=0.973$ is the Cabibbo–Kobayashi–Maskawa matrix element~\cite{pdg}, $M_{\ell}$
is the charged lepton mass, and $\delta_\ell = M_\ell^2 / 2q^2$.
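For orientation, Eq.~\eqref{number4} can be transcribed into a short
numerical sketch as follows. The helicity amplitudes must be supplied by a
model as functions of $q^2$; the constant placeholder passed to
\texttt{Gamma} below is for illustration only and carries no physics:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

GF, Vcs = 1.1663787e-5, 0.973                 # GeV^-2, CKM element
MXic, MXi, Ml = 2.4691, 1.32171, 0.000511     # GeV

def Gamma(H):     # H(q2) -> (Hp1, Hm1, Hp0, Hm0, Hpt, Hmt)
    def integrand(q2):
        d = Ml**2/(2*q2)
        p = np.sqrt((MXic**2 - (MXi + np.sqrt(q2))**2)
                    *(MXic**2 - (MXi - np.sqrt(q2))**2))/(2*MXic)
        Hp1, Hm1, Hp0, Hm0, Hpt, Hmt = H(q2)
        return ((q2 - Ml**2)**2*p/(8*MXic**2*q2)
                *((d + 1)*(Hp1**2 + Hm1**2 + Hp0**2 + Hm0**2)
                  + 3*d*(Hpt**2 + Hmt**2)))
    return (GF**2*Vcs**2/(24*np.pi**3)
            *quad(integrand, Ml**2, (MXic - MXi)**2)[0])

print(Gamma(lambda q2: (1, 1, 1, 1, 0, 0)))   # placeholder amplitudes
\end{verbatim}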
We note that the sign of $\theta_c$ has little effect on the branching fractions. This is because
$N_{\text{unflip}}^{\overline{{\bf 3}}}$ and $N_{\text{flip}}^{\overline{{\bf 3}}}$ are much larger in magnitude than
$N_{\text{unflip}}^{{\bf 6}}$ and $N_{\text{flip}}^{{\bf 6}}$.
In addition,
$N_{\text{unflip}}^{{\bf 6}}$ and $N_{\text{flip}}^{{\bf 6}}$ are opposite in sign, which destructively (constructively) and constructively (destructively) interfere with
$N_{\text{unflip}}^{\overline{{\bf 3}}}$ and $N_{\text{flip}}^{\overline{{\bf 3}}}$ for a positive (negative) $\theta_c$. Consequently, large parts of their effects get canceled in the branching fractions.
In practice, we have the first order approximation
\begin{equation}
{\cal B}\left( \Xi_c \to \Xi \ell^+ \nu _\ell\right) \approx (1 - \sin ^2\theta_c )
{\cal B}\left( \Xi_c^{\overline{\bf 3}} \to \Xi \ell^+ \nu _\ell \right)\,,
\end{equation}
where the second term is understood as the mixing effect. With Eq.~\eqref{mixingangle}, we find that $\Xi_c\to \Xi \ell^+ \nu$ are suppressed about $20\%$ by the mixing, which is a generic model-independent result.
Subsequently, the $SU(3)_F$ relation shall be modified as
\begin{equation}
R(SU(3)_F) \to ( 1 - \sin^2 \theta_c)(1 - 0.2 ) \,,
\end{equation}
where the second factor takes into account the $20\%$ breaking from the strange quark mass. As a result, we obtain $R\approx 0.6$, in agreement with $R_{av}$. Alternatively, by taking $R_{av}$ as an input, we find
$ |\theta_c| = (0.17 \pm 0.05)\pi$, which agrees well with Eq.~\eqref{mixingangle}.
Note that $R_{av}'$ leads to $ |\theta_c| = (0.24 \pm 0.04)\pi$, which indicates that $\Xi_c^{\overline{{\bf 3}}}$ and $\Xi_c^{{\bf 6}}$ share $\Xi_c^{(\prime)}$ equally.
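These extractions follow from inverting the first--order formula
$R \approx (1-\sin^2\theta_c)(1-0.2)$; the rough Python sketch below
reproduces them approximately (the values quoted in the text come from the
full model, so small differences are expected):
\begin{verbatim}
import numpy as np

def theta_from_R(R):                  # invert R = (1 - sin^2 t)(0.8)
    return np.arcsin(np.sqrt(1.0 - R/0.8))/np.pi

print(theta_from_R(0.59))     # -> about 0.17 (from R_av)
print(theta_from_R(0.46))     # -> about 0.23 (from R'_av, quoted as 0.24)
\end{verbatim}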
\section{ Numerical results and discussions}
To illustrate the decays numerically, we adopt the homogeneous bag model~\cite{Liu:2022pdk} in the calculation of
${\cal V}^{\lambda_1\lambda_2}_\mu$ and ${\cal A}^{\lambda_1\lambda_2}_\mu$.
To explain the large $SU(3)$ flavor violation, a theory must be able to describe both ${\Gamma }(\Lambda_c^+ \to \Lambda e^+\nu _e)$ and ${\Gamma }(\Xi_c^0 \to \Xi^- e^+\nu_e )$ simultaneously.
The formalism for $\Lambda_c^+ \to \Lambda \ell^+ \nu_\ell $ can be obtained directly by replacing the spectator $s$ quark with a $u$ quark in Eq.~\eqref{spatio}. The branching fractions are found to be
\begin{equation}
\begin{aligned}
{\cal B} (\Lambda_c^+ \to \Lambda e^+ \nu_e) = ( 3.78\pm 0.24\pm 0.05) \%\,,\\
{\cal B} (\Lambda_c^+ \to \Lambda \mu ^+ \nu_\mu ) = ( 3.67\pm 0.23\pm 0.05 ) \%\,,
\end{aligned}
\end{equation}
where the first and second uncertainties arise from the model parameters and $\tau_{\Lambda_c^+}$, respectively.
Notably, the results are consistent with the experimental values in Eq.~\eqref{ExpLC}
and LQCD calculations~\cite{LatticeLambda}, given by
\begin{equation}
\begin{aligned}
&{\cal B}_{\text{LQCD}} (\Lambda_c^+ \to \Lambda e^+ \nu_e) = ( 3.80 \pm 0.19 \pm 0.11) \%\,,\\
&{\cal B}_{\text{LQCD}} (\Lambda_c^+ \to \Lambda \mu ^+ \nu_\mu ) = ( 3.69\pm 0.19 \pm 0.11) \% \,,
\end{aligned}
\end{equation}
where the uncertainties arise from the lattice simulation and $\tau_{\Lambda_c^+}$, respectively.
We conclude that the homogeneous bag model is suitable for explaining ${\cal B}(\Lambda_c^+\to \Lambda e^+ \nu_e)$.
We
now turn our attention to the $\Xi_c$ decays.
In the left panel of FIG.~\ref{PA3}, we plot
${\cal B}(\Xi_c^0 \to \Xi^- e^+ \nu_e)$ versus $\theta_c$,
where the upper bounds of ${\cal B}_{av}$ and ${\cal B}_{av}'$ are included for comparison.
On the other hand,
the values with $\theta_c\in \{ \pm \pi/4, \pm 0.2 \pi , \pm 0.137\pi, 0\}$ are given explicitly in TABLE~\ref{table1}.
With $|\theta_c| = 0.137 \pi $ and $|\theta_c| = 0.2 \pi$, our predictions of the branching fractions are consistent with ${\cal B}_{av}$ within $1\sigma$. In particular, ${\cal B}_{av}'$ can be explained by $\theta_c =0.2\pi$.
In contrast, the branching fraction for $\theta_c=0$ is incompatible with both ${\cal B}_{av}$ and ${\cal B}_{av}'$.
\begin{figure}
\includegraphics[width=0.45\linewidth]{deca.eps}~
\includegraphics[width=0.46 \linewidth]{Ratio.eps}
\caption{
${\cal B}( \Xi_c^0 \to \Xi^- e^+ \nu_e)$ versus $\theta_c$~(left), and $R$ of different methods~(right) with the bands representing the uncertainties.
}
\label{PA3}
\end{figure}
\begin{table}
\caption{
The branching fractions (in $\%$) for different values of $\theta_c$.
}\label{table1}
\vskip 0.2cm
\begin{tabular}{c|ccccccc}
\hline
\hline
$\theta_c$ &$- \pi /4 $&$-0.2\pi$ &$- 0.137 \pi $&$ 0 $&$ 0.137\pi $ &$ 0.2\pi$ &$ \pi/4$ \\
\hline
$\Xi_c ^0 \to \Xi^- e^+ \nu_e $ &$1.98(40) $&$2.47(49)$&$3.02(58)$&$3.59(66)$&$3.02(19)$&$2.50(43)$&$2.01(34)$\\
$\Xi_c ^0 \to \Xi^- \mu ^+ \nu_\mu $ &$1.92 (38) $&$2.40(47)$&$2.94( 56)$&$3.49(63)$&$2.93( 18)$&$2.43(41)$&$1.95(32)$\\
\hline
$\Xi_c^+ \to \Xi ^0 e^+ \nu_e $ &$5.89(120)$&$7.36(146)$&$9.00(174)$ &$10.7(20)$&$8.99(58)$ &$7.46(128)$ &$5.99(102)$\\
$\Xi_c^+ \to \Xi ^0 \mu ^+ \nu_\mu $ &$5.73(115)$&$7.16(139)$&$8.75(165)$ &$10.4(19)$&$8.74(53)$ &$7.24(122)$ &$5.81(97)$\\
\hline
\hline
\end{tabular}
\end{table}
To reduce the errors and uncertainties from the model calculation, we plot $R(\text{method})$ defined in Eq.~\eqref{Rdefinition}
on the right-hand side of FIG.~\ref{PA3}, where $R(\theta_c)$ are the ones computed in this work, while $20\%$ errors for $R(SU(3)_F)$ without mixing are included\footnote{
The central value can be obtained by squaring the spin-flavor overlappings in TABLE~\ref{tableNflip}.
}.
We find that the result of $R(\theta_c=0)=0.84 \pm 0.10$ is consistent with the theoretical expectations in the literature, such as those from
the relativistic quark model~\cite{RELA}, light front quark model~\cite{LFQM,LFQMGENG} and
$SU(3)_F$, but incompatible with the experiments and LQCD.
In contrast, $R(\theta_c = -0.137\pi )=0.70\pm 0.09$ and $R(\theta_c=-0.2\pi)=0.57\pm0.08$ agree well
with $R_{av}$.
In addition, $R_{av}'$ can be explained by $|\theta_c| = 0.2\pi$.
We conclude that the $\Xi_c-\Xi_c'$ mixing is supported not only by the mass spectra but also by the semileptonic decay experiments.
\section{Mixing effects involving decuplet baryons}
To confirm the mixing in the future experiments, we recommend some of the decay channels involving the decuplet baryons,
which do not exist without the mixing. In this work, we take $\Xi'$ to denote $\Xi(1530)^{-,0}$ for $\Xi_c = \Xi_c^{0,+}$, respectively.
The topological diagram for $\Xi_c \to \Xi ' \ell^+ \nu_\ell$ is given in the left hand side of FIG.~\ref{FIG3}.
Since the light quarks in $\Xi_c^{\overline 3}$ and $\Xi'$ are antisymmetric and symmetric in flavors, respectively, the transition matrix elements of the spectator quarks vanish. Thus, we have~\cite{SU(3)0}
\begin{equation}
\Gamma\left(\Xi_c^{\overline{{\bf 3}}} \to \Xi' \ell^+ \nu_\ell \right) = 0\,,
\end{equation}
resulting in
\begin{equation}
\Gamma\left(\Xi_c \to \Xi ' e^+ \nu_e \right) = \sin ^2 \theta_c \Gamma\left(\Xi_c^{{\bf 6}} \to \Xi ' e^+ \nu_e \right) \,.
\end{equation}
With formulas similar to those given in the previous section,
we obtain
\begin{equation}
{\cal B} ( \Xi_c^0 \to \Xi ^{\prime -} e^ + \nu_e ) = (4.4\pm 0.5) \times 10 ^{-3}\,,\qquad
{\cal B} ( \Xi_c^+ \to \Xi ^{\prime 0} e^ + \nu _e ) = (1.3 \pm 0.2)\% \,,
\end{equation}
with $\theta_c = -0.137\pi$, and
\begin{equation}
{\cal B} ( \Xi_c^0 \to \Xi ^{\prime -} e^ + \nu_e ) = (8.7\pm 1.0) \times 10 ^{-3}\,,\qquad
{\cal B} ( \Xi_c^+ \to \Xi ^{\prime 0} e^ + \nu _e ) = (2.6 \pm 0.4)\% \,,
\end{equation}
with $\theta_c = -0.2 \pi$, all of which are accessible at Belle~\MakeUppercase{\romannumeral 2}.
We emphasize that nonzero branching fractions of $\Xi_c \to \Xi' e^+ \nu_e$ in future experiments will be a smoking gun of $\theta_c \neq 0 $.
\begin{figure}
\includegraphics[width=0.3 \linewidth]{Xics.eps}\qquad
\includegraphics[width=0.3 \linewidth]{Xicn.eps}
\caption{ The topological diagrams for $\Xi_c \to \Xi ' \ell ^+ \nu_\ell$
and $\Xi_c \to \Xi' \pi^+$, where the blobs represent the hadronizations, and $\{ s, q \} $ indicates that $s$ and $q$ are symmetric in flavors and spins.
}
\label{FIG3}
\end{figure}
The nonleptonic decays of $\Xi_c^{+} \to \Xi^{\prime 0 }\pi^+ $ and $ \Xi_c^{+} \to \Sigma^{\prime +} \overline{K}^0$ are also forbidden in the absence of the mixing~\cite{Geng:2019awr}. To see this, the topological diagrams of $\Xi_c^{+} \to \Xi^{\prime 0 }\pi^+ $ are depicted in FIG.~\ref{FIG3}, where
the factorizable diagram is
obtained by replacing $\ell^+ \nu_\ell $ with $\pi^+~(u \overline{d})$, and the right figure is the nonfactorizable diagram.
Due to the K\"orner-Pati-Woo theorem~\cite{Korner:1970xq}, the nonfactorizable amplitude vanishes, and the overlapping of the spectator quarks is zero in the left diagram for $\Xi_c^{\overline{{\bf 3}}}$.
Thus, we have
\begin{equation}
\begin{aligned}
&\Gamma \left( \Xi_c^{+} \to \Xi^{\prime 0 } \pi^+
\right) =\sin^2 \theta_c \Gamma \left( \Xi_c^{{\bf 6} +} \to \Xi^{\prime 0 } \pi ^+
\right) \,,\\
&\Gamma \left( \Xi_c^{+} \to \Sigma^{\prime + }\overline{K}^0
\right) =\sin^2 \theta_c \Gamma \left( \Xi_c^{{\bf 6} +} \to \Sigma^{\prime +} \overline{K}^0
\right) \,,
\end{aligned}
\end{equation}
which can be calculated in the factorization framework.
The helicity amplitudes of
$\Xi_c^{+} \to \Xi^{\prime 0 } \pi^+$ and $ \Xi_c^{+} \to \Sigma^{\prime + }\overline{K}^0 $
are then given as
\begin{eqnarray}\label{nonleptonic}
H_\pm = i \frac{G_F}{\sqrt{2}} V_{cs}a_1 f_\pi q^\mu \left \langle \Xi^{\prime 0 } , J_z = \pm \frac{1}{2}, \vec{p} = p \hat{z} \right |\overline{s} \gamma_\mu (1-\gamma_5) c \left | \Xi_c^{{\bf 6}+} , J_z = \pm \frac{1}{2} \right \rangle \,,\nonumber\\
H_\pm = i \frac{G_F}{\sqrt{2}} V_{cs}a_2 f_K q^\mu \left \langle \Sigma^{\prime +} , J_z = \pm \frac{1}{2}, \vec{p} = p \hat{z} \right |\overline{u} \gamma_\mu (1-\gamma_5) c \left | \Xi_c^{{\bf 6} +} , J_z = \pm \frac{1}{2} \right \rangle \,,
\end{eqnarray}
respectively,
where $(a_{1}, a_2)= ( 1.26\pm 0.01 , 0.45\pm 0.03)$ are the effective Wilson coefficients~\cite{Cheng:2018hwl}, $f_{\pi(K)}$ is the decay constant of the pion~(kaon), and the matrix elements in Eq.~\eqref{nonleptonic} are calculated in the homogeneous bag model~\cite{Liu:2022pdk}.
The decay widths are given as
\begin{equation}
\Gamma = \frac{|\vec{p}| }{16 M_{\Xi_c^+}^2 \pi }\left( | H_+ |^2 + |H_-|^2
\right)
\,,
\end{equation}
leading to
\begin{eqnarray}\label{nonle}
{\cal B} (\Xi_c^ + \to \Xi^{\prime 0} \pi ^+ ) = (3.8 \pm 0.5)\times 10^{-3}\,,~~~
{\cal B} (\Xi_c^ + \to \Sigma^{\prime +} \overline{K}^0 ) = (6.6 \pm 1.0)\times 10^{-4}\,.
\end{eqnarray}
with $|\theta_c| = 0.137 \pi$, and
\begin{eqnarray}\label{nonle2}
{\cal B} (\Xi_c^ + \to \Xi^{\prime 0} \pi ^+ ) = (7.5 \pm 1.0)\times 10^{-3}\,,~~~
{\cal B} (\Xi_c^ + \to \Sigma^{\prime +} \overline{K}^0 ) = (1.3 \pm 0.2)\times 10^{-3}\,,
\end{eqnarray}
with $|\theta_c| = 0.2 \pi$.
On the other hand, in 2003 the FOCUS collaboration measured the ratios~\cite{FOCUS:2003gpe}
\begin{equation}\label{38}
\begin{aligned}
\frac{\Gamma(\Xi_c^+ \to \Xi^{\prime 0} \pi^+ )}{\Gamma(\Xi_c^+ \to \Xi^{-} \pi ^+ \pi^+ )} < 0.1\,, ~~~
\frac{\Gamma(\Xi_c^+ \to \Sigma^{\prime +} \overline{K}^0)}{\Gamma(\Xi_c^+ \to \Xi^{-} \pi ^+ \pi^+ )} =1.00 \pm 0.49\,.
\end{aligned}
\end{equation}
Combining Eq.~\eqref{38} with the recent experimental absolute branching fraction~\cite{XPP}
\begin{equation}
{\cal B}(\Xi_c^+ \to \Xi^- \pi^+\pi^+) = (2.9\pm 1.3)\%\,,
\end{equation}
we arrive at
\begin{eqnarray}
{\cal B} (\Xi_c^+ \to \Xi^{\prime 0} \pi ^+ ) < 4.2 \times 10^{-3}\,,~~~
{\cal B} (\Xi_c^+ \to \Sigma^{\prime +} \overline{K}^0 ) = (2.9 \pm 2.0) \%\,.
\end{eqnarray}
The upper bound of ${\cal B} (\Xi_c^+ \to \Xi^{\prime 0} \pi ^+ )$ is very close to our result with $|\theta_c| = 0.137\pi$, but too small compared to the one with $|\theta_c |= 0.2 \pi$.
On the other hand, the experimental value of ${\cal B} (\Xi_c^+ \to \Sigma^{\prime +} \overline{K}^0 ) $ supports the existence of the mixing. However, the uncertainties are too large at the current stage to draw a firm conclusion.
We strongly recommend that future experiments measure $\Xi_c\to \Xi' e^+ \nu_e$ and revisit $\Xi_c^+ \to \Xi' \pi^+ $ and $\Xi_c^+ \to \Sigma^{\prime +} \overline{K}^0 $.
\section{conclusion}
We have proposed the existence of the $\Xi_c-\Xi_c'$ mixing to resolve the tension between the experimental measurements and theoretical expectations in $\Xi_c^0 \to \Xi^- e^+ \nu_e$.
We have analyzed the $\Xi_Q- \Xi_Q'$ mixings model-independently from the mass spectra and obtained $|\theta_c |= 0.137(5)\pi$ and $|\theta_b | = 0.049(13)\pi$.
With $\theta_c = -0.137\pi$,
we have found that ${\cal B}(\Xi^0_c \to \Xi^- e^+ \nu_e) = (3.02 \pm 0.58) \%$ and ${\cal B}(\Xi^+_c \to \Xi^0 e^+ \nu_e ) = (9.00 \pm 1.74) \%$, which are consistent with the results by the ALICE collaboration~\cite{ALICE:2021bli} and LQCD~\cite{Lattice}.
We have shown that $R(\theta_c = -0.137\pi )= 0.70\pm 0.09$ and $R(\theta_c =-0.2\pi) = 0.57\pm 0.08$, which are consistent with $R_{av}^{(\prime)}=0.59\pm 0.10~(0.46\pm 0.07)$ from the experiments~\cite{ALICE:2021bli,Belle:2021crz} and LQCD~\cite{Lattice}, and successfully resolve the puzzle in $\Xi_c^0 \to \Xi ^- e^+ \nu_e$.
We recommend the future experiments to measure $\Xi_c \to \Xi' e^+ \nu_e$, of which the branching fractions for $\Xi_c^0$ and $\Xi_c^+$ have been evaluated to be $(4.4\pm 0.5) \times 10^{-3}$ and $(1.3\pm 0.2)\%$ with $|\theta_c| = 0.137\pi $, and $(8.7\pm1.0)\times 10^{-3}$ and $(2.6\pm0.4)\%$ with $|\theta_c| = 0.2 \pi$, respectively.
A nonvanishing branching fraction of these channels in experiments will be a smoking gun of the $\Xi_c-\Xi_c'$ mixing. In addition, the nonleptonic decays of $\Xi_c^+ \to \Xi' \pi^+$ and $\Xi_c^+ \to \Sigma^{\prime +} \overline{K}^0$ are forbidden without the mixing also.
We have shown that ${\cal B}(\Xi_c^+ \to \Xi^{\prime 0}\pi^+) = (3.8\pm 0.5)\times 10^{-3}$ and ${\cal B} (\Xi_c^ + \to \Sigma^{\prime +} \overline{K}^0 ) = (6.6 \pm 1.0 )\times 10^{-4}$ with $|\theta_c| = 0.137\pi$, and
${\cal B}(\Xi_c^+ \to \Xi^{\prime 0}\pi^+) = (7.5\pm 1.0)\times 10^{-3}$ and ${\cal B} (\Xi_c^ + \to \Sigma^{\prime +} \overline{K}^0 ) = (1.3 \pm 0.2 )\times 10^{-3}$ with $|\theta_c| = 0.2\pi$.
These decay channels are accessible at Belle, Belle II, and LHCb. It is worth noting that the Belle collaboration has already observed signals of $\Xi_c^+ \to (\Xi^- \pi^+ \pi^+, \Lambda K_S^0 \pi^+)$~\cite{Belle:2016lhy}, which can come from the cascade decays of $\Xi_c^+ \to (\Xi^{\prime 0 } \pi^+, \Sigma^{\prime +} \overline{K}^0 )$, respectively.
We emphasize that the mixing is sizable and shall be considered seriously in the studies of the charmed baryons. In particular, most of the $SU(3)_F$ relations in the literature are broken, and revisitations are clearly required.
Finally, we remark that although the $\Xi_b-\Xi_b'$ mixing is found to be small, its effects should also be explored in future studies.
\begin{acknowledgments}
We would like to thank Long-Ke Li and Bing-Dong Wan for valuable discussions.
This work is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201501 and the National Natural Science Foundation of China (NSFC) under Grant No. 12147103.
\end{acknowledgments}
\section{Introduction}
Imposing the Ginsparg-Wilson (GW) relation \cite{gi82} in
Refs.~\cite{ha98,lu98} a particular formula for the index of the massless
Dirac operator $D$ on the lattice has been given. Chiu \cite{ch98} has observed
that with the simple form of the GW relation and $\gamma_5$-hermiticity the Dirac
operator $D$ gets normal and its index and the corresponding difference at its
second real eigenvalue add up to zero. This raises the questions of what
precisely the general conditions for chiral properties are and which general
rules follow.
Assuming the GW relation, L\"uscher \cite{lu98} has introduced an alternative
chiral transformation under which the measure is no longer invariant, a
generalized finite form of which has been given by Chiu \cite{ch99}. This
is reminiscent of an old claim by Fujikawa \cite{fu79} in continuum theory
that the chiral anomaly could be obtained from the measure.
Thus it appears appropriate now to clarify in general what the r\^ole of the
integration measure is.
Neuberger \cite{ne98} has derived an explicit form of the massless Dirac
operator on the lattice from the overlap formalism \cite{na93}. It is of
interest whether this form, which relies on the hermitean Wilson-Dirac operator
$H$, also follows from other general requirements and under which conditions it
is the only solution of the GW relation.
In the following we start from the general Ward identity holding in a
background gauge field. In this context we also introduce a family of
alternative chiral transformations which give the same Ward identity but
allow one to transport terms between the action contribution and
the measure contribution.
Next we show that it is necessary, in addition to $\gamma_5$-hermiticity, to have
normality of the Dirac operator $D$ in order to get chiral eigenstates and
thus chiral properties. We derive general consequences for the terms in the
Ward identity, which then gets the general sum rule for chiral differences.
This rule is seen to put severe restrictions on the spectrum of $D$ which are
crucial for allowing a nonvanishing index.
With respect to the GW relation we notice that its general form does not
guarantee normality of $D$ so that we have to restrict to its simple form. We
observe that, given $\gamma_5$-hermiticity, this relation is actually a spectral
constraint. Using a decomposition of $D$ we give an example of a family of
spectral constraints of which the GW one is a member. The alternative
transformation which transports the anomaly term to the measure in the GW case
is seen to get that of L\"uscher.
We point out that in the continuum limit the index theorem follows
from the lattice Ward identity and one has still Tr$(\gamma_5)=0$. With different
origins of the anomaly term, there is agreement with the expectations of
conventional continuum theory. However, the derivation of Fujikawa's
path-integral approach turns out not to agree with what is well defined from
the lattice. A correction of this approach would include the use of an
alternative transformation.
To study what follows starting from the hermitean Wilson-Dirac operator $H$
we require $D$ to be normal, $\gamma_5$-hermitean, and a general function of $H$.
This is seen to lead to the operator of Neuberger. It also establishes the
connection to the GW relation.
\section{Ward identities} \label{Wd}
Fermionic Ward identities arise from the condition that
$\int [\mbox{d}\bar{\psi}\mbox{d}\psi] \mbox{e}^{-S_\f} \cao$ must not change under a transformation of the
integration variables. Considering in particular the transformation
$\psi'= \exp(i\eta\G)\psi$, $\bar{\psi}'= \bar{\psi} \exp(i\eta\bar{\G})$
this can be expressed by the identity
\begin{equation}
\frac{\di}{\di\eta} \int[\mbox{d}\bar{\psi}'\mbox{d}\psi'] \mbox{e}^{-S_\f'} \cao' \Big|_{\eta=0} = 0
\label{w0s}
\end{equation}
where $S_\f'=\bar{\psi'}M\psi'$. Evaluation of \re{w0s} gives
\begin{equation}
i\int[\mbox{d}\bar{\psi}\mbox{d}\psi] \mbox{e}^{-S_\f}\Big(-\mbox{Tr}(\bar{\G}+\G)\cao
-\bar{\psi}(\bar{\G}M+M\G)\psi\cao +
\bar{\psi}\bG\frac{\pt \cao}{\pt \bar{\psi}} -
\frac{\pt \cao}{\pt \psi}\G\psi \Big) = 0 \; ,
\label{w1s}
\end{equation}
with three contributions, one from the derivative of the
integration measure, one from that of the action, and one from that of $\cao$.
In the present context one usually puts $\cao=1$. We can, however, do better
integrating out the $\psi$ and $\bar{\psi}$ fields in the second term of
\re{w1s}, which after a calculation relying on Grassmann properties gives
\begin{equation}
i\;\mbox{Tr}\Big(-\bar{\G}-\G + M^{-1}(\bar{\G}M+M\G)\Big)
\int[\mbox{d}\bar{\psi}\mbox{d}\psi] \mbox{e}^{-S_\f}\cao =0 \;.
\label{iW}
\end{equation}
From \re{iW} it
becomes obvious that in a background gauge field the expectation value
factorizes so that also for arbitrary $\cao$ (and not only for $\cao=1$) it
suffices to consider the identity
\begin{equation}
\frac{1}{2}\mbox{Tr}\Big(-\bar{\G}-\G + M^{-1}(\bar{\G}M+M\G)\Big)=0
\label{W0}
\end{equation}
where $-\frac{1}{2}\mbox{Tr}(\bG+\G)$ is the measure contribution and
$\frac{1}{2}\mbox{Tr}\Big(M^{-1}(\bar{\G}M+M\G)\Big)$ the action contribution.
For the global chiral transformation, in which case one has $\G=\bar{\G}=\gamma_5$,
the measure contribution vanishes and \re{W0} becomes
\begin{equation}
\frac{1}{2}\mbox{Tr}(M^{-1}\{\gamma_5,M\})=0 \;.
\label{WM}
\end{equation}
Obviously this can also be read as Tr$(\gamma_5)=0$, of which the Ward identity is
the particular decomposition which is dictated by the chiral transformation.
In the presence of zero modes of a Dirac operator $D$ it is crucial to take
care that the derived relations remain properly defined. To guarantee this
we put $M=D+\varepsilon$ and let $\varepsilon$ go to zero in the final result.
Thus from \re{WM} we altogether have
\begin{equation}
\mbox{Tr}(\gamma_5) = \frac{1}{2}\mbox{Tr}\Big((D+\varepsilon)^{-1}\{\gamma_5,D\}\Big)+
\varepsilon\mbox{Tr}\Big((D+\varepsilon)^{-1}\gamma_5\Big)=0
\label{gwa2}
\end{equation}
with $\gamma_5$, of course, understood as $\gamma_5$ times the appropriate unit operator.
To have definite names in our discussions we shall call the first term in
\re{gwa2}, which only contributes if $\{\gamma_5,D\}\ne 0$, anomaly term, and the
second one mass term or index term (for $\varepsilon\rightarrow0$). The
operator $D$ is considered to be massless.
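That the two terms in \re{gwa2} indeed add up to Tr$(\gamma_5)=0$ at any
finite $\varepsilon$, and for an arbitrary operator $D$, is easily checked
numerically; a small Python sketch with a random matrix (purely
illustrative) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 0.1
g5 = np.kron(np.diag([1.0, -1.0]), np.eye(n))     # toy gamma_5
D = rng.normal(size=(2*n, 2*n)) + 1j*rng.normal(size=(2*n, 2*n))
Minv = np.linalg.inv(D + eps*np.eye(2*n))

anomaly = 0.5*np.trace(Minv @ (g5 @ D + D @ g5))  # anomaly term
mass    = eps*np.trace(Minv @ g5)                 # mass (index) term
print(abs(anomaly + mass))                        # -> 0 up to rounding
\end{verbatim}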
We note that a family of alternative global chiral transformations can be
introduced putting $\G=\gamma_5-K$, $\bar{\G}=\gamma_5-\bar{K}$, which inserted
into \re{W0} gives
\begin{equation}
-\frac{1}{2}\mbox{Tr}(\bG+\G)=+\frac{1}{2}\mbox{Tr}(K+\bar{K})
\label{mes}
\end{equation}
for the measure contribution and
\begin{equation}
\frac{1}{2}\mbox{Tr}\Big(M^{-1}(\bG M+M\G)\Big)=
\frac{1}{2}\mbox{Tr}(M^{-1}\{\gamma_5,M\})-\frac{1}{2}\mbox{Tr}(K+\bar{K})
\label{act}
\end{equation}
for the action contribution. Obviously the extra term of the latter cancels
the measure term so that again the result \re{WM} is obtained for any
operators $K$ and $\bar{K}$.
While the Ward identity remains the same for these transformations, they
may be used to change the origin of its terms. For example, with
\begin{equation}
K=\frac{1}{2}M^{-1}\{\gamma_5,D\} \mb{,} \bar{K}=\frac{1}{2}\{\gamma_5,D\}M^{-1}
\label{tg}
\end{equation}
the anomaly term of \re{gwa2} is transported from the action contribution
to the measure contribution.
To get the local chiral transformations one simply has to replace $\gamma_5$ of
the global cases by $\gamma_5\hat{e}(n)$, where $\hat{e}(n)$ in lattice-space
representation reads
$\big(\hat{e}(n)\big)_{n''n'}=\delta_{n''n}\delta_{nn'}$. Thus to see the
essential features it will suffice to consider the relations of the global
case in the following.
\section{Chiral properties}
The derivation of the identity \re{W0} implies that the occurring operators,
acting on a vector space (with dimension number of sites times spinor dimension
times gauge-group dimension) map to this space itself. In fact, instead of the
Grassmann integrals one can equivalently consider minors and determinants, or
generalizations thereof \cite{ke84}, for which this is a prerequisite.
The definition of adjoint operators in addition needs an inner product
so that the vector space must be a unitary one. This then allows to define
normal operators (and their special cases as e.g.~hermitean and unitary ones)
and is also necessary for using the notion of $\gamma_5$-hermiticity.
We require $D$ to be normal and $\gamma_5$-hermitean which will be seen in the
following to be necessary for really having chiral properties. By normality,
$[D,D^{\dagger}]=0$, one gets simultaneous eigenvectors of $D$ and $D^{\dagger}$. Then with
the eigenequation
\begin{equation}
D f_k = \lambda_k f_k
\label{eg}
\end{equation}
we also have
\begin{equation}
D^{\dagger} f_k = \lambda_k^* f_k
\label{egdg}
\end{equation}
where the inner product has also been used to obtain the eigenvalue. From
\re{egdg} by $\gamma_5$-hermiticity, $D^{\dagger} = \gamma_5 D \gamma_5$, we obtain the equation
\begin{equation}
D\gamma_5 f_k = \lambda_k^*\gamma_5 f_k
\label{eg5}
\end{equation}
which has important consequences.
The comparison of \re{eg} multiplied by $\gamma_5$ with \re{eg5} gives
\begin{equation}
[\gamma_5,D] f_k = 0 \quad \mbox{ if } \quad \lambda_k \mbox{ real }
\label{egc}
\end{equation}
so that in the subspace of real eigenvalues of $D$ one can
introduce simultaneous eigenvectors of $D$ and of $\gamma_5$
\begin{equation}
\gamma_5 f_k=c_k f_k \mb{for} \lambda_k \mb{real}
\label{sD}
\end{equation}
with the chirality $c_k$ taking values $+1$ and $-1$. The
comparison of \re{eg} with \re{eg5} shows that one has simultaneously
\begin{equation}
D f_k = \lambda_k f_k \mb{and} D \gamma_5 f_k =\lambda_k^* \gamma_5 f_k \mb{for}
\lambda_k\ne\lambda_k^* \;,
\label{sC}
\end{equation}
i.e. pairs of eigenvectors related to conjugate complex eigenvalues.
Conversely, given $\gamma_5$-hermiticity, \re{eg5} implies normality of $D$.
Further, having the chiral subspace, \re{eg5} follows for real eigenvalues.
For complex ones it follows from \re{sC}. Thus to have \re{egc} and \re{sC}
normality of $D$ is also necessary.
Normality thus turns out to be necessary to get the chiral subspace which is
the basis of chiral properties of $D$. In addition normality of $D$, being
necessary and sufficient in order that its eigenvectors form a complete
orthonormal set, guarantees the completeness of eigenvectors which in the
following will be seen to be crucial for the index relations.
We note that if one tried to do without \re{sC}, one would have normality
of $D$ only in the subspace of real eigenvalues. To specify $D$ generally in
such a way appears not feasible. Further, in the subspace of complex
eigenvalues completeness of the eigenvectors then would only be guaranteed if
there were no degeneracies of eigenvalues \cite{ka66}. To specify $D$ generally
such that the respective nondiagonable cases are excluded appears again not
feasible.
Thus we must insist that $D$ be normal in all unitary space.
Multiplying \re{eg} from the left by $f_l^{\dagger} \gamma_5$ and its adjoint
$f_l^{\dagger} D^{\dagger} = f_l^{\dagger} \lambda_l^*$ from the right by $\gamma_5 f_k$, and using
$\gamma_5$-hermiticity, one obtains the relation
\begin{equation}
f_l^{\dagger} \gamma_5 f_k = 0 \quad \mbox{ for } \quad \lambda_l^* \ne \lambda_k \;.
\label{nela}
\end{equation}
We note that if one makes use of the properties given by \re{sD} and \re{sC}
the relation \re{nela} reflects the orthogonality of eigenvectors with
different eigenvalues. This is most easily seen introducing for the
eigenvectors $f_k$ the more detailed notations
$f_k^{(5)}$ for Im$\lambda_k=0$, $f_k^{(1)}$ for Im$\lambda_k > 0$, and
$f_k^{(2)}=\gamma_5 f_k^{(1)}$ for Im$\lambda_k < 0$.
With \re{sD}, \re{nela}, and the completeness of the eigenvectors of $D$
we obtain for the terms in the identity \re{gwa2} and for this identity itself
\begin{equation}
\lim_{\varepsilon\rightarrow 0}
\mbox{Tr}\Big((D+\varepsilon)^{-1}\gamma_5\varepsilon \Big)= N_+(0) - N_-(0)
\label{re0}
\end{equation}
\begin{equation}
\lim_{\varepsilon\rightarrow 0}\,
\frac{1}{2}\mbox{Tr}\Big((D+\varepsilon)^{-1}\{\gamma_5,D\}\Big) =
\sum_{\lambda\ne 0 \mbox{ \scriptsize real }} \Big(N_+(\lambda) - N_-(\lambda)\Big)
\label{re1}
\end{equation}
\begin{equation}
\mbox{Tr}(\gamma_5)=
\sum_{\lambda \mbox{ \scriptsize real }}\Big(N_+(\lambda)-N_-(\lambda)\Big) =0
\label{res}
\end{equation}
where the numbers of modes with chirality $\pm 1$ at a real eigenvalue
$\lambda$ of $D$ are given by $N_{\pm}(\lambda) =
\sum_{k\;(\lambda_k=\lambda \,\mbox{\scriptsize real})} (1\pm c_k)/2$.
It is seen that \re{re0} gives, up to the sign, the index $N_-(0) - N_+(0)$ of $D$. The
r.h.s.~of \re{re1} exhibits a form characteristic of the anomaly term. The sum
rule for real modes \re{res} shows that one has the same total number of
right-handed and of left-handed modes. The mechanism leading to a nonvanishing
index thus is seen to work via compensating numbers of modes at different
$\lambda$.
From \re{res} it follows that the index of $D$ can only be nonvanishing if
a corresponding difference from nonzero eigenvalues exists. This requires
that in addition to 0, allowing for zero modes, there must be at least one
further real value available in the spectrum in order that the index can be
nontrivial. Thus it turns out that this sum rule puts severe restrictions on
the spectrum of $D$. Obviously it is a novel manifestation of the fact
that a nontrivial index requires breaking of the chiral symmetry.
\section{Remarks on GW relation}
From the general GW relation \cite{gi82} $\{\gamma_5,D\}= D \gamma_5 R D$, using
$\gamma_5$-hermiticity of $D$ and $[\gamma_5,R]=0$, one obtains
$[D,D^{\dagger}]=2D^{\dagger}[R,D]D^{\dagger}$. Therefore one should have $[R,D]=0$ in order
that $D$ gets normal which, as we have seen, is crucial for chiral
properties and their consequences in gauge theories. Because it is necessary
to satisfy the relation $[R,D]=0$ in a general way this means to put $R$ equal
to a multiple of the identity.
Thus, having to insist on normality of $D$, we
remain with the simple form of the GW relation
\begin{equation}
\{\gamma_5,D\}= \rho^{-1} D \gamma_5 D
\label{GW}
\end{equation}
with $\rho$ being a real constant.
Requiring also $\gamma_5$-hermiticity of $D$, the condition \re{GW} means that
$\rho(D+D^{\dagger})=D D^{\dagger}=D^{\dagger} D$ should hold, i.e.~that $D/\rho-1$
should be unitary. Thus the actual content of \re{GW} is the restriction of
the spectrum of $D$ to the circle through zero with center at $\rho$.
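Explicitly, unitarity of $D/\rho-1$ means that any eigenvalue $\lambda$ of $D$
satisfies
\[
\Big|\frac{\lambda}{\rho}-1\Big| = 1 \;, \qquad \mbox{i.e.} \qquad
(\mbox{Re}\,\lambda-\rho)^2 + (\mbox{Im}\,\lambda)^2 = \rho^2 \;.
\]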
The crucial properties then are that real eigenvalues become possible at $0$
and at $2\rho$, allowing for zero modes and for a nonzero index, respectively.
Imposing the GW relation \re{GW}, in the massless case the anomaly term in
\re{gwa2} can be expressed as
\begin{equation}
\lim_{\varepsilon\rightarrow 0}\,
\frac{1}{2}\mbox{Tr}\Big((D+\varepsilon)^{-1}\{\gamma_5,D\}\Big) =
(2\rho)^{-1}\mbox{Tr}(\gamma_5 D)
\label{rgw}
\end{equation}
so that the identity \re{gwa2} can be replaced by
\begin{equation}
\mbox{Tr}(\gamma_5) = (2\rho)^{-1}\mbox{Tr}(\gamma_5 D) +
\lim_{\varepsilon\rightarrow 0}
\mbox{Tr}\Big((D+\varepsilon)^{-1}\gamma_5\varepsilon \Big)=0 \;.
\label{gwa3}
\end{equation}
The relation $\mbox{Tr}(\gamma_5 D)=
\sum_{\lambda\ne 0 \mbox{ \scriptsize real }} \lambda\,\Big(N_+(\lambda) - N_-(\lambda)\Big)$,
which would not be useful in the general case, now simplifies to
$\mbox{Tr}(\gamma_5 D)= 2\rho \Big(N_+(2\rho) - N_-(2\rho)\Big)$ and the sum rule
\re{res} to $\mbox{Tr}(\gamma_5) = N_+(0) - N_-(0)+N_+(2\rho) - N_-(2\rho)=0$.
The combination of these relations is what gives the formula
\begin{equation}
(2\rho)^{-1}\mbox{Tr}(\gamma_5 D) = N_-(0) - N_+(0)
\label{tr1}
\end{equation}
considered in \cite{ha98,lu98} for the index.
Using \re{GW}, one can replace \re{tg} of the alternative transformation
transporting the anomaly term by
$K=(2\rho)^{-1}\gamma_5 D$, $\bar{K}=(2\rho)^{-1}D\gamma_5$. This obviously gives the
transformation introduced in the GW case by L\"uscher \cite{lu98}, tailored to
make the classical action $\bar{\psi} D\psi$ invariant. The measure
contribution \re{mes} then gets $(2\rho)^{-1}\mbox{Tr}(\gamma_5 D)$. However, there
still remains the action contribution
$\lim_{\varepsilon\rightarrow 0}\,
\mbox{Tr}\Big((D+\varepsilon)^{-1}\gamma_5\varepsilon \Big)$.
The remaining action contribution is missing in \cite{lu98} since no zero-mode
regularization has been used. Thus it appears there as if the action were also
invariant in the quantum case with zero modes, which is not correct. In a
separate next step, which implicitly uses the decomposition \re{gwa3} of
Tr$(\gamma_5)=0$, what should have been obtained from the action contribution
is instead calculated from the measure term. This does not cure the omission
in the originally derived identity.
Clearly one can think of many possibilities satisfying the requirement that
in the spectrum one should allow for at least one further real value in
addition to 0, as imposed by the sum rule \re{res}. For finding appropriate
constraints the decomposition
\begin{equation}
D = u + i v \mb{with}
u= u^{\dagger}= \frac{1}{2} (D+D^{\dagger}) \quad , \quad v= v^{\dagger} = \frac{1}{2i} (D-D^{\dagger})
\end{equation}
appears useful. The reason for this is that by normality of
$D$ one obtains $\,[u,v]=0$ so that for $u$, $v$, and $D$ one gets
simultaneous eigenvectors and the eigenvalues of $u$ and $v$ are simply
the real and imaginary parts, respectively, of those of $D$.
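Indeed, one has $[D,D^{\dagger}]=[u+iv,u-iv]=-2i\,[u,v]$, so that normality of
$D$ is equivalent to $[u,v]=0$.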
For example, one may use this to define a family of constraints to a
one-dimensional set, allowing eigenvalues at zero and at one further real
value, by
\begin{equation}
v^2=2\rho u + (\beta -1) u^2 \mb{with} \beta\ge 0,\; \beta\ne 1
\end{equation}
in which case the spectrum is restricted for $\beta=0$ to the circle of the GW
case, for $0<\beta<1$ to ellipses, and for $1<\beta$ to hyperbolas. Inserting
$u$ and $v$ and using $\gamma_5$-hermiticity this may be cast into the form
\begin{equation}
\{\gamma_5,D\}=\rho^{-1}\Big((1-\frac{\beta}{2})D\gamma_5 D-\frac{\beta}{4}\{\gamma_5,D^2\}
\Big)
\end{equation}
which generalizes the relation \re{GW}.
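For the real eigenvalues ($v=0$) the constraint gives
$u\,\big(2\rho-(1-\beta)u\big)=0$, i.e.~in addition to $0$ the one further real
value $2\rho/(1-\beta)$, which for $\beta=0$ reduces to the value $2\rho$ of
the GW case.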
\section{Continuum limit}
For the present purpose it suffices to consider the continuum limit to the
quantum field theory of fermions in a background gauge field. The limit of
the anomaly term in the identity \re{gwa2} has been shown to be
\begin{equation}
\frac{1}{2}\mbox{Tr}\Big((D+\varepsilon)^{-1}\{\gamma_5,D\}\Big) \rightarrow
-\,\frac{g^2}{32\pi^2}\int\mbox{d}^4x\;\mbox{tr}(\tilde{F}F)
\label{lim}
\end{equation}
long ago \cite{ke81} for the Wilson-Dirac operator and recently
\cite{ad98,su98} also for the operator of Neuberger \cite{ne98}. In the
latter case one has to note that the l.h.s.~of \re{lim} can be replaced
according to \re{rgw} to get the form used in \cite{ad98,su98}.
Though there are still subtleties \cite{ad98} which deserve further
development, it can be expected that any appropriate form of $D$ should
give \re{lim}.
In the massless case with normal and $\gamma_5$-hermitean $D$ we can insert
\re{re0} and \re{lim} into the identity \re{gwa2} to obtain
\begin{equation}
\mbox{Tr}(\gamma_5)=-\,\frac{g^2}{32\pi^2}\int\mbox{d}^4x\;\mbox{tr}(\tilde{F}F)+
N_+(0) - N_-(0)=0 \;.
\label{ind}
\end{equation}
Thus obviously the index theorem follows in the limit. To see that one also
still has $\mbox{Tr}(\gamma_5)=0$ one has to note that any complete set of vectors
can be used to calculate Tr$(\gamma_5)$. In particular, one may select a set which
exploits the fact that the spinor space factorizes off. Since in the latter
space one has tr$(\gamma_5)=0$, the sequence for Tr$(\gamma_5)$ with decreasing lattice
spacing is one with all members zero, so that one has indeed $\mbox{Tr}(\gamma_5)=0$
also in the limit.
We emphasize that, quite remarkably, the index theorem follows here in a
rather different setting from that of mathematics. There the Atiyah-Singer
theorem is obtained solely considering the continuum Dirac (or Weyl) operator
on a compact manifold finding that its index equals a topological invariant.
Here we consider the nonperturbative formulation of the quantum field theory
of fermions in a background gauge field and derive the chiral Ward identity.
This identity then gives the index theorem. An essential property of this
theory is that a chirally noninvariant modification occurs in its action.
Additional features of the field-theoretic setting are that the Ward identity
is a particular decomposition of Tr$(\gamma_5)=0$ and that one gets a local version
explaining the nonconservation of the singlet axial-vector current.
We now compare with the conventional continuum approach, in which (in our
notation) the operator $D$ is antihermitean (and thus also normal) and
$\gamma_5$-hermitean. Because one then has $\{\gamma_5,D\}=0$, the anomaly term in
the identity \re{gwa2} vanishes. However, at the level of the Ward
identity in perturbation theory (in the well known triangle diagram) one
gets an ambiguity which, if fixed in a gauge-invariant way, produces the
anomaly term \cite{ad69}. Thus, though with different origin of this
term, there one gets agreement with \re{ind}.
Nevertheless, there is an essential difference. While in the continuum the
chirally noninvariant modification of the theory occurs only at the level of
the Ward identity, on the lattice the origin of the anomaly sits in the action
itself. Thus, since deriving things from the start is more satisfactory than
only fixing inconsistencies later by hand, the lattice formulation is the
preferable one. The absence of an appropriate modification at the level of
the action in the continuum approach has the concrete consequence that there
are difficulties with making it truly nonperturbative.
This is seen noting that the respective attempts rely on the Pauli-Villars
(PV) term. The motivation there is that in perturbation theory in the PV
difference ambiguous contributions, being mass-independent, drop out so that
the PV term gives the anomaly \cite{ad69}. Assuming the PV term to be
nonperturbatively valid the desired result is obtained neglecting higher
orders in the PV mass \cite{br77}. However, one actually gets zero, as one
readily checks using Tr$(\gamma_5)=0$; the neglect of the sum of higher orders
is not correct. This does not come as a surprise since in the lattice
formulation it is obvious that a chirally noninvariant modification of the
action is indispensable to get the correct result.
In the path-integral approach \cite{fu79} the usual chiral transformation
is used so that in the global case the measure contribution is $-$Tr$(\gamma_5)$.
Arguing that it should be regularized, this contribution, which is actually
zero, is replaced by a term which can be checked to be equivalent to the
PV term in \cite{br77} and from which essentially as in \cite{br77},
i.e.~incorrectly as pointed out above, the anomaly is obtained. On the other
hand, in the Ward identity the anomaly term is then not added to the mass
(index) term in the action contribution, as would have been necessary in
the continuum \cite{ad69}. This compensates the unjustified replacement of
Tr$(\gamma_5)$ so that the desired result is obtained.
From our results it is obvious that to correct the procedure of
\cite{fu79} one firstly has to use the alternative transformation with \re{tg}
which transports the anomaly term to the measure and secondly to take care
that this term emerges properly in a nonperturbative way (requiring an
appropriate modification of the action as is e.g.~provided by Wilson's
regularization suppressing doublers in lattice theory). It should be
added that the defects of the approach in \cite{fu79} also invalidate
recent lattice considerations \cite{fu99} which rely on it.
\section{Normal $D$ from hermitean $H$}
To get an explicit form of $D$ one can start from the Wilson-Dirac operator or
some generalization of it, which is $\gamma_5$-hermitean, however, not normal. The
$\gamma_5$-hermiticity of this operator $X$ implies that $H=\gamma_5 X$ is even hermitean
so that its spectral representation allows one to define functions of $H$.
This suggests obtaining a normal operator $D$ from a general function of $H$ by
imposing the necessary conditions.
To proceed it is convenient to consider $F=\gamma_5 D$ which should generalize
$H=\gamma_5 X$. From $\gamma_5$-hermiticity of $D$ it follows that $F$ must be hermitean
and normality of $D$ gives the condition $[\gamma_5,F^2]=0$. This does, however,
not yet determine $F$. In fact, with $[\gamma_5,E^2]=0$ for some hermitean operator
$E$ the conditions on $F$ are satisfied by $F^2=E^2+\{\gamma_5,Y\}+c$ where $Y$ is
some hermitean operator and $c$ some real number. From the fact that $F^2$
is a square it then follows that one must have $c=b^2$ nonnegative and $Y=b E$.
One thus arrives at $F=E+b\gamma_5$ in which the operator $E$ and the real number
$b$ are to be determined.
With $ H \phi_l = \alpha_l \phi_l $ the definition of $E$ as a function
of $H$ is $E(H) = \sum_l E(\alpha_l) \phi_l \phi_l^{\dagger}$ where $E(\alpha)$ is a
real function of real $\alpha$. The task then is to determine $E(\alpha)$ in
such a way that the condition $[\gamma_5,E(H)^2]=0$ holds. Because $H$ does not
commute with $\gamma_5$ and since we are not allowed to restrict $H$ in any way
this can only be achieved by requiring the function $E(\alpha)^2$ in
$E(H)^2 = \sum_l E(\alpha_l)^2 \phi_l \phi_l^{\dagger}$ to be constant. Thus we get
$E(H)^2=\rho^2\Id$ and $E(\alpha)=\pm\rho$ with $\rho$ being a real constant.
From $E(H)^2=\rho^2\Id$ we see that $\gamma_5 E(H)/\rho$ is unitary so that the
spectrum of $\gamma_5 E(H)$ is on a circle with radius $|\rho|$ and center at zero.
To allow for zero modes of $D$, in $D=\gamma_5 F= \gamma_5 E+b$ we therefore have to
choose $b=\rho$ or $b=-\rho$ and cannot admit any dependence of $b$ on $H$.
Without restricting generality taking $b=\rho$ we thus get
$D=\rho\,(1+\gamma_5\epsilon_0(H))$ where $\epsilon_0(\alpha)=\pm 1$. For this
form of $D$ obviously already the GW relation \re{GW} holds.
For $\alpha\ne 0$, requiring the function $\epsilon_0(\alpha)$ to be odd and
nondecreasing, it becomes the sign function defined by $\epsilon(\alpha)=\pm 1$
for $\alpha{>\atop <} 0$. That this choice is appropriate is confirmed by
checking the classical continuum limit in the free case. Thus if all
$\alpha_l\ne 0$ we obtain
\begin{equation}
D=\rho\,\Big(1+\gamma_5 \epsilon(H)\Big)
\label{neu}
\end{equation}
which is seen to be just the operator of Neuberger \cite{ne98}.
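That \re{neu} indeed satisfies the GW relation \re{GW} is checked in one line,
using $\gamma_5^2=\Id$ and $\epsilon(H)^2=\Id$,
\[
\rho^{-1} D\gamma_5 D = \rho\,(1+\gamma_5\epsilon)\,\gamma_5\,(1+\gamma_5\epsilon)
= \rho\,\big(2\gamma_5+\epsilon+\gamma_5\epsilon\gamma_5\big) = \{\gamma_5,D\} \;,
\]
where $\epsilon=\epsilon(H)$.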
If eigenvalues $\alpha_l=0$ also occur, $\epsilon(0)$ is to be specified. Because
of the condition $E(H)^2=\rho^2\Id$ only either $+1$ or $-1$ is possible for this.
To prefer none of these choices we propose to calculate \re{neu} independently
for each choice of $\epsilon(0)$ and to take the mean of the final results.
As will be seen below this gives agreement with what follows from counting
eigenvalue flows of $H$.
Getting the index of $D$ by counting eigenvalue flows of $H$ has been
introduced in \cite{na93}. These flows with $m$ rely on the form
$H(m)=H(0)+m\gamma_5$ of the hermitean Wilson-Dirac operator and, with the
eigenequation $ H(m) \phi_l(m) = \alpha_l(m) \phi_l(m)$, are described by the
functions $\alpha_l(m)$. We have recently shown \cite{ke99} that these spectral
flows obey a differential equation and have given a detailed overview of the
solutions of this equation.
The relation to the index can be obtained inserting \re{neu} into \re{tr1}
which in the absence of zero modes of $H$ gives
$ N_-(0) - N_+(0)=\frac{1}{2}\mbox{Tr}(\epsilon(H))$ and in terms of numbers
of positive and negative eigenvalues of $H$ reads
$N_-(0)-N_+(0)=\frac{1}{2}(N_+^H-N_-^H)$. We now note that this form is also
adequate in the presence of zero modes of $H$. In fact, following a flow, up
to crossing there is a change by $\frac{1}{2}$ and after this a further change
by $\frac{1}{2}$. At the very moment of crossing a change of $\frac{1}{2}$
is reached which obviously agrees with the respective result of the procedure
of dealing with $\epsilon(0)$ proposed above.
\section*{Acknowledgement}
I wish to thank Ting-Wai Chiu for his generous hospitality at CHIRAL '99,
Taipei, Taiwan, Sept.~13-18, 1999, and him and all organizers for making
this meeting such an inspiring and enjoyable one.
\section{Bias Amplification in Binary Classifiers}
\label{sect:bias-def}
In this section, we define bias amplification for binary classifiers, and show that in some cases it may be unavoidable.
Namely, a Bayes-optimal classifier trained on poorly-separated data can end up predicting one label nearly always, even if the prior label bias is minimal.
While our analysis makes strong generative assumptions, we show that its results hold qualitatively on real data that resemble these assumptions.
We begin by formalizing the setting.
We consider the standard binary classification problem of predicting a label $y \in \{0,1\}$ given features $\ensuremath{\mathbf{x}}\xspace = (x_1, \ldots, x_d)\in\ensuremath{\mathcal{X}}\xspace$.
We assume that data are generated from some unknown distribution $\ensuremath{\mathcal{D}}\xspace$, and that the prior probability of $y = 1$ is $p^*$.
Without loss of generality, we assume that $p^* \ge 1/2$.
The learning algorithm receives a training set $S$ drawn i.i.d. from $\ensuremath{\mathcal{D}}\xspace^n$ and outputs a predictor $h_S : \ensuremath{\mathcal{X}}\xspace \to \{0,1\}$ with the goal of minimizing 0-1 loss on unknown future i.i.d. samples from \ensuremath{\mathcal{D}}\xspace.
\begin{definition}[Bias amplification, systematic bias]
\label{def:bias-amp}
Let $h_S$ be a binary classifier trained on $S\sim\ensuremath{\mathcal{D}}\xspace^n$.
The \emph{bias amplification} of $h_S$ on \ensuremath{\mathcal{D}}\xspace, written $B_\ensuremath{\mathcal{D}}\xspace(h_S)$, is given by Equation~\ref{eq:bias-amp-def}.
\begin{equation}
\label{eq:bias-amp-def}
B_\ensuremath{\mathcal{D}}\xspace(h_S) = \ensuremath{\mathop{\mathbb{E}}}\xspace_{(\ensuremath{\mathbf{x}}\xspace,y)\sim\ensuremath{\mathcal{D}}\xspace}[h_S(\ensuremath{\mathbf{x}}\xspace) - y]
\end{equation}
We say that a learning rule exhibits \emph{systematic bias} whenever it exhibits non-zero bias amplification on average over training samples, i.e. it satisfies Equation~\ref{eq:systematic-def}.
\begin{equation}
\label{eq:systematic-def}
\ensuremath{\mathop{\mathbb{E}}}\xspace_{S\sim\ensuremath{\mathcal{D}}\xspace^n}[B_\ensuremath{\mathcal{D}}\xspace(h_S)] \ne 0
\end{equation}
\end{definition}
Definition~\ref{def:bias-amp} formalizes bias amplification and systematic bias in this setting.
Intuitively, bias amplification corresponds to the probability that $h_S$ predicts class 1 on instances from class 0 in excess of the prior $p^*$.
Systematic bias lifts the definition to learners, characterizing rules that are expected to amplify bias on training sets drawn from \ensuremath{\mathcal{D}}\xspace.
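Operationally, $B_\ensuremath{\mathcal{D}}\xspace(h_S)$ can be estimated on a held-out sample; a minimal
sketch, assuming a fitted classifier exposing a \texttt{predict} callable and
labels in $\{0,1\}$ (names are illustrative):
\begin{verbatim}
import numpy as np

def bias_amplification(predict, X, y):
    # Empirical estimate of B_D(h_S): the mean excess of predicted
    # labels over true labels, i.e. the sample average of h_S(x) - y.
    return float(np.mean(predict(X) - y))
\end{verbatim}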
\input{omniscient}
\input{sgd}
\section{Proofs}
\begin{supptheorem}{\ref{thm:general-optimal}}
Let $\ensuremath{\mathbf{x}}\xspace$ be distributed according to Equation~\ref{eq:gauss-nb}, $y$ be Bernoulli with parameter $p^*$,
$D$ be the Mahalanobis distance between the class means $\ensuremath{\bm{\mu}}\xspace^*_0, \ensuremath{\bm{\mu}}\xspace^*_1$, and
$\beta = -D^{-1}\log(p^*/(1-p^*))$.
Then the bias amplification of the Bayes-optimal classifier $h^*$ is:
\[
B_\ensuremath{\mathcal{D}}\xspace(h^*) = 1 - p^* - (1-p^*)\Phi\left(\beta + \frac{D}{2}\right) - p^*\Phi\left(\beta - \frac{D}{2}\right)
\]
\end{supptheorem}
\noindent\textbf{Proof.}
Note that the Bayes-optimal classifier can be expressed as a linear weighted sum~\citep{murphy-book} in terms of parameters $\hat\ensuremath{\mathbf{w}}\xspace, \hat{b}$ as shown in Equation~\ref{eq:lin-param}.
\begin{align}
\label{eq:lin-param}
\Pr[Y&=1 \mid X=\ensuremath{\mathbf{x}}\xspace] = \Big(1+\exp\big(-(\hat\ensuremath{\mathbf{w}}\xspace^T\ensuremath{\mathbf{x}}\xspace + \hat{b})\big)\Big)^{-1}
\\
\nonumber
\hat\ensuremath{\mathbf{w}}\xspace &= \hat\ensuremath{\bm{\Sigma}}\xspace^{-1}(\hat\ensuremath{\bm{\mu}}\xspace_1 - \hat\ensuremath{\bm{\mu}}\xspace_0)
\\
\nonumber
\hat{b} &= -\frac{1}{2}(\hat\ensuremath{\bm{\mu}}\xspace_1 - \hat\ensuremath{\bm{\mu}}\xspace_0)^T\hat\ensuremath{\bm{\Sigma}}\xspace^{-1}(\hat\ensuremath{\bm{\mu}}\xspace_1 + \hat\ensuremath{\bm{\mu}}\xspace_0) + \log\frac{\hat{p^*}}{1-\hat{p^*}}
\end{align}
The random variable $\ensuremath{\mathbf{w}}\xspace^TX$ is a univariate Gaussian with variance $\ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\Sigma}}\xspace\ensuremath{\mathbf{w}}\xspace$ and mean $\ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\mu}}\xspace_y$ when $Y=y$. Then the quantity we are interested in is shown in Equation~\ref{eq:expected-outcome1}, where $\Phi$ is the CDF of the standard normal distribution.
\begin{equation}
\label{eq:expected-outcome1}
\Pr\left[\ensuremath{\mathbf{w}}\xspace^TX > -b \mid Y = y\right] =
1 - \Phi\left(\frac{-b - \ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\mu}}\xspace_y}{\sqrt{\ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\Sigma}}\xspace\ensuremath{\mathbf{w}}\xspace}}\right)
\end{equation}
Notice that the quantity
\[
\ensuremath{\mathbf{w}}\xspace^T(\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0) = (\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0)^T\ensuremath{\bm{\Sigma}}\xspace^{-1}(\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0)
\]
is the square of the Mahalanobis distance between the class means.
\begin{align*}
-b - \ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\mu}}\xspace_0 &= \frac{1}{2}\ensuremath{\mathbf{w}}\xspace^T(\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0) - \log\frac{p^*}{1-p^*}
\\
&= \frac{D^2}{2} - \log\frac{p^*}{1-p^*}
\\
-b - \ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\mu}}\xspace_1 &= -\frac{1}{2}\ensuremath{\mathbf{w}}\xspace^T(\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0) - \log\frac{p^*}{1-p^*}
\\
&= -\frac{D^2}{2} - \log\frac{p^*}{1-p^*}
\end{align*}
Similarly, the standard deviation of $\ensuremath{\mathbf{w}}\xspace^TX$ can be rewritten exactly as $D$.
Rewriting the denominator in the $\Phi$ term of (\ref{eq:expected-outcome1}),
\begin{align*}
(\ensuremath{\mathbf{w}}\xspace^T\ensuremath{\bm{\Sigma}}\xspace\ensuremath{\mathbf{w}}\xspace)^{\frac{1}{2}} &= \left((\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0)^T\ensuremath{\bm{\Sigma}}\xspace^{-1}\ensuremath{\bm{\Sigma}}\xspace\covar^{-1}(\ensuremath{\bm{\mu}}\xspace_1 - \ensuremath{\bm{\mu}}\xspace_0)\right)^{\frac{1}{2}}
\\ &=
\left((\ensuremath{\bm{\mu}}\xspace_1-\ensuremath{\bm{\mu}}\xspace_0)^T\ensuremath{\bm{\Sigma}}\xspace^{-1}(\ensuremath{\bm{\mu}}\xspace_1 - \ensuremath{\bm{\mu}}\xspace_0)\right)^{\frac{1}{2}}
\\ &=
D
\end{align*}
Then we can write $\Pr\left[\ensuremath{\mathbf{w}}\xspace^TX > -b\right]$ as:
\begin{align*}
& (1-p^*)\left(1-\Phi\left(\beta + \frac{D}{2}\right)\right) + p^*\left(1-\Phi\left(\beta-\frac{D}{2}\right)\right)
\\
=&
1 - (1-p^*)\Phi\left(\beta + \frac{D}{2}\right) - p^*\Phi\left(\beta - \frac{D}{2}\right)
\end{align*}
$\square$
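As a numerical sanity check of the closed form (not part of the proof), the
expression can be evaluated directly; a minimal sketch, assuming
\texttt{scipy} is available:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def bias_of_bayes_optimal(p_star, D):
    # Closed form from the theorem above, with
    # beta = -log(p*/(1-p*)) / D.
    # E.g. bias_of_bayes_optimal(0.75, 0.5) ~ 0.238, so class 1 is
    # predicted with probability ~ 0.75 + 0.238 = 0.988.
    beta = -np.log(p_star / (1.0 - p_star)) / D
    return (1.0 - p_star
            - (1.0 - p_star) * norm.cdf(beta + D / 2.0)
            - p_star * norm.cdf(beta - D / 2.0))
\end{verbatim}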
\clearpage
\begin{suppcorollary}{\ref{thm:unbiased-optimal}}
When $\ensuremath{\mathbf{x}}\xspace$ is distributed according to Equation~\ref{eq:gauss-nb} and $p^*=1/2$, $B_\ensuremath{\mathcal{D}}\xspace(h^*) = 0$.
\end{suppcorollary}
\noindent\textbf{Proof.}
Note that because $p^*=1/2$, the term $\beta=0$ in Theorem~\ref{thm:general-optimal}.
Using the main result of the theorem, we have:
\begin{align*}
\Pr\left[\ensuremath{\mathbf{w}}\xspace^TX > -b\right]
=&
1 - \frac{1}{2}\left[\Phi\left(\frac{D}{2}\right) + \Phi\left(-\frac{D}{2}\right)\right]
\\
=&
1 - \frac{1}{2}\left[\Phi\left(\frac{D}{2}\right) + \left(1 - \Phi\left(\frac{D}{2}\right)\right)\right]
\\
=&
\frac{1}{2}
\end{align*}
The third equality holds because $\Phi$ has rotational symmetry about $(0,1/2)$, giving the identity $\Phi(-x) = 1 - \Phi(x)$.
\noindent
$\square$
\begin{figure}
\centering
\resizebox{0.5\columnwidth}{!}{\input{figures/plot-sgd-bias-loss}}
\caption{\label{fig:sgd-bias-loss}
Bias from linear classifiers on data generated according to Equation~\ref{eq:feat-asymm-regime} with $\sigma_s=1$ (i.e., generated in the same manner as the experiments in Figure~\ref{fig:sgd-bias}), averaged over 100 training runs. The SVM trained using SMO used penalty $C=1.0$ and the linear kernel. Regardless of the loss used, the bias of classifiers trained using SGD is uniform and consistent, increasing with feature asymmetry. Comparable classifiers trained using other methods are not consistent in this way. While LR trained with L-BFGS does exhibit bias, it is not as strong, and does not appear in as many data configurations, as LR trained with SGD. While linear SVM with penalty trained with SMO results in little bias, SVM trained with SGD shows the same bias as LR. Not shown are results for classifiers trained with SGD using modified Huber, squared hinge, and perceptron losses, all of which closely match the two curves shown here for SGD classifiers.
}
\end{figure}
\section{Experiments}
\label{sect:experiments}
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{lc@{\hskip5pt}c@{\hskip5pt}c|ccc|c|ccc}
\hline
\multirow{2}{*}{\emph{dataset}} & \multirow{2}{*}{$p^*$ (\%)} & \multirow{2}{*}{\emph{asymm. (\%)}} & \multirow{2}{*}{$B_\ensuremath{\mathcal{D}}\xspace(h_S)$ (\%)} & \multicolumn{3}{|c|}{$B_\ensuremath{\mathcal{D}}\xspace(h_S)$ (\%) \emph{(post-fix)}} & \multirow{2}{*}{\emph{acc. (\%)}} & \multicolumn{3}{c}{\emph{acc. (\%)} \emph{(post-fix)}} \\
& & & & \emph{par} & \emph{exp} & $\ell_1$ & & \emph{par} & \emph{exp} & $\ell_1$
\\
\hline
CIFAR10 &
50.0 & 52.0 & 1.8 & 1.7 & \textbf{0.4} & 2.7 & 93.0 & 93.1 & \textbf{94.0} & 92.9 \\
CelebA &
50.4 & 50.2 & 7.7 & 7.7 & \textbf{0.2} & \emph{n/a} & 79.6 & 79.6 & \textbf{79.9} & \emph{n/a} \\
\hline
arcene &
56.0 & 57.7 & 2.7 & \textbf{0.6} & 1.2 & 1.7 & 68.9 & 69.0 & \textbf{74.2} & 69.4 \\
colon &
64.5 & 51.0 & 23.1 & 22.9 & \textbf{22.6} & 35.5 & 58.5 & 58.7 & 58.7 & \textbf{64.5} \\
glioma &
69.4 & 54.8 & 17.4 & 17.4 & \textbf{12.2} & 17.0 & 76.3 & 76.3 & \textbf{76.7} & 75.4 \\
micromass &
69.0 & 54.1 & 0.68 & \textbf{0.66} & 0.69 & 0.68 & \textbf{98.4} & \textbf{98.4} & \textbf{98.4} & \textbf{98.4} \\
pc/mac &
50.5 & 60.6 & 1.6 & 1.6 & \textbf{1.4} & 1.6 & \textbf{89.0} & \textbf{89.0} & 88.0 & \textbf{89.0} \\
prostate &
51.0 & 44.4 & 47.3 & 47.2 & \textbf{10.0} & 28.1 & 52.7 & 52.8 & \textbf{90.2} & 71.3 \\
smokers &
51.9 & 50.4 & 47.4 & 45.4 & \textbf{8.0} & 33.0 & 50.0 & 50.7 & \textbf{59.0} & 51.2 \\
\hline
synthetic &
50.0 & 99.9 & 24.1 & 17.2 & 23.6 & \textbf{5.7} & 74.9 & \textbf{77.9} & 74.8 & 71.4 \\
\hline
\end{tabular}}
\caption{\label{tab:sgd-bias-real-1}
Bias measured on real datasets, and results of applying one of three mitigation strategies: feature parity (\emph{par}), influence-directed experts (\emph{exp}), and $\ell_1$ regularization.
The columns give: $p^*$, percent class prior for the majority class ($y=1$); \emph{asymm}, the percentages of features oriented towards $y=1$; $B_\ensuremath{\mathcal{D}}\xspace(h_S)$ the bias of the learned model on test data, which we measure before and after each fix (\emph{post-fix}); $\emph{acc}$, the test accuracy before and after each fix.
The first two rows are experiments on deep networks, and the remainder are on 20 training runs of logistic regression with stochastic gradient descent.
$\ell_1$ regularization was not applied to the deep network experiments due to the cost of hyperparameter tuning.
}
\end{table}
In this section we present empirical evidence to support our claim that feature-wise bias amplification can safely be removed without harming the accuracy of the classifier.
We show this on both logistic predictors and deep networks by measuring the bias on several benchmark datasets, and running the parity and expert mitigation approaches described in Section~\ref{sect:fix}.
As a baseline, we compare against $\ell_1$ regularization in the logistic classifier experiments.
The results are shown in Table~\ref{tab:sgd-bias-real-1}.
To summarize, on every dataset we consider, at least one of the methods in Section~\ref{sect:fix} proves effective at reducing the classifier's bias amplification.
$\ell_1$ regularization removes bias less reliably, and never to the extent that our methods do.
In all but two cases, the influence-directed experts show the best performance in terms of bias removal, and this method is able to reduce bias in all but one case.
In terms of accuracy, our methods consistently improve classifier performance, and in some cases significantly.
For example, on the \emph{prostate} dataset, influence-directed experts removed 80\% of the prediction bias while improving accuracy from 52.7\% to 90.2\%.
\paragraph{Data.}
We performed experiments over eight binary classification datasets from various domains (rows 3-11 in Table~\ref{tab:sgd-bias-real-1}) and two image classification datasets (CIFAR10-binary, CelebA).
Our criteria for selecting logistic regression datasets were: high feature dimensionality, binary labels, and row-structured instances (i.e., not time series data).
Among the logistic regression datasets, \emph{arcene}, \emph{colon}, \emph{glioma}, \emph{pc/mac}, \emph{prostate}, \emph{smokers} were obtained from the scikit-feature repository~\citep{li2016feature}, and \emph{micromass} was obtained from the UCI repository~\citep{Dua2017}.
The synthetic dataset was generated in the manner described in Section~\ref{sect:sgd}, containing one strongly-predictive feature ($\sigma^2=1$) for each class, 1,000 weak features ($\sigma^2=3$), and $p^*=1/2$.
For the deep network experiments, we created a binary classification problem from CIFAR10~\citep{Krizhevsky09} from the ``bird'' and ``frog'' classes.
We selected these classes as they showed the greatest posterior disparity on a VGG16 network trained on the original dataset.
For CelebA, we trained a VGG16 network with one fully-connected layer of 4096 units to predict the \emph{attractiveness} label given in the training data.
\paragraph{Methodology.}
For the logistic regression experiments, we used scikit-learn's SGDClassifier estimator to train each model using the logistic loss function.
Logistic regression measurements were obtained by averaging over 20 pseudorandom training runs on a randomly-selected stratified train/test split.
Experiments involving experts selected $\alpha, \beta$ using grid search over the possible values that minimize bias subject to not harming accuracy as described in Section~\ref{sect:fix}.
Similarly, experiments involving $\ell_1$ regularization use a grid search to select the regularization parameter, optimizing for the same criteria used to select $\alpha,\beta$.
Experiments on deep networks use the training/test split provided by the respective dataset authors.
Models were trained until convergence using Keras 2 with the Theano backend.
\paragraph{Logistic regression.}
Table~\ref{tab:sgd-bias-real-1} shows that on linear models, feature parity always improves or maintains the model in terms of both bias amplification and accuracy.
Notably, in each case where feature parity removes bias, the accuracy is likewise improved, supporting our claim that bias resulting from asymmetric feature regimes is avoidable.
In most cases, the benefit from applying feature parity is, however, rather small.
\emph{arcene} is the exception, which is likely due to the fact that it has large feature asymmetry in the original model, leaving ample opportunity for improvement by this approach.
The results suggest that influence-directed experts are the most effective mitigation technique, both in terms of bias removal and accuracy improvement.
In most datasets, this approach reduced bias while improving accuracy, often substantially.
This is most notable on the \emph{prostate} dataset, where the original model failed to achieve accuracy appreciably greater than chance and exhibited extreme bias.
The mitigation achieves 90\% accuracy while removing 80\% of the bias, improving the model significantly.
Similarly, for \emph{arcene} and \emph{smokers}, this approach removed over 50\% of the prediction bias while improving accuracy 5-11\%.
$\ell_1$ regularization proved least reliable at removing bias subject to not harming accuracy. In many cases, it was unable to remove much bias (\emph{glioma, micromass, PC/Mac}). On synthetic data $\ell_1$ gave the best bias reduction. Though it did perform admirably on several real datasets (\emph{arcene, prostate, smokers}), even removing up to 40\% of the bias on the prostate dataset, it was consistently outperformed by either the parity or expert method. Additionally, on the \emph{colon} dataset, it made bias significantly worse (150\%) for gains in accuracy.
\paragraph{Deep networks.}
The results show that the deep networks exhibit less significant feature asymmetry than the data used for the logistic models, which we would expect to render the feature parity approach less effective.
The results confirm this, although on CIFAR10 parity had some effect on bias and a proportional positive effect on accuracy.
Influence-directed experts, on the other hand, continued to perform well for the deep models.
While this approach generally had a greater effect on accuracy than bias for the linear models, this trend reversed for deep networks, where the decrease in bias was consistently greater than the increase in accuracy.
For example, the 7.7\% bias in the original CelebA model was reduced by approximately 98\% to 0.2\%, effectively eliminating it from the model's predictions.
The overall effect on accuracy remained modest (0.3\% improvement).
These results on deep networks are somewhat surprising, considering that the techniques described in Section~\ref{sect:fix} were motivated by observations concerning simple linear classifiers.
While the improvements in accuracy are not as significant as those seen on linear classifiers, they align with our expectations regarding bias reduction.
This suggests that future work might improve on these results by adapting the approach described in this paper to better suit deep networks.
\section{Mitigating Feature-Wise Bias Amplification}
\label{sect:fix}
While Theorem~\ref{thm:general-optimal} suggests that some bias is unavoidable, the empirical analysis in the previous section shows that some systematic bias may not be.
Our analysis also suggests an approach for removing such bias, namely by identifying and removing the weak features that are systematically overestimated by gradient descent.
In this section, we describe two approaches for accomplishing this that are based on measuring the influence~\citep{internal-influence} of features on trained models.
In Section~\ref{sect:experiments}, we show that these methods are effective at mitigating bias without harming accuracy on both logistic predictors and deep networks.
\subsection{Influence-Directed Feature Removal}
Given a model $h : \mathcal{X}_0\rightarrow\mathbb{R}$ and a feature $x_j$, the \emph{influence} $\chi_j$ of $x_j$ on $h$ is a quantitative measure of feature $j$'s contribution to the output of $h$.
To extend this notion to internal layers of a deep network $h$, we consider the \emph{slice abstraction}~\citep{internal-influence} comprised of a pair of functions $f : \mathcal{X}_0\rightarrow\mathcal{X}$, and $g : \mathcal{X}\rightarrow\mathbb{R}$, such that $h = g \circ f$.
We define $f$ to be the network up to the penultimate layer, and $g$ to be the final layer.
Intuitively, we can then think of the features as being precomputed by $f$, i.e., $\ensuremath{\mathbf{x}}\xspace = f(\ensuremath{\mathbf{x}}\xspace_0)$ for $\ensuremath{\mathbf{x}}\xspace_0\in\mathcal{X}_0$, allowing us to treat the final layer as a linear model acting on features computed via a deep network.
Note that the slice abstraction encompasses linear models as well, by defining $f$ to be the identity function.
A growing body of work on influence measures~\citep{saliency-map,integrated-grad, internal-influence} provides numerous choices for $\chi_j$, each with different tradeoffs.
We use the \emph{internal distributional influence}~\citep{internal-influence}, as it incorporates the slice abstraction naturally.
This measure is given by Equation~\ref{def:int-infl} for a \emph{distribution of interest} $P$, which characterizes the distribution of test instances.
\begin{equation}\label{def:int-infl}
\chi_j(g \circ f, P) =
\int\limits_{\ensuremath{\mathbf{x}}\xspace\in\mathcal{X}_0}{\frac{\partial g}{\partial f(\ensuremath{\mathbf{x}}\xspace)_j}\Bigg\vert_{f(\ensuremath{\mathbf{x}}\xspace)}P(\ensuremath{\mathbf{x}}\xspace)d\ensuremath{\mathbf{x}}\xspace}
\end{equation}
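In practice the integral in Equation~\ref{def:int-infl} can be approximated by
Monte Carlo sampling from $P$; a minimal sketch, assuming hypothetical
callables \texttt{grad\_g} (the gradient of $g$ with respect to the internal
features) and \texttt{f} standing in for the slice:
\begin{verbatim}
import numpy as np

def influence(grad_g, f, xs):
    # Monte Carlo estimate of chi_j(g o f, P): average the gradient
    # of g, evaluated at the internal representation f(x), over
    # samples x drawn from the distribution of interest P.
    return np.mean([grad_g(f(x)) for x in xs], axis=0)
\end{verbatim}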
We now describe two techniques that use this measure to remove features causing bias.
\paragraph{Feature parity.}
Motivated by the fact that bias amplification may be caused by feature asymmetry, we can attempt to mitigate it by enforcing parity in features across the classes.
To avoid removing features that are useful for correct predictions, we order the features by their influence on the model's output, and remove features from the majority class until parity is reached.
If the model has a bias term, we adjust it by subtracting the product of each removed coefficient and the mean of its corresponding feature.
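A minimal sketch of this procedure for a linear slice, assuming a coefficient
vector \texttt{w}, bias \texttt{b}, influences \texttt{chi}, per-feature
orientations \texttt{orient} (the class each feature points toward), and
per-feature training means \texttt{x\_mean} (all names are illustrative):
\begin{verbatim}
import numpy as np

def feature_parity(w, b, chi, orient, x_mean):
    # Remove the least-influential features oriented toward the
    # majority class until both classes have equally many features,
    # adjusting the bias term by each removed coefficient times the
    # mean of its feature, as described above.
    maj = int((orient == 1).sum() > (orient == 0).sum())
    excess = abs((orient == 1).sum() - (orient == 0).sum())
    candidates = np.where(orient == maj)[0]
    for j in sorted(candidates, key=lambda j: abs(chi[j]))[:excess]:
        b -= w[j] * x_mean[j]
        w[j] = 0.0
    return w, b
\end{verbatim}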
\paragraph{Experts.}
Section~\ref{sect:sgd} identifies ``weak'' features as a likely source of systematic bias.
This is a somewhat artificial construct, as real data often does not exhibit a clear separation between strong and weak features.
Qualitatively, the weak features are less predictive than the strong features, and the learner accounts for this by giving less influence to the weak features.
Thus, we can think of imposing a strong/weak feature dichotomy by defining the weak features to be those such that $|\chi_j| < \chi^*$ for some threshold $\chi^*$.
This reduces the feature selection problem to a search for an appropriate $\chi^*$ that mitigates bias to the greatest extent without harming accuracy.
We parameterize this search problem in terms of $\alpha,\beta$, where the $\alpha$ features with the most positive influence and $\beta$ features with the most negative influence are ``strong'', and the rest are considered weak.
This amounts to selecting the \emph{class-wise expert}~\citep{internal-influence} for the dominant class.
Formally, let $F_\alpha$ be the set of $\alpha$ features with the $\alpha$ highest positive influences, and $F_\beta$ the set of $\beta$ features with the $\beta$ most negative influences.
For slice $h = g \circ f$, let $g^\alpha_\beta$ be defined as model $g$ with its weights replaced by $\ensuremath{\mathbf{w}}\xspace^\alpha_\beta$ as defined by Equation~\ref{eq:w_ab}. We then define the \emph{expert} $g^{\alpha^*}_{\beta^*}$ to be the classifier given by setting $\alpha^*$ and $\beta^*$ according to Equation~\ref{eq:expert}, i.e., the $\alpha$ and $\beta$ that minimize bias while maintaining at least the original model's accuracy. Here $L_S$ denotes the 0-1 loss on the training set $S$.
\begin{align}
\label{eq:w_ab}
\ensuremath{\mathbf{w}}\xspace^\alpha_{\beta\ j} &= \begin{cases}
\ensuremath{\mathbf{w}}\xspace_j & j\in F_\alpha\cup F_\beta \\
0 & j\notin F_\alpha\cup F_\beta
\end{cases}
\\
\label{eq:expert}
\alpha^*, \beta^* &= \argmin_{\alpha, \beta}{
\left|B_\mathcal{D}(g^\alpha_\beta)\right|}\ \
\text{subject to}\ \
L_S(g^\alpha_\beta) \leq L_S(g)
\end{align}
We note that this is always feasible by selecting all the features.
Furthermore, this is a discrete optimization problem, which can be solved efficiently with a grid search over the possible $\alpha$ and $\beta$.
In practice, even when there are many features, we can exhaustively search this space. When there are ties we can break them by preferring the model with the greatest accuracy.
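A minimal sketch of this search for a linear slice, assuming influences
\texttt{chi} and hypothetical callables \texttt{bias\_of} and \texttt{loss\_of}
that evaluate $B_\mathcal{D}$ (on held-out data) and $L_S$ for a given weight
vector:
\begin{verbatim}
import numpy as np

def select_expert(w, chi, bias_of, loss_of):
    # Exhaustive grid search over (alpha, beta): keep the alpha
    # features with the most positive influence and the beta features
    # with the most negative influence, zeroing all other weights.
    # Among experts whose training loss does not exceed that of the
    # full model, return the one with the smallest |bias|.
    order = np.argsort(chi)              # ascending influence
    d, base_loss = len(w), loss_of(w)
    best_w, best_bias = w, abs(bias_of(w))
    for alpha in range(d + 1):
        for beta in range(d + 1 - alpha):
            keep = np.zeros(d, dtype=bool)
            keep[order[d - alpha:]] = True   # F_alpha
            keep[order[:beta]] = True        # F_beta
            w_ab = np.where(keep, w, 0.0)
            if loss_of(w_ab) <= base_loss and abs(bias_of(w_ab)) < best_bias:
                best_w, best_bias = w_ab, abs(bias_of(w_ab))
    return best_w
\end{verbatim}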
\section{Introduction}
\emph{Bias amplification} occurs when the distribution over prediction outputs is skewed in comparison to the prior distribution of the prediction target. Aside from being problematic for accuracy, this phenomenon is also potentially concerning as it relates to the \emph{fairness} of a model's predictions~\citep{men_also,women-also,man-prog,Stock2017} as models that learn to overpredict negative outcomes for certain groups may exacerbate stereotypes, prejudices, and disadvantages already reflected in the data~\citep{Hart2017}.
Several factors can cause bias amplification in practice.
The \emph{class imbalance problem} is a well-studied scenario where some classes in the data are significantly less likely than others~\citep{class-redux}. Classifiers trained to minimize empirical risk are not penalized for ignoring minority classes. However, as we show through analysis and experiments, bias amplification can arise in cases where the class prior is not severely skewed, or even when it is unbiased.
Thus, techniques for dealing with class imbalance alone cannot explain or address all cases of bias amplification.
We examine bias amplification in the context of binary classifiers, and show that it can be decomposed into a component that is intrinsic to the model, and one that arises from the inductive bias of gradient descent on certain feature configurations.
The intrinsic case manifests when the class prior distribution is more informative for prediction than the features, causing the model to predict the class mode.
This type of bias is unavoidable, as we show that any mitigation of it will lead to less accurate predictions (Section~\ref{sect:intrinsic}).
Interestingly, linear classifiers trained with gradient descent tend to overestimate the importance of moderately-predictive, or ``weak,'' features if insufficient training data is available (Section~\ref{sect:sgd}).
This overestimation gives rise to \emph{feature-wise bias amplification} -- a previously unreported form of bias (see Section~\ref{sec:related} for comparison to related work) that can be traced back to the features of a trained model.
It occurs when there are more features that positively correlate with one class than the other.
If these features are given undue importance in the model, then their combined influence will lead to bias amplification in favor of the corresponding class.
Indeed, we experimentally demonstrate that feature-wise bias amplification can happen even when the class prior is unbiased.
Our analysis sheds new light on real instances of the problem, and paves the way for practical mitigations of it.
The existence of such moderately-predictive weak features is not uncommon in models trained on real data. Viewing deep networks as the composition of a feature extractor and a linear classifier, we explain some instances of bias amplification in deep networks (Table~\ref{tab:sgd-bias-real-1}, Section~\ref{sect:experiments}).
Finally, this understanding of feature-wise bias amplification motivates a solution based on feature selection. We develop two new feature-selection algorithms that are designed to mitigate bias amplification (Section~\ref{sect:fix}). We demonstrate their effectiveness on both linear classifiers and deep neural networks (Section~\ref{sect:experiments}).
For example, for a VGG16 network trained on CelebA~\citep{celeba} to predict the ``attractive'' label, our approach removed 95\% of the bias in predictions.
We observe that in addition to mitigating bias amplification, these feature selection methods reduce generalization error relative to an $\ell_1$ regularization baseline for both linear models and deep networks (Table \ref{tab:sgd-bias-real-1}).
\subsection{Systematic Bias in Bayes-Optimal Predictors}
\label{sect:intrinsic}
Definition~\ref{def:bias-amp} makes it clear that systematic bias is a property of the learning rule producing $h_S$ and the distribution, so any technique that aims to address it will need to change one or both.
However, if the learner always produces Bayes-optimal predictors for \ensuremath{\mathcal{D}}\xspace, then any such change will result in suboptimal classifiers, making bias amplification \emph{unavoidable}.
In this section we characterize the systematic bias of a family of linear Bayes-optimal predictors.
Consider a special case of binary classification in which $\ensuremath{\mathbf{x}}\xspace$ are drawn from a multivariate Gaussian distribution with class means $\ensuremath{\bm{\mu}}\xspace^*_0, \ensuremath{\bm{\mu}}\xspace^*_1 \in \mathbb{R}^d$ and diagonal covariance matrix $\ensuremath{\bm{\Sigma}}\xspace^*$, and $y$ is a Bernoulli random variable with parameter $p^*$.
Then \ensuremath{\mathcal{D}}\xspace is given by Equation~\ref{eq:gauss-nb}.
\begin{equation}
\label{eq:gauss-nb}
\ensuremath{\mathcal{D}}\xspace \triangleq \Pr[\ensuremath{\mathbf{x}}\xspace | y] = \mathcal{N}(\ensuremath{\mathbf{x}}\xspace | \ensuremath{\bm{\mu}}\xspace^*_y, \ensuremath{\bm{\Sigma}}\xspace^*), y \sim \mathrm{Bernoulli}(p^*)
\end{equation}
Because the features in \ensuremath{\mathbf{x}}\xspace are independent given the class label, the Bayes-optimal learning rule for this data is Gaussian Naive Bayes, which is expressible as a linear classifier~\citep{murphy-book}.
Making the ideal assumption that we are always able to learn the Bayes-optimal classifier $h^*$ for parameters $\ensuremath{\bm{\mu}}\xspace^*_y, \ensuremath{\bm{\Sigma}}\xspace^*, p^*$, we proceed with the question: does $h^*$ have systematic bias?
Our assumption of $h_S = h^*$ reduces this question to whether $B_\ensuremath{\mathcal{D}}\xspace(h^*)$ is zero.
Theorem~\ref{thm:general-optimal} shows that $B_\ensuremath{\mathcal{D}}\xspace(h^*)$ is strictly a function of the class prior $p^*$ and the Mahalanobis distance $D$ between the class means $\ensuremath{\bm{\mu}}\xspace_y^*$.
Corollary~\ref{thm:unbiased-optimal} shows that when the prior is unbiased, the model's predictions remain unbiased.
\begin{theorem}
\label{thm:general-optimal}
Let $\ensuremath{\mathbf{x}}\xspace$ be distributed according to Equation~\ref{eq:gauss-nb}, $y$ be Bernoulli with parameter $p^*$,
$D$ be the Mahalanobis distance between the class means $\ensuremath{\bm{\mu}}\xspace^*_0, \ensuremath{\bm{\mu}}\xspace^*_1$, and
$\beta = -D^{-1}\log(p^*/(1-p^*))$.
Then the bias amplification of the Bayes-optimal classifier $h^*$ is:
\[
\textstyle
B_\ensuremath{\mathcal{D}}\xspace(h^*) = 1 - p^* - (1-p^*)\Phi\left(\beta + \frac{D}{2}\right) - p^*\Phi\left(\beta - \frac{D}{2}\right)
\]
\end{theorem}
\begin{corollary}
\label{thm:unbiased-optimal}
When $\ensuremath{\mathbf{x}}\xspace$ is distributed according to Equation~\ref{eq:gauss-nb} and $p^*=1/2$, $B_\ensuremath{\mathcal{D}}\xspace(h^*) = 0$.
\end{corollary}
The proofs of both claims are given in the appendix.
Corollary~\ref{thm:unbiased-optimal} is due to the fact that when $p^*=1/2$, $\beta = 0$.
Because of the symmetry $\Phi(-x) = 1 - \Phi(x)$, the $\Phi$ terms cancel out giving $\Pr[h^*(\ensuremath{\mathbf{x}}\xspace)=1] = 1/2$, and thus the bias amplification $B_\ensuremath{\mathcal{D}}\xspace(h^*) = 0$.
\begin{figure}
\centering
\subfloat[\label{tab:nb-real}]{
\footnotesize
\resizebox{0.47\columnwidth}{!}{%
\begin{tabular}[c]{l|c@{\hskip 5pt}c@{\hskip 1pt}c@{\hskip 1pt}c}
\emph{dataset} & $D$ & $p^*$ & $B_\ensuremath{\mathcal{D}}\xspace(h_S)$ & \emph{\% acc}
\\
\hline
banknote & 1.87 & 0.56 & 0.04 & 84.1 \\
breast cancer wisc & 1.81 & 0.63 & 0.02 & 94.2 \\
drug consumption & 0.86 & 0.78 & 0.12 & 75.6 \\
pima diabetes & 1.15 & 0.66 & 0.08 & 79.9
\end{tabular}
}
}
\hfill
\subfloat[\label{fig:bias-mahalanobis}]{
\resizebox{0.4\columnwidth}{!}{\adjustbox{valign=c}{\input{figures/plot-bias-mahalanobis}}}
}
\caption{(a) Bias amplification on real datasets classified using Gaussian Naive Bayes; (b) bias amplification of Bayes-optimal classifier in terms of the Mahalanobis distance $D$ between class means and prior class probability $p^*$.}
\end{figure}
Figure~\ref{tab:nb-real} shows the effect on real data available on the UCI repository classified using Gaussian Naive Bayes (GNB).
These datasets were chosen because their distributions roughly correspond to the naive Bayes assumption of conditional feature independence, and GNB outperformed logistic regression.
In each case, bias amplification occurs in approximate correspondence with Theorem~\ref{thm:general-optimal}, tracking the empirical class prior and class distance as shown in Figure~\ref{fig:bias-mahalanobis}.
Figure~\ref{fig:bias-mahalanobis} shows $B_\ensuremath{\mathcal{D}}\xspace(h^*)$ as a function of $p^*$ for several values of $D$.
As the means grow closer together, there is less information available to make reliable predictions, and the label prior is used as the more informative signal.
Note that $B_\ensuremath{\mathcal{D}}\xspace(h^*)$ is bounded by 1/2, and the critical point corresponds to bias ``saturation'' where the model always predicts class 1.
From this it becomes clear that the extent to which overprediction occurs grows rather quickly when the means are moderately close.
For example when $p^*=3/4$ and the class means are separated by distance $1/2$, the classifier will predict $Y=1$ with probability close to 1.
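Concretely, $p^*=3/4$ and $D=1/2$ give $\beta=-2\log 3\approx -2.20$, so that
\[
B_\ensuremath{\mathcal{D}}\xspace(h^*) \approx 0.25 - 0.25\,\Phi(-1.95) - 0.75\,\Phi(-2.45) \approx 0.24 \;,
\]
i.e., $\Pr[h^*(\ensuremath{\mathbf{x}}\xspace)=1] = p^* + B_\ensuremath{\mathcal{D}}\xspace(h^*) \approx 0.99$.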
\emph{Summary}: Bias amplification may be \emph{unavoidable} when the learning rule is a good fit for the data, but the features are less effective at distinguishing between classes than the prior.
Our results show that in the particular case of conditionally-independent Gaussian data, the Bayes-optimal predictor suffers from bias as the Mahalanobis distance between class means decreases, leading to a noticeable increase even when the prior is only somewhat biased.
The effect is strong enough to manifest in real settings where generative assumptions do not hold, but GNB outperforms other linear classifiers.
\section{Related Work}\label{sec:related}
While the term bias is used in a number of different contexts in machine learning, we use \emph{bias amplification} in the sense of \citet{men_also}, where the distribution over prediction outputs is skewed in comparison to the prior distribution of the prediction target.
For example, \citet{men_also} and \citet{women-also} use the imSitu vSRL dataset for the MS-COCO task, i.e. to classify agents and actions in pictures. In the dataset, women are twice as likely to be the agent when the action is cooking, but the model was five times as likely to predict women to be the agent cooking.
In a related example, \citet{Stock2017} identify bias in models trained on the ImageNet dataset. Despite there being near-parity of white and black people in pictures in the basketball class, 78\% of the images that the model classified as \emph{basketball} had black people in them and only 44\% had white people in them. Additionally, 90\% of the misclassified \emph{basketball} pictures had white people in them, whereas only 20\% had black people in them.
Note that this type of bias over classes is distinct from the learning bias in machine learning~\citep{geman92} which has received renewed interest in the context of SGD and under-determined models \citep{conv-sgd-bias,sgd-bias-svm}.
\par Bias amplification is often thought to be result of class imbalance in the training data, which is well-studied in the learning community (see \citet{learning_biased} and \citet{class_imb_nn} for comprehensive surveys). There are a myriad of empirical investigations of the effects of class imbalance in machine learning and different ways of mitigating these effects \citep{Maloof03learningwhen,unknown,MAZUROWSKI2008427,oommen,redux}.
\par It has been shown that neural networks are affected by class imbalance as well \citep{nn_unbalanced}. \citet{class_imb_nn} point out that the detrimental effect of class imbalance on neural networks increases with scale. They advocate for an oversampling technique mixed with thresholding to improve accuracy based on empirical tests. An interesting and less common technique from \citet{brainz} relies on a drastic change to neural network training procedure in order to better detect brain tumors: they first train the net on an even distribution, and then on a representative sample, but only on the output layer in the second half of training.
In contrast to prior work, we demonstrate that bias amplification can occur without existing imbalances in the training set.
Therefore, we identify a new source of bias that can be traced to particular features in the model.
Since we remove bias feature-wise, our approach can also be viewed as a method for feature selection.
While feature selection is a well-studied problem, to the authors' knowledge, no one has looked at removing features to mitigate \textit{bias}.
Generally, feature selection has been applied for improving model accuracy, or gaining insight into the data~\citep{feat-survey}.
For example, \citet{Kim2015} use feature selection for interpretability during data exploration.
They select features that have high variance across clusters created based on human-interpretable, logical rules.
Differing from prior work, we focus on bias by identifying features that are likely to increase bias, but can be removed while maintaining accuracy.
\par Naive Bayes classification models comprise a similarly well-studied topic. \cite{bayes-words} point out common pitfalls of Naive Bayes classifiers on datasets that do not meet Naive Bayes criteria: bias from class imbalance, and the problem of over-predicting classes with correlated features. Our work shows that similar effects can occur even on data that \emph{does} match Naive Bayes assumptions. \cite{optim-bayes} shows that the naive Bayes classifier is optimal so long as the dependencies between features over the whole network cancel each other out. Our work can mitigate bias in scenarios where these conditions do not hold.
\subsection{Feature Asymmetry and Gradient Descent}
\label{sect:sgd}
When the learning rule does not produce a Bayes-optimal predictor, it may be the case that excess bias can safely be removed without harming accuracy.
To support this claim, we turn our attention to logistic regression classifiers trained using stochastic gradient descent.
Logistic regression predictors for data generated according to Equation~\ref{eq:gauss-nb} converge in the limit to the same Bayes-optimal predictors studied in Theorem~\ref{thm:general-optimal} and Corollary~\ref{thm:unbiased-optimal}~\citep{murphy-book}.
Logistic regression models make fewer assumptions about the data and are therefore more widely-applicable, but as we demonstrate in this section, this flexibility comes at the expense of an inductive bias that can lead to systematic bias in predictions.
To show this, we continue under our assumption that $\ensuremath{\mathbf{x}}\xspace$ and $y$ are generated according to Equation~\ref{eq:gauss-nb}, and consider the case where $p^*=1/2$.
According to Corollary~\ref{thm:unbiased-optimal}, any systematic bias that emerges must come from differences between the trained classifier $h_S$ and the Bayes-optimal $h^*$.
\subsubsection{Feature Asymmetry}
To define what is meant by ``feature asymmetry'', consider the orientation of each feature $x_j$ as given by the sign of $\mu_{1j} - \mu_{0j}$.
The sign of each coefficient in $h^*$ will correspond to its feature orientation, so we can think of each feature as being ``towards'' either class 0 or class 1.
Likewise, we can view the combined features as being \emph{asymmetric towards $y$} when there are more features oriented towards $y$ than towards $1-y$.
As shown in Table~\ref{tab:sgd-bias-real-1}, high-dimensional data with biased class priors often exhibit feature asymmetry towards the majority class.
This does not necessarily lead to excessive bias, and the analysis from the previous section indicates that if $p^* = 1/2$ then it may be possible to learn a predictor with no bias.
However, if the learning rule overestimates the importance of some of the features oriented towards the majority class, then variance in those features present in minority instances will cause mispredictions that lead to excess bias beyond what is characterized in Theorem~\ref{thm:general-optimal}.
This problem is pronounced when many of the majority-oriented features are weak predictors, which in this setting means that the magnitude of their corresponding coefficients in $h^*$ are small relative to the other features (for example, features with high variance or similar means between classes). The weak features have small coefficients in $h^*$, but if the learner systematically overestimates the corresponding coefficients in $h_S$, the resulting classifier will be ``out of balance'' with the distribution generating the data.
Figure~\ref{fig:sgd-bias} explores this phenomenon through synthetic Gaussian data exemplifying this feature asymmetry, in which the strongly-predictive features have low variance $\sigma_s = 1$, and the weakly-predictive features have relatively higher variance $\sigma_w > 1$.
Specifically, the data used here follows Equation~\ref{eq:gauss-nb} with the parameters shown in Equation~\ref{eq:feat-asymm-regime}.
\begin{equation}
\label{eq:feat-asymm-regime}
p^* = 1/2, \ensuremath{\bm{\mu}}\xspace_0^* = (0, 1, 0, \ldots, 0), \ensuremath{\bm{\mu}}\xspace_1^* = (1, 0, 1, \ldots, 1),
\ensuremath{\bm{\Sigma}}\xspace^* = \mathrm{diag}(\sigma_s, \sigma_s, \sigma_w, \ldots, \sigma_w)
\end{equation}
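A minimal sketch of this generative setup (treating the diagonal entries of
$\ensuremath{\bm{\Sigma}}\xspace^*$ as variances; names are illustrative):
\begin{verbatim}
import numpy as np

def sample_asymmetric_regime(n, n_weak, sigma_w, seed=0):
    # Draw (X, y) with p* = 1/2: two strong features (variance
    # sigma_s = 1) and n_weak weak features (variance sigma_w),
    # all weak features oriented toward class 1.
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    mu0 = np.concatenate(([0.0, 1.0], np.zeros(n_weak)))
    mu1 = np.concatenate(([1.0, 0.0], np.ones(n_weak)))
    std = np.sqrt(np.concatenate(([1.0, 1.0],
                                  np.full(n_weak, float(sigma_w)))))
    mu = np.where(y[:, None] == 1, mu1, mu0)
    X = mu + rng.normal(size=(n, n_weak + 2)) * std
    return X, y
\end{verbatim}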
Figure~\ref{fig:feat-est} suggests that overestimation of weak features is precisely the form of inductive bias exhibited by gradient descent when learning logistic classifiers.
As $h_S$ converges to the Bayes-optimal configuration, the magnitude of weak-feature coefficients gradually decreases to the appropriate quantity.
As the variance increases, the extent of the overapproximation grows accordingly.
While this effect may arise when methods other than SGD are used to estimate the coefficients, Figure~\ref{fig:sgd-bias-loss} in the appendix shows that it occurs consistently in models trained using SGD.
\subsubsection{Prediction Bias from Inductive Bias}
While the classifier remains far from convergence, the cumulative effect of feature overapproximation with high-dimensional data leads to systematic bias.
Figure~\ref{fig:sgd-bias-number} demonstrates that as the disparity in weak features towards class $y=1$ increases, so does the expected bias towards that class.
This bias cannot be explained by Theorem~\ref{thm:general-optimal}, because this data is distributed with $p^*=1/2$.
Rather, it is clear that the effect diminishes as the training size increases and $h_S$ converges towards $h^*$.
This suggests that gradient descent tends to ``overuse'' the weak features prior to convergence, leading to systematic bias that over-predicts the majority class in asymmetric regimes.
Figure~\ref{fig:sgd-bias-variance} demonstrates that for a fixed disparity in weak features, the features must be sufficiently weak in order to cause bias. This suggests that a feature imbalance alone is not sufficient for causing systematic bias.
Moreover, the weak features, rather than the strong features, are responsible for the bias.
As the training size increases, the amount of variance required to cause bias increases.
However, when the features have sufficiently high variance, the model will eventually decrease their contribution, relieving their impact on the bias and accuracy of the model.
\emph{Summary}: When the data is distributed asymmetrically with respect to features' orientation towards a class, gradient descent may lead to systematic bias especially when many of the asymmetric features are weak predictors.
This bias is a result of the learning rule, as it manifests in cases where a Bayes-optimal predictor would exhibit no bias, and therefore it may be possible to mitigate it without harming accuracy.
\begin{figure}
\centering
\subfloat[\ ]{
\label{fig:sgd-bias-number}
\resizebox{0.32\columnwidth}{!}{%
\input{figures/plot-sgd-bias-wf}}}
\subfloat[\ ]{
\label{fig:sgd-bias-variance}
\resizebox{0.32\columnwidth}{!}{%
\input{figures/plot-sgd-bias-var}}}
\subfloat[\ ]{
\label{fig:feat-est}
\resizebox{0.32\columnwidth}{!}{%
\input{figures/feature_estimate}
}
}
\caption{\label{fig:sgd-bias}
(a), (b): Expected bias as a function of (a) number of weak features and (b) variance of the weak features, shown for models trained on $N=100, 500, 1000$ instances. $\sigma_w$ in (a) is fixed at 10, and in (b) the number of features is fixed at 256.
(c): Extent of overestimation of weak-feature coefficients in logistic classifiers trained with stochastic gradient descent, in terms of the amount of training data.
The vertical axis is the difference in magnitude between the trained coefficient ($h_S$) and that of the Bayes-optimal predictor ($h^*$).
In (a)-(c), data is generated according to Equation~\ref{eq:feat-asymm-regime} with $\sigma_s=1$, and results are averaged over 100 training runs.}
\end{figure}
|
1,108,101,565,796 | arxiv | \section{Introduction}
The interconnectivity of the modern world has allowed for almost instant access to publications and contact among scientists across the globe. We have come a long way since the early days of the internet with electronic bulletin boards. Today, Skype, Google Chat and many other telecommunication platforms allow for instant conversations with colleagues, as well as remote attendance at meetings and even conferences. For example, the presentation that announced the first possible detection of the Higgs particle was streamed live over the internet, so that people all over the world could get access to this information instantly.
In spite of this interconnectivity that we take for granted, personal, face-to-face, scientific meetings have remained popular and one of the most productive and effective ways to develop collaborations. Typically, such meetings can be divided into \emph{conferences} and \emph{workshops}. The main goal of conferences is to bring together experts in a rather wide range of areas to disseminate new results. Because of this, conferences are usually much larger than workshops, involving hundreds or thousands of scientists, with a very large number of short talks (of duration ${\cal{O}}(10)$ minutes), a few long, plenary talks, and rather short breaks between talks.
Workshops, on the other hand, usually have a very different goal: to bring together a smaller number of experts in a specific field or related fields to encourage collaboration, creativity, and progress during and following the workshop. Because of this, workshops are usually much smaller than conferences, perhaps involving less than one hundred scientists, with usually no short talks, a very small number of long talks, and ample time for discussion.
The level to which a workshop achieves such goals is a sensitive function of its organization. We here summarize a few suggestions on how to organize a successful workshop, distilled from our own experiences attending and organizing workshops, and discussions with other organizers and attendees. Scientific workshops come in a variety of forms and by no means do the suggestions that follow exhaust all possible ways to organize a workshop. In fact, as we will discuss below, there are many roads to success. The goal of this paper is to provide some guide to hopefully help workshop organizers (particularly young faculty and postdocs) when planning future meetings.
The motivation for this paper comes from requests from colleagues, who have identified a need for such material. Although we searched the literature, we have not found a concise paper on this topic, written by \emph{scientists for scientists}. Of course, any workshop organizer could read on successful organizational techniques written by other professionals, think about how to translate these techniques to scientific meetings, and then compare and contrast these with techniques applied in the past in other meetings. This, however, requires careful research, planning, and preparation, which takes quite a bit of time and effort.
The intended audience of this note is primarily young scientists that have been tasked with organizing a workshop for the first time. Such scientists may not have the time (or desire) to research and compile information on how to best organize a workshop. Experienced scientists, i.e.~those who have organized workshops before, may find this note unnecessary. But of course, those who know the answer to a problem usually think the answer is obvious. Hopefully, this note will present ideas that may not have occurred even to experienced scientists.
\section{Workshop Objectives}
In this section, we discuss how to define workshop objectives and then proceed to list and describe a set of tips that are useful when organizing a workshop. We will concentrate on \emph{workshops} only; there is not a one-to-one mapping between the tips presented here and those that are useful for conference organization.
\subsection{Defining Objectives}
\label{subsec:objectives}
The first step in organizing a successful scientific workshop is to set \emph{workshop objectives}. This may sound obvious, but it is tremendously important because the rest of the planning follows directly from the stated objectives. In other words, the organizers must first determine (very early on) what will define success so that they can plan for it. The definition of success is not unique and it depends on the type of workshop one wishes to organize, the topic of the workshop, etc.
In our interviews, we have found that some of the most common goals are to create a workshop that possesses
\begin{enumerate}
\item[(i)] an open and friendly atmosphere,
\item[(ii)] ample opportunity for discussions,
\item[(iii)] participants with high and distinct levels of expertise,
\item[(iv)] incentives for participation.
\end{enumerate}
All of these objectives are interconnected and support each other. Keeping these objectives in mind can help in the planning of other organizational details.
\subsection{Objective-Centered Organization}
The best organizational techniques will follow from the organizers' objectives. Below, we list some organizational suggestions that support the objectives of Sec.~\ref{subsec:objectives}.
\subsubsection{Topic Selection}
Careful consideration must be given to the selection of the workshop topic. The first issue to consider is that of \emph{broadness}. If one picks too broad of a topic, then the duration of the workshop may not be long enough. Long workshops with the same participants throughout are usually less successful because one runs the risk of ``burning the participants out''. Scientists have other administrative and teaching responsibilities at their own institutions that they must attend to. Usually, one can get away from these for some time, but unless one is on an extended leave or on sabbatical, one must return to these responsibilities after a short time. Moreover, long workshops run the risk of becoming repetitive, with participants either repeating each other or eventually losing motivation and interest, which usually results in useless discussions. On the other hand, too narrow of a topic may lead to a very small workshop that resembles more a large group meeting than a true workshop~\footnote{Sometimes, very narrow topics are desirable, as we will discuss below when describing \emph{busy days}}. When picking the breadth of a given topic, it is usually best to strike a balance.
Another issue one must consider is that of \emph{timing} and \emph{community interest}. Having multiple workshops on a given particular topic organized a few months from each other is not desirable unless they are planned jointly and organized in a series. Similarly, picking topics that only a very small subset of the scientific community is interested in will lead to too small a workshop. Identifying the right topic of the right broadness, with the right timing and the right level of interest will strongly aid in attracting high-level participants, fulfilling objective (iii).
A final critical organizational choice one must make is that of workshop \emph{duration}. One must strike a balance between a very short meeting (1 day) versus a meeting that is too long (4 days or longer). The sweet spot seems to be around $2$ or $3$ days. Workshops of this duration are long enough to incite discussion and allow for new ideas to emerge, while short enough to not exhaust participants and to fit in with busy schedules. The duration of the workshop, of course, is directly tied to the breadth of the workshop. Long-term workshops are also laudable enterprises, but they are inherently distinct from the type of meetings we are discussing here.
\subsubsection{Participant Selection}
The success or failure of the workshop depends strongly on the workshop participants. The participants should all be experts (or close to experts) in the topics of the workshop, but at the same time they should span a few different communities. On the one hand, a workshop with participants that are experts only on a very small subtopic might prevent original ideas and new approaches to problems to be discussed. On the other hand, a workshop with participants from widely different communities can prevent in-depth discussions on any particular area. Once more, a delicate balance must be struck between level of expertise and breadth of knowledge.
Needless to say, all participants must maintain a cordial and collegiate, professional relation among each other in a workshop atmosphere. If participants cannot agree to these standards, then one runs the risk of creating a tense environment, that then suppresses discussion by all participants, affecting objectives (i) and (ii). The success of a workshop hinges on the participants feeling comfortable enough to discuss and share their thoughts.
The organizers' ability to select participants can be enforced by inviting speakers first, and only after that inviting other participants directly. This can be done easily via email, once a (free) online registration page has been set up. Invitations should always be through \emph{personal} communications or emails and not through mass emails. Once a subset of people have been invited, the workshop can be advertised more broadly. One can draw from existing collaboration networks that may have been developed by the workshop organizers. Asking people to register for the meeting can also serve as a means to convince people to commit to the workshop.
Attracting good workshop participants can be achieved by providing certain incentives, which is particularly important if the workshop location is somewhat remote. In some order of priority, it is recommended that caffeinated beverages and perhaps some sort of food be provided during coffee breaks. Ideally, the organizers will also provide breakfast, to motivate participants to arrive to the workshop early and on time to listen to the first talk. If funding is available, organizers could also provide the incentive of covering hotel costs. This, however, can be very costly and, if limited funds are available, then organizers can choose to provide ``travel grants'' for the participants that need travel support the most. Such a selection can be made by asking for participants to complete a brief ``travel support'' application, if they need to, when they register online.
Some workshop organizers have chosen to provide lunch during meetings, but we have found that this is usually not the best use of funds. Lunch is a good time for a break and the effort needed to organize lunch for the whole group is usually better spent on other tasks. Long lunch breaks after an interesting morning session serve as excellent environments for smaller groups to continue interesting discussions and start collaborations. In general, we suggest to not charge for registration, unless funding is a severe problem.
\subsubsection{Session Organization}
The organization of the session structure is perhaps one of \emph{the} most important topics when organizing a workshop. There are many possible structures that can lead to a successful workshop. We will present below one particular structure that has led to successful workshops in the past. This structure requires that every workshop day be subdivided into two \emph{sessions} (a morning and an afternoon session), each with a few \emph{blocks} of a given duration, separated by extended coffee discussion breaks and lunch discussion breaks.
The first thing to consider is when the morning session should start and when the afternoon session should end (i.e.~the length of the workshop day). Given any set of participants, one is likely to find an admixture of ``morning people'' and ``not morning people.'' That is, workshops will contain a combination of people who are comfortable with very early morning sessions and those who are not. It is thus important to find a balance that makes the mean happy. We have found that a good compromise is to have morning sessions that start around 9 am and afternoon sessions that end around 5 pm. Such ``late'' and ``early'' start and end times enhance participation and lead to more productive workshop hours.
The second consideration is the duration of the morning and afternoon sessions. This choice is controlled by the length of the lunch break. At the very least, the lunch break should last 1.5 hours. Shorter breaks usually lead to participants arriving late to the beginning of the afternoon session, which can be very disruptive. An ideal medium is a break of about 2 hours, especially for workshop venues that lack lunch spaces near the workshop location. If one chooses the lunch break between 12:30 and 2:30 pm, this then automatically means the morning sessions go from 9:00 am to 12:30 pm and the afternoon sessions from 2:30 pm to 5:00 pm.
The third consideration is the subdivision of each session (morning and afternoon) into blocks. The number of blocks that fit into each session is determined by the desired length of each block and of the coffee discussion breaks. For reasons we explain below, blocks of about 1.5 hours appear to be ideal. Shorter blocks do not allow for in-depth discussions, while longer blocks lead to people leaving during the block for short breaks. Coffee breaks are not intended just for participants to have coffee or go to the bathroom, but also to continue in-depth discussions generated during the blocks. For this reason, coffee breaks of around 30 to 45 minutes are ideal.
Given these conditions, one arrives at the following possible breakdown:
\begin{itemize}
\item Morning Session (9:00 am - 12:30 pm):
\begin{itemize}
\item Block 1 (9:00 am - 10:30 am)
\item Coffee Break (10:30 am - 11:00 am)
\item Block 2 (11:00 am - 12:30 pm)
\end{itemize}
\item Lunch Break (12:30 pm - 2:30 pm)
\item Afternoon Session (2:30 pm - 5:00 pm):
\begin{itemize}
\item Block 3 (2:30 pm - 4:00 pm)
\item Coffee Break (4:00 pm - 5:00 pm)
\end{itemize}
\end{itemize}
Of course, as already argued, this is not the only possible breakdown, but rather one that works well. Notice that this structure was inferred by choosing certain criteria, directly associated with the objectives listed in Sec.~\ref{subsec:objectives}. Thus, each block's and break's duration is not chosen arbitrarily.
At the beginning of every session (the morning and afternoon) it is helpful if the organizers remind the audience of the main objectives of that particular session. These objectives are to be developed and planned ahead of time, when deciding what each block is going to cover, so that the topics are well-connected. Such planning also aids in enhancing smooth transitions between blocks. One particularly good technique is to assign each block a set of questions on the particular topic that block is supposed to address. The organizers can then remind the participants of the topic of each block and the associated set of questions.
A direct consequence of the breakdown above is a limit on the number of talks possible, which is a \emph{desired} objective, as ample time for discussion (objective (ii)) automatically implies fewer talks given a fixed workshop duration. One still has the freedom to choose how many talks to include in each block. We find that a single talk per block is truly all that a workshop should have. This is not because the talks should last 1.5 hours, but rather because one wishes each block to have lots of discussion embedded during each talk. Discussion is most naturally generated when participants have questions about material that is being presented.
\subsubsection{Block Organization}
A \emph{critical} ingredient in the organization of blocks under the paradigm described here is the use of ``participative talks''. These talks are those that are created with the goal to encourage discussion \emph{throughout} the talk, as opposed to after the talk in a separate discussion session. For such an interactive discussion situation to emerge naturally, the invited speaker must feel relaxed and willing to devote time during the talk to address questions and allow discussion. This, in turn, can only occur if the speaker has prepared \emph{few} slides (much, much fewer slides than for a 1.5 hour traditional talk). We have found that requesting 30-45 minute talks from the invited speakers is enough to fill up an entire 1.5 hour block that includes discussion.
The organizers must stress and explain to the speakers the participative talk format. Nobody wishes to hear a sequence of 1.5 hour talks at a workshop. When this occurs, workshops begin to resemble conferences instead of collaborative and creative meetings. To prevent this, the organizers could attempt to ask to see the slides ahead of time, which then allows them to suggest cuts, if too many slides have been generated. This, however, can be very difficult to enforce as many speakers choose to prepare their presentations at the last minute. A perhaps better alternative is to clearly describe what a participative talk is supposed to be like and to remind speakers of this several times prior to the workshop.
Another critical element in a participative talk is the level of complexity of the material presented. If the material is too technical, then one runs the risk of ostracizing the audience, thus preventing discussions. To ensure speakers prepare appropriate talks, the organizers could remind the presenters that these talks are not about pushing their personal research or agenda, or impressing the audience. The talks are meant to stimulate discussion, and thus, they should be clear and to the point, with a minimum use of technical jargon. One must avoid a situation in which only a small fraction of the participants understand just a small fraction of the discussions that take place.
Participative talks are more useful in generating discussion than separate discussion sessions because of the way discussion usually emerges. Participants will have questions and will want to discuss material while the speaker is presenting it. It is much easier to introduce these questions and discussion at that time than wait until a formal discussion session later in the day. Such questions then also allow other participants to understand the material better and to follow the discussions. Formal discussion sessions can easily degenerate into ``yet another talk'' given by the moderator or the discussion panel.
Given the limited number of talks, each invited speaker should ideally cover a separate, non-overlapping topic that is carefully planned ahead of time. Organizers have the privilege and responsibility to select these topics carefully and in a coherent manner. Usually, speakers tend to recycle old talks, with perhaps a minimal modification that introduces new results. To avoid this, organizers can ask speakers to address new ideas or present questions, even at the beginning of the talk, rather than answers, about topics they think are worth studying. To avoid overlaps, it is useful for the organizers to put all speakers in contact with each other sufficiently ahead of the workshop. This way, speakers can discuss with each other what topics each of them will present to avoid overlaps. If speakers manage to finish their talks prior to the workshop, organizers may even make these talks available to all speakers to avoid overlaps.
Each block should also have a carefully assigned moderator or ``chair'' to encourage discussion. If not enough discussion is being generated by the speaker's talk, then it is the job of the moderator to ``break the ice'' by asking questions until other participants join in. The moderator must also have the courage to stop discussions in the very rare situation when they have gone on for too long. For this reason, the moderator must be somebody familiar with the topic the speaker will discuss, while at the same time willing to ask questions in public. Like speakers, moderators should not be chosen at random or at the last minute.
\subsubsection{Coffee Break Discussions}
An important organizational element of workshop sessions is the time between blocks. This is precisely when participants mingle, draw on whiteboards, explain difficult concepts to each other, and discuss freely. There should be enough time in these coffee breaks to allow for such discussions and, in particular, to prevent these discussions from being interrupted abruptly by the end of the coffee session. Ideally, each coffee break session is at least 30-45 minutes long.
Flexibility when starting and ending coffee breaks is also of utmost importance. Sometimes blocks finish a bit earlier than prescribed or coffee breaks seem to go for a bit longer. This is perfectly fine. The role of the organizer is to encourage discussion, collaboration, and creativity and never to stifle it. Ending or starting sessions prematurely at the cost of killing discussions should be avoided whenever possible.
Discussions can be further enhanced if the beverages and food associated with the coffee breaks are served in the right environment. Ideally, coffee breaks would occur in the room adjacent to where the workshop is taking place. Using the same room can cause disruptions, as staff set up tables and prepare refreshments. If refreshments are served in the room adjacent to the workshop room, preparations can start before sessions are over without disrupting discussions.
Coffee break rooms should also be well-equipped to enhance collaboration. This means making available plenty of writing material (paper and pens), tables and chairs, as well as white boards. Many times white board discussions naturally emerge during coffee sessions, and this can be very productive.
\subsubsection{Venue Selection}
The venue for a workshop can greatly aid in ensuring its success. The impact of the venue is sometimes underestimated, but we have found (somewhat anecdotal) evidence, in fact, to the contrary. Workshops where the meeting rooms are vast and imposing seem to lead to less discussion than medium-size rooms, where all participants are physically close to each other. Moreover, large venues are prone to generate sub-discussions that occur simultaneously and in parallel to the main discussions of the session. This is counter-productive and it isolates participants instead of creating unity.
Ideally, the meeting room where the workshop takes place is the same during all days of the workshop. This is facilitated by organizing workshops of intermediate duration (2-3 days), as opposed to multi-week endeavors. Of course, participants do not usually mind if they have to move from one room to another between workshop days, but sometimes this can create confusion if the room migration is non-trivial. Participants do mind if the venue is too far away from hotels. In this case, either a closer venue must be identified or a shuttle service should be provided to transport participants to the workshop. For all of these reasons, the venue should be secured \emph{very early on} (e.g.~at least 6 months before the meeting) in the organization of a workshop.
\subsubsection{Technical Infrastructure}
Collaboration tools and equipment should be placed in the meeting room and be adequate for the size of the meeting. This means in particular securing sufficiently large tables for participants to place their laptops and notes on. Notepads could, for example, be provided freely as part of the registration package. If possible, tables should be arranged such as to encourage conversation. Tables in a $\Lambda$ pattern or great arcs can accomplish this, while allowing everybody to see the projection screens clearly. Moreover, meeting rooms should also have several well-illuminated whiteboards with bright markers. Participants and speakers often wish to add or explain material on whiteboards, making the latter of utmost importance.
Projectors and pointers should be selected ahead of time and tested for brightness in well-illuminated rooms. University physics departments usually have projectors and pointers for colloquia that are much brighter than those that a venue for rent could provide. Usually, universities are willing to lend this equipment to faculty for free. Finally, power cords should be run across the workshop room for participants to charge their laptops, and high-speed wi-fi access should be provided. The latter should be easily accessible, preferably with an open network. Wi-fi access, unfortunately, sometimes tends to distract participants, which is why some organizers would rather such access were not provided. However, internet access can also aid in searching for scientific material during discussion sessions, which can help in clarifying points under debate.
A workshop website should be constructed to serve as a hub to collect information. The website should contain a list of participants, schedule, information about accommodation and travel, as well as nearby restaurants, and possibly directions from nearby airports to hotels. Ideally, one would collect all presentations given at the workshop and upload them to the workshop website. If possible, one could also record the discussion sessions and upload these too. This last idea requires proper placement of relatively high-quality, environmental microphones to record the discussions and may not be available in the rented venue. Moreover, the organizers should ask the participants for permission prior to recording and uploading their presentations to the web. If this means the speakers will withhold information or shy away from discussion, then the meeting should not be recorded.
\subsubsection{Plan B}
It is important to be flexible when organizing a workshop since (almost certainly) not everything will go as planned. One of the most common failures is for invited speakers to cancel at the last minute. One must be prepared for such ``mini-disasters'' and plan accordingly. For this reason, it is always very useful to identify one or two people ahead of time and ask them to be possible back-ups, in case speakers cancel or cannot make it to the workshop for some reason (weather being the most common one).
One can try to resolve this problem by allowing invited speakers to give talks via the internet, for example through Skype. In our experience, this solution is not as effective as it sounds. Skype is an ideal collaboration tool, but it is difficult for a speaker to give a talk with this technology, primarily because the speaker cannot gauge the audience's reactions in real time. Subconsciously, speakers always adjust their talks in real-time in response to the audience. For example, if the audience looks confused, a speaker may choose to rephrase or explain a point further. Such real-time adjustments are impossible, or at the very least very difficult, through Skype. Moreover, tele-conferences of this type make it extremely difficult for participants to ask questions in real-time to interrupt and incite discussion. Without this very important element, participative talks turn into traditional talks, which are much less productive for a workshop environment.
Other disasters can of course also occur. Common problems include a last minute, forced change of venue, breakdown of collaboration equipment (like outlets, wi-fi, or projectors), absence of organizing members, or failure of the staffing members to provide adequate and timely refreshments during coffee breaks. All of these problems can be dealt with by the organizing committee easily, if solutions are thought of ahead of time, like the identification of a back-up venue.
\subsubsection{Other Suggestions}
The collection of a group of visiting experts in a specific discipline is an excellent opportunity to reach out to the local community through an education or public outreach event. Ideally, the organizers can recruit a colleague or collaborator with experience in outreach events to either advise on their planning or organize the event. The goal is to have all of the participating scientists interact with the public or students on some level during an event that benefits both the audience and the scientists. Public talks are common outreach events but many other formats can be successful, including school visits, panel discussions, and science caf\'{e}s. Collaboration with colleagues in fields such as education, art, or history can lead to interesting and enjoyable outreach events. It is a good experience for graduate and undergraduate students to participate in the planning and execution of outreach events as part of their own professional development.
A successful workshop can only occur if one attracts the appropriate audience. Of course, direct, personal invitations to special attendees are important, but advertising can also play a critical role. Mailing lists (such as professional society mailing lists) as well as community organized mailing lists (such as hyperspace for the relativity community) can be used to advertise meetings effectively. This advertisement should be written carefully and clearly, including the date and location of the meeting, as well as a detailed description of what the workshop will be about.
A conference dinner organized for all of the participants, whether provided as part of the workshop registration or organized as a no-host event, can often add to the social and collaborative nature of the workshop. Informal dinner or evening events can often be a venue for the continued discussion of workshop or related topics as well as providing a different atmosphere for participants to get to know one another. Organizers should consider the effort, cost, and accessibility of dinner or social events associated with the workshop.
A good workshop attendance also hinges on when the meeting takes place. One should always try to avoid organizing a meeting in close proximity to other traditional meetings organized by the community of interest. For example, it makes sense to avoid organizing physics workshops on or around the same time as the March or April American Physical Society (APS) meetings. If the workshop is on topics related to astrophysics, one should make sure it does not overlap with the American Astronomical Society (AAS) meeting, with Aspen workshops, or with international general relativity or astrophysics meetings. Failing to do so will mean that potential speakers and workshop participants may choose not to attend the workshop and instead attend a more established and well-known conference.
The particular season when the workshop takes place can play a major role in how well-attended it is. In the United States, the Fall (September to December) and the Spring semesters (January to May) are difficult since potential participants may have teaching duties. Winter can always be a problem due to weather delays, if the workshop is organized in a remote location. Summer can be an ideal time to organize meetings, but of course, this season is quite over-subscribed with other meetings and workshops.
A successful workshop will attract a large number of scientists to the organizer's institution, and thus, the organizers may wish to offer visitors to stay after the workshop for some period of time. This serves two purposes. On the one hand, it allows the organizers to collaborate with the visitors more closely, since during the meeting itself, organizers are usually swamped with organizational duties. On the other hand, it allows graduate students and postdocs at the organizer's institution to establish and pursue new collaborations with the visitors beyond the duration of the meeting.
Organizing a meeting is difficult and should be done \emph{early} enough and methodically. Usually, it is ideal to start organizing a small workshop about 8-12 months ahead of the meeting. After deciding on objectives and the format of the meeting, confirmations of attendance of the invited speakers should be secured, followed by confirmation of attendance of special attendees and the preparation of advertisement. One should simultaneously secure a venue early on to ensure accessibility to the best locations. Details that can be dealt with a few months prior to the meeting include arranging for registration packages (with notebooks, directions to restaurants from the workshop venue, wi-fi connectivity information, direction to hotels, etc) and refreshments.
Graduate student help is also very important for a successful workshop organization. Participants that are new to the town where the workshop is being held sometimes need help getting around, finding restaurants, or even finding the venue. Enlisting the help of graduate students addresses this, while introducing the students to well-known researchers and exposing them to the procedures for the successful organization of a workshop.
Participatory workshops are successful if and only if participants participate. When workshops start, participation is minimal because people are sometimes shy or introverted. It is the responsibility of the block moderator and the organizers to break the ice and show everybody that it is ok to ask questions (many questions) and incite discussion. Participants may not be used to such a workshop format and may need to be shown that it is ok to interrupt and incite discussion. By breaking the ice, the organizers can help to ease the tension at the beginning of every meeting. A relaxed atmosphere will then naturally enhance the tendencies of participants to engage with the workshop.
Proceedings and posters are usually not worth the effort and they take away from the informality needed in workshops to enhance participation. Organizers sometimes feel the need to have their participants write proceedings to have ``something to show for the meeting''. Historically, proceedings had equal weight to refereed scientific papers and served to let people all over the world know what new results had been presented. This is why proceedings were usually associated with conferences. Nowadays, however, proceedings have lost part of their utility given the advent of the internet and the arXiv. Many review papers currently exist that are up to date on several different topics, making proceedings usually redundant. The benefit of such proceedings to the community does not outweigh the cost of writing them.
\subsubsection{Other Formats}
The above discussion has concentrated on a particular structure for a successful workshop. This structure was based on workshops we attended in the past, as well as workshops we organized, techniques used in business administration, and interviews conducted with workshop organizers and attendees. But by no means is this the only way to organize a successful workshop. One alternative format is that of a ``busy-meeting'' or ``hack-meeting''. These workshops are usually shorter than the ones described here and their goal is to solve a very specific problem with a small group of participants. Such hack-meetings can actually be embedded in larger workshops, in which case they turn into ``hack-days.'' When workshops are organized for large collaborations, for example, it is common for afternoon sessions to be hack-sessions, where multiple separate sub-groups of participants get together to correct a computer code, finish up a paper, or solve a theoretical physics problem. Such sessions can be highly successful.
Other formats for each morning and afternoon session could also be used to enhance participation. A particularly interesting alternative is the ``debate-format.'' In this exercise, two speakers are invited to debate on a particular topic, where one is asked to take one view, while the other must defend the opposing view (regardless of their own personal opinions on the topic). In science, it is common for conflicting results to arise in the literature. In such instances, it can be very illuminating to have the parties that discovered the conflicting results defend their position in a friendly but scientifically stimulating atmosphere. Of course, organizers must plan such sessions carefully, as sometimes the atmosphere can turn rather tense and be counter-productive.
A particularly interesting variation is for all blocks to have a debate format. For this to succeed, of course, the organizers must be able to come up with enough questions or debate topics, as well as enough invited speakers who are capable of participating in such debates. This can be difficult if the workshop topic is not sufficiently broad, but it can work particularly well if one wishes to increase the breadth of the workshop.
\section{Lessons Learnt}
The burden of whether a workshop is successful is ultimately on the organizers, not on the participants. Put another way, it is not the participants' fault if they are exhausted after a long day of talks and just not in the spirit to discuss topics any further. The organizers must consider all of these issues and realize that every organizational decision they make has a direct impact on the way the workshop turns out. It is thus the organizers' responsibility to plan ahead and provide participants with the tools to make the workshop successful. Because in the end, it is the participants' involvement that can either make or break a meeting.
In order to quantitatively assess the success of a meeting, organizers should plan to collect data useful in determining the quality of the workshop. This can be achieved with \emph{anonymous} ``exit-surveys'', that can be completed at the end of the workshop. Simple questions like: ``Did you enjoy the meeting?'' or ``Would you attend another meeting of this type?'' can provide useful data to quantitatively study workshop success, for example, as a function of workshop size and venue selection. Ideally, data would be collected for many workshops with the goal of aggregating enough data to quantify the workshop parameters that maximize workshop success.
\acknowledgments We would like to thank several graduate students, including Laura Sampson, Paul Baker, Katerina Chatziioannou, Dimitry Ayzenberg, Nicholas Loutrel, and Meg Millhouse, as well as postdoctoral researchers, including Kent Yagi and Antoine Klein, for helping us organize the workshop ``Gravitational Wave Tests of Alternative Theories of Gravity in the Advanced Detector Era'', held at Montana State University in Bozeman, Montana in April 2013. We would also like to thank several physicists and astrophysicists for discussions on this topic, including Pau Amaro-Seoane, Emanuele Berti, Vitor Cardoso, Neil Cornish, Pedro Ferreira, Jon Gair, Tyson Littenberg, Ed Porter, Leo Stein, and Carlos Sopuerta. NY acknowledges support from NSF grant PHY-1114374, PHY-1234826, PHY-1250636, as well as support provided by NASA grant NNX11AI49G, under sub-award 00001944.
|
1,108,101,565,797 | arxiv | \section{#1}}
\renewcommand{\r}[1]{(\ref{#1})}
\newcommand{\cte}[1]{$^{\mbox{\scriptsize \cite{#1}}}$}
\newcommand{\hs}[1]{\hspace*{#1cm}}
\newcommand{\vs}[1]{\vspace*{#1cm}}
\newcommand{\et}[1]{e^{\mbox{\small $#1$}}}
\newcommand{\ul}[1]{\underline{#1}}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\bld}[1]{\mbox{\boldmath $#1$}}
\newcommand{\ket}[1]{| #1 \rangle}
\newcommand{\dif}[1]{\partial_{#1}}
\newcommand{\frm}[1]{\{ #1 \}}
\newcommand{\deriv}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\backderiv}[2]{\frac{\overleftarrow{\partial} #1}{\partial #2}}
\pagestyle{myheadings}
\newcommand{\dhi}{\det(\ul{h})^{-1}}
\renewcommand{\dh}{\det(\ul{h})}
\newcommand{\numfrac}[2]{{\textstyle \frac{#1}{#2}}}
\newcommand{\hoi}{\ol{h}^{-1}}
\renewcommand{\i}{I}
\markright{Quadratic Lagrangians and Topology \ldots}
\begin{document}
\newcommand{\clr}{{\mathcal{R}}}
\title{Quadratic Lagrangians and Topology in Gauge Theory Gravity}
\author{Antony Lewis\thanks{[email protected]},$^1$\ \ Chris
Doran\thanks{[email protected],
http://www.mrao.cam.ac.uk/$\sim$clifford/},$^1$\ \ and Anthony Lasenby$^1$}
\footnotetext[1]{Astrophysics Group, Cavendish Laboratory, Cambridge, U.K.}
\date{\today}
\maketitle
\begin{abstract}
We consider topological contributions to the action integral in a
gauge theory formulation of gravity. Two topological invariants are
found and are shown to arise from the scalar and pseudoscalar parts of
a single integral. Neither of these action integrals contribute to
the classical field equations. An identity is found for the
invariants that is valid for non-symmetric Riemann tensors,
generalizing the usual GR expression for the topological invariants.
The link with Yang-Mills instantons in Euclidean gravity is also
explored. Ten independent quadratic terms are constructed from the
Riemann tensor, and the topological invariants reduce these to eight
possible independent terms for a quadratic Lagrangian. The resulting
field equations for the parity non-violating terms are presented. Our
derivations of these results are considerably simpler that those found
in the literature.
\\KEY WORDS: Quadratic Lagrangians, topology, instantons, ECKS theory.
\end{abstract}
\section{Introduction}
In the construction of a gravitational field theory there is
considerable freedom in the choice of Lagrangian. Einstein's theory
is obtained when just the Ricci scalar is used, but there is no
compelling reason to believe that this is anything other than a good
approximation. Since quadratic terms will be small when the curvature
is small one would expect them to have a small effect at low energies.
However they may have a considerable effect in cosmology or on
singularity formation when the curvature gets larger. Quadratic terms
may also be necessary to formulate a sensible quantum theory.
In this paper we consider the effects of quadratic Lagrangians when
gravity is considered as a gauge theory. Topological invariants place
restrictions on the number of independent quadratic terms one can
place in the Lagrangian. In the gauge theory approach these
invariants arise simply as boundary terms in the action integral. The
Bianchi identity means that these terms do not contribute to the
classical field equations, though they could become important in a
quantum theory. The invariants have a natural analog in Euclidean
gravity in the winding numbers of Yang-Mills instantons. These are
characterized by two integers which can be expressed as integrals
quadratic in the Riemann tensor.
Here we investigate instantons and quadratic Lagrangians in Gauge
Theory Gravity (GTG) as recently formulated by Lasenby, Doran and
Gull~\cite{DGL98-grav}. GTG is a modernized version of ECKS or $U_4$
spin-torsion theory where gravity corresponds to a combination of
invariance under local Lorentz transformations and diffeomorphisms.
With a Ricci Lagrangian GTG reproduces the results of General
Relativity (GR) for all the standard tests, but also incorporates
torsion in a natural manner. When quadratic terms are introduced into
the Lagrangian, the theories differ markedly. In GR one obtains fourth
order equations for the metric~\cite{Stelle78}, whereas in GTG one has
a pair of lower order equations. One of these determines the
connection, which in general will differ from that used in GR. A
reason for these differences can be seen in the way that the fields
transform under scale transformations. In the GTG approach, all of
the quadratic terms in the action transform homogeneously under
scalings. In GR the only terms with this property are those formed
from quadratic combinations of the Weyl tensor.
We start with a brief outline of GTG, employing the notation of the
Spacetime Algebra (STA)~\cite{hes-sta,hes-gc}. This algebraic system,
based on the Dirac algebra, is very helpful in elucidating the
structure of GTG. The simplicity of the derivations presented here is
intended in part as an advertisement for the power of the STA. We
continue by constructing the topological invariants for the GTG action
integral. We show that the two invariants are the scalar and
pseudoscalar parts of a single quantity, and our derivation treats
them in a unified way. The relationship with instanton solutions in
Euclidean gravity is explored. As for instantons in Yang-Mills
theory the rotation gauge field becomes pure gauge at infinity and the
topological invariants are the corresponding winding numbers.
We construct irreducible fields from the Riemann tensor and use these
to form ten independent quadratic terms from the Riemann tensor. In
an action integral the two topological terms can be ignored, so only
eight terms are needed. We construct the field equations for the
parity non-violating Lagrangian terms. Units with $\hbar=c=8\pi G=1$
are used throughout.
\section{Gauge Theory Gravity (GTG)}
In this paper we employ the Spacetime Algebra (STA), which is the
geometric (or Clifford) algebra of Minkowski spacetime. For details
of geometric algebra the reader is referred
to~\cite{hes-sta,hes-gc}. The STA is generated by 4
orthonormal vectors, here denoted $\{\gamma_\mu\}, \mu=0\cdots 3$. These
are equipped with a geometric (Clifford) product. This product is
associative, and the symmetrized product of two vectors is a scalar:
\begin{equation}
{\textstyle \frac{1}{2}} (\gamma_\mu \gamma_\nu + \gamma_\nu \gamma_\mu) = \gamma_\mu {\cdot} \gamma_\nu = \eta_{\mu\nu}
= \mbox{diag}(+---).
\end{equation}
Clearly the $\gamma_\mu$ vectors satisfy the same algebraic properties as
the Dirac matrices. There is no need to introduce an explicit matrix
representation for any of the derivations presented here. The
antisymmetrized product of two vectors is a bivector, denoted with a
wedge $\wedge$. For two vectors $u$ and $v$ we therefore have
\begin{equation}
uv = {\textstyle \frac{1}{2}}(uv+vu) + {\textstyle \frac{1}{2}}(uv-vu) = u {\cdot} v + u {\wedge} v.
\end{equation}
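For example, for the frame vectors themselves we have
\begin{equation}
\gamma_0\gamma_0 = \gamma_0 {\cdot} \gamma_0 = 1, \quad\quad \gamma_0\gamma_1 = \gamma_0 {\wedge} \gamma_1 = -\gamma_1\gamma_0,
\end{equation}
so parallel vectors multiply to give a scalar, while orthogonal vectors anticommute and their product is a pure bivector.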
These definitions extend to define an algebra with 16 elements:
\begin{equation}
\begin{array}{ccccc}
1 & \{ \gamma_\mu \} & \{ \gamma_\mu {\wedge} \gamma_\nu \} & \{ I \gamma_\mu \} & I \\
\mbox{1 scalar} & \mbox{4 vectors} & \mbox{6 bivectors} &
\mbox{4 trivectors} & \mbox{1 pseudoscalar} \\
\mbox{grade 0} & \mbox{grade 1} & \mbox{grade 2} & \mbox{grade 3} &
\mbox{grade 4},
\end{array}
\label{sta}
\end{equation}
where the pseudoscalar $I$ is defined by
\begin{equation}
I \equiv \gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}.
\end{equation}
The pseudoscalar satisfies $I^2=-1$, and generates duality
transformations, interchanging grade-$r$ and grade-$(4-r)$
multivectors.
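For example, a short calculation gives
\begin{equation}
I\gamma_0 = \gamma_0\gamma_1\gamma_2\gamma_3\,\gamma_0 = -\gamma_1\gamma_2\gamma_3,
\end{equation}
so duality maps the grade-1 vector $\gamma_0$ to a grade-3 trivector.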
The STA approach to gauge theory gravity, or GTG, was introduced in~\cite{DGL98-grav}.
The notation there relied heavily on the use of geometric calculus.
Here we have chosen to adopt a different notation which is closer to
more familiar systems. These conventions are sometimes not as elegant as those
of~\cite{DGL98-grav,DGL98-spintor}, but they should help to make the
results more accessible. The first of the gravitational gauge fields
is a position-dependent linear function mapping vectors to vectors.
In~\cite{DGL98-grav} this was denoted by $\ol{h}(a)$. Here we will
instead write
\begin{equation}
h^a = \ol{h}(\partial_a)
\end{equation}
The metric $g^{ab}$ can be formed from
\begin{equation}
g^{ab}=h^a {\cdot} h^b
\end{equation}
Clearly $h^a$ is closely related
to a vierbein and this relationship is explained in detail
in the appendix to~\cite{DGL98-grav}. One point to note is that only one type of
contraction is used in GTG, which is that of the underlying
STA~\r{sta}. Our use of Latin indices reflects the fact that these
indices can be read as abstract vectors, and can be regarded as a
shorthand for the notation of~\cite{DGL98-grav,DGL98-spintor}. Of
course the index can also be viewed as a reference to a particular
orthogonal frame vector $\gamma_\mu$.
The second gauge field is a bivector-valued function $\Omega_a$.
This ensures invariance under local Lorentz transformations, which are
written in the STA using the double-sided formula
\begin{equation}
A \mapsto L A \tilde{L}.
\end{equation}
Here $A$ is an arbitrary multivector, $L$ is a \textit{rotor} --- an
even element satisfying $L\tilde{L}=1$ --- and the tilde denotes the
operation of reversing the order of vectors in any geometric product.
Under a Lorentz transformation $\Omega_a$ transforms as
\begin{equation}
\Omega_a \rightarrow L\Omega_a\tilde{L} - 2\nabla_a L\tilde{L},
\end{equation}
where $\nabla_a = \gamma_a {\cdot} \nabla$ is the flat space derivative in the $\gamma_a$
direction. It follows that $\Omega_a$ takes its values in the Lie
algebra of the group of rotors, which in the STA is simply the space
of bivectors. Of course $\Omega_a$ is a form of spin
connection, the difference here being that it takes its values
explicitly in the bivector subalgebra of the STA.
The $\Omega_a$ function is used to construct a derivative which is
covariant under local spacetime rotations. Acting on an arbitrary
multivector $A$ we define
\begin{equation}
D_a A \equiv \nabla_a A + \Omega_a {\times} A
\end{equation}
where ${\times}$ is the commutator product, $A \times B =
{\textstyle \frac{1}{2}}(AB-BA)$.
The commutator of these derivatives defines the field strength,
\begin{equation}
R_{ab} \equiv \nabla_a\Omega_b-\nabla_b\Omega_a + \Omega_a{\times}\Omega_b.
\end{equation}
This is also bivector-valued, and is best viewed as a linear function
of a bivector argument (the argument being $\gamma_a{\wedge}\gamma_b$ in this
case).
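Indeed, a short calculation confirms this interpretation: acting with two covariant derivatives on an arbitrary multivector $A$ and antisymmetrizing, the $\nabla_a\nabla_b A$ terms cancel, the cross terms are symmetric in $a$ and $b$, and the Jacobi identity for the commutator product gives
\begin{equation}
[D_a, D_b] A = (\nabla_a\Omega_b-\nabla_b\Omega_a + \Omega_a{\times}\Omega_b) {\times} A = R_{ab} {\times} A,
\end{equation}
so $R_{ab}$ plays the role of a Yang-Mills field strength for the $\Omega_a$ field.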
Note that our notation for the derivative differs slightly from that
in~\cite{DGL98-grav}. We wish to use the convention that fully covariant fields are written in
calligraphic type, so here we use the ${\mathcal{D}}_a$ symbol for the fully
covariant derivative
\begin{equation}
{\mathcal{D}}_a \equiv \gamma_a{\cdot} h^b D_b.
\end{equation}
We define the covariant field strength, the
\textit{Riemann tensor}, by
\begin{equation}
\clr_{ab} \equiv \gamma_a{\cdot} h^c \gamma_b{\cdot} h^d R_{cd}.
\label{Riem}
\end{equation}
Again, $\clr_{ab}$ is best viewed as a linear map on the space of
bivectors, and as such it has a total of 36 degrees of freedom.
These covariant objects are at the heart of the
GTG formalism, and distinguish this approach from one based on
differential forms. Covariant objects such as $\clr_{ab}$, or $\partial_a{\mathcal{D}}_a
\alpha$ (where $\alpha$ is a scalar field), are elements of
neither the tangent nor cotangent spaces. Instead they belong in a
separate `covariant' space in which all objects transform simply under
displacements. In this space it is simple to formulate physical laws,
and to isolate gauge invariant variables.
The remaining definition we need is
\begin{equation}
\gamma_b{\cdot} h^a {\mathcal{T}}^b \equiv \partial_b {\wedge} ({\mathcal{D}}_b h^a),
\end{equation}
which defines the \textit{torsion bivector} ${\mathcal{T}}^a$, a covariant
tensor mapping vectors to bivectors. Since the torsion is not assumed to
vanish, we cannot make any assumptions about the symmetries of
the Riemann tensor. Specifically the `cyclic identity' of GR,
$\clr_{ab}\,{\wedge} \,\partial_b=0$, no longer holds.
From the Riemann tensor one forms two contractions, the Ricci tensor
${\mathcal{R}}_a$ and the Ricci scalar ${\mathcal{R}}$,
\begin{equation}
\clr_a = \partial_b {\cdot} \clr_{ba} \quad \quad \clr=\partial_a{\cdot}\clr_a.
\end{equation}
The same symbol is used for the Riemann tensor, Ricci tensor and Ricci
scalar, with the number of subscripts denoting which is intended.
Both of the tensors preserve grade, so it is easy to keep track of the
grade of the objects generated. The Einstein tensor is derived from
the Ricci tensor in the obvious way,
\begin{equation}
{\mathcal{G}}_a = {\mathcal{R}}_a - {\textstyle \frac{1}{2}} {\mathcal{R}} \gamma_a.
\end{equation}
These are all of the definitions required to study the role of
quadratic Lagrangians in GTG.
\section{Topological invariants}
We are interested in the behaviour of quadratic terms in the
gravitational Lagrangian in GTG. We start by constructing the
following quantity (which is motivated by instanton solutions in
Euclidean gravity --- see Section~\ref{S-inst})
\begin{equation}
{\mathcal{Z}}\equiv \partial_a{\wedge} \partial_b{\wedge} \partial_c{\wedge} \partial_d{\mathcal{R}}_{cd} {\mathcal{R}}_{ab} = \partial_a{\wedge}
\partial_b{\wedge} \partial_c{\wedge} \partial_d {\textstyle \frac{1}{2}}({\mathcal{R}}_{cd} {\mathcal{R}}_{ab} +{\mathcal{R}}_{ab}{\mathcal{R}}_{cd}).
\end{equation}
This is a combination of scalar and pseudoscalar terms only, and so
transforms as a scalar under restricted Lorentz transformations.
From equation~\r{Riem} we can write
\begin{equation}
{\mathcal{Z}} = h^a{\wedge} h^b{\wedge} h^c{\wedge} h^d R_{cd}R_{ab}
= h \,\partial_a{\wedge} \partial_b{\wedge} \partial_c{\wedge} \partial_d R_{cd}R_{ab} \equiv h Z
\end{equation}
where $h$ is the determinant defined by
\begin{equation}
h^a{\wedge} h^b{\wedge} h^c{\wedge} h^d \equiv h \,\partial_a{\wedge} \partial_b{\wedge} \partial_c{\wedge} \partial_d
\end{equation}
and
\begin{equation}
Z \equiv \partial_a{\wedge} \partial_b{\wedge} \partial_c{\wedge} \partial_d R_{cd}R_{ab}.
\end{equation}
We can now form an invariant integral that is independent of the
$h^a$ field as
\begin{equation}
S\equiv\int |d^4x| h^{-1} \, {\mathcal{Z}} = \int |d^4x| Z.
\label{Iint}
\end{equation}
From the definition of the Riemann tensor we find that
\begin{align}
Z &= \partial_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\partial_d (2\nabla_c\Omega_d
+\Omega_c\Omega_d)(2\nabla_a\Omega_b+\Omega_a\Omega_b) \nonumber \\
&=-4\partial_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\nabla(\nabla_c\Omega_a\Omega_b +
{\textstyle \frac{1}{3}}\Omega_a\Omega_b\Omega_c) \nonumber \\
&= 2\partial_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\nabla(R_{ac}\Omega_b + {\textstyle \frac{1}{3}}\Omega_a\Omega_b\Omega_c).
\end{align}
The main step in this derivation is the observation that the totally
antisymmetrized product of 4 bivectors vanishes identically in 4-d.
This proof that $Z$ is a total divergence is considerably simpler than
that given in~\cite{Nieh80}, where gamma matrices were introduced in
order to generate a similar `simple' proof in the Riemann-Cartan
formulation. Here we have also treated the scalar and pseudoscalar
parts in a single term, which halves the work.
Since the integral reduces to a boundary term it should only
contribute a global topological term to an action integral, and should
not contribute to the local field equations. This is simple to
check. There is no dependence on the $h^a$ field, so no
contribution arises when this field is varied. When the $\Omega_a$ field
is varied one picks up terms proportional to
\begin{equation}
\gamma_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\partial_d D_d R_{cb} = \numfrac{1}{3}
\gamma_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\partial_d (D_d R_{cb} + D_b R_{dc} + D_c R_{bd}) = 0,
\end{equation}
which vanishes by virtue of the Bianchi identity. Since the two
topological terms do not contribute to the field equations, and can
therefore be ignored in any classical action integral, it is useful to have
expressions for them in terms of simpler combinations of the Riemann
tensor and its contractions. For the scalar term (denoted
$\langle{\mathcal{Z}}\rangle$) we find that
\begin{align}
\langle {\mathcal{Z}}\rangle
&=\partial_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\partial_d\, \clr_{cd}{\wedge} \clr_{ab} \nonumber \\
&=(\partial_a{\wedge}\partial_b{\wedge}\partial_c){\cdot}[ (\partial_d{\cdot}\clr_{cd}){\wedge}\clr_{ab} +
\clr_{cd}{\wedge}(\partial_d{\cdot}\clr_{ab})] \nonumber \\
&= (\partial_a{\wedge}\partial_b){\cdot}[-\clr\,\clr_{ab} + 2\clr_c{\wedge}(\partial_c{\cdot}\clr_{ab}) + \clr_{cd}\,(\partial_c{\wedge}\partial_d){\cdot}\clr_{ab}] \nonumber \\
&= \clr^2+2\partial_a{\cdot}[\partial_b{\cdot}\clr_c \, \partial_c{\cdot}\clr_{ab} - \partial_b{\cdot}(\partial_c{\cdot}\clr_{ab})\clr_c] +
2 \clr_{ba}{\cdot} \bar{\clr}^{ab} \nonumber \\
&= 2 \clr_{ba}{\cdot} \bar{\clr}^{ab} - 4 \clr_a{\cdot}\bar{\clr}^a+\clr^2,
\end{align}
where the adjoint functions are defined by
\begin{equation}
(\gamma_a{\wedge}\gamma_b){\cdot}\bar{\clr}_{cd} \equiv (\gamma_c{\wedge}\gamma_d) {\cdot} \clr_{ab}
\quad\quad
\gamma_a {\cdot} \bar{\clr}_b = \gamma_b {\cdot} {\mathcal{R}}_a.
\end{equation}
For the pseudoscalar term (denoted $\langle{\mathcal{Z}}\rangle_4$) we similarly obtain
\begin{align}
\langle{\mathcal{Z}}\rangle_4
&= \partial_a{\wedge}\partial_b{\wedge}\partial_c{\wedge}\partial_d \, \clr_{cd} {\cdot} \clr_{ab} \nonumber \\
&= \partial_a{\wedge}\partial_b{\wedge}(\bar{\clr}_{cd} \, (\partial_c{\wedge}\partial_d) {\cdot} \clr_{ab} ) \nonumber \\
&= -I (\partial_c{\wedge}\partial_d) {\cdot} \clr_{ab} \, (I\partial_a{\wedge}\partial_b){\cdot}\bar{\clr}_{cd} \nonumber \\
&= 2I \clr^*_{cd}{\cdot}\clr^{cd}
\end{align}
where we have introduced the dual of the Riemann tensor defined
by
\begin{equation}
\clr^*_{ab}\equiv {\textstyle \frac{1}{2}} I \gamma_a{\wedge} \gamma_b{\wedge}\partial_d{\wedge}\partial_c \clr_{cd}.
\end{equation}
We therefore have
\begin{equation}
S=\int|d^4x| h^{-1} \bigl( 2\clr_{ba}{\cdot}\bar{\clr}^{ab} -
4\clr_a{\cdot}\bar{\clr}^a+\clr^2 + 2I\clr^*_{ab}{\cdot}\clr^{ba}\bigr).
\end{equation}
This generalises the usual GR expressions for the topological
invariants to the case where the Riemann tensor need not be symmetric,
as in the case when there is torsion. Both the scalar and
pseudoscalar contributions can usually be ignored in the action
integral. The standard GR expressions are recovered by setting
$\bar{\clr}_{ab}=\clr_{ab}$ and $\bar{\clr}_a = \clr_a$.
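Writing $R_{\mu\nu\rho\sigma}$ for the standard Riemann tensor components, these expressions are recognizable: up to the normalization conventions for the bivector contractions, the scalar invariant is the familiar Gauss--Bonnet (Euler) density and the pseudoscalar invariant is the Pontryagin density,
\begin{equation}
\langle{\mathcal{Z}}\rangle \propto R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - 4R_{\mu\nu}R^{\mu\nu} + R^2, \quad\quad
\langle{\mathcal{Z}}\rangle_4 \propto \epsilon^{\mu\nu\rho\sigma}{R_{\mu\nu}}^{\alpha\beta}R_{\rho\sigma\alpha\beta}.
\end{equation}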
\section{Relation to Instantons}
\label{S-inst}
The derivation of topological terms in GTG has a Euclidean analog,
which gives rise to instanton winding numbers as found in Yang-Mills
theory. For this section we assume that we are working in a Euclidean
space. Most of the formulae go through unchanged, except that now the
pseudoscalar squares to $+1$. For this section we therefore denote
the pseudoscalar by $E$. The proof that the integral~\r{Iint} is a
total divergence is unaffected, and so it can be converted to a
surface integral. The Riemann is assumed to fall off sufficiently
quickly that we can drop the $R_{ac}$ term, so
\begin{equation}
S=-\frac{2}{3}\int|d^3x|n{\wedge}\partial_a{\wedge}\partial_b{\wedge}\partial_c\,\Omega_a\Omega_b\Omega_c.
\end{equation}
For the Riemann to tend to zero the $\Omega_a$ field must tend to pure
gauge,
\begin{equation}
\Omega_a=-2\nabla_aL\tilde{L},
\end{equation}
where $L$ is a (Euclidean) rotor. The integral is invariant under continuous
transformations of the rotor $L$, so we define the winding numbers
\begin{equation}
\chi + E\tau \equiv \frac{1}{6\pi^2} \int |d^3x|
n{\wedge}\partial_a{\wedge}\partial_b{\wedge}\partial_c\, \nabla_a L\tilde{L}\nabla_b L\tilde{L}\nabla_c L\tilde{L}=
\frac{1}{32\pi^2}S.
\label{TopInv}
\end{equation}
The numbers $\tau$ and $\chi$ are instanton numbers for the solution,
here given by the scalar and pseudoscalar parts of one equation. The
common origin of the invariants is clear, as is the fact that one is a
scalar and one a pseudoscalar. There are two integer invariants
because the 4-d Euclidean rotor group is $Spin(4)$ and the homotopy
groups obey
\begin{equation}
\pi_3(Spin(4))=\pi_3(SU(2){\times} SU(2)) = \pi_3(SU(2)){\times}\pi_3(SU(2)) =
Z{\times} Z.
\end{equation}
In Euclidean 4-d space the pseudoscalar $E$ squares to $+1$ and is used to
separate the bivectors into self-dual and anti-self-dual components,
\begin{equation}
B^{\pm} = {\textstyle \frac{1}{2}}(1\pm E)B, \quad\quad E B^{\pm} = \pm B^{\pm}.
\end{equation}
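As a concrete illustration, take an orthonormal Euclidean frame $\{e_\mu\}$ with $E=e_0e_1e_2e_3$. A short calculation gives $Ee_1e_2 = -e_0e_3$ and $Ee_0e_3 = -e_1e_2$, so the bivectors
\begin{equation}
B^{\pm} = {\textstyle \frac{1}{2}}(e_1e_2 \mp e_0e_3)
\end{equation}
satisfy $EB^{\pm} = \pm B^{\pm}$, as required.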
These give rise to the two separate instanton numbers, one for each of
the $SU(2)$ subgroups. In spacetime, however, the pseudoscalar has
negative square and instead gives rise to a natural complex structure.
The structure frequently re-emerges in gravitation theory. The fact
that the complex structures encountered in GR are geometric in origin
is often forgotten when one attempts a Euclideanized treatment of
gravity.
\section{Quadratic Lagrangians}
We now use the preceding results to construct a set of independent
Lagrangian terms for GTG which are quadratic in the field strength
(Riemann) tensor $\clr_{ab}$. None of these terms contain derivatives
of $h^a$, so all transform homogeneously under rescaling of $h^a$.
Local changes of scale are determined by
\begin{equation}
h^a \mapsto \et{-\alpha} h^a, \quad \Omega_a \mapsto \Omega_a,
\end{equation}
where $\alpha$ is a function of position. The field strength transforms
as
\begin{equation}
{\mathcal{R}}_{ab}\mapsto \et{-2\alpha}{\mathcal{R}}_{ab},
\end{equation}
so all quadratic terms formed from ${\mathcal{R}}_{ab}$ pick up a factor of
$\exp(-4\alpha)$ under scale changes. It follows that all quadratic
combinations contribute a term to the action integral that is
invariant under local rescalings. This situation is quite different
to GR, where only combinations of the Weyl tensor are invariant. As
a result the field equations from quadratic GTG (and ECKS theory) are
very different to those obtained in GR.
To construct the independent terms for a quadratic Lagrangian we need
to construct the irreducible parts of the Riemann tensor. To do this
we write
\begin{equation}
\clr_{ab}={\mathcal{W}}_{ab} + {\mathcal{P}}_{ab} +{\mathcal{Q}}_{ab}
\label{Rdecomp}
\end{equation}
where
\begin{equation}
\partial_a{\mathcal{W}}_{ab}=0 \quad\quad \partial_a{\mathcal{P}}_{ab}=\partial_a{\wedge}{\mathcal{P}}_{ab} \quad\quad
\partial_a{\cdot}{\mathcal{Q}}_{ab}=\clr_b.
\end{equation}
In the language of Clifford analysis, this is a form of monogenic
decomposition of ${\mathcal{R}}_{ab}$~\cite{ASGDL97,som96}. To achieve this
decomposition we start by defining~\cite{DGL98-grav}
\begin{equation}
{\mathcal{Q}}_{ab}={\textstyle \frac{1}{2}}(\clr_a{\wedge} \gamma_b + \gamma_a{\wedge}\clr_b) - \numfrac{1}{6} \gamma_a{\wedge}
\gamma_b\clr,
\end{equation}
which satisfies $\partial_a{\cdot}{\mathcal{Q}}_{ab}=\clr_b$. We next take the protraction
of~\r{Rdecomp} with $\partial_a$ to obtain
\begin{equation}
\partial_a{\wedge}\clr_{ab} - {\textstyle \frac{1}{2}}\partial_a{\wedge}\clr_a{\wedge} \gamma_b = \partial_a{\wedge}{\mathcal{P}}_{ab}.
\label{dawdgr}
\end{equation}
We now define the vector valued function
\begin{equation}
{\mathcal{V}}_b \equiv -I\partial_a{\wedge}\clr_{ab} = \partial_a{\cdot}(I\clr_{ab}).
\end{equation}
The symmetric part of ${\mathcal{V}}_b$ is
\begin{align}
{\mathcal{V}}^+_b
&={\textstyle \frac{1}{2}}({\mathcal{V}}_b + \gamma^a \, {\mathcal{V}}_a {\cdot} \gamma_b) \nonumber \\
&= -I {\textstyle \frac{1}{2}} (\gamma^a {\wedge} {\mathcal{R}}_{ab} + \gamma^a \, \gamma_b {\wedge} \gamma^c {\wedge}
{\mathcal{R}}_{ca} ) \nonumber \\
&= -I(\gamma^a {\wedge} {\mathcal{R}}_{ab} -{\textstyle \frac{1}{2}} \gamma^a {\wedge} {\mathcal{R}}_a {\wedge} \gamma_b)
\end{align}
so we have
\begin{equation}
\partial_a{\wedge}{\mathcal{P}}_{ab} = I{\mathcal{V}}^+_b.
\end{equation}
It follows that
\begin{equation}
{\mathcal{P}}_{ab}= -{\textstyle \frac{1}{2}} I({\mathcal{V}}^+_a{\wedge} \gamma_b + \gamma_a{\wedge}{\mathcal{V}}^+_b)+
\numfrac{1}{6} I \gamma_a{\wedge} \gamma_b {\mathcal{V}}
\end{equation}
where
\begin{equation}
{\mathcal{V}}=\partial_a{\cdot}{\mathcal{V}}_a.
\end{equation}
This construction of ${\mathcal{P}}_{ab}$ ensures that the tensor has zero
contraction, as required.
Splitting the Ricci tensor into symmetric and antisymmetric parts we
can finally write the Riemann tensor as
\begin{align}
\clr_{ab} =& {\mathcal{W}}_{ab} + {\textstyle \frac{1}{2}}(\clr^+_a{\wedge} \gamma_b + \gamma_a {\wedge} \clr^+_b) -
\numfrac{1}{6}\gamma_a{\wedge} \gamma_b \clr \nonumber\\
& + {\textstyle \frac{1}{2}}(\clr^-_a{\wedge} \gamma_b + \gamma_a {\wedge} \clr^-_b) - \numfrac{1}{2}I({\mathcal{V}}^+_a{\wedge}
\gamma_b + \gamma_a{\wedge}{\mathcal{V}}^+_b) +\numfrac{1}{6}I \gamma_a{\wedge} \gamma_b {\mathcal{V}}
\end{align}
where $+$ and $-$ superscripts denote the symmetric and antisymmetric
parts of a tensor respectively. This decomposition splits the Riemann
tensor into a Weyl term (${\mathcal{W}}_{ab}$) with 10 degrees of freedom, two
symmetric tensors (${\mathcal{R}}^+_a$ and ${\mathcal{V}}^+_a$) with 10 degrees of
freedom each, and an anti-symmetric tensor (${\mathcal{R}}^-_a$) with 6 degrees
of freedom. These account for all 36 degrees of freedom in
${\mathcal{R}}_{ab}$. The first three terms in the decomposition are the usual
ones for a symmetric Riemann tensor and would be present in GR. The
remaining terms come from the antisymmetric parts of ${\mathcal{R}}_{ab}$ and
only arise in the presence of spin or quadratic terms in the
Lagrangian. It is now a simple task to construct traceless
tensors from ${\mathcal{V}}^+_a$ and $\clr^+_a$ to complete the decomposition
into irreducible parts.
We can write the antisymmetric part of $\clr_a$ as
\begin{equation}
\clr^-_a = \gamma_a{\cdot} {\mathcal{A}}
\end{equation}
where ${\mathcal{A}}={\textstyle \frac{1}{2}} \partial_a{\wedge}\clr_a$ is a bivector. Using this definition
we can write down 10 independent scalar terms which are quadratic in
the Riemann tensor:
\begin{eqnarray}
\quad \{\,{\mathcal{W}}^{ab}{\cdot}{\mathcal{W}}_{ab}, \quad {\mathcal{W}}^{ab}{\cdot}(I{\mathcal{W}}_{ab}), \quad
\clr^{+a}{\cdot}\clr^+_a, \quad \clr^2, \quad \nonumber\\
\quad {\mathcal{A}} {\cdot} {\mathcal{A}}, \quad {\mathcal{A}}{\cdot} (I{\mathcal{A}}), \quad
{\mathcal{V}}^{+a}{\cdot}{\mathcal{V}}^+_a, \quad {\mathcal{V}}^{+a}{\cdot}\clr^+_a, \quad {\mathcal{V}}^2, \quad
\clr{\mathcal{V}} \,\}
\end{eqnarray}
Six of these are invariant under parity and four are parity violating.
The two topological invariants can be used to remove two terms, so
there are only eight possible independent quadratic terms for the
gravitational Lagrangian. The classical field equations arising from
an equivalent set of terms are calculated in~\cite{Obukhov89} where the
Einstein-Cartan formalism is used. The theory is locally equivalent to GTG.
For calculational purposes it is easier to use the six parity
invariant terms
\begin{equation}
\{\,\clr^{ab}{\cdot}\clr_{ba}, \quad \clr^a{\cdot}\clr_a, \quad \bar{\clr}^a{\cdot}\clr_a,
\quad \clr^2, \quad {\mathcal{V}}^a{\cdot}{\mathcal{V}}_a, \quad {\mathcal{V}}^2\,\}
\end{equation}
and the four parity violating terms
\begin{equation}
\{\,\clr^{ab}{\cdot}(I\clr_{ba}), \quad \clr^a{\cdot}{\mathcal{V}}_a, \quad
\bar{\clr}^a{\cdot}{\mathcal{V}}_a,\quad \clr{\mathcal{V}}\,\}
\end{equation}
which are linear combinations of the irreducible components. The
topological invariants can be used to remove one term from each set.
If we consider just the parity invariant terms and use the topological
invariant to remove $\bar{\clr}^a{\cdot}\clr_a$ we can calculate the field
equations from
\begin{equation}
{\mathcal{L}}_{R^2}={\textstyle \frac{1}{4}}\epsilon_1\clr^2 +{\textstyle \frac{1}{2}}\epsilon_2\clr^a{\cdot}\clr_a
+{\textstyle \frac{1}{4}}\epsilon_3\clr^{ab}{\cdot}\clr_{ba} +{\textstyle \frac{1}{4}}\epsilon_4{\mathcal{V}}^2 +
{\textstyle \frac{1}{2}}\epsilon_5{\mathcal{V}}^a{\cdot}{\mathcal{V}}_a.
\end{equation}
The field equations for the $h^a$ give a modified Einstein
tensor of the form
\begin{equation}
{\mathcal{G}}'_a = {\mathcal{G}}_{a} + \epsilon_1{\mathcal{G}}_{1a} + \epsilon_2{\mathcal{G}}_{2a} +
\epsilon_3{\mathcal{G}}_{3a} + \epsilon_4{\mathcal{G}}_{4a} + \epsilon_5{\mathcal{G}}_{5a}
\end{equation}
where
\begin{align}
{\mathcal{G}}_{1a} &= \clr (\clr_a - {\textstyle \frac{1}{4}} \gamma_a \clr) \\
{\mathcal{G}}_{2a} &= \gamma_b\, \clr^b{\cdot}\clr_a + \clr_{ab}{\cdot}\clr^b-{\textstyle \frac{1}{2}} \gamma_a \, \clr^b{\cdot}\clr_b \\
{\mathcal{G}}_{3a} &= \gamma_b\, \clr^{bc}{\cdot}\clr_{ca}-{\textstyle \frac{1}{4}} \gamma_a \, \clr^{bc}{\cdot}\clr_{cb}\\
{\mathcal{G}}_{4a} &= {\mathcal{V}}({\mathcal{V}}_a-{\textstyle \frac{1}{4}} \gamma_a{\mathcal{V}}) \\
{\mathcal{G}}_{5a} &= \gamma_b\, {\mathcal{V}}^b{\cdot}{\mathcal{V}}_a + (I\clr_{ab}){\cdot}{\mathcal{V}}^b-{\textstyle \frac{1}{2}} \gamma_a\,
{\mathcal{V}}^b{\cdot}{\mathcal{V}}_b .
\end{align}
These tensors all have zero contraction, as expected from scale
invariance.
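As a quick check, contracting ${\mathcal{G}}_{1a}$ as in the definition of $\clr$, and using $\partial_a{\cdot}\gamma_a=4$ in four dimensions, gives
\begin{equation}
\partial_a{\cdot}{\mathcal{G}}_{1a} = \clr\left(\partial_a{\cdot}\clr_a - {\textstyle \frac{1}{4}}(\partial_a{\cdot}\gamma_a)\clr\right) = \clr(\clr - \clr) = 0,
\end{equation}
and the remaining tensors vanish under contraction in the same way.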
The field equations for $\Omega_a$ give the generalized torsion equation
of the form
\begin{equation}
{\mathcal{N}}_a={\mathcal{S}}_a
\end{equation}
where ${\mathcal{N}}_a$ is the (generalized) torsion tensor and ${\mathcal{S}}_a$ is the
matter spin tensor. Both of these are bivector-valued functions of
their vector argument.
It is convenient to employ the over-dot notation for the
covariant derivative of tensors,
\begin{equation}
\dot{{\mathcal{D}}}_a \dot{T}_b = {\mathcal{D}}_a T_b - T_c \, \gamma^c {\cdot} ({\mathcal{D}}_a \gamma_b),
\end{equation}
which has the property of commuting with contractions. This
definition extends in the obvious manner for tensors with more indices. The contributions to ${\mathcal{N}}_a$ from the five terms in
the action integral are then given concisely by
\begin{align}
{\mathcal{N}}_{1a} &= - {\mathcal{R}}\, \gamma^b {\cdot} ( \gamma_a {\wedge} {\mathcal{T}}_b) + \gamma_a {\wedge} \gamma^b
\, {\mathcal{D}}_b {\mathcal{R}} \\
{\mathcal{N}}_{2a} &= \bigl( (\gamma^b {\wedge} \gamma^c) {\cdot} (\gamma_a {\wedge} {\mathcal{T}}_c)
\bigr) {\wedge} {\mathcal{R}}_b + \gamma_a {\wedge} (\dot{{\mathcal{D}}}_b \dot{{\mathcal{R}}}^b) - \gamma^b
{\wedge} (\dot{{\mathcal{D}}}_b \dot{{\mathcal{R}}}_a) \\
{\mathcal{N}}_{3a} &= \dot{{\mathcal{D}}}_b {{\dot{{\mathcal{R}}}}_a}{}^{b} + (\gamma^b {\wedge}
\gamma^c) {\cdot} {\mathcal{T}}_c \, {\mathcal{R}}_{ab} - {\textstyle \frac{1}{2}} {\mathcal{R}}_{bc} \, (\gamma^c {\wedge} \gamma^b)
{\cdot} {\mathcal{T}}_a \\
I{\mathcal{N}}_{4a} &= - {\mathcal{V}}\, \gamma^b {\cdot}(\gamma_a {\wedge} {\mathcal{T}}_b) + \, \gamma_a {\wedge} \gamma^b \,
{\mathcal{D}}_b {\mathcal{V}} \\
I{\mathcal{N}}_{5a} &= \bigl( (\gamma^b {\wedge} \gamma^c) {\cdot} (\gamma_a {\wedge} {\mathcal{T}}_c)
\bigr) {\wedge} {\mathcal{V}}_b + \, \gamma_a
{\wedge} (\dot{{\mathcal{D}}}_b \dot{{\mathcal{V}}}^b ) - \, \gamma^b {\wedge} (\dot{{\mathcal{D}}}_b \dot{{\mathcal{V}}}_a).
\end{align}
More elegant expressions can be obtained if one uses the linear
function notation and conventions used in~\cite{hes-gc,DGL98-grav}.
\section{Conclusions}
We have shown that in gauge theory gravity topological terms are
simply dealt with and reduce to boundary integrals which do not alter
the (classical) field equations. These topological terms have a
natural analog in the winding numbers for instanton solutions
in Euclidean gravity. The main difference between the two cases is
due to the opposite signs of the squares of the pseudoscalars. This
difference is nicely highlighted by working with the scalar and
pseudoscalar invariants in a unified way. In the Euclidean setup the
pseudoscalar drives duality transformations, which reduce the
$Spin(4)$ group to two $SU(2)$ subgroups. In Minkowski spacetime,
however, the pseudoscalar has negative square, and is responsible for
the frequently made observation that there is a natural complex
structure associated with the gravitational field
equations~\cite{pen-I}.
We constructed ten possible terms for a quadratic Lagrangian, which
the topological invariants then restrict to eight independent terms.
The field equations for these have been derived elsewhere, but the
derivations and formulae presented here are considerably simpler than
in previous approaches.
\section*{Acknowledgements}
Antony Lewis was supported by a PPARC Research Studentship during the
course of this work. Chris Doran is supported by an EPSRC Advanced Fellowship.
\section{Introduction}
Given a set $\mathcal P$ of points and a set $\Pi$ of geometric objects (for example, one might consider lines, circles, or hyperplanes) in $\ensuremath{\mathbb R}^d$, an \emph{incidence} is a pair $(p,V)\in \mathcal P \times \Pi$ such that the point $p$ is contained in the object $V$.
The number of incidences in $\mathcal P\times\Pi$ is denoted as $I(\mathcal P,\Pi)$.
We sometimes associate the set of incidences with a bipartite graph, called the \emph{incidence graph} of $\mathcal P \times \Pi$.
This graph has vertex sets $\mathcal P$ and $\Pi$, and an edge for every incidence.
In incidence problems, one is usually interested in the maximum number of incidences in $\mathcal P \times \Pi$, taken over all possible sets $\mathcal P,\Pi$ of a given size (equivalently, the maximum number of edges that the incidence graph can contain).
Such incidence bounds have many applications in a variety of fields.\footnote{For a few recent examples, see Guth and Katz's distinct distances result \cite{GK15}, a number theoretic result by Bombieri and Bourgain \cite{Bb15}, and several works in harmonic analysis by Bourgain and Demeter, such as \cite{BD15}.}
The following theorem is one of the main results in incidence theory.
\begin{theorem} \label{th:PS} {\bf (Pach and Sharir \cite{PS92,PS98})}
Let $\mathcal P$ be a set of $m$ points and let $\Gamma$ be a set of $n$ algebraic curves of degree at most $D$, both in $\ensuremath{\mathbb R}^2$, such that the incidence graph of $\mathcal P\times\Gamma$ contains no copy of the complete bipartite graph $K_{s,t}$.
Then
$$I(\mathcal P,\Gamma) = O_{D,s,t}\left(m^{s/(2s-1)}n^{(2s-2)/(2s-1)}+m+n \right).$$
\end{theorem}
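For example, two distinct points determine at most one line, so for sets of lines one may take $s=t=2$; the theorem then recovers the tight Szemer\'edi--Trotter bound $O\left(m^{2/3}n^{2/3}+m+n\right)$.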
Better bounds are known for some specific types of curves, such as circles and axis-parallel parabolas.
In 2010 Guth and Katz \cite{GK15} introduced the \emph{polynomial partitioning} technique to handle incidences with lines in $\ensuremath{\mathbb R}^3$.
Since then, this technique has been extended by a series of papers, yielding general bounds for incidences in any dimension.
\begin{theorem} \label{th:UpperBounds}
Let $\mathcal P$ be a set of $m$ points and let $\Pi$ be a set of $n$ varieties of degree at most $D$, both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times \Pi$ contains no copy of $K_{s,t}$. \\
(a) (Zahl \cite{Zahl13}; see also Kaplan, Matou\v sek, Safernov\'a, and Sharir \cite{KMSS12}) When $d=3$ and every three varieties of $\Pi$ have a zero-dimensional intersection, we have
\[I(\mathcal P,\Pi) = O_{D,s,t}\left(m^{2s/(3s-1)}n^{(3s-3)/(3s-1)}+m+n \right). \]
%
(b) (Basu and Sombra \cite{BS14}) When $d=4$ and every four varieties of $\Pi$ have a zero-dimensional intersection, we have
\[ I(\mathcal P,\Pi) = O_{D,s,t}\left(m^{3s/(4s-1)}n^{(4s-4)/(4s-1)}+m+n \right). \]
(c) (Fox, Pach, Sheffer, Suk, and Zahl \cite{FPSSZ14}) For any $d\ge 3$ and ${\varepsilon}>0$, and with no intersection-related restrictions, we have
\[I(\mathcal P,\Pi) = O_{D,s,t}\left(m^{(d-1)s/(ds-1)+{\varepsilon}}n^{(ds-d)/(ds-1)}+m+n \right). \]
\end{theorem}
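For instance, when no two points lie on two common varieties we may take $s=2$, and part (c) gives
\[I(\mathcal P,\Pi) = O_{D,t}\left(m^{(2d-2)/(2d-1)+{\varepsilon}}n^{d/(2d-1)}+m+n \right); \]
this is the bound that Theorem \ref{th:LowerBounds} below matches.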
Most of the non-trivial lower bounds that are known for incidence problems are for the case of curves (e.g., see Elekes' construction \cite{Elekes01}).
The only non-trivial lower bound that we are aware of for objects of dimension at least two is by Brass and Knauer \cite{BK03}. This is a bound for incidences with hyperplanes with no $K_{s,s}$ in the incidence graph, for a large constant $s$.
The case of large $s$ behaves somewhat differently, and upper bounds better than the ones in Theorem \ref{th:UpperBounds} are known for it (below we present these upper bounds, the lower bound of \cite{BK03}, and an improved lower bound that we derive for this scenario).
It may also be worth mentioning a bound for unit spheres in $\ensuremath{\mathbb R}^3$ that was observed by Erd\H os \cite{erd60}. This bound is a straightforward extension of the one for the planar unit distances problem \cite{erd46}.
In this paper we provide lower bounds that match the upper bounds of Theorem \ref{th:UpperBounds} (up to an extra ${\varepsilon}$ in the exponent) in dimension $d\ge 4$, with no $K_{2,t}$ in the incidence graph (for a constant $t$ that depends on ${\varepsilon}$ and $d$), and with $n$ satisfying a specific relation with $m$.
\begin{theorem} \label{th:LowerBounds}
For any $m$, $d\ge 4$, and ${\varepsilon}>0$, there exists a set $\mathcal P$ of $m$ points and a set $\Pi$ of $n=\Theta\left(m^{(3-3{\varepsilon})/(d+1)}\right)$ hyperplanes in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times\Pi$ contains no copy of $K_{2,(d-1)/{\varepsilon}}$ and with $I(\mathcal P,\Pi)=\Omega\left(m^{(2d-2)/(2d-1)}n^{d/(2d-1)-{\varepsilon}} \right)$. This bound remains valid when replacing the hyperplanes with hyperspheres, or with any linearly-closed family of graphs (see definition below).
\end{theorem}
\parag{Remarks.} (a) For the above sizes of $m$ and $n$ we have $m+n=o\left(m^{(2d-2)/(2d-1)}\right)$, so this is indeed the dominating term in the bounds of Theorem \ref{th:UpperBounds}. \\
(b) The extra ${\varepsilon}$ in the exponents can be removed from the theorem at the cost of replacing the condition of no $K_{2,d/{\varepsilon}}$ in the incidence graph with no $K_{2,3\lg n}$, and adding a factor of $n^{-c/\lg \lg n}$ to the incidence bound (for some constant $c$);\footnote{Unless stated otherwise, the logarithms in this paper are all with base $e$.} see below for more details.
\vspace{2mm}
We define a \emph{graph} in $\ensuremath{\mathbb R}^d$ to be a hypersurface that is defined by an equation of the form $x_d = f(x_1,\ldots,x_{d-1})$ (where $f:\ensuremath{\mathbb R}^{d-1}\to \ensuremath{\mathbb R}$).
Notice that $f$ is not required to be a polynomial or even algebraic.
We say that a family $F$ of graphs is \emph{linearly-closed} if it satisfies the following property:
If $V\in F$ is a graph that is defined by $x_d = f(x_1,\ldots,x_{d-1})$, then for every $a_1,\ldots,a_d\in \ensuremath{\mathbb R}$ the graph that is defined by $x_d = f(x_1,\ldots,x_{d-1})+a_1x_1+\cdots + a_{d-1}x_{d-1}+a_d$ is also in $F$.
As examples of linearly-closed families of graphs, consider the set of paraboloids in $\ensuremath{\mathbb R}^d$ that are defined by $x_d = a_1(x_1-b_1)^2+\cdots+a_{d-1}(x_{d-1}-b_{d-1})^2$ (where $a_1,\ldots,a_{d-1},b_1,\ldots,b_{d-1}\in \ensuremath{\mathbb R}$), the set of graphs that are also algebraic varieties of degree $k$ in $\ensuremath{\mathbb R}^d$, and the set of curves in $\ensuremath{\mathbb R}^2$ that are defined by $y=e^{ax}+b\sqrt{x}+cx+d$ (where $a,b,c,d\in \ensuremath{\mathbb R}$).
Theorem \ref{th:LowerBounds} is a special case of a more general theorem that we prove.
Specifically, Theorem \ref{th:LowerBounds} is immediately obtained from Theorem \ref{th:GeneralBound} by setting $\delta= \frac{2d-2}{2d-1}$.
\begin{theorem} \label{th:GeneralBound}
Consider ${\varepsilon},\delta>0$ and positive integers $m$ and $d\ge 4$.
Then there exist a set $\mathcal P$ of $m$ points and a set $\Pi$ of $n=\Theta\left(m^{(3-3{\varepsilon})/(d+1)}\right)$ hyperplanes, both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times \Pi$ contains no copy of $K_{2,d/{\varepsilon}}$, and with
\[ I(\mathcal P,\Pi)=\Omega\left(m^{\delta}n^{(d+2-\delta(d+1))/3-{\varepsilon}}\right). \]
This bound remains valid when replacing the hyperplanes with hyperspheres, or with any linearly-closed family of graphs.
\end{theorem}
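Indeed, for $\delta= \frac{2d-2}{2d-1}$ we have $m^\delta = m^{(2d-2)/(2d-1)}$ and
\[ \frac{d+2-\delta(d+1)}{3} = \frac{(d+2)(2d-1)-(2d-2)(d+1)}{3(2d-1)} = \frac{3d}{3(2d-1)} = \frac{d}{2d-1}, \]
so the exponents of Theorem \ref{th:LowerBounds} follow.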
\parag{Remarks.} (a) The extra ${\varepsilon}$ in the exponents can be removed from Theorem \ref{th:GeneralBound} at the cost of replacing the condition of no $K_{2,d/{\varepsilon}}$ in the incidence graph with no $K_{2,3\lg n}$, and adding a factor of $n^{-c/\lg \lg n}$ to the incidence bound (for some constant $c$). This is immediately obtained by setting $p=1/t$ in the probabilistic argument that is in Section \ref{ssec:Chefnoff}. \\
(b) In Theorem \ref{th:GeneralBound}, our goal was to minimize the value of $t$ in the condition of having no $K_{2,t}$ in the incidence graph (having Theorem \ref{th:LowerBounds} in mind). One can easily change our analysis to obtain bounds for any $t$ between $d/{\varepsilon}$ and $n^{(d-4)/(d-1)}$ (the case of $t=\Theta\left(n^{(d-4)/(d-1)}\right)$ can be found in Lemma \ref{le:Intermediate}).
\vspace{2mm}
Theorems \ref{th:LowerBounds} and \ref{th:GeneralBound} consider incidences with hypersurfaces, but also apply to surfaces of lower dimensions.
For example, say that we are interested in $d$-dimensional planes in $\ensuremath{\mathbb R}^{d'}$ (where $d<d'$).
We can choose an arbitrary $(d+1)$-dimensional plane in $\ensuremath{\mathbb R}^{d'}$ and apply Theorem \ref{th:LowerBounds} or \ref{th:GeneralBound} in it. This might be considered as cheating, since we get a configuration that is not ``truly $d'$-dimensional''.
Our technique is based on a recent work of Bourgain and Demeter \cite{BD15}.
Bourgain and Demeter study discrete Fourier restriction to the four- and five-dimensional spheres, partly by relying on bounds for incidences with hyperplanes in $\ensuremath{\mathbb R}^4$ and $\ensuremath{\mathbb R}^5$.
In some sense, we reverse the direction of their analysis to obtain a result about incidences.
Several of the ideas in this paper can be found under disguise in \cite{BD15}.
As already mentioned, for large values of $s$ better bounds are known than the ones in Theorem~\ref{th:UpperBounds}.
\begin{theorem} \label{th:LargeS}
Let $\mathcal P$ be a set of $m$ points and let $\Pi$ be a set of $n$ varieties of degree at most $D$, both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times\Pi$ contains no copy of $K_{s,s}$ (for some constant $s$). \\
(a) (Brass and Knauer \cite{BK03}; Apfelbaum and Sharir \cite{AS07}) When the elements of $\Pi$ are hyperplanes, we have
\[ I(\mathcal P,\Pi) = O\left(m^{d/(d+1)}n^{d/(d+1)} + m + n\right). \]
(b) (Fox, Pach, Sheffer, Suk, and Zahl \cite{FPSSZ14}) For any $d$, ${\varepsilon}>0$, and set of constant-degree varieties that can be parameterized with $r$ parameters, we have
\[ I(\mathcal P,\Pi) = O\left(m^{r(d-1)/(dr-1)+{\varepsilon}}n^{(r-1)d/(dr-1)} + m + n\right). \]
\end{theorem}
Notice that part (b) of the theorem is a generalization of part (a) (up to the extra ${\varepsilon}$ in the exponent) since for hyperplanes we have $r=d$. Brass and Knauer \cite{BK03} also presented a configuration of $m$ points and $n$ hyperplanes in $\ensuremath{\mathbb R}^d$ with no $K_{s,s}$ in the incidence graph, for some large $s$. When $d=3$ this configuration has $\Omega\left((mn)^{7/10}\right)$ incidences, when $d>3$ is odd there are $\Omega\left((mn)^{1-2/(d+3)-{\varepsilon}}\right)$ incidences, and when $d>3$ is even there are $\Omega\left((mn)^{1-2(d+1)/(d+2)^2-{\varepsilon}}\right)$ incidences.
Theorem \ref{th:GeneralBound} immediately implies the following improved bound for any $d\ge 4$ (but for $m$ and $n$ that satisfy a specific relation) by setting $\delta=\frac{d+2}{d+4}$.
\begin{corollary} \label{co:LargeS}
Consider an ${\varepsilon}>0$, and positive integers $m$ and $d\ge 4$.
Then there exist a set $\mathcal P$ of $m$ points and a set $\Pi$ of $n=\Theta\left(m^{(3-3{\varepsilon})/(d+1)}\right)$ hyperplanes, both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times \Pi$ contains no copy of $K_{2,d/{\varepsilon}}$, and with $I(\mathcal P,\Pi)=\Omega\left((mn)^{1-2/(d+4)-{\varepsilon}}\right)$.
This bound remains valid when replacing the hyperplanes with hyperspheres, or with any linearly-closed family of graphs.
\end{corollary}
For $d\ge 4$ Corollary \ref{co:LargeS} yields a stronger bound than the one in \cite{BK03}, under a stronger restriction (no $K_{2,d/{\varepsilon}}$ rather than no $K_{s,s}$ for a large $s$), and for more types of hypersurfaces.
On the other hand, the bound in \cite{BK03} has the advantage of applying to every $m$ and $n$, while Corollary \ref{co:LargeS} requires a specific relation between these two sizes.
The three main tools that are used in the proof of Theorem \ref{th:GeneralBound} (in addition to standard incidence techniques) are bounding the additive energy of a set with the Fourier transform, relying on properties of ellipsoids in high-dimensional lattices, and a probabilistic argument that is based on several Chernoff bounds. In Section \ref{sec:Fourier} we study the additive energy, in Section \ref{sec:lattice} we study ellipsoids and lattices, and in Section \ref{sec:MainProof} we finally prove Theorem \ref{th:GeneralBound}.
In our analysis we rely on ellipsoids that are defined by equations with bounded integer parameters.
In $\ensuremath{\mathbb R}^4$, we rely on the fact that such a two-dimensional ellipsoid can contain many points of the integer lattice $\ensuremath{\mathbb Z}^4$, but a one-dimensional ellipse cannot.
A similar yet more involved argument holds in $\ensuremath{\mathbb R}^d$ for $d>4$.
For our analysis to hold in $\ensuremath{\mathbb R}^3$, we would require the existence of such one-dimensional ellipses that contain many points of $\ensuremath{\mathbb Z}^3$.
The analysis fails in $\ensuremath{\mathbb R}^3$ since such ellipses do not exist (for the full details, see Section \ref{sec:lattice}).
Thus, the current state of incidence lower bounds in $\ensuremath{\mathbb R}^3$ remains somewhat embarrassing.
For example, it seems that for the case of point-plane incidences with no $K_{2,t}$ in the incidence graph, nothing is known beyond the trivial $\Omega(m^{2/3}n^{2/3})$ (this bound is easily obtained from the lower bound for point-line incidences in $\ensuremath{\mathbb R}^2$).
\vspace{2mm}
\section{Additive energy on the truncated paraboloid} \label{sec:Fourier}
For $d\ge 4$, we denote by $S_d \subset \ensuremath{\mathbb R}^d$ the paraboloid that is defined by
\[ x_d = x_1^2+\cdots+x_{d-1}^2. \]
Similarly, we define the truncated paraboloid
\begin{equation} \label{eq:truncated}
S_{n,d} = \left\{ (x_1,\ldots,x_d)\in \ensuremath{\mathbb R}^d : x_d=x_1^2+\cdots+x_{d-1}^2 \text{ and } |x_1|,\ldots,|x_{d-1}| \le n^{\frac{1}{d-1}} \right\}. \end{equation}
Let $\mathcal P = S_{n,d} \cap \ensuremath{\mathbb Z}^d$ be the set of points on $S_{n,d}$ with integer coordinates. Notice that $|\mathcal P|=\Theta(n)$.
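Indeed, a point of $\mathcal P$ is obtained by choosing an integer value in $[-n^{1/(d-1)},n^{1/(d-1)}]$ for each of $x_1,\ldots,x_{d-1}$, which then determines $x_d$; the number of such choices is $\Theta\left(\left(2n^{1/(d-1)}+1\right)^{d-1}\right)=\Theta(n)$.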
We are interested in the set of quadruples
\[ Q(\mathcal P) = \{ (a,b,c,d) \in \mathcal P^4 :\, a+b = c+d \}. \]
The quantity $E(\mathcal P)=|Q(\mathcal P)|$ is known as the \emph{additive energy} of $\mathcal P$, and is one of the main objects that are studied in additive combinatorics (e.g., see \cite[Section 2.3]{TV06}). The proof of Theorem \ref{th:GeneralBound} is based on double counting $E(\mathcal P)$.
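Note that every finite set satisfies the trivial bounds $|\mathcal P|^2 \le E(\mathcal P) \le |\mathcal P|^3$: the quadruples with $a=c$ and $b=d$ already contribute $|\mathcal P|^2$, while any three of the four points in a quadruple determine the fourth.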
In the current section we derive the following lower bound for $E(\mathcal P)$.
The following lemma is a generalization of Remark 3.2 from \cite{BD15}.
\begin{lemma} \label{le: LowerEnergy}
$E(\mathcal P) = \Omega\left(n^{3-2/(d-1)}\right)$.
\end{lemma}
The idea of using the Fourier transform to bound the additive energy of a set is rather common (e.g., see \cite{SchoenShk13}).
One possible reason for this is that such energy can be expressed using convolutions, and often convolutions are easier to study using the Fourier transform.
In some sense, the paraboloid was chosen since it maximizes the lower bound that is implied by the following argument.
\begin{proof}
Let $e(\xi)=e^{2\pi i\xi}$ and let $\ensuremath{\mathbb T}^d$ be the $d$-dimensional torus $[0,1)^d$.
We denote by $1_{\mathcal P}$ the indicator function of $\mathcal P$.
That is, for any $x\in \ensuremath{\mathbb Z}^d$ we have $1_{\mathcal P}(x)=1$ if $x\in \mathcal P$, and otherwise $1_{\mathcal P}(x)=0$.
We consider the Fourier inversion of $1_{\mathcal P}$:
\[ f(x) = \sum_{p \in \ensuremath{\mathbb Z}^d} 1_{\mathcal P}(p) e(x\cdot p) = \sum_{p \in \mathcal P}e(x\cdot p). \]
We then have
\begin{align*}
\int_{\ensuremath{\mathbb T}^d} \left| f(x) \right|^4 &= \int_{\ensuremath{\mathbb T}^d} f(x)^2\overline{ f(x)^2} \\[2mm]
&= \sum_{a_1,a_2,a_3,a_4\in \mathcal P}\int_{\ensuremath{\mathbb T}^d}e\left((a_1+a_2-a_3-a_4)\cdot x\right).
\end{align*}
Consider a fixed $a_1,a_2,a_3,a_4\in \mathcal P$ with $a_1+a_2-a_3-a_4\neq 0$, and notice that
\[ \int_{x\in \ensuremath{\mathbb T}^d}e\left((a_1+a_2-a_3-a_4)\cdot x\right) = 0. \]
%
On the other hand, when $a_1+a_2-a_3-a_4= 0$ we obviously have
%
\[ \int_{x\in \ensuremath{\mathbb T}^d}e\left((a_1+a_2-a_3-a_4)\cdot x\right) = 1. \]
%
By combining the two cases, we get that $\int_{\ensuremath{\mathbb T}^d} \left| f(x) \right|^4$ is the
number of solutions to $a_1+a_2=a_3+a_4$, taken over all possible $a_1,a_2,a_3,a_4\in \mathcal P$.
In other words,
\[ E(\mathcal P) = \int_{\ensuremath{\mathbb T}^d} \left| f(x) \right|^4. \]
For a large constant $c$, consider a point $x\in \ensuremath{\mathbb T}^d$ with $|x_1|,\ldots,|x_{d-1}| <1/(cn^{1/(d-1)})$ and $|x_d|<1/(cn^{2/(d-1)})$. Notice that for any $p\in \mathcal P$ we have $|x \cdot p| < 2d/c$. That is, we have a subset of $\ensuremath{\mathbb T}^d$ of measure $\Omega\left(n^{-(d+1)/(d-1)}\right)$ that contains only points $x$ that satisfy $|x\cdot p|< 2d/c$ for every $p\in \mathcal P$.
By taking $c$ to be sufficiently large, we get that $e(x\cdot p)$ is close to 1.
Specifically, $\left| 1-e^{2\pi i x}\right| \le 2\pi \|x\|$, where $\|x\|$ is the distance between $x$ and the closest integer (e.g., see \cite[Section 4.4]{TV06}).
We thus have
\[ E(\mathcal P) = \int_{\ensuremath{\mathbb T}^d} \left| f(x) \right|^4 = \int_{\ensuremath{\mathbb T}^d} \left|\sum_{p \in \mathcal P}e(x\cdot p) \right|^4 = \Omega\left(\frac{1}{n^{(d+1)/(d-1)}} \cdot n^4\right) = \Omega\left(n^{3-2/(d-1)}\right). \]
\vspace{-10mm}
\end{proof}
The bound of Lemma \ref{le: LowerEnergy} is tight for any $d\ge 4$ up to an extra ${\varepsilon}$ in the exponent (and it seems likely that this extra ${\varepsilon}$ restriction is unnecessary).
Indeed, combining the analysis in Section \ref{sec:MainProof} with the bound in part (c) of Theorem \ref{th:UpperBounds} yields $E(\mathcal P) = O\left(n^{3-2/(d-1)+{\varepsilon}}\right)$.
\section{Ellipsoids containing lattice points} \label{sec:lattice}
In this section we derive upper bounds for the maximum number of integer lattice points that various types of ellipsoids in $\ensuremath{\mathbb R}^d$ can be incident to.
For this purpose, we rely on the following result of Dirichlet (for example, see \cite[Section 11.2]{Iwaniec97}).
In this section of our paper (but not in the other ones), when writing $\left(\frac{a}{b}\right)$ we refer to the Kronecker symbol rather than to division.
\begin{theorem} \label{th:BinaryQuad}
Let $a,b,c,n$ be positive integers such that $\gcd(a,b,c)=1$, and set $D=b^2-4ac$.
If $D<0$ then there exists an integer $2\le k \le 6$ such that the number of integer solutions to $ax^2+bxy+cy^2=n$ is
\[ k\sum_{d|n} \left(\frac{D}{d}\right). \]
\end{theorem}
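For example, setting $a=c=1$ and $b=0$ gives $D=-4$ and $k=4$, which is Jacobi's classical formula for the number of representations of $n$ as a sum of two squares: four times the difference between the number of divisors of $n$ that are congruent to 1 modulo 4 and the number that are congruent to 3 modulo 4.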
We first consider the case of ellipses in $\ensuremath{\mathbb R}^d$ for $d\ge 4$. We say that such an ellipse is $n$-\emph{proper} if it is the intersection of the paraboloid $S_d$ and $d-2$ hyperplanes with the following properties.
Two of the hyperplanes are defined by linear equations with every coefficient being an integer with an absolute value of size $O_d(n)$.
Each of the remaining $d-4$ hyperplanes is orthogonal to a different axis.
Notice that an intersection of $S_d$ with $d-2$ hyperplanes may not result in an ellipse.
In the following we only consider intersections that do form ellipses.
\begin{lemma} \label{le:EllipseLattice}
For $d\ge 4$, let $\mathcal P$ be a section of the $d$-dimensional integer lattice of size $n\times n \times \cdots \times n$ (so $\mathcal P$ consists of $n^d$ points of $\ensuremath{\mathbb Z}^d$).
Let $\gamma$ be an $n$-proper ellipse. Then $\gamma$ contains $O_d\left(n^{c_d/\lg \lg n}\right)$ points of $\mathcal P$ (for some constant $c_d$ that only depends on $d$).
\end{lemma}
\begin{proof}
We project both $\mathcal P$ and $\gamma$ onto the two-dimensional plane $\Pi$ that is spanned by the first two axes.
This yields an $n\times n$ lattice $\mathcal P'$ and an ellipse $\gamma'$.
In the rest of the proof we ignore $\ensuremath{\mathbb R}^d$ and work only in $\Pi$.
We can obtain the equation that defines $\gamma'$ by eliminating $x_3,\ldots,x_d$ in the equation $x_d=x_1^2+\cdots+x_{d-1}^2$, using the $d-2$ linear equations that define the hyperplanes in the definition of $\gamma$.\footnote{Obtaining a projection by eliminating variables gives a variety that contains the projection, but is not necessarily identical to it (e.g., see \cite[Chapter 3]{CLOu}). This is sufficient for our purpose, since we are interested in an upper bound for the number of incidences, and this step can only increase their number (moreover, in the special case of an $n$-proper ellipse the projection is equivalent to the variety).} That is, $\gamma$ is the zero set of an equation of the form
\[ a_1x^2+a_2y^2+a_3xy+a_4x+a_5y-a_6, \]
where $a_1,\ldots,a_6$ are rational numbers with numerators and denominators of size at most $n^{O_d(1)}$.
We next study the center $p=(p_x,p_y)$ of $\gamma'$.
We translate the plane by $-p$ so that $\gamma'$ is centered at the origin.
The equation that defines $\gamma'$ becomes
\[ a_1(x+p_x)^2+a_2(y+p_y)^2+a_3(x+p_x)(y+p_y)+a_4(x+p_x)+a_5(y+p_y)=-a_6. \]
Since an ellipse that is centered at the origin has no linear terms in its defining equation, we have
\begin{align} \label{eq:TwoLinear}
2a_1p_x+a_3p_y+a_4=0 \qquad \text{ and } \qquad 2a_2 p_y+a_3p_x+a_5=0.
\end{align}
Since $\gamma'$ is an ellipse by assumption, we have $a_1,a_2\neq 0$ and $a_3^2<4a_1a_2$ ($a_3^2=4a_1a_2$ corresponds to a parabola and $a_3^2>4a_1a_2$ corresponds to a hyperbola).
These properties imply that there exists a unique solution to \eqref{eq:TwoLinear} (when considering $p_x$ and $p_y$ as the variables).
In this solution $p_x$ and $p_y$ are rational numbers with numerators and denominators of size at most $n^{O_d(1)}$.
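Explicitly, solving \eqref{eq:TwoLinear} with Cramer's rule gives
\[ p_x = \frac{a_3a_5-2a_2a_4}{4a_1a_2-a_3^2} \quad\quad \text{and} \quad\quad p_y = \frac{a_3a_4-2a_1a_5}{4a_1a_2-a_3^2}, \]
where the common denominator $4a_1a_2-a_3^2$ is positive since $\gamma'$ is an ellipse.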
That means that we can refine $\mathcal P'$ to be an $n'\times n'$ square lattice $\mathcal P''$ that contains $p$, with $n' = n^{O_d(1)}$ (that is, the refined lattice is defined by two vectors of the same size and in the directions of the axes, and it fully contains $\mathcal P'$).
We perform a uniform scaling so that $\mathcal P''$ becomes an $n'\times n'$ section $\mathcal P^*$ of the integer lattice.
The translation and scaling take $\gamma'$ into an ellipse $\gamma^*$ that is the zero set of
\[ b_1x^2+b_2y^2+b_3xy-b_4, \]
where $b_1,\ldots,b_4$ are rational numbers with numerators and denominators of size at most $n^{O_d(1)}$.
To remove the denominators, we multiply this polynomial by the smallest common multiple of the denominators.
We then find the greatest common divisor $g$ of the resulting integer coefficients and divide the polynomial by $g$.
We get that $\gamma^*$ is the zero set of
\[ c_1x^2+c_2y^2+c_3xy-c_4, \]
where $c_1,\ldots,c_4$ are integers of size at most $n^{O_d(1)}$ with $\gcd(c_1,c_2,c_3,c_4)=1$.
If $g'=\gcd(c_1,c_2,c_3)>1$, then there are no integer solutions to $c_1x^2+c_2y^2+c_3xy=c_4$ since the above implies that $g'$ does not divide $c_4$. That is, we may assume that $\gcd(c_1,c_2,c_3)= 1$.
Since $\gamma^*$ is still an ellipse, we also have $c_3^2<4c_1c_2$.
Thus, we may apply Theorem \ref{th:BinaryQuad} to $c_1x^2+c_2y^2+c_3xy=c_4$.
This implies that the number of points of $\ensuremath{\mathbb Z}^2$ that are contained in $\gamma^*$ is $O\left(\sum_{m|c_4} \left(\frac{c_3^2-4c_1c_2}{m}\right)\right)$.
As an upper bound for this expression, we consider the case where every element in the sum equals 1. That is, the number of solutions is at most linear in the number of divisors of $c_4$.
This number of divisors is at most $c_4^{O(1/\lg\lg c_4)}$ (e.g., see \cite[Section 1.6]{Tao09}).
Thus, the ellipse $\gamma^*$ contains $O_d\left(n^{c_d/\lg \lg n}\right)$ points of $\ensuremath{\mathbb Z}^2$ (for some constant $c_d$ that depends on $d$).
That is, $\gamma^*$ contains $O_d\left(n^{c_d/\lg \lg n}\right)$ points of $\mathcal P^*$, which in turn implies that $\gamma$ contains $O_d\left(n^{c_d/\lg \lg n}\right)$ points of $\mathcal P$.
\end{proof}
We now extend the result of Lemma \ref{le:EllipseLattice} to ellipsoids of larger dimensions. Let $1\le k\le d-3$. We say that a $k$-dimensional ellipsoid in $\ensuremath{\mathbb R}^d$ is $n$-\emph{proper} if it is the intersection of the paraboloid $S_d$ and $d-k-1$ hyperplanes with the following properties.
Two of the hyperplanes are defined by linear equations with every coefficient being an integer with an absolute value of size $O_d(n)$.
Each of the remaining $d-k-3$ hyperplanes is orthogonal to a different axis.
\begin{lemma} \label{le:ellipsoids}
For $d\ge 4$, let $\mathcal P$ be a section of the $d$-dimensional integer lattice of size $n\times n \times \cdots \times n$ (so $\mathcal P$ consists of $n^{d}$ points of $\ensuremath{\mathbb Z}^d$) and let $1\le k\le d-3$.
Let $E$ be an $n$-proper $k$-dimensional ellipsoid, also in $\ensuremath{\mathbb R}^d$.
Then $E$ contains $O_d\left(n^{k-1+c_d/\lg \lg n}\right)$ points of $\mathcal P$ (for some constant $c_d$ that only depends on $d$).
\end{lemma}
\begin{proof}
The proof is by induction on $k$. For the induction basis, the case of $k=1$ is Lemma \ref{le:EllipseLattice}.
For the induction step, consider a $k$-dimensional $n$-proper ellipsoid $E$ in $\ensuremath{\mathbb R}^d$ (where $1<k\le d-3$).
Let $h$ denote the $(k+1)$-dimensional flat that fully contains $E$, and notice that $\mathcal P' = h\cap \mathcal P$ is a $(k+1)$-dimensional rectangular lattice of size at most $n\times n \times \cdots \times n$ (if $\mathcal P'$ is of lower dimension, then we consider a smaller-dimensional flat $h'$ that contains $\mathcal P'$ and use the induction hypothesis on $h'\cap E$).
We can cover the points of $\mathcal P'$ with at most $n$ parallel $k$-dimensional flats that are fully contained in $h$, with each such flat containing a $k$-dimensional rectangular lattice of size at most $n\times n \times \cdots \times n$.
Moreover, we can take these $k$-dimensional flats to be orthogonal to one of the axes.
For each of these flats $h_i$, notice that $h_i \cap E$ is an $n$-proper ellipsoid of dimension at most $k-1$ (or an empty set).
By the induction hypothesis, each such lower-dimensional ellipsoid contains $O_d\left(n^{k-2+c_d/\lg \lg n}\right)$ points. The assertion of the lemma is obtained by summing up this bound over the lower-dimensional ellipsoids.
\end{proof}
It seems likely that the bound of Lemma \ref{le:ellipsoids} can be slightly improved to $O_d\left(n^{k-1}\right)$.
This is known to be the case for $k$-dimensional spheres that are centered at a point of the lattice (e.g., see \cite{Gross21}).
Bombieri and Pila \cite{BP89} proved that any irreducible algebraic curve of degree $k$ in $\ensuremath{\mathbb R}^2$ contains $O_k(n^{1/(2k)})$ points of a $\sqrt{n}\times\sqrt{n}$ section of the integer lattice.
While it is easy to verify that this bound is tight, it seems possible that a much stronger bound holds for curves such as arbitrary circles and ellipses.
We are not aware of any results in this direction.
We are also not aware of any non-trivial bounds for general ellipsoids in $\ensuremath{\mathbb R}^d$.
\section{Proof of Theorem \ref{th:GeneralBound}} \label{sec:MainProof}
This section contains the main part of our analysis, and it is divided into two parts. In the first part we prove the following lemma, and in the second we use this lemma to derive Theorem \ref{th:GeneralBound}.
\begin{lemma} \label{le:Intermediate}
Consider constants $\beta>2$ and $\gamma\ge 0$, positive integers $n$ and $d\ge 4$, and let $\alpha = \frac{\beta d+ d-3\beta+1}{d-1}$.
Then there exists a set $\mathcal P$ of $m=\Theta\left(n^{(d+1)/(d-1)}\right)$ points and a set $\Pi$ of $n$ hyperplanes, both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times \Pi$ contains no copy of $K_{2,t}$ where $t=\Theta\left(n^{(d-4)/(d-1)+ c/\lg \lg n}\right)$ (for some constant $c$), and with
\[ I(\mathcal P,\Pi)=\Omega\left(m^{(\beta-1)/\beta} n^{\left(\alpha-\gamma\frac{d-4}{d-1}-\frac{c}{\lg \lg n}\right)/\beta}t^{\gamma/\beta}\right). \]
This bound remains valid when replacing the hyperplanes with hyperspheres, or with any linearly-closed family of graphs.
\end{lemma}
\subsection{Proving Lemma \ref{le:Intermediate}} \label{ssec:InterLemma}
We recall the definitions of $S_d$ and $S_{n,d}$ from Section \ref{sec:Fourier}.
We again define $\mathcal P = S_{n,d} \cap \ensuremath{\mathbb Z}^d$ and recall that $|\mathcal P|=\Theta(n)$.
By Lemma \ref{le: LowerEnergy} we have
\begin{equation} \label{eq:Lower4}
E(\mathcal P)=\Omega\left(n^{3-2/(d-1)}\right).
\end{equation}
We will now reduce the problem of obtaining an upper bound for $E(\mathcal P)$ to a point-hyperplane incidence problem.
The existence of various upper bounds for the maximum number of point-hyperplane incidences in $\ensuremath{\mathbb R}^d$ would then imply an upper bound on $E(\mathcal P)$ that contradicts \eqref{eq:Lower4}.
In other words, such point-hyperplane incidence upper bounds cannot hold.
This implies that there exist point-hyperplane configurations with a large number of incidences (the full proof below is more constructive and also provides some information about what these configurations look like).
Consider points $a,b\in \mathcal P$ such that $a+b=v\in\ensuremath{\mathbb R}^d$.
Recalling that every point $x\in S_{n,d}$ satisfies $x_d = x_1^2+\cdots+x_{d-1}^2$, we have
\begin{align*}
v_d &= a_1^2+\cdots+a_{d-1}^2 + b_1^2+\cdots+b_{d-1}^2 \\
&= a_1^2+\cdots+a_{d-1}^2 + (v_1-a_1)^2+\cdots+(v_{d-1}-a_{d-1})^2 \\
&= 2a_d -2a_1v_1-\cdots-2a_{d-1}v_{d-1} +\left(v_1^2+\cdots+v_{d-1}^2\right).
\end{align*}
That is, if $a,b\in \mathcal P$ satisfy $a+b=v$ then $a$ and $b$ are both contained in the hyperplane $H_v$ that is defined by
\begin{equation} \label{eq:PiPlane}
2x_d -2x_1v_1-\cdots-2x_{d-1}v_{d-1} + \left(v_1^2+\cdots+v_{d-1}^2\right)-v_d=0.
\end{equation}
Let $\Pi = \{H_v :\, v=a+b \text{ with } a,b\in \mathcal P \}$.
We denote by $N_k$ the number of hyperplanes $H_v\in \Pi$ that contain between $2^{k}$ and $2^{k+1}-1$ pairs $a,b\in \mathcal P$ with $a+b=v$.
Since every pair of points of $\mathcal P$ corresponds to a unique $v$, we have $\sum_k N_k \cdot 2^k = O(n^2)$.
For an $r$ that we will determine below, we have
\begin{equation} \label{eq:ESn4}
E(\mathcal P) < \sum_{k=0}^{\lg n} N_k 2^{2(k+1)} = \sum_{k=0}^{r} N_k 2^{2(k+1)} + \sum_{k=r+1}^{\lg n} N_k 2^{2(k+1)}.
\end{equation}
By the above bound for $\sum_k N_k \cdot 2^k$, we have $\sum_{k=0}^{r} N_k 2^{2(k+1)} = O\left(n^2 2^r\right)$.
To handle the second sum, we notice that for any $a\in H_v$ there exists at most one $b\in \mathcal P$ such that $a+b=v$.
Thus, the number of pairs in $H_v$ is at most $|H_v \cap \mathcal P|/2$.
Assume that we have the bound
\begin{equation} \label{eq:NkHyp4}
N_k = O\left(\frac{n^\alpha}{2^{\beta k}}+\frac{n}{2^k}\right),
\end{equation}
for some $\beta> 2$ and $\alpha = (\beta d+ d-3\beta+1)/(d-1)$ (note that for any $\beta>2$ we have $\alpha>0$).
We then get that
\[ \sum_{k=r+1}^{\lg n} N_k 2^{2(k+1)} = \sum_{k=r+1}^{\lg n} O\left(\frac{n^{\alpha}}{2^{(\beta-2)k}}+n2^k\right) = O\left(\frac{n^{\alpha}}{2^{(\beta-2)r}}+n^2\right).\]
To optimize the bound in \eqref{eq:ESn4}, we need $r$ to satisfy $n^2 2^r = \Theta\left(n^{\alpha}/2^{(\beta-2)r}\right)$ (the term $n^2$ in the above bound is always subsumed by $n^2 2^r$).
Since $\beta>2$, we have that $n^{\alpha}/2^{(\beta-2)r}$ is decreasing in $r$, and thus the optimal value for $r$ is indeed obtained when the two bounds are equal.
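For the reader's convenience, the arithmetic behind this choice: solving $n^2 2^r = n^{\alpha}2^{-(\beta-2)r}$ for $r$ gives $r=\frac{\alpha-2}{\beta-1}\lg_2 n$, and
\[ \alpha-2 = \frac{\beta d+ d-3\beta+1-2(d-1)}{d-1} = \frac{(\beta-1)(d-3)}{d-1}. \]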
That is, we set $r=\frac{2-\alpha}{1-\beta}\lg_2 n = \frac{d-3}{d-1}\lg_2 n$ and obtain
\begin{equation} \label{eq:EPupper}
E(\mathcal P) < \sum_{k=0}^{r} N_k 2^{2(k+1)} + \sum_{k=r+1}^{\lg n} N_k 2^{2(k+1)} = O\left(n^{2+(d-3)/(d-1)}\right) = O\left(n^{3-2/(d-1)}\right).
\end{equation}
Notice that this upper bound for $E(\mathcal P)$ matches the lower bound in \eqref{eq:Lower4}.
If no $k'$ satisfies $N_{k'} = \Omega\left(n^\alpha/2^{\beta k'}\right)$, we would be able to improve the above upper bound, which would contradict \eqref{eq:Lower4}.
Thus, such a $k'$ must exist.
Moreover, there must exist such a $k'$ with $|k'-r|=O(1)$.
Indeed, assume for contradiction that there exist functions $f_1(n)=\omega(1)$ and $f_2(n)=\omega(1)$ such that for every $k$ that satisfies $|k-r|\le f_1(n)$ we have $N_{k} = O\left(n^\alpha/(2^{\beta k}f_2(n))\right)$.
We set $f(n)= \min\{f_1(n),f_2(n)\}$, $r'=r-(2\beta-4)^{-1}\cdot \lg_2 f(n)$, and $r''= r+(2\beta-4)^{-1}\lg_2 f(n)$.
By repeating the calculation in \eqref{eq:EPupper}, we obtain
\begin{align*}
E(\mathcal P) &< \sum_{k=0}^{r'-1} N_k 2^{2(k+1)} + \sum_{k=r'}^{r''-1} N_k 2^{2(k+1)} +\sum_{k=r''}^{\lg n} N_k 2^{2(k+1)} \\
&= O\left(n^2 2^{r'} + \frac{n^\alpha}{2^{(\beta-2) r'}f(n)} + \frac{n^\alpha}{2^{(\beta-2) r''}}\right) \\
&= O\left(\frac{n^{3-2/(d-1)}}{f(n)^{1/(2\beta-4)}}+ \frac{n^{3-2/(d-1)}}{f(n)^{1/2}}+ \frac{n^{3-2/(d-1)}}{f(n)^{1/2}}\right) = o\left(n^{3-2/(d-1)}\right).
\end{align*}
This contradicts \eqref{eq:Lower4}, so there must exist $k'$ that satisfies $|k'-r|=O(1)$ and $N_{k'} = \Omega\left(n^\alpha/2^{\beta k'}\right)$.
Let $\Pi'$ denote the set of hyperplanes $H_v \in \Pi$ that contain between $2^{k'}$ and $2^{k'+1}-1$ pairs $a,b\in \mathcal P$ with $a+b=v$. Notice that $N_{k'} = |\Pi'|$.
Every hyperplane of $\Pi'$ intersects $S_d$ in a $(d-2)$-dimensional ellipsoid.
(While there exist hyperplanes that intersect $S_d$ in a paraboloid, none of these are in $\Pi'$. Indeed, such hyperplanes are parallel to the $x_d$-axis, whereas the coefficient of $x_d$ in \eqref{eq:PiPlane} is $2\neq 0$.)
Similarly, two hyperplanes of $\Pi'$ intersect $S_{d}$ in an ellipsoid of dimension at most $d-3$.
By inspecting the defining equation \eqref{eq:PiPlane} of the hyperplanes of $\Pi'$, we notice that all of the coefficients in it are integers of absolute value $O_d(n^{2/(d-1)})$.
Thus, the ellipsoids that are obtained from such intersections are $n$-proper.
By Lemma \ref{le:ellipsoids}, the intersection of two hyperplanes of $\Pi'$ cannot contain $t=\Theta\left(n^{(d-4)/(d-1)+ c/\lg \lg n}\right)$ points of $\mathcal P$.
In other words, the incidence graph of $\mathcal P \times \Pi'$ contains no copy of $K_{t,2}$.
We use the standard point-hyperplane duality on $\mathcal P$ and $\Pi'$ to obtain a configuration with no $K_{2,t}$ in the incidence graph. We denote the new point set as $\Pi^*$ and the new set of hyperplanes as $\mathcal P^*$.
With this new notation, we have $|\mathcal P^*|=\Theta(n)$, $|\Pi^*|= \Omega_d\left(n^\alpha/2^{\beta k'}\right)$, and that every point of $\Pi^*$ is incident to at least $2^{k'}$ hyperplanes of $\mathcal P^*$.
We arbitrarily remove points from $\Pi^*$ to obtain $|\Pi^*|= \Theta_d\left(n^\alpha/2^{\beta k'}\right)$. This implies
\begin{align*}
I(\Pi^*,\mathcal P^*) &= \Omega\left(|\Pi^*| \cdot 2^{k'}\right) = \Omega_d\left(|\Pi^*|^{(\beta-1)/\beta} \left(n^\alpha/2^{\beta k'}\right)^{1/\beta}\cdot 2^{k'}\right) \\
&= \Omega_d\left(|\Pi^*|^{(\beta-1)/\beta} |\mathcal P^*|^{\alpha/\beta}\right) = \Omega_d\left(|\Pi^*|^{(\beta-1)/\beta} |\mathcal P^*|^{\left(\alpha-\gamma\frac{d-4}{d-1}-\frac{\gamma c}{\lg \lg n}\right)/\beta}t^{\gamma/\beta}\right).
\end{align*}
It remains to compute $|\Pi^*|$. Recall that $|\Pi^*|= \Theta\left(n^\alpha/2^{\beta k'}\right)$.
Moreover, we know that $2^{k'} = \Theta\left(2^r\right) = \Theta\left(n^{(d-3)/(d-1)}\right)$. We conclude that
\[ |\Pi^*| = \Theta\left(n^\alpha/n^{\beta(d-3)/(d-1)}\right) = \Theta\left(n^{\frac{\beta d+ d-3\beta+1}{d-1}-\frac{\beta d-3\beta}{d-1}}\right)=\Theta\left(n^{(d+1)/(d-1)}\right). \]
This completes the case of incidences with hyperplanes.
\parag{Other hypersurfaces}
We now show how to apply the above argument to other types of hypersurfaces.
The case of hyperspheres is obtained by a simple use of the \emph{inversion transformation} around the origin $\rho_d:\ensuremath{\mathbb R}^d \to \ensuremath{\mathbb R}^d$ (e.g., the planar case can be found in \cite[Section 37]{Hart00}). The transformation $\rho_d(\cdot)$ maps the point
$p=(x_1,\ldots,x_d)\neq (0,\ldots,0)$ to the point $\rho_d(p)=(\bar{x}_1,\ldots,\bar{x}_d)$,
where
\[ \bar{x}_i = \frac{x_i}{x_1^2+\cdots+x_d^2},\quad i=1,\ldots,d. \]
If $h\subset \ensuremath{\mathbb R}^d$ is a hyperplane that is not incident to the origin then $\rho_d(h)$ is a hypersphere that is incident to the origin.
This property is easy to verify after noticing that $\rho_d(\cdot)$ is its own inverse.
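Indeed, writing $\rho_d(p)=p/|p|^2$, we have $|\rho_d(p)|=1/|p|$, and therefore
\[ \rho_d(\rho_d(p)) = \frac{\rho_d(p)}{|\rho_d(p)|^2} = \frac{p/|p|^2}{1/|p|^2} = p. \]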
Another important observation is that a point $p\in \ensuremath{\mathbb R}^d \setminus \{0\}$ is incident to an object $V \subset \ensuremath{\mathbb R}^d$ if and only if $\rho_d(p)$ is incident to $\rho_d(V)$.
We consider the sets $\Pi^*$ and $\mathcal P^*$ that were obtained above, and perform a translation of $\ensuremath{\mathbb R}^d$ so that no plane of $\mathcal P^*$ is incident to the origin. We then set $\mathcal P'' = \rho_d(\Pi^*)$ and $\mathcal O = \rho_d(\mathcal P^*)$, and notice that $\mathcal O$ is a set of hyperspheres. As before, we have $|\mathcal O|=\Theta(n)$, $|\mathcal P''| = \Theta\left(n^{(d+1)/(d-1)}\right)$, no $K_{2,t}$ in the incidence graph of $\mathcal P''\times \mathcal O$, and
\[ I(\mathcal P'',\mathcal O) =\Omega\left(|\mathcal P''|^{(\beta-1)/\beta} |\mathcal O|^{\left(\alpha-\gamma\frac{d-4}{d-1}-\frac{\gamma c}{\lg \lg n}\right)/\beta}t^{\gamma/\beta}\right). \]
We next consider the case of a linearly-closed family $F$ of graphs.
Consider a function $f:\ensuremath{\mathbb R}^{d-1}\to \ensuremath{\mathbb R}$ such that $x_d= f(x_1,\ldots,x_{d-1})$ defines a graph of $F$.
We rely on a technique that was introduced by J\'ozsef Solymosi (and should appear in \cite{SolySza}).
We begin with a point-hyperplane configuration in $\ensuremath{\mathbb R}^d$, as described above.
We consider the bijection $\phi(x_1,x_2,\ldots,x_d) =(x_1,\ldots,x_{d-1},x_d+f(x_1,\ldots,x_{d-1}))$ from $\ensuremath{\mathbb R}^d$ to itself, and apply $\phi$ on the above point-hyperplane configuration.
Notice that $\phi$ is a bijection since $\phi^{-1}(x_1,x_2,\ldots,x_d) =(x_1,\ldots,x_{d-1},x_d-f(x_1,\ldots,x_{d-1}))$.
Thus, $\phi$ maintains the incidence structure of the point-hyperplane configuration.
Consider a hyperplane $H$ that is defined by the equation $x_d = a_1x_1+\cdots + a_{d-1}x_{d-1}$ (all of the hyperplanes in our point-hyperplane configuration can be defined like this), and notice that $\phi(H)$ is defined by $x_d = a_1x_1+\cdots + a_{d-1}x_{d-1} +f(x_1,\ldots,x_{d-1})$.
Since $F$ is linearly-closed, $\phi(H)\in F$.
That is, we obtain a configuration of points and graphs from $F$ with the same number of incidences and no $K_{2,t}$ in the incidence graph.
\subsection{Using Lemma \ref{le:Intermediate}} \label{ssec:Chefnoff}
For our analysis we require several Chernoff bounds (e.g., see \cite{CL06} and the last equation of \cite{HR90}).
\begin{lemma}[Chernoff bounds] \label{le:Chernoff}
(a) Given $0 < p < 1$, for every $1\le i \le n$ let $X_i$ be a random variable that equals 1 with probability $p$ and otherwise equals 0. Moreover, let these $n$ variables be mutually independent.
Let $X= \sum_{i=1}^n X_i$, so $E[X] = \sum_{i=1}^n E[X_i] = pn$.
Then for any $\lambda>0$ and any integer $k>pn$,
\begin{align*}
\Pr[X\ge pn+\lambda] &\le e^{\frac{-\lambda^2}{2pn+2\lambda/3}}, \\[2mm]
\Pr[X\le pn-\lambda] &\le e^{-\lambda^2/2pn}, \\[2mm]
\Pr[X\ge k] &\le \left(\frac{np}{k}\right)^k e^{k-np}.
\end{align*}
(b) Given $0 < p < 1$, for every $1\le i \le n$ let $X_i$ be a random variable that equals $m_i$ with probability $p$ and otherwise equals 0. Moreover, let these $n$ variables be mutually independent.
Let $X= \sum_{i=1}^n X_i$, so $E[X] = \sum_{i=1}^n E[X_i] = p\sum_{i=1}^n m_i$. Then for any $\lambda>0$
\begin{align*}
\Pr[X\le E[X]-\lambda] &\le e^{\frac{-\lambda^2}{2\sum_{i=1}^n E[X_i^2]}}.
\end{align*}
\end{lemma}
For simplicity we consider a set $\Pi$ of hyperplanes.
The following analysis remains valid after replacing the hyperplanes with other types of objects.
From Lemma \ref{le:Intermediate} we have a set $\mathcal P$ of $m=\Theta\left(n^{(d+1)/(d-1)}\right)$ points and a set $\Pi$ of $a \cdot n$ hyperplanes (for some constant $a$), both in $\ensuremath{\mathbb R}^d$, such that the incidence graph of $\mathcal P\times \Pi$ contains no copy of $K_{2,t}$ with $t=\Theta\left(n^{(d-4)/(d-1)+c/\lg\lg n}\right)$, and with
\[ I(\mathcal P,\Pi)=\Omega\left(m^{(\beta-1)/\beta} n^{\left(\alpha-\gamma\frac{d-4}{d-1}-\frac{\gamma c}{\lg\lg n}\right)/\beta}t^{\gamma/\beta}\right). \]
Set $p=1/(t n^{\varepsilon})$ and let $\Pi'$ be a set that is obtained by taking each element of $\Pi$ independently with probability $p$.
By Lemma \ref{le:Chernoff}, we have the following.\begin{itemize}
\item The probability of $|\Pi'|\le anp/10$ is at most $e^{-(0.9anp)^2/(2anp)}<e^{-0.4apn}$.
\item The probability of $|\Pi'|\ge 10anp$ is at most $e^{\frac{-81a^2p^2n^2}{2pn+6apn}}<e^{-10pn\cdot \min\{a,a^2\}}$.
\item The probability that a given pair of points of $\mathcal P$ is fully contained in at least $3/{\varepsilon}$ hyperplanes of $\Pi'$ is smaller than
%
\[ \left(\frac{tp}{3/{\varepsilon}}\right)^{3/{\varepsilon}} e^{3/{\varepsilon}-tp}<\left(\frac{{\varepsilon}}{3n^{\varepsilon}}\right)^{3/{\varepsilon}}e^{3/{\varepsilon}}<\frac{e^{3/{\varepsilon}}}{n^3}. \]
\item By a union bound, the probability that some pair of points of $\mathcal P$ is fully contained in at least $3/{\varepsilon}$ hyperplanes of $\Pi'$ is smaller than $n^2 \frac{e^{3/{\varepsilon}}}{n^3} < \frac{e^{3/{\varepsilon}}}{n}$.
\item The probability that $I(\mathcal P,\Pi') \le I(\mathcal P,\Pi)p/100$ is at most
\begin{align*} e^{\frac{-(99\cdot I(\mathcal P,\Pi)p/100)^2}{2\cdot I(\mathcal P,\Pi)mp}} &< e^{\frac{-0.49 p \cdot I(\mathcal P,\Pi)}{m}} = e^{-0.49 n^{\frac{\alpha}{\beta}-{\varepsilon}-\frac{d+1}{(d-1)\beta}-\frac{d-4}{d-1}-\frac{c}{\lg\lg n}}} \\
& = e^{-0.49 n^{\frac{1}{(d-1)}-{\varepsilon}-\frac{c}{\lg\lg n}}} < e^{-0.49 n^{\frac{1}{d-1}-2{\varepsilon}}}.
\end{align*}
This bound is obtained from part (b) of Lemma \ref{le:Chernoff}. By taking $m_i$ to be the number of incidences on the $i$'th hyperplane of $\Pi$ we have $m_i \le m$, so $\sum_{i=1}^n E[X_i^2] < I(\mathcal P,\Pi)mp$. We also assume that $n$ is sufficiently large so that $c/\lg\lg n <{\varepsilon}$. (The exponent arithmetic in the display above is spelled out right after this list.)
\end{itemize}
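For the reader's convenience, the exponent arithmetic in the last item is
\[ \frac{\alpha}{\beta}-\frac{d+1}{(d-1)\beta} = \frac{(\beta d+ d-3\beta+1)-(d+1)}{\beta(d-1)} = \frac{\beta(d-3)}{\beta(d-1)} = \frac{d-3}{d-1}, \]
and hence $\frac{\alpha}{\beta}-\frac{d+1}{(d-1)\beta}-\frac{d-4}{d-1} = \frac{1}{d-1}$.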
By taking sufficiently large $n$, all of the above probabilities are smaller than $10^{-10}$. That is, with probability at least $1-4\cdot 10^{-10}$ we have that $|\Pi'|=\Theta(np)$, that no pair of points of $\mathcal P$ is fully contained in $3/{\varepsilon}$ elements of $\Pi'$, and that $I(\mathcal P,\Pi') > I(\mathcal P,\Pi)p/100$. Thus, there must exist a set $\Pi'$ that satisfies all of these properties, and we consider such a set. Notice that the incidence graph of $\mathcal P\times\Pi'$ contains no copy of $K_{2,3/{\varepsilon}}$, that $|\Pi'| = \Theta\left(n^{3/(d-1)-{\varepsilon}}\right)$, and that
\begin{align*} I(\mathcal P,\Pi') &> \frac{p}{100} \cdot I(\mathcal P,\Pi) =\Omega\left(\frac{\left(n^{(d+1)/(d-1)}\right)^{(\beta-1)/\beta} n^{\alpha/\beta}}{n^{(d-4)/(d-1)+{\varepsilon}}}\right) \\[2mm]
& = \Omega\left(n^{\frac{d+1}{d-1}\frac{\beta-1}{\beta}+\frac{\alpha}{\beta}-\frac{d-4}{d-1}-{\varepsilon}}\right) = \Omega\left(n^{\frac{(d+1)(\beta-1)+(\beta d+ d-3\beta+1)-\beta(d-4)}{\beta(d-1)}-{\varepsilon}}\right) \\[2mm]
&=\Omega\left(n^{(d+2)/(d-1)-{\varepsilon}}\right) = \Omega\left(|\mathcal P|^{\delta}|\Pi'|^{\left(\frac{d+2}{d-1}-\frac{\delta(d+1)}{d-1}-{\varepsilon}\right)\frac{d-1}{3}}\right) \\[2mm]
&= \Omega\left(|\mathcal P|^{\delta}|\Pi'|^{(d+2-\delta(d+1)-{\varepsilon}(d-1))/3}\right).
\end{align*}
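For the reader's convenience, the exponent computation in the second line is
\[ (d+1)(\beta-1)+(\beta d+ d-3\beta+1)-\beta(d-4) = \beta d+2\beta = \beta(d+2), \]
which after dividing by $\beta(d-1)$ gives the exponent $(d+2)/(d-1)$.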
Note that $|\Pi'| = \Theta\left(n^{3/(d-1)-{\varepsilon}-c/\lg\lg n}\right)= \Theta\left(m^{\frac{3-({\varepsilon}+c/\lg\lg m) (d-1)}{d+1}}\right)$. Replacing ${\varepsilon}$ with $(d-1){\varepsilon}/3$ yields the assertion of Theorem \ref{th:GeneralBound} (the term $c/ \lg\lg m$ is negligible with respect to ${\varepsilon}$, so we ignore it).
\section*{Acknowledgments}
The author is indebted to Ciprian Demeter, who provided the initial inspiration for this work.
He would like to thank Prabath Silva, J\'ozsef Solymosi, Joshua Zahl, and Frank de Zeeuw for several helpful discussions.
He would also like to thank the anonymous referees for carefully reading the paper and helping to improve it.
\bibliographystyle{amsplain}
\section{Introduction}
Over the last few years the theory of half-harmonic maps has received a lot of attention, beginning with the pioneering work of Da Lio and Rivi\`{e}re \cite{DaLio-Riviere-2011,DaLio-Riviere-2011-2}; see also the subsequent works \cite{Schikorra-2012, DaLio-2013, Millot-Sire-2015, Schikorra-2015}. Half-harmonic maps appear in nature as free boundary problems --- e.g., they are connected to critical points of the energy
\[
\|\nabla u\|^2_{L^2(D,\mathbb{R}^N)} \quad \mbox{s.t. $u(\partial D) \subset {\mathcal{N}}$ in the a.e. trace sense}.
\]
Here, $D\subset\mathbb{R}^n$ {is an open set} and ${\mathcal{N}} \subset \mathbb{R}^N$ is a smooth closed manifold. The Euler-Lagrange equations of the latter problem are
\begin{equation}\label{eq:freebdharm}
\begin{cases}
\Delta u = 0 \quad &\mbox{in $D$}\\
\partial_\nu u\perp T_u {\mathcal{N}} \quad &\mbox{on $\partial D$},\\
\end{cases}
\end{equation}
where $\nu$ denotes the outer normal vector.
For $D = \mathbb{R}^n_+$ and $\partial D = \mathbb{R}^{n-1} \times \{0\}$ the equation \eqref{eq:freebdharm} is equivalent to
\begin{equation}\label{eq:halfharm}
\begin{cases}
\Delta u = 0 \quad &\mbox{in $\mathbb{R}^n_+$}\\
\laps{1}_{\mathbb{R}^{n-1}} u\perp T_u {\mathcal{N}} \quad &\mbox{on $\mathbb{R}^{n-1}\times \{0\}$}.\\
\end{cases}
\end{equation}
Here, $\laps{1}_{\mathbb{R}^{n-1}}$ denotes the half-Laplacian acting on functions defined on $\mathbb{R}^{n-1} \times \{0\}$. The equation $\laps{1}_{\mathbb{R}^{n-1}} u\perp T_u {\mathcal{N}}$ is the half-harmonic map equation; for an overview see \cite{DaLio-Riviere-2011}.
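A minimal sketch of the equivalence of \eqref{eq:freebdharm} and \eqref{eq:halfharm} for $D=\mathbb{R}^n_+$, on the Fourier side and with the convention $\widehat{\laps{1}f}(\xi)=2\pi|\xi|\widehat{f}(\xi)$: if $u$ is the harmonic extension of a suitably decaying boundary datum $f$, then $\widehat{u}(\xi,x_n) = e^{-2\pi|\xi| x_n}\,\widehat{f}(\xi)$ for $\xi\in\mathbb{R}^{n-1}$, and hence
\[ -\partial_{x_n}\widehat{u}(\xi,x_n)\Big|_{x_n=0} = 2\pi|\xi|\,\widehat{f}(\xi), \]
so the outer normal derivative $\partial_\nu u = -\partial_{x_n}u$ coincides with $\laps{1}_{\mathbb{R}^{n-1}} u$ on $\mathbb{R}^{n-1}\times\{0\}$.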
The equivalence of \eqref{eq:freebdharm} and \eqref{eq:halfharm} is crucially related to the fact that we are considering critical points of an $L^2$-energy. Several notions of fractional $p$-harmonic maps have been proposed. In \cite{DaLio-Schikorra-2014,DaLio-Schikorra-2017} Da Lio and the third-named author considered $H^{s,p}$-harmonic maps, i.e., critical points of
\begin{equation}\label{eq:Hsharm}
\|\laps{s} u\|^p_{L^p(\mathbb{R}^{n-1},\mathbb{R}^N)} \quad \mbox{s.t. $u(x) \in {\mathcal{N}}$ for a.e. $x\in\mathbb{R}^{n-1}$}.
\end{equation}
In \cite{Schikorra-2015Lp} energies with a gradient-type structure were studied, namely
\begin{equation}\label{eq:gradsharm}
\|D^s u\|^p_{L^p(\mathbb{R}^{n-1},\mathbb{R}^N)} \quad \mbox{s.t. $u(x) \in {\mathcal{N}}$ for a.e. $x\in\mathbb{R}^{n-1}$},
\end{equation}
where $D^s = D \lapms{1-s}$ is the Riesz-fractional gradient, see also \cite{Shieh-Spector,Shieh-Spector2}. Finally, $W^{s,p}$-harmonic maps were studied in \cite{SchikorraCPDE}, that is critical points of the energy
\begin{equation}\label{eq:wspharm}
\int_{\mathbb{R}^{n-1}}\int_{\mathbb{R}^{n-1}} \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\ dx\ dy \quad \mbox{s.t. $u(x) \in {\mathcal{N}}$ for a.e. $x\in\mathbb{R}^{n-1}$},
\end{equation}
see also \cite{Mazowiecka-Schikorra-2017}. All these versions of fractional $p$-harmonic maps have one thing in common: they do not seem related to a free boundary equation \eqref{eq:freebdharm}. For \eqref{eq:Hsharm} and \eqref{eq:gradsharm} this is clear, since the energies {are defined on} the ``wrong'' function space $H^{s,p}$. {Indeed, a map in } $W^{1,p}(D)$ {has a trace in} $W^{1-\frac{1}{p},p}(\partial D)$, {but} $W^{1-\frac{1}{p},p}(\partial D) \neq H^{1-\frac{1}{p},p}(\partial D)$ for $p \neq 2$. For the $W^{s,p}$-energy \eqref{eq:wspharm} it is an interesting open problem whether it is possible to find a $p$-harmonic extension that interprets this problem as a free boundary problem.
In this work we concentrate on free boundary problems. {We focus on smooth bounded domains, so in the sequel $D$ is such a domain.} We prove regularity at the free boundary for critical points $u:D \to \mathbb{R}^N$ of the energy
\begin{equation}\label{eq:penergy}
\|\nabla u\|^p_{L^p(D,\mathbb{R}^N)} \quad \mbox{s.t. $u(\partial D) \subset {\mathcal{N}}$ in the a.e. trace sense}.
\end{equation}
It is not clear that the space $\mathcal{A}:=\{u \in W^{1,p}(D,\mathbb{R}^N)\colon \ u(\partial D) \subset {\mathcal{N}}\}$ possesses a natural structure of a smooth Banach manifold. That is why we shall define what we mean by a critical point.
\begin{definition}
We say that $u$ is a critical point of $\int_D |\nabla u|^p$ in the space $\mathcal{A}$ if $u$ satisfies
\begin{equation}\label{eq:def}
{\int_D |\nabla u|^{p-2}\nabla u \cdot \nabla \phi=0}
\end{equation}
for all $\phi$ in $W^{1,p}(D,\mathbb{R}^N)$ such that its trace satisfies $\phi\rvert_{\partial D}(x) \in T_{u(x)}{\mathcal{N}}$ for a.e. $x\in\partial D$. Such a critical point is called a \textit{$p$-harmonic map with free boundary}.
\end{definition}
Equation \eqref{eq:def} is obtained by requiring that for every $C^{1}$-path $\gamma:(-1,1)\rightarrow \mathcal{A}$ such that $\gamma(0)=u$ we have
\begin{equation}
\frac{d}{dt}\bigg\rvert_{t=0} \int_D |\nabla \gamma(t)|^p=0.
\end{equation}
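Formally, the computation behind this is as follows: if $\phi := \frac{d}{dt}\big\rvert_{t=0}\gamma(t) \in W^{1,p}(D,\mathbb{R}^N)$, then
\[ \frac{d}{dt}\bigg\rvert_{t=0} \int_D |\nabla \gamma(t)|^p = p\int_D |\nabla u|^{p-2}\nabla u \cdot \nabla \phi, \]
and differentiating the constraint that the trace of $\gamma(t)$ lies in $\mathcal{N}$ shows that the trace of $\phi$ lies in $T_{u}\mathcal{N}$ a.e., which is exactly the class of test functions in \eqref{eq:def}.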
\begin{remark}
{Although this is not relevant for our purpose, let us remark that equation} \eqref{eq:def} {can be interpreted} as $u$ satisfying {in a distributional sense}
\begin{equation}\label{eq:soludef}
\begin{cases}
\operatorname{div} (|\nabla u|^{p-2} \nabla u) = 0 \quad &\mbox{in $D$}\\
|\nabla u|^{p-2}\partial_\nu u\perp T_u {\mathcal{N}} \quad &\mbox{on $\partial D$}.
\end{cases}
\end{equation}
{Note that, by definition, $u$ is a solution of \eqref{eq:soludef} in the sense of distributions if and only if
\begin{equation}\label{eq:soludef2}
\int_D |\nabla u|^{p-2}\nabla u\cdot \nabla \phi =0
\end{equation}
for all $\phi \in C^\infty(\overline{D},\mathbb{R}^N)$ with $\phi(x)\in T_{u(x)}\mathcal{N}$ for $\mathcal{H}^{n-1}$-a.e.\ $x \in \partial D$. Indeed, taking $\phi \in C^\infty_c(D,\mathbb{R}^N)$ we obtain the interior equation
$$\div( |\nabla u|^{p-2}\nabla u)=0 \text{ in } D.$$
As for the boundary equation, we can see that if $u$ is smooth enough and satisfies \eqref{eq:soludef2} then after an integration by parts we find
\begin{equation}\label{eq:soludefboundary}
\int_{\partial D} |\nabla u|^{p-2}\partial_\nu u \cdot \varphi=0.
\end{equation}
Since any $\varphi \in C^\infty(\partial D,\mathbb{R}^N)$ with $\varphi(x) \in T_{u(x)}\mathcal{N}$ can be extended in a function $\phi \in C^\infty(\overline{D},\mathbb{R}^N)$, thus \eqref{eq:soludefboundary} implies
\[
|\nabla u|^{p-2}\partial_\nu u \perp T_u \mathcal{N} \text{ on } \partial D.
\]
The equivalence between being a solution of \eqref{eq:soludef} in the sense of distributions and being a critical point of the $p$-energy in the space $\mathcal{A}$ is {true if $u$ is smooth enough, for example $u \in C^1(\overline{D},\mathbb{R}^N)$ is sufficient. Indeed, in this case one can check the density of
$\{ \phi \in C^\infty(\overline{D},\mathbb{R}^N)\colon \phi \in T_{u} \mathcal{N}\}$ in $\{\phi \in W^{1,p}(D,\mathbb{R}^N)\colon \phi\big\rvert_{\partial D}\in T_u\mathcal{N}\}$.
}}
\end{remark}
The natural starting point, when studying equations of the form \eqref{eq:soludef}, is the regularity theory. The interior regularity is known and follows from the interior equation and results of \cite{Uhlenbeck-1977,tolksdorf1984regularity}, see also the recent \cite{Kuusi-Mingione-2017}. Hence, the main difficulty is the regularity up to the boundary. For an arbitrary manifold ${\mathcal{N}}$ a regularity theory for solutions of \eqref{eq:soludef} is out of reach: even the regularity theory for the interior problem
\[
\operatorname{div} (|\nabla u|^{p-2} \nabla u) \perp T_u {\mathcal{N}}
\]
is known only for homogeneous targets ${\mathcal{N}}$, see Fuchs \cite{Fuchs-1993}, Takeuchi \cite{Takeuchi-1994}, Toro and Wang \cite{ToroWang-1995}, Strzelecki \cite{Strzelecki-1994,Strzelecki-1996}, and also the recent survey~\cite{Schikorra-Strzelecki}. For this reason we shall restrict our attention to the sphere ${\mathbb S}^{N-1} \subset \mathbb{R}^N$. In the rest of the paper we consider the problem:
\begin{equation}\label{eq:soluspheredef}
\begin{cases}
\operatorname{div} (|\nabla u|^{p-2} \nabla u) = 0 \quad &\mbox{in $D$}\\
|\nabla u|^{p-2}\partial_\nu u\perp T_u {\mathbb S}^{N-1} \quad &\mbox{on $\partial D$}\\
u(\partial D)\subset {\mathbb S}^{N-1}.
\end{cases}
\end{equation}
We remark that the free boundary conditions can be viewed as boundary conditions of mixed Dirichlet and homogeneous Neumann type. Indeed, in the sphere case we have a Dirichlet boundary condition for the norm of $u$, namely $|u|=1$ on $\partial D$, and a homogeneous Neumann condition for the ``phase'', namely $\partial_\nu \left(\frac{u}{|u|}\right)=0$. To see this in the case of a general manifold one can use Fermi coordinates near some points of ${\mathcal{N}}$ as explained in \cite[p.938-939]{Fraser2000} in the context of minimal surfaces with free boundaries (for more on minimal surfaces with free boundaries see also \cite{FraserSchoen2013} and the references therein).
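Formally, for a smooth $u$ that does not vanish near $\partial D$, this decomposition can be seen by writing $u=\rho\,\omega$ with $\rho=|u|$ and $\omega = u/|u|$: then
\[ \partial_\nu u = (\partial_\nu \rho)\,\omega + \rho\,\partial_\nu \omega, \qquad \partial_\nu \omega \perp \omega, \]
and since $T_u{\mathbb S}^{N-1} = \omega^\perp$ on $\partial D$, the free boundary condition $|\nabla u|^{p-2}\partial_\nu u \perp T_u{\mathbb S}^{N-1}$ amounts (wherever $\nabla u \neq 0$) to $\partial_\nu \omega = 0$, while $|u|=1$ on $\partial D$ is the Dirichlet part.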
Our main theorem is the following $\epsilon$-regularity type theorem.
\begin{theorem}[$\epsilon$-regularity]\label{th:main}
Let $D \subset \mathbb{R}^n$ be a smooth, bounded domain and $p\geq 2$. Then there exist $\epsilon = \epsilon(p,n,D)>0$ and $\alpha = \alpha(p,n,D)> 0$, such that for any $u \in W^{1,p}(D,\mathbb{R}^N)$ solution to \eqref{eq:soluspheredef} the following holds:
If for some $R > 0$ and for some $x_0 \in \overline{D}$
\begin{equation}\label{eq:thmain:smallnesscond}
\sup_{|y_0-x_0| < R}\ \sup_{\rho < R} \rho^{p-n} \int_{B(y_0,\rho)\cap D} |\nabla u|^p < \epsilon,
\end{equation}
then $u$ and $\nabla u$ are H\"older continuous in $B(x_0,R/2) \cap \overline{D}$. Moreover, we have the following estimates:
\[
\sup_{x, y \in B(x_0,R/2)} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}} \precsim R^{-\alpha} \brac{\sup_{|y_0-x_0| < R}\ \sup_{\rho < R} \rho^{p-n} \int_{B(y_0,\rho)\cap D} |\nabla u|^p}^{\frac{1}{p}}
\]
and
\[
\sup_{x, y \in B(x_0,R/2)} \frac{|\nabla u(x)-\nabla u(y)|}{|x-y|^{\alpha}} \precsim R^{-\alpha-1} \brac{\sup_{|y_0-x_0| < R}\ \sup_{\rho < R} \rho^{p-n} \int_{B(y_0,\rho)\cap D} |\nabla u|^p}^{\frac{1}{p}}.
\]
\end{theorem}
When $p=n$ this $\epsilon$-regularity implies directly (from the absolute continuity of the Lebesgue integral) that $n$-harmonic maps with free boundary {and their gradients} are H\"older continuous.
\begin{corollary}
Let $u$ and $\alpha$ be as in Theorem \ref{th:main} with $p=n$. Then $u$ is in $C^{1,\alpha}(\overline{D},\mathbb{R}^N)$.
\end{corollary}
As usual, an $\epsilon$-regularity result such as Theorem~\ref{th:main} implies partial regularity for \emph{stationary} $p$-harmonic maps {with free boundary} (cf.\ \eqref{eq:firstvariation} for the definition).
\begin{theorem}[partial regularity]\label{th:partialregularity}
Let $D \subset \mathbb{R}^n$ be a smooth, bounded domain, $p\geq 2$, and assume that $u \in W^{1,p}(D,\mathbb{R}^N)$, with trace $u \in W^{1-\frac{1}{p},p}(\partial D, {\mathbb S}^{N-1})$, is a \emph{stationary} point of the energy \eqref{eq:penergy} {with free boundary}.
Then there exists a closed set $\Sigma \subset \overline{D}$ such that $\mathcal{H}^{n-p}(\Sigma) {=0}$ and $u \in C^{1,\alpha}(\overline{D} \backslash \Sigma)$, where $\alpha>0$ is from Theorem \ref{th:main}.
\end{theorem}
{\begin{remark}
Although some of our results work for unbounded domains we note that finite energy, stationary $p$-harmonic maps with free boundary satisfy a Liouville type theorem, cf. Proposition \ref{pr:Liouvilletype}. This is why we focus on bounded domains.
\end{remark}
}
Moreover, besides giving regularity in the case $p=n$ and partial regularity in the case $p<n$, an $\epsilon$-regularity result could be useful for describing the possible loss of compactness of sequences of $n$-harmonic maps with free boundaries and for proving an energy decomposition theorem. In the case $p=n=2$, i.e., for harmonic maps with free boundaries, such a result was proven in \cite{DaLio2015,LaurainPetrides2017}. Our case requires completely different methods, due to the nonlinearity of the $p$-Laplacian for $p \neq 2$.
Let us comment on our strategy for the proof of Theorem~\ref{th:main}. The natural first attempt to prove a result like Theorem~\ref{th:main} is to adapt the beautiful geometric reflection method that Scheven used in \cite{Scheven-2006} to obtain an $\epsilon$-regularity result up to the free boundary for harmonic maps, i.e., for the case $p=2$ (see also \cite{BerlyandMironescu2004} where the authors also devised a reflection technique to prove regularity up to the boundary of solutions of some Ginzburg-Landau equations with free boundary conditions). This way, one would hope to be able to rewrite the Neumann condition at the boundary into an interior equation. For $p=2$ the reflected equation has again the structure of a harmonic map (with a new metric in the reflected domain). Thus, the regularity theory for harmonic maps with a free boundary follows from the interior regularity for harmonic maps developed by H\'elein \cite{Helein-1991}, see also \cite{Riviere-2007}. For $p > 2$ there is a major drawback to that strategy: as mentioned above, the regularity theory for the interior $p$-harmonic map equation is only understood for round targets. It was not clear to us how to interpret the reflected equation as a map into such a round target. The reflection, which generates a somewhat ``unnatural metric'', seems to destroy our boundary sphere-structure. Indeed, up to now, only the regularity theory for \emph{minimizing} $p$-harmonic maps with free boundary was understood, see \cite{Duzaar-Gastel-1998,Mueller-2002} {where it is shown that such a map is in $C^{1,\alpha}$, for some $\alpha$, outside a singular set $\mathcal{S}$ with $\text{dim}_\mathcal{H}(\mathcal{S})=n-\lfloor p \rfloor-1$ and $\mathcal{S}$ is discrete if $n-1\leq p<n$}. For $p=2$ free boundary problems for \emph{minimizing} harmonic maps were studied in \cite{Duzaar-Steffen-1989,Hardt-Lin-1989}.
In this work we follow in spirit the recent work of the third-named author \cite{Schikorra-2017}, which does not use a reflection technique, but rather computes an equation along the free boundary and applies a moving frame technique to this free boundary part of the equation itself. This strategy leads to \emph{growth} estimates, Proposition~\ref{pr:growth}, which in the critical case $n = p$ directly imply H\"older regularity of solutions. Once the growth estimates are established we can apply the reflection. Since the reflection is explicit, it is easy to see that the growth estimates still hold for the reflected solution, which we shall call $v$. Now $v$ solves a critical or super-critical equation of the form
\[
|\operatorname{div} (|\nabla v|^{p-2} \nabla v)| \precsim |\nabla v|^p.
\]
In principle, solutions to this equation may be singular, e.g., $x/|x|$ or $\log \log 1/|x|$. But with the growth estimates from Proposition~\ref{pr:growth}, which transfer to $v$, one can employ a blow-up argument due to \cite{Hardt-Kinderlehrer-Lin-1986,Hardt-Lin-1987} and then bootstrap for higher regularity.
The outline of the paper is as follows: In Section~\ref{s:growth} we state and prove the crucial growth estimate for solutions to {\eqref{eq:soluspheredef}}. In Section~\ref{s:pnHoelder} we show how this implies H\"older continuity of solutions for the case $p=n$. For $p < n$ we show in Section~\ref{s:genericHoelder} how a generic super-critical system implies H\"older regularity of solutions once the growth estimates from Proposition~\ref{pr:growth} are guaranteed. Combining this with Scheven's reflection argument, we give in Section~\ref{s:proofthmain} the proof of Theorem~\ref{th:main}. Finally, in Section~\ref{s:partialregularity}, we prove the partial regularity of solutions, i.e., Theorem~\ref{th:partialregularity}.
{\textbf {Notation.}} We denote by $B(x,r)$ the ball of radius $r$ centered at $x\in\mathbb{R}^n$. We write $\mathbb{R}^n_+=\mathbb{R}^{n-1}\times(0,\infty)$, $\mathbb{R}^n_-=\mathbb{R}^{n-1}\times(-\infty,0)$, and $B^+(x,r) = B(x,r)\cap\mathbb{R}^n_+$. By $(u)_\Omega$ we denote the mean value of a map $u$ on a set $\Omega$, i.e., $(u)_\Omega= \frac{1}{|\Omega|}\int_\Omega u $.
\section{The growth estimates}\label{s:growth}
{{Recall that} we assume that $D$ is a bounded set with a smooth boundary. In view of Lemma~\ref{la:boundedness} we know that $|u| \leq 1$ holds for any solution to \eqref{eq:soluspheredef}. The arguments can be also extended to unbounded domains like $\mathbb{R}^n_+$ under the assumption that $u \in L^\infty_{loc}(\mathbb{R}^n_+)$, cf. Lemma \ref{la:boundedness2}. Note that in principle, the constants may depend on the $L^\infty$-norm of $u$.}
The main result in this section, and the crucial argument in this work, is the following growth estimate that one could interpret as a kind of Caccioppoli type estimate. We were not able to obtain such an estimate by a geometric reflection argument, since that reflection changes the metric, and only in the case of round targets, such as the sphere, regularity theory (and in particular the related growth estimates) are known.
\begin{proposition}[Growth estimates]\label{pr:growth}
Let $p\ge2$. There exists a radius $R_0$ depending only on $\partial D$ such that for any $u\in W^{1,p}(D,\mathbb{R}^N)$ satisfying \eqref{eq:soluspheredef} the following holds:
Whenever $B(x_0,R) \subset \mathbb{R}^n$, $R \in (0,R_0)$ is such that for some $\lambda \in (0,\infty)$ it holds
\begin{equation}\label{small}
\sup_{B(y_0,r)\subset B(x_0,R)} r^{p-n} \int_{B(y_0,r)\cap D} |\nabla u|^p < \lambda^p,
\end{equation}
then for any $B(y_0,4r)\subset B(x_0,R)$ and any $\mu > 0$,
\begin{equation}\label{eq:growth:peqn}
\int_{B(y_0,r)\cap D} |\nabla u|^p \leq C\ \brac{\lambda + \mu^{p-1}} \int_{B(y_0,4r)\cap D} |\nabla u|^p + C\mu^{-1} \int_{(B(y_0,4r)\backslash B(y_0,r))\cap D} |\nabla u|^p.
\end{equation}
Alternatively, we have the following estimates:
If $B(y_0,2r) \backslash D = \emptyset$, then
\begin{equation}\label{eq:growth:pllnbd0}
\int_{{B(y_0,r)}} |\nabla u|^p \leq C \lambda \int_{B(y_0,4r)\cap D} |\nabla u|^p + C \lambda^{1-p} r^{-p}\int_{B(y_0,4r)\cap D} |u-(u)_{B(y_0,4r){\cap D}}|^p.
\end{equation}
If $B(y_0,2r) \backslash D \neq \emptyset$, then
\begin{equation}\label{eq:growth:pllnbd}
\begin{split}
\int_{B(y_0,r)\cap D} |\nabla u|^p &\leq C \lambda \int_{B(y_0,4r)\cap D} |\nabla u|^p + C \lambda^{1-p} r^{-p}\int_{B(y_0,4r)\cap D} |u-(u)_{B(y_0,4r){\cap D}}|^p\\
&\quad+ C\lambda^{1-p}r^{-p}\int_{B(y_0,4r)\cap D} ||u|^2-1|^p,
\end{split}
\end{equation}
for a constant $C=C(n,p,D)$.
\end{proposition}
Our strategy, in principle, is to adapt the method for harmonic maps into spheres developed by H\'elein \cite{Helein-1990}, see Strzelecki's \cite{Strzelecki-1994} for the $n$-harmonic case. To motivate our approach, we briefly outline their strategy for a $p$-harmonic map $w\in W^{1,p}(D,{\mathbb S}^{N-1})$, i.e., a solution to
\begin{equation}\label{eq:pharmmapeq}
\div(|\nabla w|^{p-2} \nabla w) \perp T_w {\mathbb S}^{N-1}.
\end{equation}
The first step is to rewrite this equation. Since $w \in {\mathbb S}^{N-1}$ we have $w \in \brac{T_w {\mathbb S}^{N-1}}^\perp$. Consequently, \eqref{eq:pharmmapeq} can be rewritten in the distributional sense as
\begin{equation}\label{eq:pharmmapeq2}
\int_{D} |\nabla w|^{p-2} \nabla w^i\cdot \nabla \varphi = \int_{D} |\nabla w|^{p-2} \nabla w^k \cdot \nabla (w^k w^i \varphi),
\end{equation}
which holds for all $\varphi \in C_c^\infty(D)$ and $i = 1,\ldots,N$. Here and henceforth, we use the summation convention.
Next, from $|w| \equiv 1$, we get $w^k \nabla w^k \equiv \frac{1}{2} \nabla |w|^2 = 0$. Consequently, \eqref{eq:pharmmapeq2} can be written as
\begin{equation}\label{eq:pharmmapeq3}
\int_{D} |\nabla w|^{p-2} \nabla w^i\cdot \nabla \varphi = \int_{D} |\nabla w|^{p-2} \nabla w^k \cdot \brac{\nabla w^k\ w^i - \nabla w^i\ w^k} \varphi.
\end{equation}
Now one observes that from \eqref{eq:pharmmapeq2} a conservation law follows, a fact that for $p=n=2$ was discovered by Shatah \cite{Shatah-1988},
\begin{equation}\label{eq:pharmmapeq3conslaw}
\div\brac{|\nabla w|^{p-2} \brac{\nabla w^k\ w^i - \nabla w^i\ w^k} } = 0 \quad \mbox{in $D$}.
\end{equation}
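Formally, \eqref{eq:pharmmapeq3conslaw} follows from the Leibniz rule: the cross terms $|\nabla w|^{p-2}\nabla w^k \cdot \nabla w^i$ cancel, so that
\[ \div\brac{|\nabla w|^{p-2} \brac{\nabla w^k\ w^i - \nabla w^i\ w^k} } = \div\brac{|\nabla w|^{p-2}\nabla w^k}\, w^i - \div\brac{|\nabla w|^{p-2}\nabla w^i}\, w^k, \]
and the right-hand side vanishes since, by \eqref{eq:pharmmapeq}, $\div(|\nabla w|^{p-2}\nabla w)$ is parallel to $w$.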
Thus, $|\nabla w|^{p-2} \nabla w^k \cdot \brac{\nabla w^k\ w^i - \nabla w^i\ w^k}$ is a div-curl term and with the help of the celebrated result of Coifman, Lions, Meyer, and Semmes, \cite{CLMS}, one obtains a growth estimate.
The above argument heavily relied on the fact that $w^k \nabla w^k \equiv 0$. It is important to observe that this trick will not work in the situation from Theorem~\ref{th:main}: if we only know that $u\Big|_{\partial D} \subset {\mathbb S}^{N-1}$, then there is no reason that $u \cdot \nabla u = 0$ in $D$.
Nevertheless, we will stubbornly follow the strategy outlined above, just along the boundary $\partial D$, keeping the extra terms that involve $u^k \nabla u^k$. Firstly, we find:
\begin{lemma}\label{la:testfunctions}
For $u\in W^{1,p}(D,\mathbb{R}^N)$ satisfying \eqref{eq:soluspheredef} we have
\[
\int_{D} |\nabla u|^{p-2} \nabla u^i\cdot \nabla \varphi = \int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \nabla (u^k u^i \varphi),
\]
for any $\varphi \in W^{1,p}(D)$.
\end{lemma}
Let us stress that the test function $\varphi$ above does not need to vanish at the boundary.
\begin{proof}
Let $\Phi= (0,\ldots,\varphi,\ldots,0)$ (only the $i$-th coordinate is nonzero and equal to $\varphi$). Observe that
\[\Phi - u \langle u, \Phi\rangle_{\mathbb{R}^N} \in T_u {\mathbb S}^{N-1} \quad \mbox{a.e. on $\partial D$.}
\]
The claim follows now from the definition of $p$-harmonic maps with free boundary \eqref{eq:def}.
\end{proof}
Also we have the following conservation law.
\begin{lemma}\label{la:conservation}
Let $u\in W^{1,p}(D,\mathbb{R}^N)$ satisfy \eqref{eq:soluspheredef}. Then, for
\[
\Omega_{ij} := \brac{u^i\nabla u^j -u^j\nabla u^i},
\]
we have
\[
\div(|\nabla u|^{p-2} \Omega_{ij}) = 0 \quad \mbox{in $D$}
\]
up to the boundary. That is, for any $\varphi \in C^\infty(\overline{D})$ and any $i,j = 1,\ldots,N$,
\begin{equation}\label{eq:conservation}
\int_{D} |\nabla u|^{p-2} \Omega_{ij}\cdot \nabla \varphi = 0.
\end{equation}
Besides, equation \eqref{eq:conservation} is also satisfied for every $\varphi$ in $W^{1,p}\cap L^\infty(D)$.
\end{lemma}
\begin{proof}
By the product rule,
\[
\begin{split}
\int_{D}& \nabla \varphi \cdot |\nabla u|^{p-2}\brac{u^i\nabla u^j -u^j\nabla u^i}\\
&= \int_{D} \brac{\nabla (\varphi u^i) \cdot |\nabla u|^{p-2}\nabla u^j - \nabla (\varphi u^j) \cdot |\nabla u|^{p-2}\nabla u^i}.
\end{split}
\]
Therefore, by Lemma~\ref{la:testfunctions}, we find
\[
\begin{split}
\int_{D}& |\nabla u|^{p-2} \Omega_{ij}\cdot \nabla \varphi\\
&=\int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \nabla (u^k u^i u^j\varphi) -\int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \nabla (u^k u^j u^i \varphi) \\
&= 0.
\end{split}
\]
\end{proof}
We combine Lemma~\ref{la:conservation} and Lemma~\ref{la:testfunctions}. In contrast to the argument for the $p$-harmonic map $w$, we find additional terms. Namely, instead of having $w^k \nabla w^k \equiv 0$ we merely have $u^k \nabla u^k = \frac{1}{2} \nabla (|u|^2-1)$. However, it is an improvement, because $|u|^2-1 \in W^{1,p}_0(D)$.
\begin{lemma}\label{la:goodpde}
Let $u\in W^{1,p}(D,\mathbb{R}^N)$ satisfy \eqref{eq:soluspheredef}. Then for any $\varphi \in W^{1,p}(D)$ we have
\[
\begin{split}
\int_{D}& |\nabla u|^{p-2} \nabla u^i\cdot \nabla \varphi\\
&= \int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \Omega_{ik}\, \varphi+ \int_{D} |\nabla u|^{p-2} \nabla u^i\cdot \nabla \brac{|u|^2-1} \ \varphi\\
&\quad +\frac{1}{2}\int_{D} |\nabla u|^{p-2} \nabla \varphi \cdot \nabla \brac{|u|^2-1}\ u^i.
\end{split}
\]
\end{lemma}
It is important to observe that in particular we do not obtain an equation of the form $|\div(|\nabla u|^{p-2} \nabla u)| \precsim |\nabla u|^p$ as it is the case for $p$-harmonic maps (i.e., the interior situation). This is why for $p < n$ we are forced to combine our growth estimate with the geometric reflection argument, see Proposition~\ref{pr:plapnup}.
\begin{proof}[Proof of Lemma~\ref{la:goodpde}]
By Lemma~\ref{la:testfunctions} we have for any $\varphi \in C^\infty(\overline{D})$,
\[
\begin{split}
\int_{D}& |\nabla u|^{p-2} \nabla u^i\cdot \nabla \varphi \\
&= \int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \nabla u^k\ u^i \varphi +\int_{D} |\nabla u|^{p-2} \nabla u^k \cdot u^k \nabla(u^i \varphi).
\end{split}
\]
Using the definition of $\Omega_{ik}$ from Lemma~\ref{la:conservation} we write
\[
\begin{split}
\int_{D}& |\nabla u|^{p-2} \nabla u^i\cdot \nabla \varphi \\
&= \int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \Omega_{ik}\, \varphi+ 2\int_{D} |\nabla u|^{p-2} \nabla u^i\cdot \nabla u^k u^k \ \varphi\\
&\quad +\int_{D} |\nabla u|^{p-2} \nabla u^k u^i u^k \cdot \nabla \varphi.
\end{split}
\]
Since $u^k \nabla u^k = \frac{1}{2}\nabla \brac{|u|^2-1}$, we have shown that
\[
\begin{split}
\int_{D}& |\nabla u|^{p-2} \nabla u^i\cdot \nabla \varphi\\
&= \int_{D} |\nabla u|^{p-2} \nabla u^k \cdot \Omega_{ik}\, \varphi+ \int_{D} |\nabla u|^{p-2} \nabla u^i\cdot \nabla \brac{|u|^2-1} \ \varphi\\
&\quad +\frac{1}{2}\int_{D} |\nabla u|^{p-2} \nabla \varphi \cdot \nabla \brac{|u|^2-1}\ u^i.
\end{split}
\]
\end{proof}
For the second and third term on the right-hand side of the equation in Lemma~\ref{la:goodpde} we observe that $|u|^2-1$ has zero boundary values on $\partial D$. In addition, and this is another crucial ingredient here, we can choose $u$ (or its coordinates) as a test function in Lemmas \ref{la:testfunctions}, \ref{la:conservation}, and \ref{la:goodpde}, since $u$ is in $W^{1,p}\cap L^\infty(D,\mathbb{R}^N)$ by Lemma \ref{la:boundedness}.
Moreover, in view of the interior equation for $u$, \eqref{eq:soludef},
\[
\int_{D} |\nabla u|^{p-2} \nabla u^i\cdot \nabla (|u|^2-1) = 0.
\]
\begin{proof}[Proof of Proposition~\ref{pr:growth}]
For notational simplicity {we prove the growth estimates when the boundary is flat. More precisely we treat the case where $B^+(0,R)\subset D \subset \mathbb{R}^n_+$ for some $R>0$, and $\partial D\cap B(0,R)=\partial \mathbb{R}^n_+ \cap B(0,R)$.}
The following argument can be easily adapted to general $D$ --- here is where one has to choose $R_0 = R_0(D)$ for flattening the boundary. We leave the details to the reader. {We also recall that, since we work in a smooth bounded domain, from Lemma~\ref{la:boundedness} we {have} that $\|u\|_{L^\infty(D)}\leq 1$.}
Let $\eta \in C_c^\infty(B(0,2))$ be the standard cutoff function, identically one on $B(0,1)$. {Let $y_0\in \mathbb{R}^n, r>0$ be such that $B(y_0,4r)\subset B(0,R)$}. Denote by
\[
\eta_{B(y_0,r)}(x) := \eta((x-y_0)/r).
\]
Set \[\tilde{u} := \eta_{B(y_0,r)} (u-(u)_{B^{{+}}(y_0,2r)})\] and
\[
\hat{u} := (1-\eta_{B(y_0,r)})\eta_{B(y_0,r)} (u-(u)_{B^{{+}}(y_0,2r)}).
\]
Since $\eta_{B(y_0,r)} \equiv 1$ on $B(y_0,r)$ we have
\[
\int_{ B^+(y_0,r)}|\nabla u|^p \leq \int_{{\mathbb{R}^n_+}} |\nabla u|^{p-2} \nabla \tilde{u} \cdot \nabla \tilde{u}.
\]
We compute
\begin{equation}\label{eq:gradientutildauwidehat}
\begin{split}
\nabla \tilde{u} \cdot \nabla \tilde{u}
&= \nabla u\cdot \nabla \tilde{u}- \nabla u\cdot \nabla \hat{u} \\
&\quad- \nabla \eta_{B(y_0,r)}\cdot \nabla u\ \tilde{u} + \nabla\eta_{B(y_0,r)}\, (u-(u)_{B^{{+}}(y_0,2r)})\cdot \nabla \tilde{u}.
\end{split}
\end{equation}
Since $|\nabla \eta_{B(y_0,r)}| \precsim r^{-1}$,
\begin{equation}\label{eq:thirdterm}
\int_{{\mathbb{R}^n_+}}|\nabla u|^{p-2} (\nabla \eta_{B(y_0,r)}\tilde{u})\cdot \nabla u\ \precsim r^{-1}\int_{ B^+(y_0,2r)\backslash B^+(y_0,r)} |\nabla u|^{p-1} |\tilde{u}|.
\end{equation}
This can be further estimated in two ways. For the estimate \eqref{eq:growth:peqn}, by Young and Poincar\'e inequalities, we have for any $\mu >0$
\[
\int_{{\mathbb{R}^n_+}}|\nabla u|^{p-2} (\nabla \eta_{B(y_0,r)}\tilde{u})\cdot \nabla u\ \precsim \frac{1}{\mu} \int_{ B^+(y_0,2r)\backslash B^+(y_0,r)} |\nabla u|^{p} + \mu^{p-1} \int_{ B^+(y_0,2r)}|\nabla u|^{p}.
\]
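A minimal sketch of this step: for $a,b\ge 0$ and $\mu>0$, Young's inequality gives $ab \le \mu^{-1} a^{\frac{p}{p-1}} + \mu^{p-1} b^p$. We apply this with $a=|\nabla u|^{p-1}$ and $b = r^{-1}|\tilde{u}|$, and then use the Poincar\'e inequality in the form $r^{-p}\int_{B^+(y_0,2r)}|\tilde{u}|^p \precsim \int_{B^+(y_0,2r)}|\nabla u|^p$, which holds since $|\tilde{u}|\le |u-(u)_{B^+(y_0,2r)}|$.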
For the estimates \eqref{eq:growth:pllnbd0} and \eqref{eq:growth:pllnbd}, by Young's inequality we have for any $\lambda > 0$
\[
\int_{{\mathbb{R}^n_+}}|\nabla u|^{p-2} \nabla \eta_{B(y_0,r)}\cdot \nabla u\ \tilde{u} \precsim \lambda \int_{B^+(y_0,2r)} |\nabla u|^{p}+\lambda^{1-p} r^{-p}\int_{B^+(y_0,2r)} |u-(u)_{B^{{+}}(y_0,2r)}|^p.
\]
For the last term of \eqref{eq:gradientutildauwidehat}
\[
\begin{split}
\int_{{\mathbb{R}^n_+}}|\nabla u|^{p-2}\nabla\eta_{B(y_0,r)}\, (u-(u)_{B^{{+}}(y_0,2r)})\cdot \nabla \tilde{u} &\precsim r^{-2}\int_{B^+(y_0,2r)\setminus B^+(y_0,r)} |\nabla u|^{p-2}|u-(u)_{B^{+}(y_0,2r)}|^2\\
& + r^{-1}\int_{B^+(y_0,2r)\setminus B^+(y_0,r)} |\nabla u|^{p-1}|u-(u)_{B^{{+}}(y_0,2r)}|.
\end{split}
\]
By a similar estimate, we easily get for any $\mu>0$
\[
\int_{\mathbb{R}^n_+}|\nabla u|^{p-2}\nabla\eta_{B(y_0,r)}\, (u-(u)_{B^{{+}}(y_0,2r)})\cdot \nabla \tilde{u} \precsim
\frac{1}{\mu} \int_{ B^+(y_0,2r)\backslash B^+(y_0,r)} |\nabla u|^{p} + \mu^{p-1} \int_{ B^+(y_0,2r)}|\nabla u|^{p}
\]
and for any $\lambda>0$
\[
\begin{split}
\int_{\mathbb{R}^n_+}|\nabla u|^{p-2}\nabla\eta_{B(y_0,r)}\, (u-(u)_{B^{{+}}(y_0,2r)})\cdot \nabla \tilde{u} &\precsim
\lambda \int_{B^+(y_0,2r)} |\nabla u|^{p}\\
&\quad +\lambda^{1-p} r^{-p}\int_{B^+(y_0,2r)} |u-(u)_{B^{{+}}(y_0,2r)}|^p.
\end{split}
\]
Consequently, we found
\begin{equation}\label{eq:intest}
\begin{split}
\int_{ B^+(y_0,r)}|\nabla u|^p &\precsim \left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \tilde{u}\right| + \left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \hat{u}\right|\\
& \quad + \frac{1}{\mu} \int_{ B^+(y_0,2r)\backslash B^+(y_0,r)} |\nabla u|^{p} + \mu^{p-1} \int_{ B^+(y_0,2r)}|\nabla u|^{p}
\end{split}
\end{equation}
and
\begin{equation}\label{eq:intest2}
\begin{split}
\int_{ B^+(y_0,r)}|\nabla u|^p &\precsim \left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \tilde{u}\right| + \left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \hat{u}\right|\\
& \quad + \lambda \int_{B^+(y_0,2r)} |\nabla u|^{p}+\lambda^{1-p} r^{-p}\int_{B^+(y_0,2r)} |u-(u)_{B^{{+}}(y_0,2r)}|^p.
\end{split}
\end{equation}
{If we are in the interior case, i.e.,} $B(y_0,2r) \subset {B^+(0,R)}$, then ${\rm supp\,} \tilde{u} \cup {\rm supp\,} \hat{u} \subset {B^+(0,R)}$ and thus $\div(|\nabla u|^{p-2} \nabla u) = 0$ in ${B^+(0,R)}$ implies
\[
\left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \tilde{u}\right| + \left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u \cdot \nabla \hat{u}\right| = 0.
\]
Thus, for $B(y_0,2r) \subset {B^+(0,R)}$ the claim is proven.
From now on we assume that {the ball $B(y_0,r)$ is close to the boundary, i.e., } $B(y_0,2r) {\cap \, \{\mathbb{R}^{n-1}\times\{0\}\}\neq \emptyset}$. By Lemma~\ref{la:goodpde},
\[
\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u^i\cdot \nabla \tilde{u}^i = I + II + \frac{1}{2} III,
\]
where
\[
\begin{split}
I:=& \int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u^k \cdot \Omega_{ik}\, \tilde{u}^i,\\
II :=&\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u^i\cdot \nabla \brac{|u|^2-1} \ \tilde{u}^i,\\
III:=&\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla \tilde{u}^i \cdot \nabla \brac{|u|^2-1}\ u^i.\\
\end{split}
\]
Since $u$ is $p$-harmonic and by Lemma~\ref{la:conservation} all three terms above contain products of divergence-free and rotation-free quantities. However, the div-curl estimate by Coifman, Lions, Meyer, Semmes \cite{CLMS} is only applicable when at least one term vanishes at the boundary, otherwise there are counterexamples, see \cite{DaLio-Palmurella-2017,Hirsch-2019}.
We investigate the \underline{first term $I$}. Let $\tilde{B} \subset {B^+(0,R)}$ be a smooth, bounded, open, and convex set, such that ${B^+(y_0,2r)}\subset\tilde{B} \subset B(y_0,3r)$ and $\partial \tilde{B} \cap \partial \mathbb{R}^n_+ = B(y_0,2r) \cap \partial \mathbb{R}^n_+$. By Hodge decomposition\footnote{
{More precisely, one argues, e.g., as in \cite[(3.6), (3.7)]{Schikorra10}: One solves
\[
\begin{cases}
\Delta \zeta_{ik} = {\rm curl\,}(|\nabla u|^{p-2} \Omega_{ik}) &\mbox{in $\tilde{B}$}\\
\zeta_{ik} = 0 \quad& \mbox{on $\partial\tilde{B}$}.
\end{cases}
\]
such that \eqref{eq:hodge1st:2} is satisfied.
Then one sets
\[
H := |\nabla u|^{p-2} \Omega_{ik} - {\rm Curl} \zeta_{ik}.
\]
By the Poincar\'{e} lemma we can write $H = \nabla \xi$.}
} (see \cite[(10.4)]{Iwaniecmartin}) we find $\xi_{ik} \in W^{1,p'}(\tilde{B})$, with $p'=\frac{p}{p-1}$, and $\zeta_{ik} \in W^{1,p'}_0(\tilde{B},\bigwedge\nolimits^{2}\mathbb{R}^n)$ such that
\begin{equation}\label{eq:hodge1st}
|\nabla u|^{p-2} \Omega_{ik} = \nabla \xi_{ik} + {\rm Curl\,}\zeta_{ik} \quad \mbox{in $\tilde{B}$}.
\end{equation}
Moreover, we have
\begin{equation}\label{eq:hodge1st:2}
\|\zeta_{ik}\|_{W^{1,p'}(\tilde{B})} \precsim \||\nabla u|^{p-2} \Omega_{ik}\|_{L^{p'}(B(y_0,3r))}.
\end{equation}
The boundary data of $\zeta$ and Lemma~\ref{la:conservation} imply that
\[
\int_{\tilde{B}} \nabla \xi_{ik} \cdot \nabla \varphi = {\int_{\tilde{B}} |\nabla u|^{p-2} \Omega_{ik} \cdot \nabla \varphi- \int_{\tilde{B}} {\rm Curl\,} \zeta_{ik}\cdot \nabla \varphi } = 0 \quad \mbox{for any $\varphi \in C^\infty(\overline{\tilde{B}})$}.
\]
That is, $\xi_{ik}$ is harmonic with trivial Neumann data, and thus $\xi_{ik}$ is constant. In particular, \eqref{eq:hodge1st} simplifies to
\begin{equation}\label{eq:hodge2nd}
|\nabla u|^{p-2} \Omega_{ik} = {\rm Curl\,} \zeta_{ik} \quad \mbox{in $\tilde{B}$}.
\end{equation}
Consequently,
\[
I = \int_{\mathbb{R}^n_+} {\rm Curl\,} \zeta_{ik} \cdot \nabla u^k\, \tilde{u}^i = \int_{\mathbb{R}^n} {\rm Curl\,} \zeta_{ik} \cdot \nabla u^k\, \tilde{u}^i.
\]
The last equality is true, since $\zeta_{ik}$ {vanishes on} ${\partial \mathbb{R}^n_+\cap B(0,R)}$ and we can extend it by zero to ${\mathbb{R}^n_-\cap B(0,R)}$. Now we use the div-curl structure and apply the result by Coifman, Lions, Meyer, Semmes \cite{CLMS}. Recall that $BMO$ is the space of functions $f$ with finite {seminorm} $[f]_{BMO}$. Here,
\[
[f]_{BMO} := \sup_{B} |B|^{-1} \int_{B} |f-(f)_{B}|,
\]
where the supremum is taken over all balls $B$. Observe that by Poincar\'e inequality,
\begin{equation}\label{eq:BMOpoincare}
[f]_{BMO} \precsim \sup_{x_0 \in \mathbb{R}^n, \ \rho>0} \brac{\rho^{p-n} \int_{B(x_0,\rho)} |\nabla f|^p}^{\frac{1}{p}}.
\end{equation}
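For the reader's convenience, \eqref{eq:BMOpoincare} follows from Jensen's and Poincar\'e's inequalities: for any ball $B = B(x_0,\rho)$,
\[ |B|^{-1} \int_{B} |f-(f)_{B}| \le \brac{|B|^{-1} \int_{B} |f-(f)_{B}|^p}^{\frac{1}{p}} \precsim \rho \brac{|B|^{-1} \int_{B} |\nabla f|^p}^{\frac{1}{p}} = c_n \brac{\rho^{p-n} \int_{B} |\nabla f|^p}^{\frac{1}{p}}. \]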
Coifman, Lions, Meyer, Semmes showed in \cite{CLMS} that the following inequality holds
\[
\int_{\mathbb{R}^n} F \cdot G\ \varphi \precsim \|F\|_{L^p(\mathbb{R}^n)}\, \|G\|_{L^{p'}(\mathbb{R}^n)}\, [\varphi]_{BMO}
\]
whenever $F$ and $G$ are vector fields such that $\div F = 0$ and ${\rm curl\,} G = 0$. See also \cite{Lenzmann-Schikorra-2016} for a different proof. In our situation this inequality implies\footnote{{Here, $\tilde{u}$ is extended into the whole space $\mathbb{R}^n$ in such a way that $[\tilde{u}]_{BMO}\precsim \lambda$. This can be done by an appropriate reflection of $u$ outside of $B^+(y_0,3r)$.}}
{\begin{equation}\label{eq:utildainbmo}
\begin{split}
|I| &\precsim \||\nabla u|^{p-2} \Omega_{ik}\|_{L^{p'}(B^+(y_0,4r))}\ \|\nabla u\|_{L^p(B^+(y_0,4r))}\ [\tilde{u}]_{BMO}\\
&\precsim \|\nabla u\|_{L^p(B^+(y_0,4r))}^p\ [\tilde{u}]_{BMO}.
\end{split}
\end{equation}}
The last estimate follows readily from the definition of $\Omega$ in Lemma~\ref{la:conservation}.
Thus, for the $\lambda$ from \eqref{small} we obtain
\[
|I| \precsim \lambda\ \int_{B^+(y_0,4r)} |\nabla u|^p.
\]
\underline{As for $II$}, since $\div(|\nabla u|^{p-2} \nabla u)=0$ in ${B^+(0,R)}$, there exists $\zeta_i\in W^{1,p'}(B^+(y_0,2r),\bigwedge\nolimits^2 \mathbb{R}^n)$ such that
\[
|\nabla u|^{p-2} \nabla u^i = {\rm Curl\,} \zeta_i \quad \mbox{in $B^+(y_0,2r)$}.
\]
We can extend $\zeta$ to all of $\mathbb{R}^n$ so that
\[
\|\zeta\|_{W^{1,p'}(\mathbb{R}^n)} \precsim \|\nabla u\|_{L^p(B^{+}(y_0,2r))}^{p-1}.
\]
{Also, since $u$ is assumed to be bounded we have $|u|^2 \in {W}^{1,p}(B^+(0,R))$, and in the sense of traces $|u|^2 \equiv 1$ on $B(0,R)\cap\{\mathbb{R}^{n-1} \times \{0\}\}$. This is equivalent to saying that the extension of $|u|^2-1$ by zero to $B(0,R)\cap \,\mathbb{R}^n_-$ belongs to ${W}^{1,p}({B(0,R)})$, that is we have, $(|u|^2-1)\chi_{\mathbb{R}^n_+} \in W^{1,p}({B(0,R)})$ and the distributional gradient satisfies}
{
\[
\nabla \left ((|u|^2-1)\chi_{\mathbb{R}^n_+} \right) = \chi_{\mathbb{R}^n_+}\nabla |u|^2 \quad \text{a.e. in } B(0,R).
\]
}
In particular, since $(|u|^2-1)\chi_{\mathbb{R}^n_+}$ is zero on $B(y_0,2r)\cap\, \mathbb{R}^n_{-}$ we can use Poincar\'e inequality to get
\begin{equation}\label{eq:poinczero}
\||u|^2 -1\|_{L^p( B^+(y_0,2r))} \precsim r\, \|u\|_{L^{\infty}(B^+(y_0,4r))}\, \|\nabla u\|_{L^p(B^+(y_0,4r))}.
\end{equation}
In particular, by using that $|\nabla \eta_{B(y_0,2r)}|\precsim r^{-1}$, \eqref{eq:BMOpoincare}, the triangle inequality in $L^p$ and \eqref{eq:poinczero}, for the $\lambda$ from \eqref{small},
\[
[\brac{|u|^2-1}\chi_{\mathbb{R}^n_+} \eta_{B(y_0,{2}r)}]_{BMO} \precsim \lambda.
\]
We also observe that $\nabla \tilde{u} \equiv \eta_{B(y_0,2r)} \nabla \tilde{u}$.
Thus, integrating by parts we obtain
\[
II =-\int_{\mathbb{R}^n} {\rm Curl\,} \zeta\cdot \nabla \tilde{u}^i\ \brac{|u|^2-1}\chi_{\mathbb{R}^n_+}\eta_{B(y_0,{2}r)}.
\]
Hence, with the div-curl theorem from \cite{CLMS}, see also the localized version \cite[Corollary 3]{Strzelecki-1994}, we find
\[
|II| \precsim \lambda \|\nabla u\|_{L^p(B^{+}(y_0,4r))}^{p}.
\]
It remains \underline{to treat $III$}. Observe that
\[
\begin{split}
\nabla \tilde{u}^i &\cdot \nabla \brac{|u|^2-1}\ u^i \\
&=\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \eta_{B(y_0,r)} u^i + \nabla \eta_{B(y_0,r)}\, (u^i-(u^i)_{B^{{+}}(y_0,2r)}) \cdot \nabla \brac{|u|^2-1}\ u^i\\
&=\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \tilde{u}^i \\
&\quad+\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \eta_{B(y_0,r)} (u^i)_{B^{{+}}(y_0,2r)} \\
&\quad+ \nabla \eta_{B(y_0,r)}\, (u^i-(u^i)_{B^{{+}}(y_0,2r)}) \cdot \nabla \brac{|u|^2-1}\ u^i.
\end{split}
\]
By integration by parts, using that $\div(|\nabla u|^{p-2} \nabla u) = 0$ {in $B^+(0,R)$}, $|u|^2-1$ is zero on {$\partial \mathbb{R}^n_+\cap B(0,R)$} and then arguing as in the argument for $II$,
\[
\begin{split}
\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u^i \cdot \nabla \brac{|u|^2-1}\ \tilde{u}^i
&= -\int_{\mathbb{R}^n_+} |\nabla u|^{p-2} \nabla u^i \cdot \nabla \tilde{u}^i\ \brac{|u|^2-1}\\
&\precsim \lambda\, \|\nabla u\|_{L^p(B^{+}(y_0,4r))}^{p}.
\end{split}
\]
Moreover, again since $\div(|\nabla u|^{p-2} \nabla u) = 0$ in ${B^+(0,R)}$ and $|u|^2 -1$ is zero on $\partial \mathbb{R}^n_+\cap B(0,R)$,
\[
\begin{split}
\Bigg| \int_{\mathbb{R}^n_+}& |\nabla u|^{p-2}\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \eta_{B(y_0,r)} (u^i)_{B^{{+}}(y_0,2r)} \Bigg|\\
&=\left|\int_{\mathbb{R}^n_+} |\nabla u|^{p-2}\nabla u^i \cdot \brac{|u|^2-1}\ \nabla \eta_{B(y_0,r)} (u^i)_{B^{{+}}(y_0,2r)}\right|\\
&\precsim r^{-1}\, \|u\|_{L^\infty({B^+(0,R)})}\, \|\nabla u\|^{p-1}_{L^p( B^+(y_0,2r)\backslash B^+(y_0,r))}\ \||u|^2 -1\|_{L^p( B^+(y_0,2r))}.
\end{split}
\]
This leads to two estimates. Firstly, if we want to find \eqref{eq:growth:pllnbd}, by Young's inequality,
\[
\begin{split}
\int_{\mathbb{R}^n_+}& |\nabla u|^{p-2}\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \eta_{B(y_0,r)} (u^i)_{B^{{+}}(y_0,2r)}\\
&\precsim \lambda \|\nabla u\|^{p}_{L^p( B^+(y_0,2r))} + \lambda^{1-p}\, r^{-p} \||u|^2 -1\|^p_{L^p( B^+(y_0,2r))}.
\end{split}
\]
Secondly, for \eqref{eq:growth:peqn} by \eqref{eq:poinczero} and
by Young's inequality we have for any $\mu > 0$
\[
\begin{split}
\int_{\mathbb{R}^n_+}& |\nabla u|^{p-2}\nabla u^i \cdot \nabla \brac{|u|^2-1}\ \eta_{B(y_0,r)} (u^i)_{B^{{+}}(y_0,2r)}\\
&\precsim \mu^{-1} \|\nabla u\|^{p}_{L^p( B^+(y_0,2r) \backslash B^+(y_0,r))} + \mu^{p-1}\|\nabla u\|^p_{L^p( B^+(y_0,2r))}.
\end{split}
\]
The last remaining term can be treated in a similar way and we have
\[
\begin{split}
\int_{\mathbb{R}^n_+}& |\nabla u|^{p-2}\nabla \eta_{B(y_0,r)}\, (u^i-(u^i)_{B^{{+}}(y_0,2r)}) \cdot \nabla \brac{|u|^2-1}\ u^i \\
&\precsim \mu^{-1} \|\nabla u\|^{p}_{L^p( B^+(y_0,2r)\backslash B^+(y_0,r))} + \mu^{p-1}\|\nabla u\|^p_{L^p( B^+(y_0,2r))}
\end{split}
\]
and
\[
\begin{split}
\int_{\mathbb{R}^n_+}& |\nabla u|^{p-2}\nabla \eta_{B(y_0,r)}\, (u^i-(u^i)_{B^{{+}}(y_0,2r)}) \cdot \nabla \brac{|u|^2-1}\ u^i \\
&\precsim \lambda \|\nabla u\|^{p}_{L^p( B^+(y_0,2r))} + \lambda^{1-p}\, r^{-p} \|u- (u)_{B^{{+}}(y_0,2r)}\|^p_{L^p( B^+(y_0,2r))}.
\end{split}
\]
Combining the estimates of $I$, $II$, and $III$ and plugging them into estimates \eqref{eq:intest} and \eqref{eq:intest2}, we conclude.
\end{proof}
\section{H\"older regularity for the case \texorpdfstring{$p=n$}{p=n}}\label{s:pnHoelder}
For the case $p=n$ H\"older continuity of the solution $u$ from Theorem~\ref{th:main} follows from Proposition~\ref{pr:growth} by a standard iteration argument. For higher regularity, and for $p < n$, we need to combine the growth estimate{s} from Proposition~\ref{pr:growth} with the reflection method.
\begin{proposition}[$\epsilon$-regularity for $p=n$: H\"older continuity]\label{pr:main:hoelderpn}
Let $D \subset \mathbb{R}^n$ be a smooth, bounded domain, then there are positive constants $\epsilon = \epsilon(n,D)$, $\alpha = \alpha(n,D)$ such that the following holds for $p=n$:
Any solution $u\in W^{1,n}(D,\mathbb{R}^N)$ to \eqref{eq:soluspheredef} that satisfies for an $R > 0$ and for an $x_0 \in \overline{D}$
\[
\int_{B(x_0,R)\cap D} |\nabla u|^n < \epsilon
\]
is H\"older continuous in $B(x_0,R/2) \cap \overline{D}$. Moreover, we have the estimate
\[
\sup_{x, y \in B(x_0,R/2)\cap \overline{D}} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}} \precsim R^{-\alpha} \|\nabla u\|_{L^n(B(x_0,R)\cap D)}.
\]
\end{proposition}
\begin{proof}
Let $\lambda := \epsilon^{\frac{1}{n}}$ and apply Proposition~\ref{pr:growth} to any $B(y_0,4r)\subset B(x_0,R/2)$, for $\mu > 0$ to be chosen below. We add
\[
C\mu^{-1} \int_{B(y_0,r)\cap D} |\nabla u|^n
\]
to both sides of \eqref{eq:growth:peqn}. Then we find
\[
\brac{1+C\mu^{-1}} \int_{B(y_0,r)\cap D} |\nabla u|^n \leq C\ \brac{\epsilon^{\frac{1}{n}} + \mu^{n-1}+\mu^{-1}} \int_{B(y_0,4r)\cap D} |\nabla u|^n.
\]
We choose $\epsilon, \mu >0$ small enough so that $\tau < 1$, where
\[
\tau := \brac{\frac{C\ \brac{\epsilon^{\frac{1}{n}} + \mu^{n-1}+\mu^{-1}}}{1+C\mu^{-1}}}^{\frac{1}{n}}.
\]
We have for any $B(y_0,4r)\subset B(x_0,R/2)$
\[
\|\nabla u\|_{L^n(B(y_0,r)\cap D)} \leq \tau \|\nabla u\|_{L^n(B(y_0,4r)\cap D)}.
\]
Iterating this on successively smaller balls, cf. e.g. \cite[Chapter III, Lemma 2.1]{GiaquintaMultipleIntegrals}, we find that for a uniform $\alpha = \alpha(\tau) > 0$ and for any $B(y_0,4r) \subset B(x_0,R/2)$,
\[
\|\nabla u\|_{L^n(B(y_0,r)\cap D)} \precsim \brac{\frac{r}{R}}^{\alpha} \|\nabla u\|_{L^n(B(x_0,R)\cap D)}.
\]
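(We recall the standard iteration step: if $4^{-k-1}R \leq r < 4^{-k}R$, applying the previous inequality $k$ times yields a factor $\tau^{k} = \brac{4^{-k}}^{\log_4\frac{1}{\tau}} \precsim \brac{\frac{r}{R}}^{\alpha}$ with $\alpha := \log_4\frac{1}{\tau}$.)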
In particular, we have by Poincar\'e inequality
\[
\sup_{B(y_0,4r) \subset B(x_0,R/2)} r^{-\alpha-1}\|u-(u)_{B(y_0,r){\cap D}}\|_{L^n(B(y_0,r)\cap D)} \precsim R^{-\alpha}\, \|\nabla u\|_{L^n(B(x_0,R)\cap D)}.
\]
By the characterization of Campanato spaces and H\"older spaces, e.g. see \cite[Chapter III, p.75]{GiaquintaMultipleIntegrals}, this implies
\[
\sup_{x, y \in B(x_0,R/2){\cap} \overline{D}} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}} \precsim R^{-\alpha} \|\nabla u\|_{L^n(B(x_0,R)\cap D)}.
\]
\end{proof}
\section{H\"older-continuity for solutions to a supercritical system}\label{s:genericHoelder}
In Proposition~\ref{pr:growth} we showed that solutions from Theorem~\ref{th:main} satisfy certain growth estimates. For $p=n$ these growth estimates imply H\"older continuity by an iteration argument, as we have seen in Proposition~\ref{pr:main:hoelderpn}.
For $p < n$ more work is needed. The following Proposition shows that under a smallness assumption solutions to systems satisfying
\begin{equation}\label{eq:plapnablaup} |\div(|\nabla u|^{p-2} \nabla u)| \precsim |\nabla u|^p \end{equation}
are H\"older continuous once the growth conditions from Proposition~\ref{pr:growth} are satisfied, that is when \eqref{eq:growthcond} and \eqref{eq:growthcondinter} below are assumed \emph{a priori}.
Observe that without assuming \emph{a priori} the growth conditions \eqref{eq:growthcond} and \eqref{eq:growthcondinter} below on the solution $u$, there is no hope of proving \emph{any} regularity for solutions to systems with the structure \eqref{eq:plapnablaup}. Indeed, it is easy to check that $\log \log \frac{2}{|x|}$ and $\sin \log \log \frac{2}{|x|}$ satisfy \eqref{eq:plapnablaup} for $p= n$.
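For instance, for the first map this is a short computation, which we include for the reader's convenience (it is elementary and not used elsewhere). With $r=|x|$ and $L(r) := \log\frac{2}{r}$ we have $u'(r) = -\frac{1}{rL(r)}$, hence $|\nabla u| = \frac{1}{rL(r)}$ and $r^{n-1}|u'|^{n-2}u' = -L(r)^{1-n}$, so that
\[
\div\brac{|\nabla u|^{n-2}\nabla u} = r^{1-n}\,\partial_r\brac{-L(r)^{1-n}} = -(n-1)\, r^{-n}\, L(r)^{-n} = -(n-1)\,|\nabla u|^n.
\]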
In the next section, in order to prove Theorem~\ref{th:main}, we use the reflection method of Scheven \cite{Scheven-2006} to obtain an equation of the form \eqref{eq:gensys:eq}. Since we already obtained the necessary growth estimates in Proposition~\ref{pr:growth}, the following proposition then leads to regularity.
\begin{proposition}\label{pr:gensysreg}
Let $D \subset \mathbb{R}^n$ be a {smooth, bounded domain} and let $\mathcal{M}$ be a smooth, compact $(n-1)$-dimensional manifold.
Assume that $u \in W^{1,p}(D,\mathbb{R}^N)$ is a solution to
\begin{equation}\label{eq:gensys:eq}
\div(|G(x)\nabla u(x)|^{p-2} G(x)\nabla u(x)) = f_{{u}}(x),
\end{equation}
where $f_{{u}} \in L^1(D,\mathbb{R}^N)$ satisfies the following estimate
\begin{equation}\label{eq:gensys:fgrowth}
|f_{{u}}(x)| \leq C\ |\nabla u(x)|^p
\end{equation}
and $G \in C^\infty(\overline{D},GL(n))$.
Moreover, assume \emph{a priori} that for every $B(x_0,R) \subset D$, $\lambda > 0$ such that
\begin{equation}\label{eq:gensys:lambda}
\sup_{B(y_0,r)\subset B(x_0,R)} r^{p-n} \int_{B(y_0,r)} |\nabla u|^p < \lambda^p
\end{equation}
the solution $u$ already satisfies the following growth condition on any $B(y_0,4r) \subset B(x_0,R)$:
If $B(y_0,2r) \cap \mathcal{M} = \emptyset$, then
\begin{equation}\label{eq:growthcond}
\int_{B(y_0,r)} |\nabla u|^p \leq C \lambda \int_{B(y_0,4r)} |\nabla u|^p + C \lambda^{1-p} r^{-p}\int_{B(y_0,4r)} |u-(u)_{B(y_0,4r)}|^p
\end{equation}
and, if $B(y_0,2r) \cap \mathcal{M} \neq \emptyset$, then
\begin{equation}\label{eq:growthcondinter}
\begin{split}
\int_{B(y_0,r)} |\nabla u|^p \leq C \lambda \int_{B(y_0,4r)} |\nabla u|^p &+ C \lambda^{1-p} r^{-p}\int_{B(y_0,4r)} |u-(u)_{B(y_0,4r)}|^p\\
&+ C\lambda^{1-p}r^{-p}\int_{B(y_0,4r)} |u- (u)_{B(y_0,4r)\cap \mathcal{M}}|^p\\
&+C\lambda^{1-p}r^{1-p}\int_{B(y_0,4r)\cap \mathcal{M}} |u- (u)_{B(y_0,4r)\cap \mathcal{M}}|^p.
\end{split}
\end{equation}
Then there exist constants $\alpha = \alpha(G,p,n,C,D)$, $\epsilon>0$ such that if \eqref{eq:gensys:lambda} holds on some $B(x_0,R) \subset D$ for $\lambda < \epsilon$, then $u \in C^{\alpha}(B(x_0,R/2),{\mathbb{R}^N})$. Moreover, we have the estimate
\[
\sup_{x, y \in B(x_0,R/2)} \frac{|u(x)-u(y)|}{|x-y|^{\alpha}} \leq C_0\, R^{-\alpha} \brac{\sup_{B(y_0,r)\subset B(x_0,R)} r^{p-n} \int_{B(y_0,r)} |\nabla u|^p}^{\frac{1}{p}}.
\]
The constant $C_0$ depends on $\mathcal{M}$, $D$, $C$, and $G$.
\end{proposition}
To prove Proposition~\ref{pr:gensysreg} we follow the strategy developed in \cite{Hardt-Kinderlehrer-Lin-1986} and \cite[Theorem 2.4]{Hardt-Lin-1987}. The crucial result is that the equation for $u$ together with the growth assumptions \eqref{eq:growthcond} and \eqref{eq:growthcondinter} on $u$ imply the following decay estimate.
\begin{proposition}\label{pr:decay}
There are uniform constants $\epsilon, \theta \in (0,1)$ and $\overline{R} = \overline{R}(\mathcal{M}) \in (0,1)$ so that the following holds:
Let $u$ and {$D$} be as in Proposition~\ref{pr:gensysreg} and assume that for a ball $B(x_0,R) \subset {D}$ with $R \in (0,\overline{R})$ we have
\begin{equation}\label{eq:small2}
\lE{x_0,R}(u) := \sup_{B(y_0,r)\subset B(x_0,R)} r^{p-n} \int_{B(y_0,r)} |\nabla u|^p < \epsilon^p.
\end{equation}
Then
\begin{equation}\label{eq:Ex0tleqhe}
\lE{x_0,\theta R}(u) \leq \frac{1}{2}\lE{x_0,R}(u).
\end{equation}
\end{proposition}
\begin{proof}
It suffices to prove
\begin{equation}\label{eq:insteadEx0tleqhe}
(\theta R)^{p-n} \int_{B(y_0,\theta R)} |\nabla u|^p \leq \frac{1}{2}\lE{x_0,R}(u) \quad \text{for any } B(y_0,4\theta R) \subset B(x_0,R/2).
\end{equation}
Indeed, \eqref{eq:Ex0tleqhe} follows from \eqref{eq:insteadEx0tleqhe} by taking a smaller $\theta$ and observing that $B(x_1,R_1) \subset B(x_2,R_2)$ implies $\lE{x_1,R_1}(u) \leq \lE{x_2,R_2}(u)$.
Assume the claim \eqref{eq:insteadEx0tleqhe} is false. Then, for any $\theta \in (0,1)$ we have a sequence of balls with $B(y_i,4\theta R_i)\subset B(x_i,R_i/2)\subset {D}$, a sequence $(\epsilon_i)_{i=1}^\infty$ satisfying $\lim_{i \to \infty} \epsilon_i = 0$, and a sequence $(u_i)_{i=1}^\infty \subset W^{1,p}({D},\mathbb{R}^N)$ of solutions to \eqref{eq:gensys:eq} satisfying the growth assumptions of Proposition~\ref{pr:gensysreg}, such that
\begin{equation}\label{eq:small2i}
\sup_{B(y,r)\subset B(x_i,R_i)} r^{p-n} \int_{B(y,r)} |\nabla u_i|^p = \epsilon_i^p,
\end{equation}
but
\begin{equation}\label{eq:notsmall}
(\theta R_i)^{p-n} \int_{B(y_i,\theta R_i)} |\nabla u_i|^p > \frac{1}{2} \epsilon_i^p.
\end{equation}
For simplicity, we assume that $R_i \equiv R_0$ and $x_i \equiv x_0$ for some $R_0 > 0$ and $x_0 \in \mathbb{R}^n$.
{
This is no loss of generality, since we can rescale the maps $u$ by the factor $R_0/R_i$. Observe that this rescales the manifold $\mathcal{M}$, but in a way that \eqref{eq:growthcondinter} still holds.}
Set
\[
w_i := \frac{1}{\epsilon_i}(u_i-(u_i)_{B(x_0,R_0)}).
\]
Clearly,
\begin{equation*}\label{eq:wimeanvalue}
(w_i)_{B(x_0,R_0)} = 0\quad \mbox{for all $i \in {\mathbb N}$}.
\end{equation*}
Thus, we can apply Poincar\'e inequality and have by \eqref{eq:small2i},
\[
\sup_{i \in {\mathbb N}}\|\nabla w_i \|^p_{L^p(B(x_0,R_0))} \precsim R_0^{n-p} \quad \mbox{and} \quad \sup_{i \in {\mathbb N}} \|w_i\|_{L^p(B(x_0,R_0))}^p \precsim R_0^{n-p+1}.
\]
Thus, up to a subsequence denoted again by $w_i$, we find $w \in W^{1,p}(B(x_0,R_0),\mathbb{R}^N)$ such that as $i \to \infty$,
\begin{align*}
w_i &\rightharpoonup w & &\mbox{weakly in } W^{1,p}(B(x_0,R_0)),\\
w_i& \to w & &\mbox{strongly in } L^p(B(x_0,R_0)),\\
w_i& \to w & &\mbox{strongly in } L^p(B(x_0,R_0) \cap \mathcal{M},d{\mathcal H}^{n-1}), \\
w_i& \to w & &\mbox{${\mathcal H}^n$-a.e. on $B(x_0,R_0)$ and ${\mathcal H}^{n-1}$-a.e. on $B(x_0,R_0) \cap \mathcal{M}$}.
\end{align*}
In particular,
\begin{equation}\label{eq:wmeanvalue}
(w)_{B(x_0,R_0)} = 0,
\end{equation}
also
\[
\|\nabla w \|^p_{L^p(B(x_0,R_0))} \precsim R_0^{n-p} \quad \mbox{and} \quad \|w\|^p_{L^p(B(x_0,R_0))} \precsim R_0^{n-p+1}.
\]
Moreover, for any $\varphi \in C_c^\infty(B(x_0,R_0))$,
\[
\int_{B(x_0,R_0)} |G\nabla w_i|^{p-2}G\nabla w_i\cdot \nabla \varphi = (\epsilon_i)^{1-p} \int_{B(x_0,R_0)} |G\nabla u_i|^{p-2}G\nabla u_i\cdot \nabla \varphi.
\]
Now, by \eqref{eq:gensys:eq} and \eqref{eq:gensys:fgrowth},
\[
\left |\int_{B(x_0,R_0)} |G\nabla w_i|^{p-2}G\nabla w_i\cdot \nabla \varphi \right |\precsim (\epsilon_i)^{1-p} \|\varphi\|_{L^\infty(B(x_0,R_0))}\ \|\nabla u_i\|^p_{L^p(B(x_0,R_0))}.
\]
That is, by \eqref{eq:small2i}
\[
\left |\int_{B(x_0,R_0)} |G\nabla w_i|^{p-2}G\nabla w_i\cdot \nabla \varphi \right | \precsim {\|\varphi\|_{L^\infty(B(x_0,R_0))}}R_0^{n-p} \epsilon_i\le \epsilon_i{\|\varphi\|_{L^\infty(B(x_0,R_0))}}.
\]
Now as in \cite[Section~4]{Dolzmann-Hungerbuehler-Mueller-1997}
\begin{equation}\label{eq:plapsolw0}
\div(|G\nabla w|^{p-2} G\nabla w) = 0 \quad \mbox{in $B(x_0,R_0)$}.
\end{equation}
From \eqref{eq:wmeanvalue} and the Lipschitz estimates for solutions to \eqref{eq:plapsolw0}, see \cite{Uhlenbeck-1977} as well as \cite{Mingione-2011,Duzaar-Mingione-2011} and in particular \cite[(1.7)]{Kuusi-Mingione-2012}, we have for any $B(z,r) \subset B(x_0,R_0/2)$,
\begin{equation*}\label{eq:Lipschitzestplaplace}
r^{-n}\int_{B(z,r)} |w - (w)_{B(z,r)}|^p \precsim r^p,
\end{equation*}
and if additionally $B(z,r) \cap \mathcal{M} \neq \emptyset$ and $r<\overline{R}$ for $\overline{R} = \overline{R}(\mathcal{M})$ small enough, then
\[
r^{1-n}\int_{\mathcal{M} \cap B(z,r)} |w - (w)_{\mathcal{M} \cap B(z,r)}|^p + r^{-n}\int_{B(z,r)} |w - (w)_{\mathcal{M} \cap B(z,r)}|^p \precsim r^p.
\]
On the other hand, by strong $L^p$-convergence of $w_i$ to $w$, we find $i(\theta) \in {\mathbb N}$ so that for $i \geq i(\theta)$ and for any $r \in (\theta R_0,R_0)$ such that $B(z,r) \subset B(x_0,R_0)$,
\begin{equation*}\label{eq:strongconvestimate}
r^{1-n} \int_{B(z,r)\cap \mathcal{M}} |w_i-w|^p + r^{-n} \int_{B(z,r)} |w_i-w|^p \leq \theta^p.
\end{equation*}
Combining these estimates we get for any $i \geq i(\theta)$ and for any $r \in (\theta R_0,R_0)$ such that $B(z,r) \subset B(x_0,R_0/2)$,
\[
r^{-n} \int_{B(z,r)} |u_i-(u_i)_{B(z,r)}|^p = \epsilon_i^p r^{-n} \int_{B(z,r)} |w_i-(w_i)_{B(z,r)}|^p \precsim \epsilon_i^p \brac{r^p + \theta^p}.
\]
If additionally $B(z,r) \cap \mathcal{M} \neq \emptyset$, then
\[
r^{-n}\int_{B(z,r)} |u_i-(u_i)_{B(z,r)\cap \mathcal{M}}|^p = \epsilon_i^p\, r^{-n}\int_{B(z,r)} |w_i-(w_i)_{B(z,r)\cap \mathcal{M}}|^p \precsim \epsilon_i^p \brac{r^p + \theta^p}
\]
and
\[
r^{1-n}\int_{B(z,r)\cap \mathcal{M}} |u_i-(u_i)_{B(z,r)\cap \mathcal{M}}|^p \precsim \epsilon_i^p \brac{r^p + \theta^p}.
\]
{
We now apply the growth estimates \eqref{eq:growthcond} and \eqref{eq:growthcondinter} of the solutions $u_i$ with $\lambda=\epsilon_0 \geq \epsilon_i$ to find
\[
(\theta R_0)^{p-n} \int_{B(y_i,\theta R_0)} |\nabla u_i|^p \leq C\ \epsilon_i^{p} \, \left (\epsilon_0+\epsilon _0^{1-p}\theta^p \right).
\]
By choosing $\epsilon_0$ and $\theta$ sufficiently small so that $\epsilon_0+\epsilon_0^{1-p}\theta^p<1/2$ we arrive at a contradiction with \eqref{eq:notsmall}.}
\end{proof}
\begin{proof}[Proof of Proposition~\ref{pr:gensysreg}]
We argue as in the proof of Proposition~\ref{pr:main:hoelderpn}: Assume that \eqref{eq:gensys:lambda} is satisfied on $B(x_0,R)$ for some $\lambda < \epsilon$.
Iterating the estimate from Proposition~\ref{pr:decay} on successively smaller balls, cf. \cite[Chapter III, Lemma 2.1]{GiaquintaMultipleIntegrals}, we find a small $\alpha > 0$ such that for all $r < R$ and $B(y_0,r) \subset B(x_0,R/2)$,
\[
r^{p-n}\int_{B(y_0,r)} |\nabla u|^p \precsim \brac{\frac{r}{R}}^{\alpha p} \lE{x_0,R}(u).
\]
In particular, for all $r < R$ and $B(y_0,r) \subset B(x_0,R/2)$,
\[
r^{-\alpha p-n}\int_{B(y_0,r)} |u-(u)_{B(y_0,r)}|^p \precsim r^{p-\alpha p-n}\int_{B(y_0,r)} |\nabla u|^p \precsim R^{-\alpha p} \lE{x_0,R}(u).
\]
We conclude by the identification of Campanato and H\"{o}lder spaces, see \cite[Chapter III, p.75]{GiaquintaMultipleIntegrals}.
\end{proof}
\section{\texorpdfstring{$\epsilon$}{}-regularity: Proof of Theorem~\ref{th:main}}\label{s:proofthmain}
The proof of Theorem~\ref{th:main} is a combination of the growth estimate for solutions, Proposition~\ref{pr:growth}, the reflection method of Scheven \cite{Scheven-2006}, and Proposition~\ref{pr:gensysreg}.
More precisely, we use the reflection method to find a solution to \eqref{eq:gensys:eq} from Proposition~\ref{pr:gensysreg}. The growth estimates \eqref{eq:growthcond} and \eqref{eq:growthcondinter} required in Proposition~\ref{pr:gensysreg} come from Proposition~\ref{pr:growth}: They hold for the unreflected solution and by an easy argument hold also for the reflection.
To set up the reflection method we first gather some standard results.
\begin{lemma}\label{la:distbd}
Let $D$ be a smooth, bounded domain in $\mathbb{R}^n$. There exists some $R_0 = R_0(D)$ such that the following holds for any $R \in (0,R_0)$. Let $u\in W^{1,p}(D,\mathbb{R}^N)$ be a solution to \eqref{eq:soluspheredef} and $\epsilon \in (0,1)$. If
\begin{equation}\label{eq:distbdsmallness}
\sup_{B(y_0,r)\subset B(x_0,R)} r^{p-n} \int_{B(y_0,r)\cap D} |\nabla u|^p < \epsilon^p
\end{equation}
and $B(x_0,R/2) \cap \partial D \neq \emptyset$, then
\[
\sup_{x \in B(x_0,R/2) \cap D} {\rm dist\,}(u(x),{\mathbb S}^{N-1}) \leq C\epsilon.
\]
Here $C$ is a constant depending on $\partial D$.
\end{lemma}
\begin{proof}
Fix $x \in B(x_0,R/2){\cap D}$. Let $r := \frac{1}{10} {\rm dist\,}(x,\partial D)$, then by \eqref{eq:distbdsmallness} and the interior Lipschitz regularity {for} the $p$-Laplace equation, see \cite[(1.7)]{Kuusi-Mingione-2012},
\[
|u(x)-(u)_{B(x,r)}|^p \precsim r^{p-n} \int_{B(x,5r)} | \nabla u|^p \leq \epsilon^p.
\]
Denote by $z_1 \in \partial D\cap B(x_0,R/2)$ the projection of $x$ onto $\partial D\cap B(x_0,R/2)$. Here we assume that $R < R_0$ for $R_0 = R_0(D)$ small enough {such that $z_1$ is well-defined.}
{Let $y_0,y_1,\ldots,y_{10}$ be pairwise equidistant points on the line $[x,z_1]$ where $y_0 = x$ and $y_{10} = z_1$. That is, $|y_{i} - y_{i+1}| = r$.}
By triangle inequality, Poincar\'e inequality and again by \eqref{eq:distbdsmallness},
\[
\begin{split}
|(u)_{B(x,r)}& - (u)_{B(z_1,r)\cap D}|^p\\
&{\precsim \sum_{i=0}^{9} |(u)_{B(y_i,r)\cap D} -(u)_{B(y_{i+1},r) \cap D}|^p}\\
&\precsim {\sum_{i=0}^{9} r^{p-n} \int_{B(y_i,4r) \cap D} |\nabla u|^p}\\
&\precsim {\epsilon^p}.
\end{split}
\]
{From the second to third line, before applying Poincar\'e inequality, we also used that $|y_i-y_{i+1}| = r$, and thus (cf. footnote \ref{ft:AB})
\[
|(u)_{B(y_i,r)\cap D} -(u)_{B(y_{i+1},r) \cap D}|^p \precsim \mvint_{B(y_i,4r) \cap D} |u -(u)_{B(y_i,4r) \cap D}|^p
\]}
Now for any $z_2 \in \partial D$
\[
{\rm dist\,}((u)_{B(z_1,r)\cap D},{\mathbb S}^{N-1}) \precsim r^{-n} \int_{B(z_1,r) \cap D}|u(z_3)-u(z_2)|\ dz_3.
\]
Integrating $z_2$ over $\partial D \cap B(z_1,r)$ we find
\[
\begin{split}
{\rm dist\,}((u)_{B(z_1,r)\cap D},{\mathbb S}^{N-1}) &\precsim r^{-n} \int_{B(z_1,r) \cap D}|u(z_3)-(u)_{B(z_1,r) \cap \partial D}|\ dz_3\\
&\quad+r^{1-n} \int_{B(z_1,r) \cap \partial D}|u(z_2)-(u)_{B(z_1,r) \cap \partial D}|\ dz_2.
\end{split}
\]
By Poincar\'e inequality, trace theorem, and \eqref{eq:distbdsmallness}
\[
{\rm dist\,}((u)_{B(z_1,r)\cap D},{\mathbb S}^{N-1}) \precsim \epsilon.
\]
Now the claim follows by triangle inequality for the distance,
\[
\begin{split}
{\rm dist\,}(u(x),{\mathbb S}^{N-1}) &\leq |u(x)-(u)_{B(x,r)}| + |(u)_{B(x,r)}-(u)_{B(z_1,r)\cap D}|\\
&\quad+ {\rm dist\,}((u)_{B(z_1,r)\cap D},{\mathbb S}^{N-1}).
\end{split}
\]
\end{proof}
As an immediate corollary we obtain the following.
\begin{corollary}\label{co:solugeq12}
Let $u$ and $D$ be as in Theorem~\ref{th:main}. There exists $\epsilon_0 > 0$ such that if $B(x_0,R/2)\cap \partial D \neq \emptyset$ and \eqref{eq:distbdsmallness} holds for some $\epsilon < \epsilon_0$, then $|u| > \frac{1}{2}$ in $B(x_0,R/2)\cap D$.
\end{corollary}
As a consequence, when we reflect the maps from Theorem~\ref{th:main}, we obtain a critical equation with the growth estimates such that Proposition~\ref{pr:gensysreg} is applicable.
\begin{proposition}\label{pr:plapnup}
Let $u$ and $D$ be as in Theorem~\ref{th:main}. There exists $\epsilon_0 = \epsilon_0(D) > 0$ such that for any $B(x_0,4R) \subset \mathbb{R}^n$ on which $u$ satisfies \eqref{eq:distbdsmallness} for some $\epsilon<\epsilon_0$ there exists $v \in W^{1,p}(B(x_0,R),\mathbb{R}^N)$ such that
\[
v = u \quad \mbox{in $B(x_0,R) \cap D$},
\]
\begin{equation}\label{eq:reflectedeq}
|\div(|\nabla v|^{p-2} \nabla v)|\precsim |\nabla v|^p \quad \text{in }B(x_0,R).
\end{equation}
Moreover, $v$ satisfies the growth conditions from Proposition~\ref{pr:gensysreg}.
\end{proposition}
\begin{proof}
The main point is to prove that $v$ satisfies the growth conditions. The estimate \eqref{eq:reflectedeq} follows from the geometric reflection, more precisely \cite[Lemma 2.5]{SchevenDiss}. But for the reader's convenience we {state the argument in full in the case where the boundary is flat. This means that we work in a ball $B(x_0,4R)$ such that $B^+(x_0,4R)\subset D \subset \mathbb{R}^n_+$ and $\partial D \cap B(x_0,4R)=\partial \mathbb{R}^n_+\cap B(x_0,4R)$.}
If $B(x_0,R) \subset \mathbb{R}^n_+$ then we can just take $v \equiv u$. So assume that $B(x_0,R) \cap \partial \mathbb{R}^n_+ \neq \emptyset$, then for $\epsilon_0$ small enough we have $|u| > \frac{1}{2}$ in $B^+(x_0,R)$ by Corollary~\ref{co:solugeq12}.
Denote by $\tilde u$ the even reflection, i.e.,
\[
\tilde{u}(x',x_n) :=u(x',|x_n|).
\]
Moreover, set
\[
\sigma(q) := \frac{q}{|q|^2}, \quad q \in \mathbb{R}^n \backslash \{0\}.
\]
Now we define the geometric reflection $v$ as
\[
v(x) := \begin{cases}
u(x)\quad &\mbox{$x \in B^+(x_0,R)$}\\
\sigma (\tilde{u}(x)) \quad&\mbox{$x \in B(x_0,R) \backslash \mathbb{R}^n_+$.}
\end{cases}
\]
Since $|u| > \frac{1}{2}$ and $u$ is uniformly bounded by Lemma~\ref{la:boundedness}, $v$ is well-defined in $B(x_0,R)$.
We also set
\[
\Sigma_{ij}(q) := \partial_i \sigma^j (q) = \frac{\delta_{ij} - 2\frac{q^i q^j}{|q|^2}}{|q|^2}.
\]
That is, for $x \in B(x_0, R) \backslash \mathbb{R}^n_+$,
\begin{equation}\label{eq:nablavnablau}
\nabla v(x) = \Sigma(\tilde{u}(x))\ \nabla \tilde{u}(x).
\end{equation}
Observe that $\Sigma$ is symmetric, and
\[
\Sigma(q) = \frac{1}{|q|^2} \brac{I-2 \frac{q}{|q|}\otimes \frac{q}{|q|}}
\]
and that $\frac{q}{|q|}$ is an eigenvector for the eigenvalue $-\frac{1}{|q|^2}$, and any orthonormal basis of $\left (\frac{q}{|q|} \right )^\perp$ is a basis of the eigenspace of the eigenvalue $\frac{1}{|q|^2}$. In particular,
\[
|\Sigma(q) w| = \frac{1}{|q|^2} |w| \quad \forall w \in \mathbb{R}^N.
\]
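(To verify the last identity: $\Sigma(q)q = \frac{1}{|q|^2}\brac{q - 2q} = -\frac{q}{|q|^2}$, while $\Sigma(q)w' = \frac{w'}{|q|^2}$ whenever $w' \perp q$; decomposing an arbitrary $w$ orthogonally along $q$ and $q^\perp$ and applying the Pythagorean theorem yields $|\Sigma(q)w| = \frac{|w|}{|q|^2}$.)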
Thus,
\begin{equation}\label{eq:nablaveqnablau}
|\nabla v(x)| = \begin{cases}
|\nabla \tilde{u}(x)| \quad &x \in {B^+(x_0,R)}\\
\frac{1}{|\tilde{u}(x)|^2}|\nabla \tilde{u}(x)| \quad &x \in {B(x_0,R)\setminus \mathbb{R}^n_+}.
\end{cases}
\end{equation}
Also observe that for $|q| = 1$,
\[
\Sigma(q) v = \Pi(q)v - \Pi^\perp(q)v \quad \mbox{for all $v \in \mathbb{R}^N$},
\]
where $\Pi(q):=I-q\otimes q$ is the orthogonal projection onto $T_q {\mathbb S}^{N-1} = q^\perp$ and $\Pi^\perp(q):=q\otimes q$ is the orthogonal projection onto $(T_q {\mathbb S}^{N-1})^\perp = \operatorname{span} \{q\}$.
Therefore, for $\varphi \in C_c^\infty(B(x_0,R),\mathbb{R}^N)$, since $\partial_\nu u \perp T_{u} {\mathbb S}^{N-1}$,
\[
\int_{{B^+(x_0,R)}} |\nabla u|^{p-2} \nabla u \cdot \nabla \varphi + \int_{{B(x_0,R)\setminus \mathbb{R}^n_+}} |\nabla \tilde{u}|^{p-2}\nabla \tilde{u} \cdot \nabla (\Sigma(\tilde{u}) \varphi) = 0.
\]
In particular,
\[
\int_{{B(x_0,R)}} |\nabla \tilde{u}|^{p-2} \nabla v \cdot \nabla \varphi = -\int_{{B(x_0,R)\setminus \mathbb{R}^n_+}} |\nabla \tilde{u}|^{p-2} \nabla \tilde{u}\cdot \nabla (\Sigma(\tilde{u}))\ \varphi.
\]
Combining this with \eqref{eq:nablaveqnablau},
\[
\begin{split}
\int_{{B(x_0,R)}} |\nabla v|^{p-2} \nabla v \cdot \nabla (m \varphi) &= -\int_{{B(x_0,R)\setminus \mathbb{R}^n_+}} |\nabla \tilde{u}|^{p-2} \nabla \tilde{u}\cdot \nabla (\Sigma(\tilde{u}))\ \varphi\\
&\quad+ \int_{{B(x_0,R)}} |\nabla v|^{p-2} \nabla v \cdot \nabla m\ \varphi,
\end{split}
\]
where
\[
m(x) = \begin{cases}
1 \quad &\mbox{in {$B^+(x_0,R)$}}\\
|\tilde{u}(x)|^{2(p-2)} \quad &\mbox{in {$B(x_0,R)\setminus \mathbb{R}^n_+$}}.\\
\end{cases}
\]
Observe that $m(x)$ and $m(x)^{-1} \in L^\infty\cap W^{1,p}(B(x_0,R))$. Now \eqref{eq:reflectedeq} follows from \eqref{eq:nablaveqnablau}.
It remains to establish the growth estimates from Proposition~\ref{pr:gensysreg} which follow from Proposition~\ref{pr:growth}. Indeed, set $\mathcal{M} := B(x_0,R) \cap \partial \mathbb{R}^n_+$.
{To obtain \eqref{eq:growthcond} let $B(y_0,4r)\subset B(x_0,R)$ and $B(y_0,2r)\cap\mathcal{M} {=} \emptyset $. Let us consider first $B(y_0, 2r)\subset \mathbb{R}^n_-$. Then we observe that by \eqref{eq:nablaveqnablau} combined with the fact that $|u|>\frac12$ on $B^+(x_0,R)$ we have $\int_{B(y_0,r)}|\nabla v|^p \precsim \int_{B(\tilde{y}_0,r)} |\nabla u|^p$,
where $\tilde{y}_0$ is the point $y_0=(y_0^1,\ldots,y_0^n)$ reflected along the hyperplane $\partial \mathbb{R}^n_+$, i.e., $\tilde{y}_0=(y_0^1,\ldots,-y_0^n)$. Now applying \eqref{eq:growth:pllnbd0} to $u$, we obtain
\begin{equation}\label{eq:growthestimatesv}
\begin{split}
\int_{B(y_0,r)}|\nabla v|^p &\precsim C \lambda\int_{B^+(\tilde{y}_0,4r)}|\nabla u|^p + C\lambda^{1-p}r^{-p}\int_{B^+(\tilde{y}_0,4r)}|u-(u)_{B^+(\tilde{y}_0,4r)}|^p\\
&\le C \lambda\int_{B(y_0,4r)}|\nabla v|^p + C\lambda^{1-p}r^{-p}\int_{B^-(y_0,4r)}\left|\tilde{u}-(\tilde{u})_{B^-({y}_0,4r)}\right|^p.
\end{split}
\end{equation}
To estimate the remaining part we note that since $v=\frac{\tilde{u}}{|\tilde{u}|^2}$ we have $\tilde{u} = \frac{v}{|v|^2}$ in $\mathbb{R}^n_{-}$ { and for any $A\subset B(x_0,R)\setminus \mathbb{R}^n_+$: }
\begin{equation}\label{eq:growthestimateforv1}
\begin{split}
\mvint_A \left|\frac{v}{|v|^2} -\brac{\frac{v}{|v|^2}}_A \right|^p & {\precsim }\mvint_A \mvint_A \left|\frac{v(x)}{|v(x)|^2} - \frac{v(y)}{|v(x)|^2}\right|^p + \mvint_A \mvint_A \left|\frac{v(y)}{|v(x)|^2} - \frac{v(y)}{|v(y)|^2}\right|^p \\
& \precsim \|v^{-1}\|^{2p}_{L^{\infty}} \mvint_A \mvint_A |v(x)-v(y)|^p + \|v^{-1}\|^{{3p}}_{L^\infty}\mvint_A \mvint_A \left|{|v(x)|^2-|v(y)|^2}\right|^p.
\end{split}
\end{equation}
Now, since for any $a, b \in \mathbb{R}^N$,
\[
|a|^2 - |b|^2 = (|a|+|b|)(|a| - |b|) \le (|a|+|b|)|a-b|
\]
we have
\begin{equation}\label{eq:growthestimateforv2}
\begin{split}
\mvint_A \mvint_A \left|{|v(x)|^2-|v(y)|^2}\right|^p &\precsim \|v\|^{{p}}_{L^\infty(A)}\mvint_A \mvint_A |v(x) - v(y)|^p\\
&\precsim \|v\|^{{p}}_{L^\infty(A)}\mvint_A |v-(v)_A|^p,
\end{split}
\end{equation}
where the last inequality was obtained by adding and subtracting $(v)_A$ and by the triangle inequality. We deduce from \eqref{eq:growthestimateforv1} and \eqref{eq:growthestimateforv2} that
\[
\mvint_A \left|\frac{v}{|v|^2} -\brac{\frac{v}{|v|^2}}_A \right|^p \precsim \|v^{-1}\|^{2p}_{L^\infty(A)}(1+{\|v\|^{p}_{L^\infty(A)}\|v^{-1}\|^{p}_{L^\infty(A)}})\mvint_A |v-(v)_A|^p.
\]
Due to the fact that $|u|>\frac12$ and $u$ is uniformly bounded we get
\begin{equation}\label{eq:growthestimatesvmiddle}
\mvint_A |\tilde{u} -(\tilde{u})_A |^p \precsim \mvint_A |v - (v)_A|^{p} \quad \text{for any } A\subset B(x_0,R)\setminus\mathbb{R}^n_{+}.
\end{equation}}
To conclude, we note\footnote{\label{ft:AB} {Indeed, for any $\tilde{A}\subset A$ we have by enlarging the domain of integration and applying Jensen's inequality
\[
\mvint_{\tilde{A}} |w-(w)_{\tilde{A}}|^p \precsim \frac{|A|}{|\tilde{A}|} \mvint_{{A}} |w-(w)_{{A}}|^p.
\]
}} {that since $B(y_0,2r) \subset \mathbb{R}^n_-$ we have $\frac{|B(y_0,4r)|}{|B^-(y_0,4r)|}\approx 1$, thus}
\begin{equation}\label{eq:growthestimatesvfinal}
\mvint_{{B^-(y_0,4r)}} |v-(v)_{{B^-(y_0,4r)}}|^p \precsim \mvint_{B(y_0,4r)} |v-(v)_{B(y_0,4r)}|^p.
\end{equation}
Combining estimates \eqref{eq:growthestimatesv}, \eqref{eq:growthestimatesvmiddle}, and \eqref{eq:growthestimatesvfinal} we obtain \eqref{eq:growthcond}.
The second case $B(y_0,2r)\subset\mathbb{R}^n_+$ is easier and we leave it to the reader.
Finally, for \eqref{eq:growthcondinter} we {apply \eqref{eq:growth:pllnbd} and} observe that $|u|^2 \equiv 1$ on $\mathcal{I} := B(y_0,4r) \cap \partial \mathbb{R}^n_+$. Thus,
\[
\int_{B^+(y_0,4r)} \big ||u|^2-1 \big |^p \precsim \brac{\|u\|_{L^\infty}+1} \int_{B^+(y_0,4r)} \big ||u|-(|u|)_{\mathcal{I}} \big|^p.
\]
Now
\[
\big ||u(z)|-(|u|)_{\mathcal{I}} \big | \leq \mvint_{\mathcal{I}} \big | |u(z)|-|u(z_2)| \big |\ dz_2 \leq \mvint_{\mathcal{I}} \big | u(z)- u(z_2) \big |\ dz_2
\]
and thus
\[
\mvint_{B^+(y_0,4r)} \big ||u|-(|u|)_{\mathcal{I}} \big |^p \precsim \mvint_{B^+(y_0,4r)} \big |u -(u)_{\mathcal{I}} \big |^p + \mvint_{\mathcal{I}} \big |u -(u)_{\mathcal{I}}\big |^p.
\]
Proposition~\ref{pr:plapnup} is now established.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:main}]
For $p=n$ H\"older continuity for $u$ follows from Proposition~\ref{pr:main:hoelderpn}. For $p<n$ it follows from the combination of Proposition~\ref{pr:plapnup} and Proposition~\ref{pr:gensysreg}. Now $C^{1,\alpha}$-regularity follows from the reflection, Proposition~\ref{pr:plapnup}, and the fact that a H\"older continuous solution to the reflected system is $C^{1,\alpha}$ for some $\alpha > 0$, see \cite[Theorem 3.1.]{Hardt-Lin-1987} (which is stated for minimizers but actually only uses the continuity of the solution and the equation). See also \cite[Theorem 1.2.]{Riviere-Strzelecki-2005}.
Note that for $p=n$ there is also a more elegant argument to pass from $C^\alpha$ regularity to $C^{1,\alpha}$. Testing the equation \eqref{eq:soluspheredef} in $x$ and $x+h$ with $\varphi(x) := \eta(x) (v(x+h)-v(x))$ for a suitable cutoff function $\eta$ one obtains from the H\"older continuity of $u$ that for some $\sigma > 0$ we have $\nabla v \in W^{1+\sigma,n}$. In particular, by Sobolev embedding $\nabla v \in L^{(n,1)}_{loc}$, and from Duzaar-Mingione's work \cite{Duzaar-Mingione-2010} we get a Lipschitz bound for $v$. Now, $C^{1,\alpha}$-regularity is a consequence of the potential estimates for $p$-Laplace equations, see \cite{Kuusi-Mingione-2012,Kuusi-Mingione-2017}. We leave the details to the reader.
\end{proof}
\section{Partial Regularity: Proof of Theorem \ref{th:partialregularity}}\label{s:partialregularity}
For simplicity we assume {in this section that $B^+(0,R)\subset D \subset \mathbb{R}^n_+$ and $\partial D \cap B(0,R)= \partial \mathbb{R}^n_+\cap B(0,R)$}. We begin by recalling that a map $u\in W^{1,p}({B^+(0,R)},\mathbb{R}^N)$ is said to be \emph{stationary $p$-harmonic} with respect to the free boundary condition $u({\partial D \cap B(0,R)})\subset{\mathbb S}^{N-1}$ if in addition to \eqref{eq:soludef} it is a critical point of the energy with respect to variations in the domain. The latter is equivalent to $u$ satisfying
\begin{equation}\label{eq:firstvariation}
\int_{{B^+(0,R)}}|\nabla u|^{p-2}\brac{|\nabla u|^2 \delta_{ij} - p\,\partial_i u \,\partial_j u}\partial_i\xi^j =0
\end{equation}
for $\xi = (\xi^1,\ldots,\xi^n)\in C_c^\infty({\overline{\mathbb{R}^n_+}\cap B(0,R)},\mathbb{R}^n)$ with $\xi(\partial \mathbb{R}^n_+)\subset\partial\mathbb{R}^n_+$.
By choosing the test function as $\xi(x):=\psi(x)(x_0-x)$ in \eqref{eq:firstvariation}, where $\psi\in C^\infty_c({\overline{\mathbb{R}^n_+}\cap B(0,R)},[0,1])$ is a suitable bump function, one obtains the following.
\begin{lemma}[monotonicity formula]\label{lem:monotonicityformula}
Let $u\in W^{1,p}(B^+(0,R),\mathbb{R}^N)$ be a stationary $p$-harmonic map with respect to the free boundary condition $u( B^+(0,R)\cap\{x_n=0\})\subset{\mathbb S}^{N-1}$ and let $x_0 \in B^+(0,R)\cap \{x_n=0\}$. Then, the normalized $p$-energy is monotone. In particular,
\begin{equation}\label{eq:mono}
r^{p-n}\int_{B^+(x_0,r)}|\nabla u|^p -\rho^{p-n}\int_{B^+(x_0,\rho)}|\nabla u|^p = p\int_{B^+(x_0,r)\setminus B^+(x_0,\rho)}|x-x_0|^{p-n} {|\nabla u|^{p-2}}\, \left|\frac{\partial u}{\partial \nu}\right|^2
\end{equation}
for all $0<\rho<r<R-|x_0|$, where $\nu$ is the outward pointing unit normal for $\partial B(x_0,r)$, $\nu(x):=\frac{x-x_0}{|x-x_0|}$. For $x_0\in B^+(0,R)\setminus\partial\mathbb{R}^n_+$ the same holds if $r$ is such that $B^+(x_0,r) = B(x_0,r)\subset\mathbb{R}^n_+$.
\end{lemma}
This well-known fact was proved for Yang--Mills fields and stationary harmonic maps by Price \cite{Price}, see \cite{Evans1991,Bethuel1993} and also \cite[Section 2.4]{Simon}. Fuchs \cite{Fuchs} observed that \eqref{eq:mono} holds for stationary $p$-harmonic maps. As pointed out by Scheven \cite[p.137]{Scheven-2006} the proof holds true in the case of free boundary condition.
We will need the following lemma (see, e.g., \cite[Corollary 3.2.3.]{Ziemer}).
\begin{lemma}[Frostman's lemma]\label{le:frostman}
If $f\in L^{p}(\mathbb{R}^n)$, $p\ge 1$, and $0\le\alpha<n$, then for
\[
E := \left\{x\in \mathbb{R}^n:\limsup_{r\rightarrow 0} r^{-\alpha}\int_{B(x,r)}|f(y)|^p >0 \right\},
\]
we have ${\mathcal H}^{\alpha}(E)=0$.
\end{lemma}
We shall show, using monotonicity formula \eqref{eq:mono} and Frostman's Lemma \ref{le:frostman}, that the set outside which the condition \eqref{eq:thmain:smallnesscond} is satisfied is of zero $(n-p)$-Hausdorff measure. We then obtain Theorem \ref{th:partialregularity} from Theorem \ref{th:main}.
\begin{proof}[Proof of Theorem~\ref{th:partialregularity}]
Let
\begin{equation*}
S := \left\{ x\in \overline{\mathbb{R}^n_+} : \limsup_{r\rightarrow 0} r^{p-n}\int_{B^+(x,r)}|\nabla u|^p >0 \right\},
\end{equation*}
by Lemma \ref{le:frostman}, we have ${\mathcal H}^{n-p}(S)=0$.
We define for $\epsilon$ as in Theorem \ref{th:main}
\[
\Sigma_{\epsilon} := \left\{x \in \overline{\mathbb{R}^n_+} : \forall R>0\colon\ \sup_{|y_0 - x|<R}\,\sup_{\rho<R}\rho^{p-n}\int_{B^+(y_0,\rho)}|\nabla u|^p \ge \epsilon \right\},
\]
clearly $\Sigma_{\epsilon}$ is a closed set. We will prove that ${\mathcal H}^{n-p}(\Sigma_\epsilon)=0$. Then Theorem \ref{th:partialregularity} is a consequence of Theorem \ref{th:main}.
Let $A_\epsilon$ be the set on which the condition \eqref{eq:thmain:smallnesscond} is satisfied for $\epsilon$, i.e.,
\[
A_{\epsilon} := \overline{\mathbb{R}^n_+}\setminus\Sigma_\epsilon = \left\{x\in \overline{\mathbb{R}^n_+}: \exists R>0 \mbox{ such that } \sup_{|y_0 - x|<R}\sup_{\rho<R}\rho^{p-n}\int_{B^+(y_0,\rho)}|\nabla u|^p <\epsilon \right\}.
\]
In order to prove the theorem it suffices to show that $\brac{\overline{\mathbb{R}^n_+}\setminus S}\subseteq A_\epsilon$.
Let $x_0 \in \brac{\overline{\mathbb{R}^n_+}\setminus S}$, i.e., such that $\limsup_{r\rightarrow 0} r^{p-n}\int_{B^+(x_0,r)}|\nabla u|^p =0$. There exists an $R>0$ such that
\[ R^{p-n}\int_{B^+(x_0,R)}|\nabla u|^p<4^{p-n}\epsilon.\]
We shall show that
\[
\sup_{|y_0 - x_0|<R/4}\sup_{\rho\, < R/4} \rho^{p-n}\int_{B^+(y_0, \rho)}|\nabla u|^p < \epsilon.
\]
Choose any $y_0$ such that $|y_0-x_0|<R/4$ and any radius $\rho<R/4$. First observe that we may take $y_0\in \overline{\mathbb{R}^n_+}$. {Indeed, suppose that $y_1\in B(x_0,R/4)\cap\mathbb{R}^n_-$, then for any $\rho<R/4$ we can choose $y_0\in B(x_0,R/4)\cap\overline{\mathbb{R}^n_+}$ such that $B(y_1,\rho)\cap\overline{\mathbb{R}^n_+}\subset B(y_0,\rho)\cap \overline{\mathbb{R}^n_+}$ thus}
\[
\sup_{|y_1 - x_0|<R/4}\sup_{\rho\, < R/4} \rho^{p-n}\int_{B^+(y_1, \rho)}|\nabla u|^p = \sup_{y_0\in B(x_0,R/4)\cap\overline{\mathbb{R}^n_+}}\sup_{\rho\, < R/4} \rho^{p-n}\int_{B^+(y_0, \rho)}|\nabla u|^p.
\]
Now assume that $y_0\in\partial\mathbb{R}^n_+$. We have $B^+(y_0,\rho)\subset B^+(y_0,R/4)\subset B^+(x_0,R)$. Thus
\[
\rho^{p-n}\int_{B^+(y_0,\rho)}|\nabla u|^p \le \brac{\frac R4}^{p-n}\int_{B^+(y_0,R/4)}|\nabla u|^p \le 4^{n-p} R^{p-n}\int_{B^+(x_0,R)}|\nabla u|^p < \epsilon,
\]
where the first inequality is a consequence of the monotonicity formula \eqref{eq:mono}.
Now, let us assume that $y_0\notin\partial\mathbb{R}^n_+$. Let $\overline{\rho} = {\rm dist\,}(y_0,\partial\mathbb{R}^n_+)$ and $\overline{y}_0$ be the projection of $y_0$ onto $\partial\mathbb{R}^n_+$. We can assume that $\rho<\overline{\rho}$. Indeed, if not we would have
\begin{align*}
\rho^{p-n}\int_{B^+(y_0,\rho)}|\nabla u|^p &\le \rho^{p-n}\int_{B^+(\overline{y}_0,2\rho)}|\nabla u|^p = 2^{n-p}(2\rho)^{p-n}\int_{B^+(\overline{y}_0,2\rho)}|\nabla u|^p\\
&\le 2^{n-p}\brac{\frac R2}^{p-n}\int_{B^+(\overline{y}_0,R/2)}|\nabla u|^p\le 4^{n-p}R^{p-n}\int_{B^+(x_0,R)}|\nabla u|^p < \epsilon.
\end{align*}
Next, we note that $\overline{\rho}<R/4$ and
observe the following inclusions
\[
B(y_0,\rho)\subset B(y_0,\overline{\rho})\subset B^+(\overline{y}_0,2\overline{\rho})\subset B^+(\overline{y}_0,R/2)\subset B^+(x_0,R)\]
and the following inequalities which are consequences of the monotonicity formula \eqref{eq:mono}:
\begin{align*}
\rho^{p-n}\int_{B(y_0,\rho)} |\nabla u|^p &\le \brac{\overline{\rho}}^{p-n}\int_{B(y_0,\overline{\rho})} |\nabla u|^p,\\
(2\overline{\rho})^{p-n}\int_{B^+(\overline{y}_0,2\overline{\rho})}|\nabla u|^p &\le \brac{\frac R2}^{p-n}\int_{B^+(\overline{y}_0,R/2)} |\nabla u|^p.
\end{align*}
Thus
\begin{align*}
\rho^{p-n}\int_{B(y_0,\rho)} |\nabla u|^p &\le \brac{\overline{\rho}}^{p-n}\int_{B(y_0,\overline{\rho})} |\nabla u|^p \le 2^{n-p} (2\overline{\rho})^{p-n}\int_{B^+(\overline{y}_0,2\overline{\rho})}|\nabla u|^p\\
&\le 2^{n-p} \brac{\frac R2}^{p-n}\int_{B^+(\overline{y}_0,R/2)}|\nabla u|^p\le 4^{n-p} R^{p-n}\int_{B^+(x_0,R)}|\nabla u|^p < \epsilon,
\end{align*}
which gives $x_0\in A_\epsilon$.
We conclude $\Sigma_\epsilon\subset S$ and thus ${\mathcal H}^{n-p}(\Sigma_\epsilon)=0$.
\end{proof}
\subsection{A Liouville type result}
{We note that the monotonicity formula in Lemma~\ref{lem:monotonicityformula} can be used to prove partial regularity but also Liouville type results in the spirit of \cite{Liu_2010}. Indeed, if we work in $\mathbb{R}^n_+$, for $u\in \dot{W}^{1,p}(\mathbb{R}^n_+,\mathbb{R}^N)$ we can say that $u$ is stationary $p$-harmonic with respect to the free boundary condition $u(\partial \mathbb{R}^n_+)\subset \mathbb{S}^{N-1}$ if $u$ satisfies \eqref{eq:soludef} and \begin{equation}\label{eq:firstvariation2}
\int_{\mathbb{R}^n_+}|\nabla u|^{p-2}\brac{|\nabla u|^2 \delta_{ij} - p\,\partial_i u \,\partial_j u}\partial_i\xi^j =0
\end{equation}
for $\xi = (\xi^1,\ldots,\xi^n)\in C_c^\infty(\overline{\mathbb{R}^n_+},\mathbb{R}^n)$ with $\xi(\partial \mathbb{R}^n_+)\subset\partial\mathbb{R}^n_+$. We then have
\begin{proposition}\label{pr:Liouvilletype}
Let $2\le p<n$ and $u\in \dot{W}^{1,p}(\mathbb{R}^n_+,\mathbb{R}^N)$ be such that $u$ is a finite energy, stationary $p$-harmonic map with respect to the free boundary condition $u(\partial \mathbb{R}^n_+)\subset \mathbb{S}^{N-1}$, then $u$ is constant.
\end{proposition}
\begin{proof}
By contradiction, assume that $u$ is not constant. Then there exists $R_0>0$ such that $\int_{B^+(0,R_0)}|\nabla u|^p \geq c>0$. Now by the monotonicity formula of Lemma~\ref{lem:monotonicityformula} we have that for any $R>R_0$
\begin{equation}
\int_{B^+(0,R)} |\nabla u|^p\geq \left(\frac{R}{R_0}\right)^{n-p} \int_{B^+(0,R_0)}|\nabla u|^p\geq \left(\frac{R}{R_0}\right)^{n-p} c.
\end{equation}
Letting $R$ go to $+\infty$, we obtain that the $p$-energy of $u$ in $\mathbb{R}^n_+$ is infinite. This is a contradiction, since we assumed that $u\in \dot{W}^{1,p}(\mathbb{R}^n_+,\mathbb{R}^N)$.
\end{proof}
}
\subsection*{Acknowledgments}
A.S. and K.M. are supported by the German Research Foundation (DFG) through grant no.~SCHI-1257-3-1. A.S. received research funding from the Daimler and Benz Foundation, grant no.~32-11/16; support from the Simons Foundation through grant no.~579261 is gratefully acknowledged. A.S. was a Heisenberg fellow. R.R. was supported by the Millennium Nucleus Center for Analysis of PDE NC130017 of the Chilean Ministry of Economy and by the F.R.S.-FNRS under the ``Mandat d'Impulsion scientifique F.4523.17, Topological singularities of Sobolev maps''. {We thank M. Willem for indicating to us the proof of Proposition \ref{th:boundedness-in-unbounded}. The authors would like to thank the anonymous referees for their helpful suggestions.}
\section{Introduction}
Atmospheric aerosols remain a large source of uncertainty in prediction of changes and variability in the climate system \citep{myhre2001historical, kaufman2002satellite, carslaw2013large, stocker2013ipcc}. Particle characteristics relevant for climate, such as optical cross sections and the critical supersaturation at which particles serve as cloud condensation nuclei (CCN), depend on particle size, shape, and chemical composition \citep[e.g.][]{jacobson2001strong,chung2011effect,cubison2008influence,ervens2010ccn}. Observations show tremendous variability in particle physical and chemical properties \citep[e.g.][]{krieger2012exploring, zhang2014variation}, but complex distributions of multicomponent particles are not easily represented in large-scale models. In this study, we seek a sparse representation of aerosol populations that resolves enough information about particle size and composition distributions to adequately represent climate-relevant properties, while resolving as little information as is necessary in order to minimize computational costs.
Many numerical models have been developed to simulate the aerosol evolution \citep[e.g.][]{wexler1994modelling,mcgraw1997description,jacobson1997development,penner1998climate,binkowski2003models,stier2005aerosol,bauer2008matrix,riemer2009simulating}. The simplest models track particle mass only, without any resolution of particle size or composition \citep[e.g.][]{haywood1997general,myhre1998estimation,penner1998climate,cooke1999construction,lesins2002study}. At the other extreme, the Particle Monte Carlo Model (PartMC) \citep{riemer2009simulating} coupled to the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) \citep{zaveri2008model} tracks the evolution of thousands of individual particles, but is computationally expensive. Many large-scale models now include aerosol microphysical schemes that simulate the evolution of the particle size distribution, but resolve limited information about particle composition. Sectional models track the evolution of particles in different size bins, typically assuming that particles of the same size have identical composition \citep[e.g.][]{wexler1994modelling,jacobson1997development}. Sectional models have also been extended to resolve multivariate distributions with respect to particle composition, such as particle hygroscopicity parameter or BC mass fraction, in addition to particle size \citep{oshima2009aging,matsui2013development,zhu2015size}, or to track separate aerosol populations \citep{kleeman20013d,jacobson2002analysis}. Because sectional schemes simulate the evolution of a discretized representation of the full particle size distribution or, in the multivariate case, the particle size-composition distribution, they require tracking a large number of bins to resolve aerosol populations.
Moment-based models are a class of methods for simulating the evolution of probability density functions in which integral quantities over the distribution, such as radial moments, are tracked, rather than evolving the distribution itself \citep{hulburt1964some,mcgraw1997description}. In this way, moment-based models represent the aerosol using an intermediate level of detail between the simple mass-based aerosol schemes and the more detailed particle-resolved and sectional representations. To date, moment-based aerosol representations have been implemented in global models as modal or monodisperse schemes that track only two low-order radial moments for each population \citep[e.g.][]{stier2005aerosol,bauer2008matrix}, typically aerosol number and volume or mass. Monodisperse aerosol models follow a similar approach \citep{pirjola2003monodisperse}. Modal models begin by assuming a specific distribution shape, typically lognormal, track two moments per mode, and assume a prescribed geometric standard deviation. Particles within a mode are further assumed to contain identical mass fractions of the constituent aerosol species. However, comparison between modal and sectional aerosol schemes show that modal models can yield inaccurate representation of the particle size distribution \citep{mann2012intercomparison}. Further, inadequate resolution of variability in composition within each mode leads to errors in prediction of climate-relevant properties \citep{fierce2016black,fierce2016toward}.
\blue{The present study introduces a new framework for constructing aerosol representations in moment-based models,} including modal models. Focusing on particle CCN activation properties, traditionally a difficult case for moment-based representations \citep{wright2001description,wright2002retrieval}, we show how efficient quadrature representations can be constructed from moments of aerosol distributions bivariate with respect to dry diameter and hygroscopicity parameter. By applying a transformation of variables we construct sparse quadrature point distributions of the aerosol in terms of the critical saturation for cloud droplet activation. Continuous distributions are obtained off line using the maximum entropy spectral representation constrained by the same moment/quadrature parameters that defined the sparse distribution. We show that this approach yields accurate prediction of CCN activity using only a limited number of moments.
\section{Constructing sparse and continuous representations from distribution moments}\label{sec:methods}
Here we describe a new method for utilizing moments and moment-based, multivariate quadrature approximation to obtain sparse representations of benchmark PartMC-generated aerosol populations. Full size distributions, derived in terms of the sparse quadrature representation, are obtained using the principle of maximum entropy. Putting these two steps together we demonstrate that multivariate quadratures can be used to accurately compute CCN activities for the benchmark population.
\subsection{Computing CCN activity for benchmark populations}\label{sec:method_benchmark}
The particle-resolved model PartMC-MOSAIC was used to generate realistically complex particle populations (see Appendix~\ref{sec:appendix_benchmark}), which were used to benchmark the new method. \blue{PartMC-MOSAIC tracks the masses of 20 aerosol components, including water, in thousands of computational particles, corresponding to a stochastic representation of the 20-dimensional distribution that describes aerosol mixing state.} The procedure for computing CCN activity from a particle-resolved population is shown in Figure~\ref{fig:flowchart}a--\ref{fig:flowchart}c. The particle-resolved simulations reveal complex variation in particle size and composition, as illustrated by the number density distribution with respect to two coordinates: dry diameter $D_{\text{dry}}$, which is the volume-equivalent diameter for all species except water, and effective hygroscopicity parameter $\kappa$, which is the volume-weighted hygroscopicity parameter for each constituent aerosol species \citep{petters2007single}. \blue{The bivariate distribution $n(D_{\text{dry}},\kappa)$ is the reduced form of the 20-dimensional distribution with respect to aerosol components that is most relevant for CCN activation, but the procedure described in this study can be extended to different projections of the particle size-composition distribution for aerosol dynamics simulations and calculation of aerosol optical properties.} The bivariate number distribution $n(D_{\text{dry}},\kappa)$ for the benchmark population is shown in Figure~\ref{fig:flowchart}a, where $n(D_{\text{dry}},\kappa)$ is normalized by the total particle number concentration. Because the particle-resolved model resolves composition at the particle level, it is uniquely suited for benchmarking approximate aerosol representations, including the quadrature-based approach discussed here.
For the particle-resolved population, the normalized number distribution with respect to critical supersaturation, $n(s_{\text{c}})$, is shown in Figure~\ref{fig:flowchart}b. The critical supersaturation at which each simulated particle becomes CCN-active, $s_{\text{c}}$, is computed as a function of $D_{\text{dry}}$ and $ \kappa$ (see Appendix~\ref{sec:appendix_sc}). The overall number fraction of particles able to serve as CCN at each supersaturation threshold $N_{\text{CCN}}(s)/N$, where $N$ is the total particle number concentration, is then computed as the cumulative distribution of $n(s_{\text{c}})$, shown for this benchmark population in Figure~\ref{fig:flowchart}c.
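For concreteness, the following minimal sketch reproduces this computation for a weighted particle population. It is our illustration, not the PartMC-MOSAIC code: the closed-form critical supersaturation below is the large-$\kappa$ approximation of $\kappa$-K\"ohler theory \citep{petters2007single}, whereas Appendix~\ref{sec:appendix_sc} describes the computation actually used; the constants and function names are illustrative.

\begin{verbatim}
import numpy as np

def critical_supersaturation(d_dry, kappa, T=293.15):
    # Large-kappa closed form of kappa-Koehler theory
    # (Petters & Kreidenweis, 2007); s_c returned as a
    # fraction (0.01 corresponds to 1% supersaturation).
    sigma_w, M_w = 0.072, 0.018     # J m^-2, kg mol^-1
    rho_w, R_gas = 1000.0, 8.314    # kg m^-3, J mol^-1 K^-1
    A = 4.0 * sigma_w * M_w / (R_gas * T * rho_w)  # Kelvin term [m]
    return np.exp(np.sqrt(4.0 * A**3
                          / (27.0 * kappa * d_dry**3))) - 1.0

def ccn_spectrum(d_dry, kappa, w, s_grid):
    # N_CCN(s)/N: weighted number fraction with s_c <= s.
    s_c = critical_supersaturation(d_dry, kappa)
    order = np.argsort(s_c)
    cum = np.cumsum(w[order]) / np.sum(w)
    return np.interp(s_grid, s_c[order], cum, left=0.0, right=1.0)
\end{verbatim}

For example, at $T=293$~K a particle with $D_{\text{dry}}=100$~nm and $\kappa=0.5$ yields $s_{\text{c}}\approx0.17\%$.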
\subsection{Constructing quadratures from moments of $n(D_{\text{dry}},\kappa)$}\label{sec:method_quadPoints}
Figure~\ref{fig:flowchart}d shows the location of quadrature abscissas for a specific quadrature approximation for the bivariate distribution shown in Figure~\ref{fig:flowchart}a, derived from a specific set of moment constraints $\mu_k$. A generalized moment $\mu_k$ of the distribution $n(D_{\text{dry}},\kappa)$ is an integral over $n(D_{\text{dry}},\kappa)$ with respect to some kernel function of the coordinates, $\phi_k(D_{\text{dry}},\kappa)$:
\begin{equation}\label{eqn:constraints}
\mu_k=\int_0^{\infty}\int_0^{\infty}n(D_{\text{dry}},\kappa)\phi_k(D_{\text{dry}},\kappa)dD_{\text{dry}}d\kappa.
\end{equation}
For example, for $\phi_k(D_{\text{dry}},\kappa)=D_{\text{dry}}^{m}$, $\mu_k$ is the \blue{$m^{\text{th}}$} \blue{power moment with respect to diameter} and, therefore, independent of $\kappa$. Here, we computed constraints in terms of modified moments \citep{press1990numerical}, which are linear combinations of the power moments (see Appendix~\ref{sec:appendix_moments}). Continuity equations, expressed in terms of moments, can be solved exactly in closed form only in very special cases, but approximate closure can be achieved by representing integrals over the number distribution using numerical quadrature. \blue{Remarkably}, to be accurate, quadrature-based models require tracking only a small number of quadrature abscissas and weights\blue{;} the distribution itself is not required. The quadrature approximation of $n(D_{\text{dry}},\kappa)$ takes the form:
\begin{equation}\label{eqn:quadrature}
\mu_k\approx\sum_{i=1}^{N_{\text{q}}}w_i\phi_k(D_{\text{dry},i},\kappa_i),
\end{equation}
such that $\mu_k$, expressed in terms of the bivariate \blue{abscissa coordinates}, $D_{\text{dry},i}$ and $\kappa_i$, and weights, $w_i$, is identical to the result of Equation~\ref{eqn:constraints} for $k=1,...,N_{\text{q}}$. \blue{In this way, for each} $\mu_k$, the summation over $N_{\text{q}}$ quadrature points in Equation~\ref{eqn:quadrature} is exact. Algorithms have been developed to construct quadrature approximations that satisfy power moments and modified moments of univariate distributions \citep{gordon1968error,press1990numerical}, and these approximations yield on the order of $N_{\text{q}}/2$ quadrature points for $N_{\text{q}}$ moment constraints. This paper introduces a new technique for generating a quadrature representation from sets of generalized constraints, which provides a distributed set of $N_{\text{q}}$ quadrature points for $N_{\text{q}}$ generalized moments and is also suitable for constructing quadrature for multivariate distributions.
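As an illustration, the discrete analogue of Equation~\ref{eqn:constraints} for a weighted particle population can be evaluated as sketched below. This is our sketch only: the paper's modified moments are specified in Appendix~\ref{sec:appendix_moments} and are not reproduced here, and the Legendre basis on logarithmic coordinates rescaled to $[-1,1]$, together with the coordinate ranges, are assumptions made for the example.

\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def rescale_log(x, lo, hi):
    # Map log10(x) affinely from [lo, hi] onto [-1, 1].
    return 2.0 * (np.log10(x) - lo) / (hi - lo) - 1.0

def bivariate_moments(d_dry, kappa, w, orders):
    # mu_k = sum_i w_i P_m(xi_i) P_n(eta_i) for (m, n) in `orders`;
    # the coordinate ranges below are illustrative assumptions.
    xi  = rescale_log(d_dry, -8.0, -5.0)   # 1e-8 .. 1e-5 m
    eta = rescale_log(kappa, -3.0,  0.0)   # 1e-3 .. 1
    return np.array([np.sum(w * eval_legendre(m, xi)
                              * eval_legendre(n, eta))
                     for (m, n) in orders])
\end{verbatim}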
We seek a quadrature approximation that can be derived from a set of known constraints $\mu_k$. Ideally, the quadrature abscissas $x_i$ will be distributed across the variable space in order to resolve key features of the probability distribution. We found that a distributed set of abscissas, \{$D_{\text{dry},i}$, $\kappa_i$\}, and weights, $w_i$, can be constructed through a linear program that maximizes an entropy-inspired cost function (see Appendices~\ref{sec:appendix_entropy}~and~\ref{sec:appendix_linearprogram}).
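One plausible realization of this step is sketched below; the actual candidate grid and the entropy-inspired cost vector are those of Appendices~\ref{sec:appendix_entropy}~and~\ref{sec:appendix_linearprogram} and are not reproduced here, so the score $c$ in the sketch is a placeholder. A useful consequence of this formulation is that a vertex (basic) solution of a linear program with $N_{\text{q}}$ equality constraints has at most $N_{\text{q}}$ nonzero weights, consistent with the $N_{\text{q}}$-point quadrature described above.

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def quadrature_weights(phi, mu, c):
    # phi: (N_q, J) with phi[k, j] = phi_k(x_j) on J candidate
    # abscissas; mu: (N_q,) target moments; c: (J,) linear,
    # entropy-inspired score to maximize (a placeholder here).
    res = linprog(-c, A_eq=phi, b_eq=mu,
                  bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    w = res.x
    keep = w > 1e-12 * w.max()   # the surviving quadrature points
    return w[keep], keep
\end{verbatim}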
\subsection{Reconstructing $n(s_{\text{c}})$ from quadrature}\label{sec:method_reconstructDist}
For aerosol representations that track only quadrature points (Figure~\ref{fig:flowchart}d), a procedure is required if one seeks a corresponding continuous distribution, and often this can be done offline. We propose to transform the bivariate abscissas, \{$D_{\text{dry},i}$, $\kappa_i$\}, into univariate abscissas, $s_{\text{c},i}$, for each quadrature point $i=1,...,N_{\text{q}}$, \blue{preserving the weights}, as shown in Figure~\ref{fig:flowchart}e. The moments of $n(s_{\text{c}})$, $\mu_l$, can then be estimated using the transformed quadrature:
\begin{equation}\label{eqn:sc_quadrature}
\mu_l=\sum_{i=1}^{N_{\text{q}}}w_i\phi_l(s_{\text{c},i}).
\end{equation}
The number of moments in $s_{\text{c}}$ space, $N_{\text{s}}$, need not be the same as the number of moments in $D_{\text{dry}}$-$\kappa$ space, $N_{\text{q}}$.
The desired continuous distribution $n(s_{\text{c}})$, shown in Figure~\ref{fig:flowchart}f, is constructed by finding the distribution having maximum entropy, given the approximated moments in $s_{\text{c}}$ space, $\hat{\mu}_l$ for $l=1,...,N_{\text{s}}$ (see Appendix~\ref{sec:appendix_entropy}). \blue{We use $\hat{\mu}_l$ to indicate the moments in $s_{\text{c}}$ space approximated from the quadrature and $\mu_l$ to indicate exact moments computed from the benchmark population, which would not be known in a quadrature-based simulation.} Similarly, we denote the reconstructed distribution as $\hat{n}(s_{\text{c}})$ to differentiate it from the benchmark distribution $n(s_{\text{c}})$. If only the moments of a distribution are known, the distribution that best represents \blue{the state of knowledge afforded by the moment constraints is the distribution having maximal entropy \citep{jaynes1957information,jaynes1957information2}}. Distributions of maximum entropy can be constructed for any set of moment constraints and are not limited to univariate aerosol distributions, \blue{but here we confine ourselves to the univariate case}. \blue{In Section~\ref{sec:optimalMoments}, we show that the accuracy of the reconstructed distribution depends strongly on the set of moments $\mu_l$ chosen to reconstruct the distribution and the accuracy with which the moments approximated from the quadrature $\hat{\mu}_l$ represent the moments taken directly from the particle-resolved distribution $\mu_l$.}
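For the univariate reconstruction, the maximizer of entropy subject to moment constraints has the exponential-family form $\hat{n}(s_{\text{c}}) \propto \exp\left(\sum_l \lambda_l \phi_l\right)$, and the multipliers $\lambda_l$ solve a convex dual problem. The sketch below is our illustration of this standard construction; the basis functions, the rescaled $\ln s_{\text{c}}$ grid, and the optimizer are assumptions and may differ from Appendix~\ref{sec:appendix_entropy}.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import eval_legendre

def maxent_density(mu_hat, orders, x):
    # Density on grid x (here: rescaled ln s_c in [-1, 1]) whose
    # Legendre moments match mu_hat; mu_hat are assumed normalized,
    # and `orders` excludes l = 0 since normalization is handled
    # by the partition function Z.
    Phi = np.array([eval_legendre(l, x) for l in orders])

    def dual(lam):   # convex; gradient is E_p[phi] - mu_hat
        logZ = np.log(np.trapz(np.exp(lam @ Phi), x))
        return logZ - lam @ mu_hat

    lam = minimize(dual, np.zeros(len(orders)), method="BFGS").x
    p = np.exp(lam @ Phi)
    return p / np.trapz(p, x)
\end{verbatim}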
\begin{figure}
\begin{center}
\includegraphics[width = 3in,trim=0.9in 0 0.67in 0,clip=true]{quad_flowchart.pdf}
\caption[flowchart]{\label{fig:flowchart} Procedure for computing CCN spectrum from particle-resolved model (a--c) and from quadrature-based representation (d--g): (a) bivariate number distribution with respect to $D_{\text{dry}}$ and $\kappa$ from particle-resolved model, (b) number distribution with respect to $s_{\text{c}}$ from particle-resolved model, (c) $N_{\text{CCN}}(s)/N$ from particle-resolved model, (d) location of abscissas in $D_{\text{dry}}$-$\kappa$ space for quadrature, (e) abscissas transformed into $s_{\text{c}}$ and weights for quadrature, (f) maximum entropy reconstruction of number density with respect to $s_{\text{c}}$ from quadrature, and (g) $N_{\text{CCN}}(s)/N$ for quadrature.}
\end{center}
\end{figure}
\subsection{Evaluating error in $N_{\text{CCN}}(s)/N$}
The aim of this study is to find a moment-based representation that accurately reproduces CCN activation properties of benchmark aerosol populations from the particle-resolved model PartMC-MOSAIC. To this end, we apply a genetic algorithm (see Appendix~\ref{sec:appendix_geneticalgorithm}) to find optimal sets of bivariate moments to be constrained in constructing the bivariate quadrature. We define $N_{\text{CCN}}(s)$ and $\hat{N}_{\text{CCN}}(s)$ as the number concentrations of CCN from the particle-resolved model and from a quadrature representation, respectively; $N$ denotes the overall number concentration of particles, which is constrained to be identical in both cases. The error $\varepsilon$ is quantified as the mean squared error:
\begin{equation}
\varepsilon = \frac{1}{N}\int_0^{\infty}\big(\hat{N}_{\text{CCN}}(s)-N_{\text{CCN}}(s)\big)^2ds.
\end{equation}
The following sections describe methods for identifying \blue{optimized} sets of moment constraints for constructing the quadrature in $D_{\text{dry}}$-$\kappa$ space $\mu_k$ and \blue{optimized} sets of moments for reconstructing continuous distributions in $s_{\text{c}}$ space. We define \blue{optimized} moment sets as those yielding the best representation of the CCN activation spectrum, indicated by small mean squared error $\varepsilon$.
\section{Moment constraints for aerosol simulations}\label{sec:optimalMoments}
Although the proposed maximum-entropy approaches for constructing quadrature approximations and continuous distributions can yield accurate representations of the benchmark populations, here we show that this accuracy depends strongly on the selection of moment constraints. \blue{The moments with respect to $s_{\text{c}}$ computed from the bivariate quadrature, $\hat{\mu}_l$ are used to construct the continuous distribution $\hat{n}(s_{\text{c}})$} (see Section~\ref{sec:method_reconstructDist}); Section~\ref{sec:optimalMoments_sc} introduces the necessary and sufficient set of constraints $\mu_l$ to accurately reconstruct $n(s_{\text{c}})$. The accuracy of the continuous distribution $\hat{n}(s_{\text{c}})$ depends on the degree to which the projected quadrature points, which are originally constructed in $D_{\text{dry}}$-$\kappa$ space (see Section~\ref{sec:method_quadPoints}), approximate the true moments of $n(s_{\text{c}})$ for the benchmark distribution; Section~\ref{sec:optimalMoments_diaKap} describes a procedure for selecting moment constraints of $n(D_{\text{dry}},\kappa)$ required to construct \blue{accurate} quadrature approximations of $n(D_{\text{dry}},\kappa)$ and $\hat{n}(s_{\text{c}})$.
\subsection{Moments of $n(s_{\text{c}})$ for reconstructing distributions}\label{sec:optimalMoments_sc}
In this section, we identify the set of moments, $\mu_l$ for $l=1,..,N_{\text{s}}$, that \blue{yields the highest accuracy for the reconstructed distribution} $\hat{n}(s_{\text{c}})$. We show that the accuracy of the reconstruction depends on the degree to which the quadrature is able to reproduce the moments taken directly from the particle-resolved distribution $n(s_{\text{c}})$.
Figure~\ref{fig:Sc_moments}a shows distributions constructed from the quadrature using selected combinations of moments $\mu_l$. We found that $n(s_{\text{c}})$ can be estimated with high accuracy using the six modified moments $\mu_l$, corresponding to Legendre polynomials of power $l=0,1,3,4,7,8$, \blue{computed with respect to $\ln{s}_{\text{c}}$} (see Appendix~\ref{sec:appendix_moments}). Provided these six moments can be computed with high accuracy from projections of the quadrature approximation of $n(D_{\text{dry}},\kappa)$, the moment-based representation reproduces the CCN spectrum $N_{\text{CCN}}(s)/N$ of the benchmark populations with high accuracy, as shown through comparison between green and black lines in Figure~\ref{fig:Sc_moments}b.
\begin{figure}
\begin{center}
\includegraphics[width = 2.5in]{compare_sc_moments_t0002.png}
\caption[Sc_moments]{\label{fig:Sc_moments} Comparison between benchmark (black dashed lines) and reconstructions (colored lines) of (a) number density with respect to $s_{\text{c}}$ and (b) $N_{\text{CCN}}(s)/N$ \blue{for different combinations of moments of $n(s_{\text{c}})$, computed directly from the benchmark population, reveals that only~six moments $\mu_l$ are needed, but these moments must be carefully selected.}}
\end{center}
\end{figure}
\subsection{Moments of $n(D_{\text{dry}},\kappa$) for constructing bivariate quadrature}\label{sec:optimalMoments_diaKap}
A good bivariate quadrature representation of the aerosol must reproduce key moments of $n(s_{\text{c}})$ on projection, which are used to construct the CCN activation spectrum $N_{\text{CCN}}(s)/N$. Selecting the optimal six moments from the ten candidates, as was done for selecting key moments of $n(s_{\text{c}})$, requires testing a total of 210 combinations. On the other hand, selecting optimal combinations of bivariate moments requires testing unfeasibly large sets of combinations. For example, here we identify combinations of bivariate Legendre moments of power $n=-3,..,8$ (Equation \ref{eqn:legendre_DdryKap}) for each of the two variables, $D_{\text{dry}}$ and $\kappa$, yielding 144 possible bivariate moments. Selecting the eight optimal moments from the 144 candidates would require performing the full procedure for \mbox{$3.8\times10^{12}$} combinations. Instead, we applied a genetic algorithm to find suitable sets of bivariate moments $\mu_k$, $k=1,..,N_{\text{q}}$, \blue{defined by Equations~\ref{eqn:constraints}~and~\ref{eqn:quadrature}}, that should be tracked in moment-based models for accurate representation of CCN properties.
For a single population, the CCN spectrum computed from the quadrature, $\hat{N}_{\text{CCN}}(s)/N$, is compared with the particle-resolved CCN spectrum, $N_{\text{CCN}}(s)/N$, in Figure~\ref{fig:DiaKap_moments}, for 20~randomly selected combinations (blue lines) of $N_{\text{q}}=8$ moments and for 20~combinations that were selected using the genetic algorithm (orange lines). Although the proposed procedure yields high accuracy in the approximated CCN spectra $\hat{N}_{\text{CCN}}(s)/N$ for the moments selected by the genetic algorithm, the large errors in $\hat{N}_{\text{CCN}}(s)/N$ for randomly chosen moments illustrate that the accuracy of the procedure depends strongly on the choice of moment constraints used in the assignment of the bivariate quadrature points.
\begin{figure}
\begin{center}
\includegraphics[width = 2.5in]{ga_solutions_t0007.png}
\caption[DiaKap_moments]{\label{fig:DiaKap_moments} Comparison between benchmark (black dashed line), reconstructions from randomly selected moments (blue lines), and reconstructions from moments chosen with the genetic algorithm (orange lines) shows that care must be taken in the selection of bivariate moment constraints for construction of the quadrature.}
\end{center}
\end{figure}
\section{Sparse approximation for efficient and accurate aerosol representation}
For the selected moments of $n(D_{\text{dry}},\kappa)$, $\mu_k$, and the selected univariate moments for estimating $n(s_{\text{c}})$, $\mu_l$, the quadrature-based representation accurately reproduces CCN activation for the three bivariate benchmark populations shown in Figure~\ref{fig:DiaKap_quad}. The \blue{location of quadrature abscissas} associated with a set of $N_{\text{q}}=8$ moments (white dots) are superimposed on benchmark distributions (surface plots). The quadrature was constructed using the approach outlined in Section~\ref{sec:method_quadPoints} for moments of $n(D_{\text{dry}},\kappa)$, where the specific combinations of moment constraints $\mu_k$ for $k=1,...,N_{\text{q}}$ were identified using the genetic algorithm (Section~\ref{sec:optimalMoments_diaKap} and Appendix~\ref{sec:appendix_geneticalgorithm}). \blue{Optimized bivariate moments identified with the genetic algorithm are listed in Table~\ref{tab:optimized_moments} of Appendix~\ref{sec:appendix_geneticalgorithm}.} The quadrature represents key features of each benchmark distribution using only $N_{\text{q}}=8$ moments.
The critical supersaturation $s_{\text{c},i}$ is computed for each quadrature abscissa $\{D_{\text{dry},i},\kappa_i\}$, shown along with corresponding weights by the stems in Figures~\ref{fig:Sc_reconstruct}a--\ref{fig:Sc_reconstruct}c. These quadrature points are used to compute the projected moments in $s_{\text{c}}$ space (Section~\ref{sec:optimalMoments_sc}), and the full distribution $\hat{n}(s_{\text{c}})$ is then reconstructed from the estimated moments (Section~\ref{sec:method_reconstructDist}). The reconstructed distributions (blue lines in Figures~\ref{fig:Sc_reconstruct}a--\ref{fig:Sc_reconstruct}c) represent key features of the benchmark distributions (black lines). Although the distributions $\hat{n}(s_{\text{c}})$ are not reconstructed exactly, the CCN activation spectrum $\hat{N}_{\text{CCN}}(s)/N$ computed by integrating over $\hat{n}(s_{\text{c}})$ reproduces the benchmark spectrum $N_{\text{CCN}}(s)/N$ with high accuracy (Figures~\ref{fig:Sc_reconstruct}d--\ref{fig:Sc_reconstruct}f).
The shaded regions of Figures~\ref{fig:Sc_reconstruct}d--\ref{fig:Sc_reconstruct}f show the bounds on possible solutions for each CCN spectrum, given the modified moments $\mu_k$ that were used to construct the quadrature approximations in Figure~\ref{fig:DiaKap_quad}; all possible distributions that satisfy the moment constraints $\mu_k$ lie within these extreme bounds. The large bounds in the shaded regions indicate that constraints on $n(D_{\text{dry}},\kappa)$ do not, on their own, sufficiently constrain CCN activation properties. \blue{On the other hand, adding the maximum-entropy-inspired approach described here accurately represents CCN activation spectra across various particle-resolved distributions.}
\begin{figure}
\begin{center}
\includegraphics[width = 5.5in]{dDryKap_allTimes.png}
\caption[DiaKap_moments]{\label{fig:DiaKap_quad} Bivariate distributions from particle-resolved model (surface plots) and location of abscissas from the quadrature (white dots) for the three populations, which were sampled at (a) 7:00~am, after 1~hour of simulation, (b) 12:00~pm, after 6~hours of simulation, and (c) 6:00~am the following day, after 24~hours of simulation. The corresponding weights are not shown here but are shown by the white dots in the one-dimensional projection plots of Figure~\ref{fig:Sc_reconstruct}. In all cases, the maximum-entropy-inspired technique yields a distributed set of abscissas.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 5.5in]{Sc_reconstruct_allTimes.png}
\caption[DiaKap_moments]{\label{fig:Sc_reconstruct} (a--c) The number distributions with respect to $s_{\text{c}}$ for the benchmark (dashed curves) are compared with reconstructions from the quadrature (blue curves), where the projections of the underlying quadrature abscissas and weights are \blue{indicated by the location and magnitude of the vertical stems.} (d--f) The bivariate moments used to construct the quadrature do not effectively constrain $N_{\text{CCN}}(s)/N$, where bounds on possible solutions are shown by the shading, but the continuous distributions reconstructed from the quadrature (blue curves) accurately represent the benchmark (black dashed curves). Results are shown for the three particle-resolved populations from Figure~\ref{fig:DiaKap_quad}, which were sampled at (a,d) 7:00~am, after 1~hour of simulation, (b,e) 12:00~pm, after 6~hours of simulation, and (c,f) 6:00~am the following day, after 24~hours of simulation.}
\end{center}
\end{figure}
\section{Discussion and Conclusions}
This manuscript introduces a new procedure for inverting between moments, quadratures, and full distributions, which can be used to advance quadrature-based simulations of aerosol dynamics. By applying a linear program, with a maximum-entropy-inspired cost function, we generated sets of distributed quadrature points that are an \blue{ideal model representation of atmospheric aerosols}, where a genetic algorithm was used to identify optimal combinations of modified bivariate moments to be used as constraints in the linear program. We found that CCN activation spectra can be computed with high accuracy by reconstructing univariate distributions with respect to $s_{\text{c}}$, where moment constraints of $n(s_{\text{c}})$ were determined using a variable transform from bivariate quadrature points $\{D_{\text{dry},i},\kappa_i\}$ to univariate points $s_{\text{c},i}$. The combined procedure yields high accuracy in CCN spectra in comparison with particle-resolved benchmark populations. \blue{Although the present manuscript illustrates the efficacy of the proposed procedure for estimating the CCN activation spectra, which is notoriously difficult for moment-based aerosol representations \citep{wright2001description,wright2002retrieval}, the approach is not limited to the variable spaces that we have chosen to analyze here. In a future study, we will demonstrate how the procedure can be applied to advance quadrature-based aerosol simulations and for computing other quantities of interest, such as aerosol optical properties.}
In this study, the traditional quadrature method of moments has been augmented with a maximum-entropy-inspired linear program for quadrature generation and with maximum-entropy spectral analysis to obtain continuous representations consistent with specified moment constraints. We show that the full particle-resolved data set of over 10,000 particles can be reduced to a sparse set of just eight weighted particles while achieving accurate recovery of CCN activation properties, and that the quadrature method of moments, combined with maximum-entropy methods, can be used to overcome the limitations of traditional moment methods.
\begin{appendices}\label{sec:appendix}
\section{Benchmark Populations from Particle-Resolved Model}\label{sec:appendix_benchmark}
PartMC-MOSAIC simulates the evolution of trace gases and aerosol particles in a well-mixed air parcel. The model tracks the mass of $j=1,...,A$ constituent aerosol species in each particle $i=1,...,N_{\text{p}}$, where $A$ is the total number of aerosol species, including water, and $N_{\text{p}}$ is the total number of particles in the population. In the present implementation $A=20$ and $N_{\text{p}}$ is on the order of $10^5$ particles. The mass of each component in each particle is given by $m_{j,i}$, and the mass composition of each particle is represented by the vector $\hat{m}_i=[m_{1,i},...m_{j,i},...,m_{A,i}]$. The model simulates the aerosol evolution due to condensation and evaporation of semi-volatile gases, coagulation between particles, dilution of background air, and particle emissions. A full description of the model is given in \citep{riemer2009simulating}, and the bivariate distributions with respect to $D_{\text{dry}}$ and $\kappa$ are introduced in \citet{fierce2013cloud}.
\section{Critical supersaturation for CCN activation}\label{sec:appendix_sc}
The critical supersaturation ($s_{\text{c},i}$) was computed for each particle $i$ using the $\kappa$-K\"{o}hler model, which depends on a particle's overall dry volume, its effective hygroscopicity parameter $\kappa_i$, and environmental properties. The equilibrium saturation ratio ($S_i$) over an aqueous droplet is computed through the $\kappa$-K\"{o}hler model \citep{petters2007single} as:
\begin{equation}
\label{eqn:Kohler}
S_i(D_i)=\frac{D_i^3-D_{\text{dry},i}^3}{D_i^3-D_{\text{dry},i}^3(1-\kappa_i)}\exp\left(\frac{4\sigma_{\text{w}}M_{\text{w}}}{RT\rho_{\text{w}}D_i}\right),
\end{equation}
where $D_i$ is the particle wet diameter, $T$ is the ambient temperature, $R$ is the universal gas constant, $M_{\text{w}}$ is the molecular weight of water, $\rho_{\text{w}}$ is the density of water, and $\sigma_{\text{w}}$ is the surface tension of the air-water interface. The effective hygroscopicity parameter $\kappa_i$ is the volume-weighted average of the hygroscopicity parameters $\kappa_k$ of the particle's constituent aerosol species:
\begin{equation}\label{eqn: kappa}
\kappa_i = \frac{\sum_k^{A-1}{v_{k,i}\kappa_k}}{\sum_k^{A-1}{v_{k,i}}}.
\end{equation}
Values for $\kappa$ for each species are given in \citep{fierce2013cloud}.
The critical wet diameter is the diameter at which $S_{i}(D_i)$ is maximal, and the critical saturation ratio ($S_{\text{c},i}$) is the saturation ratio corresponding to this critical wet diameter. The critical supersaturation $s_{\text{c},i}$ is then given by $s_{\text{c},i}=(S_{\text{c},i}-1)\times100$. The number concentration of CCN at each environmental supersaturation $s$ is computed as the number concentration of particles that will activate into droplets at that $s$, that is, the total number concentration of particles per volume having a critical supersaturation $s_{\text{c},i}\le{s}$.
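As an illustration of this computation, the sketch below (ours, not the PartMC-MOSAIC code) locates the maximum of $S_i(D_i)$ numerically; the physical constants and the search bracket for the wet diameter are assumptions made for the example.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

SIGMA_W = 0.072   # surface tension of water [N/m] (assumed)
M_W = 0.018       # molecular weight of water [kg/mol]
RHO_W = 1000.0    # density of water [kg/m^3]
R_GAS = 8.314     # universal gas constant [J/(mol K)]

def kohler_S(D, D_dry, kappa, T=298.15):
    # Equilibrium saturation ratio from the kappa-Kohler equation
    solute = (D**3 - D_dry**3) / (D**3 - D_dry**3 * (1.0 - kappa))
    kelvin = np.exp(4.0 * SIGMA_W * M_W / (R_GAS * T * RHO_W * D))
    return solute * kelvin

def critical_supersaturation(D_dry, kappa, T=298.15):
    # Maximize S over wet diameters D > D_dry; s_c = (S_c - 1)*100
    res = minimize_scalar(lambda D: -kohler_S(D, D_dry, kappa, T),
                          bounds=(D_dry * 1.0001, D_dry * 1e3),
                          method='bounded')
    return (-res.fun - 1.0) * 100.0

# Example: a particle with 100 nm dry diameter and kappa = 0.3
print(critical_supersaturation(100e-9, 0.3))
\end{verbatim}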
\section{Modified Moments from Legendre Polynomials}\label{sec:appendix_moments}
This study uses modified moments based on Legendre polynomials, which are better conditioned than traditional geometric moments. The $l^{\text{th}}$ modified moment over the univariate critical supersaturation distribution $n(s_{\text{c}})$ is defined as the integral over the $l^{\text{th}}$ Legendre polynomial $\phi_l$ of $\ln{s}_{\text{c}}$:
\begin{equation}\label{eqn:legendre_sc}
\phi_{l}(s_{\text{c}}) = \frac{1}{2^ll!}\bigg[\frac{d^l}{d(\ln{s}_{\text{c}})^l}(\ln^2{s}_{\text{c}}-1)^l\bigg].
\end{equation}
Legendre polynomials are defined for $l\ge0$. We define modified moments of $n(s_{\text{c}})$ for \mbox{$l<0$} as integrals over the $l^{\text{th}}$ Legendre polynomials for $\ln{s}_{\text{c}}^{-1}$. \blue{For example, the Legendre polynomials of order 3 and $-3$ are given by \mbox{$\phi_3({s}_{\text{c}})=\frac{1}{2}(5\ln^3{s}_{\text{c}}-3\ln{s}_{\text{c}})$} and \mbox{$\phi_{-3}({s}_{\text{c}})=\frac{1}{2}(5\ln^{-3}{s}_{\text{c}}-3\ln^{-1}{s}_{\text{c}})$}, respectively.}
For the bivariate distributions $n(D_{\text{dry}},\kappa)$, each modified moment $\mu_k$, is computed as the integral over the kernel function $\phi_k(D_{\text{dry}},\kappa)$. We define the modified kernel function $\phi_k$ as the product of the $m^{\text{th}}$ Legendre polynomial of $\ln{D}_{\text{dry}}$ and the $n^{\text{th}}$ Legendre polynomial of $\ln{\kappa}$:
\begin{equation}\label{eqn:legendre_DdryKap}
\phi_{k}(D_{\text{dry}},\kappa)=\bigg(\frac{1}{2^mm!}\bigg[\frac{d^m}{d(\ln{D}_{\text{dry}})^m}(\ln^2{D}_{\text{dry}}-1)^m\bigg]\bigg)\bigg(\frac{1}{2^nn!}\bigg[\frac{d^n}{d(\ln{\kappa})^n}(\ln^2{\kappa}-1)^n\bigg]\bigg).
\end{equation}
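To make the definition concrete, the following sketch (ours) evaluates such modified moments for a weighted particle population, using the convention above for negative orders; the toy population and weights are invented for illustration.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre

def phi(order, x):
    # Legendre polynomial of x; negative orders are evaluated at 1/x,
    # following the convention adopted above for l < 0
    if order >= 0:
        return Legendre.basis(order)(x)
    return Legendre.basis(-order)(1.0 / x)

def bivariate_moment(m, n, D_dry, kappa, weights):
    # mu_k as a weighted sum of phi_m(ln D_dry) * phi_n(ln kappa)
    return np.sum(weights * phi(m, np.log(D_dry)) * phi(n, np.log(kappa)))

# Toy population of three particles with equal number weights
D_dry = np.array([50e-9, 100e-9, 200e-9])
kappa = np.array([0.1, 0.3, 0.6])
w = np.ones(3) / 3.0
print(bivariate_moment(1, -2, D_dry, kappa, w))
\end{verbatim}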
\section{Distributional Entropy}\label{sec:appendix_entropy}
The entropy $H_s$ of the reconstructed univariate distribution $\hat{n}(s_{\text{c}})$, which is normalized by total particle number concentration, is given by:
\begin{equation}
H_{s} = \int_0^{\infty}\hat{n}(s_{\text{c}})\ln{\hat{n}}(s_{\text{c}})d{\ln{s}}_{\text{c}}.
\end{equation}
\citet{jaynes1957information,jaynes1957information2} showed that if $n(s_{\text{c}})$ is the density distribution having maximum entropy, given a set of constraints $\mu_l$ for $l=1,...,N_{\text{s}}$, $n(s_{\text{c}})$ will take the form:
\begin{equation}\label{eqn:maxentropy_sc_distribution}
\hat{n}(s_{\text{c}}) = \exp\bigg(-\sum_{l=1}^{N_{\text{s}}}\lambda_l\phi_l(s_{\text{c}})\bigg),
\end{equation}
where $\lambda_l$ is the Lagrange multiplier of the entropy function $H_{\text{s}}$ for $\mu_l$.
Similarly, the entropy $H_{\text{q}}$ for the bivariate number density distribution $n(D_{\text{dry}},\kappa)$ is given by:
\begin{equation}\label{eqn:bivariate_entropy}
H_q = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}n(D_{\text{dry}},\kappa)\ln n(D_{\text{dry}},\kappa)d\ln{D}_{\text{dry}}d\ln\kappa.
\end{equation}
\section{Linear Program to Compute Quadrature Approximation and CCN bounds}\label{sec:appendix_linearprogram}
A linear program was used to construct the optimized quadrature, with abscissa locations shown in Figure~\ref{fig:DiaKap_quad}, and to bound $N_{\text{CCN}}(s)/N$, shown by shading in Figure~\ref{fig:Sc_reconstruct}. The linear program maximizes some cost function $c(D_{\text{dry},i},\kappa_i)$ subject to specified constraints $\mu_k$ (Equation~\ref{eqn:constraints}) and the requirement that $w_i\ge0$:
\begin{align}
& \text{maximize} &\quad-& \sum_{i=1}^{N_{\text{grid}}}w_ic(D_{\text{dry},i},\kappa_i) \\
& \text{subject to} &\quad& \sum_{i=1}^{N_{\text{grid}}}w_i\phi_k(D_{\text{dry},i},\kappa_i)=\mu_k,\\
&&& w_i\ge0, \quad i = 1,...,N_{\text{grid}}.
\end{align}
The application of linear programming to construct numerical quadrature was introduced in \citet{mcgraw2013sparse}, but we now extend this approach to find optimized sets of quadrature points by maximizing an entropy-inspired cost function:
\begin{equation}
c(D_{\text{dry},i},\kappa_i)=-\sum_k\lambda_k\phi_k(D_{\text{dry},i},\kappa_i).
\end{equation}
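A minimal sketch of this construction (ours, using \texttt{scipy}; the grid generation and the multipliers $\lambda_k$ entering the cost are assumed given) is the following.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def sparse_quadrature(Phi, mu, cost):
    # Phi:  (K, N_grid) kernel values phi_k at each grid point
    # mu:   (K,) moment constraints
    # cost: (N_grid,) entropy-inspired cost c(x_i)
    # linprog minimizes, so minimizing sum(w*cost) is equivalent to
    # maximizing -sum(w*cost); 'highs-ds' selects a dual simplex solver
    res = linprog(c=cost, A_eq=Phi, b_eq=mu,
                  bounds=(0, None), method='highs-ds')
    if not res.success:
        raise RuntimeError(res.message)
    keep = res.x > 1e-12     # at most K weights remain non-zero
    return np.nonzero(keep)[0], res.x[keep]
\end{verbatim}
A basic solution of this program has at most as many non-zero weights as there are moment constraints, which is exactly the sparsity property exploited below.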
The bounds on $N_{\text{CCN}}(s)/N$ are computed using two linear programs at each supersaturation threshold $s$, \blue{one for the upper bound and one for the lower bound}. The maximum value for $N_{\text{CCN}}(s)/N$ for some specified $s$ is computed by maximizing the number of particles with $s_{\text{c},i}\le{s}$, which is computed from the linear program with the following cost function:
\begin{equation}
c(D_{\text{dry},i},\kappa_i)=s_{\text{c}}(D_{\text{dry},i},\kappa_i)\le{s}.
\end{equation}
Similarly, the minimum $N_{\text{CCN}}/N$ is computed using a cost function that maximizes the number of particles with $s_{\text{c},i}\ge{s}$:
\begin{equation}
c(D_{\text{dry},i},\kappa_i)=s_{\text{c}}(D_{\text{dry},i},\kappa_i)\ge{s}.
\end{equation}
The linear program is applied on a grid $x_i$ for $i=1,...,N_{\text{grid}}$, where $N_{\text{grid}}$ is much larger than the number of moment constraints. Using the dual simplex algorithm \citep{lemke1954dual}, the linear program yields only a subset of non-zero weights $w_i$, where the number of non-zero values for $w_i$ is equal to the number of moment constraints. It is these non-zero weights $w_i$ and associated abscissas $x_i$ that comprise the sparse set of quadrature points.
\section{Genetic Algorithm to Optimize Constraints}\label{sec:appendix_geneticalgorithm}
We applied a genetic algorithm to find optimal sets of moments for constructing the bivariate quadrature. Genetic algorithms are a class of optimization methods inspired by natural selection, first introduced by \citet{holland1975adaptation}. Implementation of a genetic algorithm requires a genetic encoding of possible solutions, in this case as a series of 0's and 1's, and a fitness function to be optimized, in this case minimization of error in $N_{\text{CCN}}(s)/N$ across distinct populations. The algorithm is initiated with a population of candidate solutions. In this study, the initial set of candidate populations was chosen using Latin hypercube sampling. The optimal orders of Legendre polynomials for computing the modified moments with respect to $\ln{D}_{\text{dry}}$ and $\ln\kappa$ (Equation~\ref{eqn:legendre_DdryKap}) identified with the genetic algorithm are outlined in Table~\ref{tab:optimized_moments}.
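A schematic version of this search (ours) is sketched below; the fitness function is a placeholder for the full quadrature-and-reconstruction pipeline, the initialization is random rather than Latin hypercube, and mutation is omitted for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_CAND, N_PICK = 144, 8   # candidate bivariate moments, moments to keep

def fitness(mask):
    # Placeholder: should return -error in N_CCN(s)/N obtained by
    # running the full pipeline with the moment subset encoded by mask
    return -rng.random()

def random_mask():
    m = np.zeros(N_CAND, dtype=bool)
    m[rng.choice(N_CAND, N_PICK, replace=False)] = True
    return m

def crossover(a, b):
    # Combine two parents, then repair to exactly N_PICK moments
    pool = np.nonzero(a | b)[0]
    child = np.zeros(N_CAND, dtype=bool)
    child[rng.choice(pool, N_PICK, replace=False)] = True
    return child

pop = [random_mask() for _ in range(50)]
for _ in range(100):
    scores = np.array([fitness(m) for m in pop])
    elite = [pop[i] for i in np.argsort(scores)[-10:]]
    pop = [crossover(elite[rng.integers(10)], elite[rng.integers(10)])
           for _ in range(50)]
\end{verbatim}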
\begin{table}
\centering
\caption[table]{\label{tab:optimized_moments} Kernel functions $\phi_k$ used to generate optimized bivariate moments $\mu_k=\sum_i^{N_{\text{q}}}w_i\phi_k$, which were identified using the genetic algorithm.}
\begin{tabular}{l}
\hline
$\phi_1=\frac{1}{8}\big(35\ln^4D_{\text{dry}}-30\ln^2D_{\text{dry}}+3\big)$ \\
$\phi_2=\ln^{-1}\kappa$ \\
$\phi_3=\frac{1}{2}\big(3\ln^{-2}\kappa-1\big) $ \\
$\phi_4=\frac{1}{2}\big(3\ln^{2}D_{\text{dry}}-1\big)\ln\kappa $ \\
$\phi_5=\ln{D_{\text{dry}}}\frac{1}{2}\big(3\ln^{-2}\kappa-1\big) $ \\
$\phi_6=\frac{1}{2}\big(3\ln^{-2}D_{\text{dry}}-1\big)\frac{1}{2}\big(3\ln^{-2}\kappa-1\big) $ \\
$\phi_7=\frac{1}{2}\big(3\ln^{-2}D_{\text{dry}}-1\big)\frac{1}{8}\big(35\ln^4\kappa-30\ln^2\kappa+3\big) $ \\
$\phi_8=\frac{1}{8}(63\ln^5D_{\text{dry}}-70\ln^3D_{\text{dry}}+15\ln{D}_{\text{dry}}) \frac{1}{8}\big(63\ln^5\kappa-70\ln^3\kappa+15\ln\kappa\big)$ \\
\hline
\end{tabular}
\end{table}
\end{appendices}
\section*{Acknowledgements}
LMF is supported by the University Corporation for Atmospheric Research under a NOAA Climate \& Global Change Postdoctoral Fellowship. RLM and LMF acknowledge support by the Atmospheric Systems Research Program of the US Department of Energy. The model (PartMC Version 2.1.6) and input files for the particle-resolved model simulations are available at \url{http://lagrange.mechse.illinois.edu/partmc/}. All other methods supporting the conclusions are described in Section~\ref{sec:methods} and in the Appendices.
\subsection*{Description of Algorithm~\ref{Alg:ALG} \textsc{Deontic Meta Extension}} The main Algorithm~\ref{Alg:ALG} starts by initialising
\begin{itemize}
\item The $\partial$ sets for the extension (Line 1).
\item The Modal Herbrand Base $\mathit{MHB}$, used to properly cycle over literals (Lines~\ref{ALG:MainFor-Literals-BEGIN}--\ref{ALG:MainFor-Literals-END}), and over standard rules (Lines~\ref{ALG:MainFor-Rules-BEGIN}--\ref{ALG:MainFor-Rules-END}) of the theory.
\item Support sets $R^\Box[]$ (Line~\ref{ALG:For-LinMHB-R}), one for each literal in the modal Herbrand Base; given a modal literal $X l$, $R^X[l]$ plays a fundamental role in proving/refuting it, as some standard rules ($\alpha$ in the algorithm) supporting $X l$ may not be in the initial rule set, but be the conclusion of meta-rules ($\beta$ in the algorithm). The decidability of $X l$ (and naturally that of $\neg Xl$) must thus be postponed until we have proved/rejected such `$\alpha$ rules'.
\item Support sets $R[]^\Box_{\mathit{infd}}$ (Line~\ref{ALG:For-LinMHB-Rinfd}), one for each literal in $\mathit{MHB}$, and used during the ``team fight''; given a modal literal $X l$, $R[l]^X_{\mathit{infd}}$ stores the rules supporting $l$ that are defeated by an \emph{applicable} rule for $\ensuremath{\mathcal{\sim }} l$ (for a more detailed explanation, see below during the description of Procedure~\ref{alg:CheckLiteral}).
\input{AlgorithmMain}
\enlargethispage{3\baselineskip}
\item Support sets $R[\alpha]^\Box_{opp}$ and $R[\alpha]^\Box_{supp}$, by invoking Procedure~\ref{alg:Conflicts}. Such sets are determined according to which $variant$ of conflict (simple or cautious) is given as input; such sets correspond to either: (i) Conditions 1--2 of Definition~\ref{def:SimpleConflict} if $variant = simply$, or (ii) Conditions 1--5 of Definition~\ref{def:RationalConflict} if $variant = cautiously$. We decided to report the simple variant sets in full (Lines~\ref{AlgConfl:Ropp-Cost-V1}--\ref{AlgConfl:Rsupp-Cost-V1}, \ref{AlgConfl:Ropp-Obl-V1}--\ref{AlgConfl:Rsupp-Obl-V1}, and \ref{AlgConfl:Ropp-Perm-V1}--\ref{AlgConfl:Rsupp-Perm-V1}) to provide the reader with a deeper understanding of the signatures of the rules involved: $\ensuremath{\mathcal{\sim }} \varphi$ has the same antecedent, modality, and conclusion as $\alpha$, whereas $\chi$ is either $\alpha$, or $\varphi$. This was not necessary for the cautious variant.
\input{AlgConflicts}
\newpage
\item Support matrices $\alpha[2][k]$\footnote{We chose indices ranging from $1$ to $n$, instead of the typical implementation choice, for the sake of readability.} for every obligation rule (\textbf{for} cycle at Lines~\ref{ALG:OblRulesArrayArray-BEGIN}--\ref{ALG:OblRulesArrayArray-END}), where $k$ is the number of elements in the $\otimes$-chain. Such matrices are used to verify the applicability of an obligation rule at a given index $j$ of its $\otimes$-chain according to Condition 4 of Definition~\ref{def:MetaApplicability}.
\end{itemize}
Cycle \textbf{for} of Line~\ref{ALG:ForFacts} proves (as constitutive\footnote{Note that facts are constitutive statements, i.e., plain literals.}) all literals in the initial set of facts, and rejects all opposites. Symmetrically, every rule in the initial rule set (thus not the standard rules that can be derived through meta-rules) is proved as a constitutive rule (Line~\ref{ALG:ForFactsRules-Prove}), and we refute standard rules that (i) are conclusions of meta-rules, and (ii) conflict with a given rule (Line~\ref{ALG:ForFactsRules-Refute}).
The algorithm now enters the main cycle (\textbf{repeat--until}, Lines~\ref{ALG:Repeat}--\ref{ALG:Until}); cycle \textbf{for} at Lines~\ref{ALG:MainFor-Literals-BEGIN}--\ref{ALG:MainFor-Literals-END} runs over literals, the one at Lines~\ref{ALG:MainFor-Rules-BEGIN}--\ref{ALG:MainFor-Rules-END} runs over rules: they behave in an identical way. Naturally, if there is no rule supporting an element in the Modal Herbrand Base, we refute it (\textbf{if} at Line~\ref{ALG:lIf-RefuteL} for literals, \textbf{if} at Line~\ref{ALG:lIf-RefuteAlfa} for rules). On the other hand, if there exists an applicable rule $\beta$ that has been proved, we invoke Procedure~\ref{alg:CheckLiteral} or Procedure~\ref{alg:CheckRule}, according to whether the element is a literal or a rule (see below for the detailed description of how they operate), to verify if, at the current iteration, we can already prove the element at hand; note that the \textbf{if} check at Line~\ref{ALG:CheckLiterals-Obl} is specific for proving an element as obligatory, and differs from the one at Line~\ref{ALG:CheckLiterals-CostPerm} (same for Line~\ref{ALG:CheckRules-Obl} with respect to Line~\ref{ALG:CheckRules-CostPerm}) because we must consider $\otimes$-chains, hence satisfy Condition 4 of Definition~\ref{def:MetaApplicability}.
The algorithm terminates when no modifications to the extension are made. We use the convention that $\Box$ and $\blacksquare$ represent all three modalities whilst $\Diamond$ is restricted to the union of obligations and permissions.
\subsection*{Description of Procedure~\ref{alg:CheckLiteral}.} Once an \emph{applicable} and proved rule $\beta$ for $l$ has been found, we can update the set of the opposite and \emph{defeated} rules $R[\ensuremath{\mathcal{\sim }} l]_{\mathit{infd}}$ with all the $\gamma$ rules that are defeated by $\beta$. We remind the reader that
\begin{itemize}
\item Only constitutive rules oppose constitutive rules ($X = \ensuremath{\mathsf{C}}\xspace$, Line~\ref{AlgCheckLit:CaseCost-Rinfd});
\item Both obligation and permission rules oppose obligation rules ($X = \ensuremath{\mathsf{O}}\xspace$, Lines~\ref{AlgCheckLit:CaseObl-Rinfd-Obl} and \ref{AlgCheckLit:CaseObl-Rinfd-Perm});
\item Only obligation rules oppose permissive rules ($X = \ensuremath{\mathsf{P}}\xspace$, Line~\ref{AlgCheckLit:CasePerm-Rinfd}).
\end{itemize}
If there are no opposite rules stronger than $\beta$, we can refute the contrary conclusion (Lines~\ref{AlgCheckLit:lIf-RefuteNonL-Cost}, \ref{AlgCheckLit:lIf-RefuteNonL-Obl}, and \ref{AlgCheckLit:lIf-RefuteNonL-Perm}).
Moreover, if (i) all the opposite rules are defeated ($R[\ensuremath{\mathcal{\sim }} l] \setminus R[\ensuremath{\mathcal{\sim }} l]_{\mathit{infd}} = \emptyset$), and (ii) at least one of the supporting, applicable rules (for $X l$) is a defeasible rule (thus, not a defeater), then we can prove $Xl$ (and we do not even care whether the remaining rules in $R[\ensuremath{\mathcal{\sim }} l]$ are applicable or discarded -- Lines~\ref{AlgCheckLit:If-ProveL-Cost-BEGIN}, \ref{AlgCheckLit:If-ProveL-Obl-BEGIN}, and \ref{AlgCheckLit:If-ProveL-Perm-BEGIN}). Note that: (i) when we prove a literal as obligatory, (i.a) we prove it as permitted as well and (i.b) we refute the opposite as both (Lines~\ref{AlgCheckLit:CaseObl-ProveRefuteObl} and \ref{AlgCheckLit:CaseObl-ProveRefutePerm}), and (ii) when we prove a literal as permitted, we refute the opposite as obligatory but not as permitted (if it is permitted to smoke, ``not smoking'' cannot be mandatory but it can be permitted -- Lines~\ref{AlgCheckLit:CasePerm-ProvePerm} and \ref{AlgCheckLit:CasePerm-RefuteObl}).
\input{AlgCheckLiteral}
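The core of the team-defeat test just described can be summarised by the following schematic fragment (ours; the set and predicate names are illustrative and do not appear in Procedure~\ref{alg:CheckLiteral}):
\begin{verbatim}
def can_prove(R_supp, R_opp, R_opp_infd, is_defeasible, is_applicable):
    # X l is provable when every opposing rule has been defeated and
    # at least one applicable supporting rule is defeasible (not a
    # mere defeater), mirroring the checks described above
    undefeated = set(R_opp) - set(R_opp_infd)
    return (not undefeated and
            any(is_defeasible(r) and is_applicable(r) for r in R_supp))
\end{verbatim}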
\subsection*{Description of Procedures \ref{alg:ProveLiteral} and \ref{alg:RefuteLiteral}}
As described at the start of this section, every time we prove a literal, we can remove it from each antecedent it appears in; as negative deontic literals (e.g., $\neg\ensuremath{\mathsf{O}}\xspace \ensuremath{\mathcal{\sim }} l$ which is satisfied when $+\partial_\ensuremath{\mathsf{O}}\xspace l$ or $+\partial_\ensuremath{\mathsf{P}}\xspace l$) may be in the antecedents as well, Procedure~\ref{alg:ProveLiteral} takes care of such situations at Lines~\ref{AlgProveLit:CaseObl-UpdateRp}, \ref{AlgProveLit:CaseObl-UpdateRalfa}, \ref{AlgProveLit:CasePerm-UpdateRp}, and \ref{AlgProveLit:CasePerm-UpdateRalfa}.
\input{AlgProveLiteral}
Orthogonally, every time we refute a literal, Procedure~\ref{alg:RefuteLiteral} removes all those rules that have such a literal in their antecedent.
Finally, we update the support matrices $\alpha[][]$, according to Condition 4.a of Definitions~\ref{def:MetaApplicability} and \ref{def:MetaDiscardability} at Lines~\ref{AlgProveLit:CaseCost-N2}, \ref{AlgProveLit:CaseCost-J2}, and \ref{AlgProveLit:CaseObl-N1} of Procedure~\ref{alg:ProveLiteral}, and Line~\ref{AlgRefuteLit:UpdateBetaN1} of Procedure~\ref{alg:RefuteLiteral}.
\input{AlgRefuteLiteral}
\subsection*{Description of Procedures \ref{alg:CheckRule}, \ref{alg:ProveRule} and \ref{alg:RefuteRule}}
\smallskip
We shall discuss only the key differences from the procedures for literals, as the sequence of operations and the basic ideas of Procedures~\ref{alg:CheckRule}, \ref{alg:ProveRule} and \ref{alg:RefuteRule} are fundamentally the same as their literal counterparts.
\newpage
\FloatBarrier
\input{AlgCheckRules}
\FloatBarrier
Procedure \ref{alg:CheckRule} is invoked when we are trying to prove a certain rule $\alpha$, with modality $X$, and we have found an applicable and proved rule $\beta$ supporting $\alpha$. To verify that the opposite rules are defeated, we need to distinguish between the two conflict variants. If $variant = simply$, then $R[\alpha]^X_{\mathit{infd}}$ will contain all those $\gamma$ rules that simply conflict with and are defeated by $\beta$. If $variant = cautiously$, we need to consider more rules: not just the $\gamma$s defeated by $\beta$, but also those $\gamma$s \emph{not} stronger than $\beta$ whose conclusion is defeated by $\alpha$ itself. This perfectly mirrors Condition (2.2.2.2) of Definitions~\ref{def:ConstMetaPT-V2}, \ref{def:OblMetaPT-V2}, and \ref{def:PermMetaPT-V2}.
Lastly, matrices $\alpha[][]$ are updated at Lines~\ref{AlgProveRule:CaseCost-BetaN2} and \ref{AlgProveRule:CaseOblBetaN1} of Procedure~\ref{alg:ProveRule} (resp.\ Lines~\ref{alg:RefuteRule-CaseCost-BetaN2} and \ref{algRefuteRule-CaseObl-BetaN1} of Procedure~\ref{alg:RefuteRule}) in a different manner because here we have to satisfy Condition 4.b of Definition~\ref{def:MetaApplicability} (and \emph{not} Condition 4.a of Definition~\ref{def:MetaDiscardability}).
\enlargethispage{10\baselineskip}
\input{AlgsRule}
\FloatBarrier
\subsection*{Algorithms Execution}
We end this part by proposing a couple of input theories and analysing the runs of the algorithms in the computation of the extension. Each theory tackles specific characteristics of our algorithms and therefore of our logic. We will pay attention only to the relevant and non-trivial details.
\vspace{5mm}
\noindent Consider $D = (\set{f_1, f_2}, R, \set{(\gamma, \theta)})$ to be a theory such that
\begin{align*}
R = \{&\alpha\colon (\gamma\colon \neg f_1 \Rightarrow_\ensuremath{\mathsf{C}}\xspace a ) \Rightarrow_\ensuremath{\mathsf{C}}\xspace b && \beta\colon f_2 \Rightarrow_\ensuremath{\mathsf{C}}\xspace (\gamma\colon \neg f_1 \Rightarrow_\ensuremath{\mathsf{C}}\xspace a)\\
& \zeta\colon (\nu\colon f_1 \Rightarrow_\ensuremath{\mathsf{C}}\xspace c) \Rightarrow_\ensuremath{\mathsf{C}}\xspace (\kappa\colon f_2 \Rightarrow_\ensuremath{\mathsf{P}}\xspace \neg a) &&\theta\colon f_1, f_2 \Rightarrow_\ensuremath{\mathsf{C}}\xspace \neg a\\
&\mu\colon f_2 \Rightarrow_\ensuremath{\mathsf{O}}\xspace a \otimes b \otimes c\},
\end{align*}
Support sets $R^\Box[l]$ are instantiated; what is relevant here are $R^\ensuremath{\mathsf{C}}\xspace[a] = \set{\gamma}$, $R^\ensuremath{\mathsf{C}}\xspace[\neg a] = \set{\theta}$, $R^\ensuremath{\mathsf{O}}\xspace[a] = \set{\mu}$, and $R^\ensuremath{\mathsf{P}}\xspace[\neg a] = \set{\kappa}$. The only obligation matrix created is $\mu[][]$ ($2 \times 3$), as $\mu$ is the only obligation rule.
Procedure \ref{alg:ProveLiteral} is invoked on $f_1$ and $f_2$, which results in: (i) both literals being added to $+\partial_\ensuremath{\mathsf{C}}\xspace$, and (ii) $A(\beta)$, $A(\nu)$, $A(\kappa)$, $A(\theta)$, and $A(\mu)$ being emptied.
Procedure~\ref{alg:RefuteLiteral} is symmetrically invoked on $\neg f_1$ and $\neg f_2$, which results in: (i) removing $\gamma$ from $R^\ensuremath{\mathsf{C}}\xspace[a]$, as this makes $\gamma$ discarded (note that now there are no more supporting rules to conclude $a$ as constitutive), and (ii) removing $(\gamma, \theta)$ from the superiority relation. It is important to highlight here that, even if $\gamma$ is discarded, this does not influence the applicability/discardability of $\alpha$, which depends only on proving either $+\partial^m_\ensuremath{\mathsf{C}}\xspace\gamma$ or $-\partial^m_\ensuremath{\mathsf{C}}\xspace\gamma$. Rules $\alpha$, $\beta$, $\zeta$, $\theta$, and $\mu$ are all proved as constitutive (thus $\alpha, \beta, \zeta, \theta, \mu \in +\partial^{m}_\ensuremath{\mathsf{C}}\xspace$).
The algorithm finally enters the main \textbf{repeat--until} cycle; we shall assume that the loop \textbf{for} at Lines~\ref{ALG:MainFor-Literals-BEGIN}--\ref{ALG:MainFor-Literals-END} first controls whether we can prove $b$ as constitutive. Indeed there exists a constitutive rule for $b$ that has been proved: $\alpha$. Procedure~\ref{alg:CheckLiteral} is thus invoked with $b$, $\ensuremath{\mathsf{C}}\xspace$, and $\alpha$, as parameters. The first half of the check $\{\, R[\neg b] \setminus R[\neg b]_{\mathit{infd}} = \emptyset \wedge \exists \zeta \in R^\ensuremath{\mathsf{C}}\xspace_\ensuremath{\mathrm{d}}\xspace[l].\, A(\zeta) = \emptyset \,\}$ is in fact satisfied as there are no rules supporting $\neg b$, but not the second half, as the applicability/discardability of $\alpha$ has yet to be determined; anyhow, $\neg b$ has no support and Procedure~\ref{alg:RefuteLiteral} is hence invoked, with the only result of eliminating it from $\mathit{MHB}$ and adding it to $-\partial_\ensuremath{\mathsf{C}}\xspace$.
Assume that the loop \textbf{for} at Lines~\ref{ALG:MainFor-Literals-BEGIN}--\ref{ALG:MainFor-Literals-END} now checks $\neg a$ for constitutive; $\theta$ is applicable and no rules support $a$ (we do not need to wait for $\gamma$ to be proved/rejected as we already know that, if proved, $\gamma$ is discarded, since previously we computed that $\neg f_1 \in -\partial_\ensuremath{\mathsf{C}}\xspace$). Accordingly, Procedure~\ref{alg:ProveLiteral}: (i) computes $\neg a \in +\partial_\ensuremath{\mathsf{C}}\xspace$, and (ii) updates $\mu[2][1]$ to $+$. Analogously to the previous case, we cannot yet decide for $a$ as an obligation since there is potentially the permissive rule $\kappa$ for $\neg a$, and we thus need to wait until we compute either $+\partial_\ensuremath{\mathsf{C}}\xspace^{meta}\kappa$ or $-\partial_\ensuremath{\mathsf{C}}\xspace^{meta}\kappa$.
The algorithm then passes to the loop \textbf{for} at Lines~\ref{ALG:MainFor-Rules-BEGIN}--\ref{ALG:MainFor-Rules-END}; let us consider $\gamma$: $\beta$ is applicable and no rule conflicts with it. We hence compute $\gamma\in +\partial_\ensuremath{\mathsf{C}}\xspace^{meta}$, which in turn updates $\{\,A(\alpha) \setminus \set{\gamma} = \emptyset\,\}$ ($\alpha$ is now applicable). On the contrary, no rules support $\nu$, thus in cascade: (i) $\nu\in -\partial_\ensuremath{\mathsf{C}}\xspace^{meta}$, (ii) $\zeta$ is discarded, hence (iii) no rules support $\kappa$ and so $\kappa\in -\partial_\ensuremath{\mathsf{C}}\xspace^{meta}$.
At the next iteration of the \textbf{repeat--until}, (i) we compute $b \in +\partial_\ensuremath{\mathsf{C}}\xspace$ and update $\mu[2][2]$ to $-$, (ii) $a \in +\partial_\ensuremath{\mathsf{O}}\xspace$ and we update $\mu[1][1]$ to $+$. This makes $\mu$ applicable at index $2$ for $b$, and we compute $b \in +\partial_\ensuremath{\mathsf{O}}\xspace$. Lastly, as $\mu[1][2] = +$ and $\mu[2][2] = -$ makes $\mu$ discarded at index $3$ for $c$, the algorithm computes $c \in -\partial_\ensuremath{\mathsf{O}}\xspace$.
\medskip
\noindent The next example illustrates: (i) how the algorithms work on the chains of meta-rules, and (ii) the computation of the supporting sets $opp$ and $supp$ for the cautious conflict variant, including cases where the conflict is not restricted to the negations of the rules.
\enlargethispage{2\baselineskip}
Consider $D = (\ensuremath{F}\xspace, R, {>}= \set{(\alpha, \beta), (\alpha, \lambda)})$ to be a theory such that
\begin{align*}
R = \{&\dRule\alpha:\dots=>\ensuremath{\mathsf{O}}\xspace {(\dRule\eta:a=>\ensuremath{\mathsf{P}}\xspace b) \otimes c \otimes (\dRule\kappa:c=>\ensuremath{\mathsf{O}}\xspace d\otimes e)}\\
& \dRule\beta:\dots=>\ensuremath{\mathsf{P}}\xspace{(\dRule\theta:a=>\ensuremath{\mathsf{O}}\xspace\neg b)}\qquad \ \dRule\gamma:\dots=>\ensuremath{\mathsf{O}}\xspace{\neg(\dRule\zeta:a=>\ensuremath{\mathsf{O}}\xspace\neg b)}\\
&\dRule\lambda:\dots=>\ensuremath{\mathsf{O}}\xspace {(\dRule\mu:c=>\ensuremath{\mathsf{O}}\xspace d)}\qquad \ \dRule\psi:\dots=>\ensuremath{\mathsf{C}}\xspace \neg c\},
\end{align*}
where for the sake of simplicity we assume that all the rules' antecedents have been emptied, and would thus have satisfied Conditions 1--3 of Definition~\ref{def:MetaApplicability}. According to Definitions~\ref{def:SimpleConflict} and \ref{def:RationalConflict}, we notice that:
\begin{enumerate}
\item\label{enum:1} $\eta$ cautiously conflicts with $\theta$, hence $\alpha$ conflicts with $\beta$ (at index 1);
\item\label{enum:2} $\theta$ simply conflicts with $\zeta$, hence $\beta$ conflicts with $\gamma$;
\item For \ref{enum:1} and \ref{enum:2}, $\gamma$ supports $\alpha$ (and the other way around);
\item $\kappa$ cautiously conflicts with $\mu$, hence $\lambda$ cautiously conflicts with $\alpha$ (at index 3).
\end{enumerate}
Ergo, $R[\alpha]_{opp}^\ensuremath{\mathsf{O}}\xspace = \set{\beta, \lambda}$, $R[\alpha]_{supp}^\ensuremath{\mathsf{O}}\xspace = \set{\gamma}$, and so on. When Procedure~\ref{alg:CheckRule} is invoked on $\eta$, the \textbf{if} check of Procedure~\ref{alg:ProveRule} at Line~\ref{AlgCheckRules:If-ProveAlfa-Obl-BEGIN} is satisfied, and we compute $\eta, \neg\zeta \in +\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$, $\theta \in -\partial_\ensuremath{\mathsf{P}}\xspace^{meta}$ as well as $\theta \in -\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$.
Later on, the algorithms compute $\neg c \in +\partial_\ensuremath{\mathsf{C}}\xspace$ as well as $c \in +\partial_\ensuremath{\mathsf{O}}\xspace$ (and thus $\alpha[j][2]$, $j = 1, 2$, are updated to $+$), so that $\alpha$ becomes applicable at index 3: given that $\alpha > \lambda$, we compute $\kappa \in +\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$ and $\mu \in -\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$.
Note that, if we had considered the simple conflict variant, $\alpha$ would not have conflicted with $\beta$, and since the superiority relation does not solve the conflict between $\beta$ and $\gamma$, we would have computed $\neg\zeta \in -\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$ instead of $\neg\zeta \in +\partial_\ensuremath{\mathsf{O}}\xspace^{meta}$.
\subsection{Computational Properties}\label{subsec:CompProperties}
We discuss the computational properties of the algorithms presented. In order to discuss termination and computational complexity, we start by defining the \emph{size} of a theory $D$ as $\Sigma(D)$ to be the number of the occurrences of literals plus the number of occurrences of rules plus 2 for every tuple in the superiority relation. Consequently, theory $D = (\ensuremath{F}\xspace = \{a, b, c\}$, $R = \set{(\alpha\colon a \Rightarrow_\ensuremath{\mathsf{O}}\xspace d), (\beta\colon b \Rightarrow \ensuremath{\mathcal{\sim }} d), \big(\gamma\colon c \Rightarrow (\zeta\colon a \Rightarrow d)\big)}$, $> = \{(\zeta, \beta)\})$ has size $3 + 11 + 2 = 16$.
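To make the size metric concrete, the following sketch (ours) computes $\Sigma(D)$ for the example theory above; the tuple encoding of rules is an assumption made for illustration, with a meta-rule simply nesting another rule triple as its consequent.
\begin{verbatim}
def theory_size(facts, rules, superiority):
    # Sigma(D): literal occurrences + rule occurrences
    #           + 2 for every tuple in the superiority relation
    def rule_size(rule):
        _, antecedent, consequent = rule
        size = 1 + len(antecedent)          # the rule itself + antecedent
        if isinstance(consequent, tuple):   # meta-rule: nested rule
            return size + rule_size(consequent)
        return size + 1                     # plain literal conclusion
    return (len(facts) + sum(rule_size(r) for r in rules)
            + 2 * len(superiority))

facts = ['a', 'b', 'c']
rules = [('alpha', ['a'], 'd'), ('beta', ['b'], '~d'),
         ('gamma', ['c'], ('zeta', ['a'], 'd'))]
print(theory_size(facts, rules, [('zeta', 'beta')]))  # 16, as in the text
\end{verbatim}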
Note that, by implementing hash tables with pointers to rules where a given literal occurs, each rule can be accessed in constant time. We also implement hash tables for the tuples of the superiority relation where a given rule appears as either one of the two elements, and even those can be accessed in constant time.
\begin{lemma}\label{lemm:ComplexityProveRefute}
Procedures~\ref{alg:ProveLiteral}, \ref{alg:RefuteLiteral}, \ref{alg:ProveRule}, and \ref{alg:RefuteRule} terminate and their complexity is $O(\Sigma^2)$.
\end{lemma}
\begin{proof}
Termination of such procedures is straightforward, as (i) the size of the input theory is finite, (ii) we modify finite sets only, and (iii) all the \textbf{for} cycles loop on a finite number of elements, as the Modal Herbrand Base is finite.
Given that all set assignments/modifications are linear in the size of the theory and that all the \textbf{for} cycles are iterated $|MHB| \in O(\Sigma)$ times, this proves our claim, setting their complexity to $O(\Sigma^2)$.
\end{proof}
\begin{lemma}\label{lemm:ComplexityCheckLiteral}
Procedure~\ref{alg:CheckLiteral} terminates and its complexity is $O(\Sigma^3)$.
\end{lemma}
\begin{proof}
Termination is straightforward (see the motivations above) and follows from the fact that Lemma~\ref{lemm:ComplexityProveRefute} proves that its inner Procedures~\ref{alg:ProveLiteral} and \ref{alg:RefuteLiteral} terminate.
Again, all set assignments/modifications are linear in the size of the theory, and so are all the \textbf{if} checks that invoke Procedures~\ref{alg:ProveLiteral} and \ref{alg:RefuteLiteral}. Accordingly, $O(\Sigma) + O(\Sigma) * O(\Sigma^2) = O(\Sigma^3)$.
\end{proof}
\begin{lemma}\label{lemm:ComplexityConflicts}
Procedure~\ref{alg:Conflicts} terminates and its complexity is: $O(\Sigma^3)$ if $variant = simply$, or $O(\Sigma^4)$ if $variant = cautiously$.
\end{lemma}
\begin{proof}
Termination is guaranteed by the same considerations above and from what follows, depending on the variant.
If $variant = simply$, set assignments operate on finite sets and: (i) $R^X_{opp}$ are linear, (ii) $R^X_{supp}$ are in $O(\Sigma^2)$. This sets the overall complexity to $O(\Sigma^3)$.
If $variant = cautiously$, we report hereafter how a set $R^X_{opp}$ is built.
\[
\begin{split}
R[\alpha]_{opp}^X \gets R[\alpha]_{opp}^X &\cup \set{\gamma\in R[\varphi]|\, C(\alpha) = c_1\otimes \dots\otimes c_m \\
& \wedge C(\varphi) = d_1\otimes\dots\otimes d_n \wedge
\big ( A(\alpha) = A(\varphi) \big ) \wedge \\
&\big ( \exists i \leq m, n.\, \forall k < i.\, ( c_k = d_k \wedge c_i = \ensuremath{\mathcal{\sim }} d_i ) \\
&\vee ( m < n \wedge \forall j \leq m.\, c_j = d_j ) \big ) }.
\end{split}
\]
Such a set is finite, and is computed in $O(\Sigma^2)$ (which implies that $R^X_{supp}$ is in $O(\Sigma^3)$). As the main \textbf{for} cycle at Lines~\ref{AlgConfl:ForAlfaSupports-BEGIN}--\ref{AlgConfl:ForAlfaSupports-END} is in $O(\Sigma)$, this proves our claim.
\end{proof}
As before, Procedure~\ref{alg:Conflicts} uses hash tables to store such information; hence, after its execution, verifying whether a certain rule $variant$ conflicts with another requires constant time, whereas identifying all the rules that $variant$ conflict with a certain rule requires linear time.
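To illustrate the cautious condition on the $\otimes$-chains displayed above, consider the following check (ours; literals are strings and $\ensuremath{\mathcal{\sim }}$ is rendered as a tilde), which returns true exactly when two chains with equal antecedents agree up to an index where they derive opposite literals, or when the first chain is a strict prefix of the second:
\begin{verbatim}
def chains_conflict(c_chain, d_chain):
    def neg(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit
    for ci, di in zip(c_chain, d_chain):
        if ci == neg(di):
            return True        # agreed so far, now opposite conclusions
        if ci != di:
            return False       # chains diverge without opposition
    return len(c_chain) < len(d_chain)   # strict prefix case

print(chains_conflict(['a', 'b'], ['a', '~b']))  # True
print(chains_conflict(['a'], ['a', 'b']))        # True (prefix)
\end{verbatim}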
\begin{lemma}\label{lemm:ComplexityCheckRule}
Procedure~\ref{alg:CheckRule} terminates and its complexity is $O(\Sigma^4)$.
\end{lemma}
\begin{proof}
Termination is straightforward (see motivations above) and given by the fact that Lemma~\ref{lemm:ComplexityProveRefute} proves that its inner Procedures~\ref{alg:ProveRule} and \ref{alg:RefuteRule} terminate.
Again, all set assignments/modifications are linear in the size of the theory, even the ones requiring the verification of \emph{variant conflicts with} based on the considerations in (and after) Lemma~\ref{lemm:ComplexityConflicts}.
Given that all the \textbf{if} checks and the \textbf{for} cycles are linear as well, this proves that the overall complexity is $O(\Sigma) + O(\Sigma) * (O(\Sigma^2) + O(\Sigma) * O(\Sigma^2)) = O(\Sigma^4)$.
\end{proof}
\begin{theorem}\label{th:Complexity}
Algorithm~\ref{Alg:ALG} \textsc{Deontic Meta Extension} terminates and its complexity is $O(\Sigma^5)$.
\end{theorem}
\begin{proof}
Termination of Algorithm~\ref{Alg:ALG} \textsc{Deontic Meta Extension} is given by Lemmas~\ref{lemm:ComplexityProveRefute}--\ref{lemm:ComplexityCheckRule}, and bound to the termination of the \textbf{repeat-until} cycle at Lines~\ref{ALG:Repeat}--\ref{ALG:Until}, as all other cycles loop over finite sets of elements of the order of $O(\Sigma)$. Given that (i) the Modal Herbrand Base $\mathit{MHB}$ is finite and (ii) every time a literal or a rule is proved/refuted it is removed from $\mathit{MHB}$, the algorithm eventually empties such sets, and, at the next iteration, no modification to the extension can be made. This proves the termination of Algorithm~\ref{Alg:ALG}.
Regarding its complexity, let us notice that: (i) all set modifications are in linear time, and (ii) the aforementioned \textbf{repeat-until} cycle is iterated at most $O(\Sigma)$ times, and so are the two \textbf{for} loops at Lines \ref{ALG:MainFor-Literals-BEGIN}--\ref{ALG:MainFor-Literals-END} and \ref{ALG:MainFor-Rules-BEGIN}--\ref{ALG:MainFor-Rules-END}. This would suggest that the \textbf{repeat-until} cycle contributes to the overall complexity by a factor of $O(\Sigma^2)$. A more discerning analysis shows that the complexity is actually $O(\Sigma)$, as the complexity of each \textbf{for} loop cannot be considered separately from the complexity of the external loop (they are strictly dependent on one another). Indeed, the overall number of operations made by the sum of all loop iterations cannot outrun the number of occurrences of the literals or rules ($O(\Sigma)+ O(\Sigma)$), because the operations in the inner cycles directly decrease, iteration after iteration, the number of the remaining repetitions of the outermost loop, and vice versa.
Based on the results of Lemmas~\ref{lemm:ComplexityProveRefute}--\ref{lemm:ComplexityCheckRule}, and the fact that the complexity of the \textbf{repeat-until} cycle, which is $O(\Sigma^5) = O(\Sigma) * (O(\Sigma^2) + O(\Sigma^3) + O(\Sigma^2) + O(\Sigma^4))$, dominates all the operations in the first part of the algorithm, we have that the overall complexity is $O(\Sigma^5)$.
\end{proof}
The final result concerns the soundness and correctness of the algorithm, in the sense that the extension computed by the algorithm corresponds to the set of provable/refutable literals/rules.
\begin{theorem}\label{th:SoundCompl}
Algorithm~\ref{Alg:ALG} \textsc{Deontic Meta Extension} is sound and complete, that is, for $\Box\in\set{\ensuremath{\mathsf{C}}\xspace,\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$:
\begin{enumerate}
\item $D \vdash_L +\partial_\Box p$ iff $p \in +\partial_\Box$ of $E_L(D)$, $p \in \ensuremath{\mathrm{Lit}}\xspace$
\item $D \vdash_L +\partial^{m}_\Box \alpha$ iff $\alpha \in +\partial^{m}_\Box$ of $E_L(D)$, $\alpha \in \ensuremath{\mathrm{Lab}}\xspace$
\item $D \vdash_L -\partial_\Box p$ iff $p \in -\partial_\Box$ of $E_L(D)$, $p \in \ensuremath{\mathrm{Lit}}\xspace$
\item $D \vdash_L -\partial^{m}_\Box \alpha$ iff $\alpha \in -\partial^{m}_\Box$ of $E_L(D)$, $\alpha \in \ensuremath{\mathrm{Lab}}\xspace$.
\end{enumerate}
\end{theorem}
\begin{proof} (Sketch)
The aim of Algorithm~\ref{Alg:ALG} \textsc{Deontic Meta Extension} is to compute the extension of the input theory through successive transformations on the set of facts, rules, and the superiority relation. Such transformations allow us (i) to obtain a simpler theory, (ii) while retaining the same extension. By a simpler theory we mean a theory with fewer symbols in it. Note that if $D\vdash_L +\partial_\Box l$ then $D\vdash_L -\partial_\Box \ensuremath{\mathcal{\sim }} l$, and that if $D\vdash_L +\partial^{m}_\Box \alpha$ then $D\vdash_L -\partial^{m}_\Box \gamma$, with $\alpha$ conflicting with $\gamma$.
Suppose that the algorithm computes $+\partial_\Box l$ or $+\partial^m_\Box\alpha$ (meaning that $l \in +\partial_\Box$, or $\alpha\in +\partial^{m}_\Box$). Accordingly, we remove $l/\Box l$ or $\alpha/\Box\alpha$ from every antecedent where it appears in as, by Definition~\ref{def:MetaApplicability}, the applicability of such rules will not depend any longer on $l/\Box l$ or $\alpha/\Box\alpha$, but only on the remaining elements in their antecedents. Moreover, we can eliminate from the rule sets all those rules with $\ensuremath{\mathcal{\sim }} l/\Box\ensuremath{\mathcal{\sim }} l$ or $\gamma/\Box\gamma$ in their antecedent (with $\alpha$ conflicting with $\gamma$), as such rules are discarded by Definition~\ref{def:MetaDiscardability} (and then adjust the superiority relation accordingly). Finally, proving $+\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$ makes $\alpha$ active in supporting its conclusion and rebutting the opposite.
The proof follows the schemas of the ones in \cite{GovernatoriORS13,GovernatoriOSRC16}, and proves that the original theory $D$ and the simpler theory $D'$ are equivalent.
Formally, suppose that $D\vdash_L +\partial_\Box l$ (symmetrically $D\vdash_L +\partial^{m}_\Box \alpha$) at $P(n)$. $R'$ of $D'$ is obtained by the following transformation.
Given a standard rule $\alpha\colon A(\alpha)\hookrightarrow_\blacksquare C(\alpha)$ and a literal $l$, $\alpha\ominus l$ is the rule
\[
\alpha\colon A(\alpha)\setminus \set{l} \hookrightarrow_\blacksquare C(\alpha).
\]
Given a meta-rule $\alpha\colon A(\alpha)\hookrightarrow_\blacksquare C(\alpha)$ and a literal $l$, $\alpha\ominus l$ is the rule
\[
\alpha\colon A(\alpha)[\beta/\beta\ominus l]\setminus\set{l} \hookrightarrow_\Box C(\alpha),
\]
for all rules $\beta\in A(\alpha)$, where $A(\alpha)[\beta/\beta\ominus l]$ denotes the substitution of $\beta$ in $\alpha$ with $\beta\ominus l$.
Furthermore, if $\alpha\ominus\beta$ is the rule
\[
\alpha\colon A(\alpha)\setminus\set{\beta}\hookrightarrow_\blacksquare C(\alpha),
\]
then\footnote{If $\Box=\ensuremath{\mathsf{C}}\xspace$, in the transformation $\Box l$ is just $l$, following the conditions of Definition~\ref{def:MetaApplicability} to make a rule applicable, and $\ensuremath{\mathcal{\sim }}\Box l$ is $\ensuremath{\mathcal{\sim }} l$ according to Definition~\ref{def:MetaDiscardability} to discard a rule.}
\[
R'=\set{\alpha\ominus \Box l|\,\alpha\in R}\setminus \set{\alpha\in R|\, \ensuremath{\mathcal{\sim }}\Box l\in A(\alpha)}
\]
for $D\vdash_L +\partial_\Box l$, while for $D\vdash_L +\partial^m_\Box\alpha$ we use
\[
R'=\set{\beta\ominus \Box\alpha|\,\beta\in R}\setminus \set{\beta\in R|\, \ensuremath{\mathcal{\sim }}\Box \alpha\in A(\beta)}
\]
Finally, if $>'$ of $D'$ is obtained from $>$ of $D$ as follows
\[
{>'} = {>} \setminus \set{(\beta,\zeta),(\zeta,\beta)\, |\, \ensuremath{\mathcal{\sim }} l \in A(\zeta) \text{ or } \ensuremath{\mathcal{\sim }} \gamma \in A(\zeta)},
\]
then by induction on the length of a proof, we can show that
\begin{itemize}
\item $D\vdash_L \pm\partial_\Box p$ iff $D'\vdash_L \pm\partial_\Box p$, and
\item $D\vdash_L \pm\partial^{m}_\Box \chi$ iff $D'\vdash_L \pm\partial^{m}_\Box \chi$.
\end{itemize}
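For illustration (our example, not part of the original proof), suppose $D\vdash_L +\partial_\ensuremath{\mathsf{C}}\xspace a$ with
\[
R = \set{\alpha\colon a, b \Rightarrow_\ensuremath{\mathsf{C}}\xspace c,\quad \beta\colon \ensuremath{\mathcal{\sim }} a \Rightarrow_\ensuremath{\mathsf{C}}\xspace d};
\]
then $\alpha\ominus a$ is $\alpha\colon b \Rightarrow_\ensuremath{\mathsf{C}}\xspace c$, while $\beta$ is deleted altogether, since $\ensuremath{\mathcal{\sim }} a\in A(\beta)$ makes $\beta$ discarded, so that $R' = \set{\alpha\colon b \Rightarrow_\ensuremath{\mathsf{C}}\xspace c}$.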
The key step to prove the equivalences above is that, when we prove $l$ (resp., $\alpha$), we can transform a derivation $P$ in $D$ into a derivation $P'$ in $D'$. We do so by removing from $P$, first, all steps where $l$ occurs, and then all subsequent steps justified by steps where $l$ occurred. It is immediate to see that if a rule $\beta$ is applicable in $D$ at a particular step in $P$, then the version of $\beta$ in $D'$ (if any) is applicable in $D'$ at some step in $P'$.
The next properties needed to conclude the proof of the theorem are the following: if $R$ contains a rule $\alpha$ such that $A(\alpha)=\emptyset$, then
\begin{enumerate}
\item If $\set{\beta\in R|\, \beta \text{ conflicts with } \alpha\wedge \beta>\alpha}=\emptyset$, then $D\vdash_L-\partial_\Box \ensuremath{\mathcal{\sim }} C(\alpha)$.
\item If $\set{\beta\in R|\, \beta \text{ conflicts with }\alpha\wedge \beta>\alpha}\setminus\{\beta\in R|\, \beta$ conflicts with $\alpha\wedge \exists\gamma, \gamma$ conflicts with $\beta\wedge A(\gamma)=\emptyset\wedge
\gamma>\beta\}=\emptyset$,\footnote{For $L=S$ we require the additional condition that $\gamma$ is either $\alpha$ or $\ensuremath{\mathcal{\sim }}\beta$, and for meta-rules and $L=R$, we have to consider not only the meta-rules stronger than $\beta$, but the standard rules stronger than $C(\beta)$.} then $D\vdash_L-\partial_\Box \ensuremath{\mathcal{\sim }} C(\alpha)$.
\end{enumerate}
First of all, $\alpha$ is applicable. Then, for 1., it is a witness of an applicable and undefeated rule against $\ensuremath{\mathcal{\sim }} C(\alpha)$, and it is mundane to verify that it satisfies the $\exists\gamma\dots$ clause of the proof conditions for the various $-\partial$. Similarly, for 2., if all rules conflicting with $\alpha$ have been defeated (the construction, denoted by $R[]_{\mathit{infd}}$ in the algorithms, ensures that every rule opposing $\alpha$ is defeated by an applicable rule), then the $\exists\zeta$ clauses of the proof conditions for the various $+\partial$ are satisfied. Condition 1. is used by Procedures~\ref{alg:RefuteLiteral} and \ref{alg:RefuteRule}, and Condition 2. by Procedures~\ref{alg:ProveLiteral} and \ref{alg:ProveRule} to populate the extension of a theory.
\end{proof}
\section{Synopsis}\label{sec:synopsis}
In devising a new computational logical system for meta-rules in deontic reasoning we adopt the following roadmap and take the following steps:
\begin{itemize}
\item we offer a conceptual analysis of meta-reasoning in the normative domain;
\item we present the new logic;
\item we investigate the computational properties of the logic.
\end{itemize}
First of all, we need to set up a conceptual framework (Section \ref{sec:conceptual_framework}), where some basic philosophical problems are discussed and which is the basis of our formal choices. For the sake of illustration, just consider the complexities behind the definition of norm change mechanisms in domains such as the law or the interplay of norm dynamics and deontic concepts such as permissions and permissive norms.
Once this is done, we need to introduce the formalism that we use for modelling deontic and normative reasoning and to which meta-rules are added. Such a formalism must build on the above-mentioned conceptual analysis and be as rich as possible. This is done in Sections \ref{sec:Method} and \ref{sec:Logic}: the logic is an extension of a Defeasible Deontic Logic where we can (a) represent the distinction between rules and meta-rules, (b) model the distinction between constitutive and deontic rules, (c) handle complex reasoning patterns related to contrary-to-duty and compensatory obligations, and various types of deontic statements.
Finally, we expect the logic to be computationally efficient as well. Thus, we have to study the properties of the resulting system---and in particular, its computational behaviour---a study which is offered in Section \ref{sec:Algo}, where algorithms to compute meta-extensions and extensions are provided and analysed from correctness/completeness and complexity viewpoints.
The paper is thus structured as follows.
Section \ref{sec:conceptual_framework} provides a discussion that frames meta-rules in legal reasoning.
Section \ref{sec:Method} is devoted to introducing the methodological issues of the investigation and to discussing the basic structure of Defeasible Deontic Logic, including the proof theory associated with it. In Section \ref{sec:Logic} we formally provide the meta-logical framework that is the subject of the investigation
documented in this paper. The theoretical framework is then considered from a computational perspective in Section \ref{sec:Algo}, where algorithms to compute meta-extensions and extensions are provided and analysed in terms of correctness, completeness, and complexity. Section \ref{sec:RelatedWork} reviews the current relevant literature, and Section \ref{sec:Conc} draws some conclusions and discusses how this research can be taken further.
\section{The Conceptual Framework}\label{sec:conceptual_framework}
\cite{igpl10normchange} argued that meta-rules describe the dynamics of any normative institution (such as a legal system) on which norms are formalised and can be used to establish conditions for the creation and modification of other rules or norms, while proper rules correspond to norms in a normative system. In particular, it was pointed out that meta-rules can be represented in the language of Defeasible Logic as follows
\begin{gather*}
mr_1\colon\xseq{a}{n}\Rightarrow (r_1:\xseq{b}{m}\Rightarrow c)
\end{gather*}
precisely to grasp norm change mechanisms in the law. For instance, if the rule `$r_1:\xseq{b}{m}\Rightarrow c$' does not exist in the theory at hand, then the successful application of $mr_1$ leads to derive such a rule, which amounts to enact $r_1$ as a new norm in the legal system. Similarly, if $r_1$ already exists but has the form `$r_1\colon b_1\Rightarrow c$', then the successful application of $mr_1$ corresponds to modifying $r_1$ from `$r_1\colon b_1\Rightarrow c$' to `$r_1\colon\xseq{b}{m}\Rightarrow c$'.
In addition, with meta-rules we can admit the \emph{negation of rules}. If we are able to conclude that a (positive) rule holds, then it means that we can insert the rule (the content of the rule, with a specific name) in the system, and we can use the resulting rule to derive new conclusions. For a negated rule, the meaning is that it is not possible to obtain a rule with that specific content (this can be either formally prescribed for the whole rule, or in a way that results irrespective of the name).
In this paper, we go further and discuss the conceptual meaning of modalising rules via meta-rules, i.e., what we mean when we establish the obligatoriness of the enactment of a certain rule. Put in this way, making a rule obligatory in fact amounts to firing a meta-rules like the following:
\begin{gather*}
mr_2\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
Rule $r_2$ is a standard deontic rule as described in previous works \cite{GovernatoriOSRC16,GovernatoriORS13,GovernatoriR08}: if $\xseq{a}{n}$ are the case, then rule $r_2$ allows us to derive that $c$ is obligatory. If a similar intuition is extended to meta-rules, then $mr_2$ asserts that enacting rule `$r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c$' is obligatory.
We should now ask: For whom `$r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c$' is obligatory? There is no unique answer, since the question has different meanings depending on which normative domain is considered. If we work, again, in the legal domain, we can imagine that a legislative authority imposes the obligation to enact a certain norm over another authority, lower in the legislative hierarchy but competent in enacting such a norm. Concrete examples occur in the law, when, for instance, European authorities impose to member states the implementation of European directives, or when Constitutional courts require national parliaments to amend, or to integrate, the legislative corpus.
We should note that structures like $mr_2$ do not lead to a meaningless iteration of deontic modalities, an issue that was explored in early deontic literature such as in \cite{Barcan:1966,Goble:1966}. Indeed, while it looks meaningless to have an expression like $\ensuremath{\mathsf{O}}\xspace \ensuremath{\mathsf{O}}\xspace c$ standing for ``It is obligatory that it is obligatory that everyone keeps their promises'', an expression like ``Parking on highways ought to be forbidden'' makes sense \cite{Barcan:1966}. This second example suggests that a norm forbidding to park in highways is obligatory, thus assuming a conceptual distinction between norms, on one side, and obligations (and permissions), on the other side \cite{alchourron71normative}. This general approach, hence, clearly distinguishes norms from obligations: obligations are the effects (i.e., the conclusions) of the application of prescriptive norms. Under this reading, the application of rule $r_2$ leads the obligation $\ensuremath{\mathsf{O}}\xspace c$, while the application of meta-rule $mr_2$ leads to state that norm $r_2$ ought to be the case.
What about the permission? The elusive character of permission affects also the case of permissive meta-rules. Consider this meta-rule:
\begin{gather*}
mr_3\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
Does the well-known distinction between \emph{weak}
and \emph{strong} permission \cite{vonwright:1963} apply to this case as well? In the standard scenario, the former type of permission corresponds to saying that something is allowed by a code precisely when it is not prohibited by that code. This idea is preserved when meta-rules are considered. Indeed, one may simply argue that, if the legal system \emph{does not} support the derivation of
\begin{gather*}
\ensuremath{\mathsf{O}}\xspace\neg(r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\\
\ensuremath{\mathsf{O}}\xspace(r_3\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c)
\end{gather*}
then one may also conclude that the following holds:
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
For similar reasons, since weak permission is the dual of obligation---i.e., $\ensuremath{\mathsf{O}}\xspace A =_{\mathit{def}} \neg \ensuremath{\mathsf{O}}\xspace \neg A$---imposing consistency means $\ensuremath{\mathsf{O}}\xspace A \to \ensuremath{\mathsf{P}}\xspace A$, and so it looks reasonable that from $\ensuremath{\mathsf{O}}\xspace(r_2:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)$ we can
obtain $\ensuremath{\mathsf{P}}\xspace (r_2:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)$.
As is well-known, the concept of strong permission is more complicated, as it
amounts to saying that some $a$ is permitted by a code iff such a
code explicitly states that $a$ is permitted. Various sub-types of permissions can be thus identified, such as the following ones \cite{makinson-torre:2003,boella-torre-icail:2003,stolpe-jal:2010,GovernatoriORS13}:
\begin{description}
\item[Static permission:] $X$ is a static permission wrt a normative system when
$X$ is derived from a strong permission, i.e., from an explicit permissive norm;
\item[Dynamic permission:] $X$ is a dynamic permission wrt a normative system
when it guides the legislator by describing the limits on what may be prohibited typically without violating static permissions in the system;
\item[Exemption:] $X$ is an exemption wrt a normative system
when it is an exception of a prohibition contained in the system.
\end{description}
It seems reasonable that all these types may be extended to the case of meta-rules. The first case simply amounts to when a rule is explicitly permitted in the theory or it is derived from a permissive meta-rule (such as $mr_3$). The third case is when, for instance, we have in the system two rules such as
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (r_2:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\\
\ensuremath{\mathsf{O}}\xspace (r_3:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c)
\end{gather*}
and we know that $r_2$ is stronger than $r_3$ or, at a meta-rule level, we have
\begin{gather*}
mr_3: \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (r_2:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c) \\
mr_4: \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (r_3:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c)
\end{gather*}
and, similarly, $mr_3$ is stronger than $mr_4$. The idea of dynamic permission consists in preventing the theory from deriving any incompatible deontic rule, for example, by setting that the meta-rule $mr_3$ is stronger than any other meta-rule supporting any conflicting rule.
The discussion above shows how it is crucial to establish some logical properties of permitted rules and
to determine when modalised rules are in conflict.
Accordingly, desirable basic properties are the following ones:
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \vdash \neg\ensuremath{\mathsf{O}}\xspace(\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \label{eq:reviewer1}\\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\Box} b) \vdash \neg\ensuremath{\mathsf{O}}\xspace \neg (\xseq{a}{n}\Rightarrow_{\Box} b) \label{eq:reviewer2}
\end{gather}
\sloppy (\ref{eq:reviewer2}) is trivially desirable because $\ensuremath{\mathsf{P}}\xspace \phi$ should imply $\neg \ensuremath{\mathsf{O}}\xspace \neg \phi$ even when $\phi$ is a rule. Instead, if the rule in the scope of the permission is a permissive norm---i.e., $\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} b$---the principle (\ref{eq:reviewer1}) does not hold\footnote{Many thanks to a reviewer for commenting on this point.}. Suppose that the normative system (for example, on the basis of constitutional values) permits the enactment of a norm which allows for the temporary limitation of liberties due to public health reasons. If so, it would not be contradictory to have that the normative system prescribes that the lack of limitation is also permitted. The following rules are in fact not necessarily incompatible:
\begin{gather*}
\mathit{public\_health}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \mathit{limit\_liberties}\\
\mathit{public\_health}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg \mathit{limit\_liberties}
\end{gather*}
which, essentially, amount to saying that $\ensuremath{\mathsf{P}}\xspace \mathit{limit\_liberties}$ and $\ensuremath{\mathsf{P}}\xspace \neg \mathit{limit\_liberties}$ are deontically compatible.
Other properties depend on at to what extent we assume that legislator was rational in a subtler way. For instance, the following one
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \vdash \neg\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \label{eq:DPermRuleInternal}
\end{gather}
can be rational, since it does not make much sense for a rational legislator to permit that two deontically incompatible rules are the case.
Let us discuss the case of (\ref{eq:DPermRuleInternal}). If we accept it, one may similarly argue that we should also adopt the following (where $b$ is a literal):
\begin{gather}
\ensuremath{\mathsf{P}}\xspace b \vdash \neg\ensuremath{\mathsf{P}}\xspace \neg b \label{eq:D_Perm}
\end{gather}
which, however, cannot be accepted. Why? As we know, the deontic facultativeness of a certain $b$ precisely amounts to stating $\ensuremath{\mathsf{P}}\xspace b \wedge \ensuremath{\mathsf{P}}\xspace \neg b$, so (\ref{eq:D_Perm}) would make facultativeness inconsistent.
What about facultative norms? We may have two cases like the following ones:
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \qquad \ensuremath{\mathsf{P}}\xspace \neg (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \label{eq:FacultativeRule1} \\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \qquad \ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b). \label{eq:FacultativeRule2
\end{gather}
Clearly, (\ref{eq:FacultativeRule1}) and (\ref{eq:FacultativeRule2}) are deontically different. While the former states that
$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$ is facultative, the latter says that two deontically incompatible and different rules are permitted. What is facultative according to (\ref{eq:FacultativeRule2})? Certainly we cannot say that $\ensuremath{\mathsf{O}}\xspace b$ is facultative, since we would need to permit two rules, one supporting $\ensuremath{\mathsf{O}}\xspace b$---which is the case: $\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$---and one supporting $\neg \ensuremath{\mathsf{O}}\xspace b$---which we do not have. (\ref{eq:FacultativeRule2}) licenses that both $\ensuremath{\mathsf{O}}\xspace b$ and $\ensuremath{\mathsf{O}}\xspace \neg b$ are permitted given $\xseq{a}{n}$. Is it factually possible? Yes, it is. Is it deontically admissible? Perhaps, it is. Is it always deontically rational? We have sometimes arguments to answer No.
Consider the idea of static permission. Assume we would like to avoid having two deontically incompatible rules in the system. Then, it would not be reasonable to explicitly permit that both rules are the case. Precisely as in skeptical defeasible reasoning, if we have arguments for deriving $b$ and $\neg b$ we refrain from concluding anything on the assumption that $b$ and $\neg b$ are in contradiction, we could reason similarly when two conflicting deontic rules like $\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$ and $\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b$ are considered.
Consider instead the idea of dynamic permission: the aim here is limiting the legislator in dynamically adding prohibitions. If so,
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \text{ prevents the derivation of }\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b\\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \text{ prevents the derivation of }\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b.
\end{gather*}
If we have it, it means that we want the system to be deontically indifferent with respect to $b$ whenever $\xseq{a}{n}$ are the case. But suppose that we also have two norms like the following:
\begin{gather*}
\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c \\
\ensuremath{\mathsf{O}}\xspace c \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b
\end{gather*}
Can we still say that the system is deontically indifferent with respect to $b$ given $\xseq{a}{n}$?
Accordingly, under the reading above, one may prudentially establish that the following meta-rules are in conflict:
\begin{gather}
mr_3\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (r_2\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c) \label{eq:1}\\
mr_5\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (r_3\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c) \label{eq:-1}
\end{gather}
Clearly, we assume in the above discussion that $\ensuremath{\mathsf{P}}\xspace$ is not the dual of $\ensuremath{\mathsf{O}}\xspace$, but rather as another $\Box$-operator. A logic for meta-rules assuming that \ref{eq:1} and \ref{eq:-1} \emph{are in conflict} is named \textbf{prudential}.
On the contrary, if we believe that the permission, as applied to rules, behaves exactly as when literals are modalised, then $mr_3$ and $mr_5$ are not in conflict, and this holds precisely because
\begin{gather}
r_4\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} b\\
r_5\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg b
\end{gather}
are likewise compatible in virtue of the intuition that $\ensuremath{\mathsf{P}}\xspace b$ and $\ensuremath{\mathsf{P}}\xspace \neg b$ are consistent. A logic for meta-rules assuming that \ref{eq:1} and \ref{eq:-1} \emph{are not in conflict} is named \textbf{simple}.
Let us finally comment on the application of the $\otimes$ operator \cite{GovernatoriORS13} to rules. In the standard reading of this operator, a rule like
$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes c$ means that if $\xseq{a}{n}$ are the case, then $b$ is
obligatory, but if the obligation $b$ is not fulfilled, then the obligation
$c$ is activated and becomes in force until it is satisfied, or violated. Since we argued that a legislator $L_1$ can impose to another legislator $L_2$ to enact a norm $r_1$, we can imagine a scenario where $L_2$ violates $\ensuremath{\mathsf{O}}\xspace r$, but we can also imagine that $L_1$ has considered a sanction as the result of such a violation, or rather that another normative solution is advanced by $L_2$. Accordingly, expressions such as
\begin{gather}
mr_6\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (r_2:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\otimes (r_6\colon \xseq{b}{m} \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} d)
\end{gather}
are admissible.
The $\otimes$ operator can be also seen as a preference operator, where the first element is the most preferred and the last is the least of the acceptable options \cite{GovernatoriORS13,GovernatoriOSRC16}. According to this reading, meta-rule $mr_6$ establishes that the underlying normative systems should introduce rule $r_2$ (if such a rule is not already in the system). Alternatively, a less preferable but still acceptable outcome is to impose $r_6$. Generally, in real-life normative systems, the idea is to use this kind of structure to prescribe more and more stringent norms/policies.
\section{Synopsis and Structure of the paper}\label{sec:synopsis}
In devising a new computational logical system for meta-rules in deontic reasoning, we shall adopt the following road map and take the following steps:
\begin{itemize}
\item We offer a conceptual analysis of meta-reasoning in the normative domain;
\item We present the new logical framework;
\item We investigate the computational properties of the logic.
\end{itemize}
We thus commence with Section~\ref{sec:conceptual_framework} by setting up a conceptual framework where some basic philosophical problems are discussed; such a conceptual framework frames meta-rules in legal reasoning and is hence the basis for our formal choices. For the sake of illustration, we consider (1) the complexities behind the definition of norm change mechanisms in domains such as the law and (2) the interplay of norm dynamics and deontic concepts such as permissions and permissive norms.
Once this is done, we move on and provide the proper formalisation of the logical apparatus that models deontic and normative reasoning and that is enriched with meta-rules. Such a formalism must take as its starting point the above-mentioned conceptual analysis and be as rich as possible.
To help the reader, the logic is presented progressively. As the technicalities of the proof-theoretic part can be hard for a non-expert reader to grasp, we offer a gentle introduction to the formal machinery: in particular, we split the presentation and formalisation of the logic into two distinct sections.
Section \ref{sec:Method} presents the Defeasible Deontic Logic framework of \cite{GovernatoriORS13}. Its main purpose is twofold. First, it introduces the reader to all the notions (some specific to Defeasible Logic) that will be needed later, such as: (i) explaining the meaning of a rule being applicable or discarded and why this matters, (ii) representing the different types of rules -- strict rules, defeasible rules, and defeaters -- (iii) modelling the distinction between constitutive and deontic rules, (iv) showing how such a framework is capable of solving conflicts, and (v) handling complex reasoning patterns related to contrary-to-duty and compensatory obligations (as well as various types of deontic statements). Second, all those notions are formalised: the language, strict and defeasible rules vs constitutive, obligation and permission rules, and so on. For every definition and concept, we thoroughly explain its meaning and give examples.
Section~\ref{sec:Logic} follows, where we present Defeasible Deontic Logic with Meta-Rules: such a framework specialises the framework of \cite{GovernatoriORS13} by (i) representing the distinction between rules and meta-rules, (ii) introducing and formalising different \emph{variants} of \emph{conflict} among meta-rules, and naturally (iii) providing the proof theory extended with meta-rules and such conflicts. We end that section by proving the coherence and consistency of the logical apparatus.
Lastly, we prove that the logic proposed is correct and computationally efficient. Section~\ref{sec:Algo} thus advances algorithms that compute the extension of a given input logic and explain their behaviours through two examples. We end that section by studying their complexity and proving their correctness.
This work is concluded by Section~\ref{sec:RelatedWork}, that reviews current relevant literature, and by Section~\ref{sec:Conc} where we discuss how this research can be taken further.
\section{The Conceptual Framework}\label{sec:conceptual_framework}
It was argued in \cite{Governatori2009157} that meta-rules can describe the dynamics of any normative institution (such as a legal system) where norms are formalised and can be used to establish conditions for the creation and modification of other rules or norms. In turn, proper rules precisely correspond to norms in normative systems. In particular, it was pointed out that meta-rules can be represented in the language of Defeasible Logic as follows:\footnote{From now on, as formalised in the language in the two following sections, literals will be denoted by Latin letters, standard rules by Greek letters, and meta-rules by Greek letters preceded by lower-case `\emph{m}'.}
\begin{gather*}
m\alpha\colon\xseq{a}{n}\Rightarrow (\beta\colon\xseq{b}{m}\Rightarrow c),
\end{gather*}
precisely to grasp norm change mechanisms in the law. For instance, if the rule `$\beta\colon\xseq{b}{m}\Rightarrow c$' does not exist in the theory at hand, then the successful application of $m\alpha$ leads to deriving such a rule, which amounts to enacting $\beta$ as a new norm in the legal system. Similarly, if $\beta$ already exists but has the form `$\beta\colon b_1\Rightarrow c$', then the successful application of $m\alpha$ corresponds to modifying $\beta$ from `$\beta\colon b_1\Rightarrow c$' into `$\beta\colon\xseq{b}{m}\Rightarrow c$'.
In addition, with meta-rules we can admit the \emph{negation of rules}. If we are able to conclude that a (positive) rule holds, then it means that we can insert the rule (the content of the rule, with a specific name) into the system, and we can use the resulting rule to derive new conclusions. For a negated rule, the meaning is that it is not possible to obtain a rule with that specific content (this can be either formally prescribed for the whole rule or in a way that results irrespective of the name).
In this paper we go further. We discuss the conceptual and logical meanings of modalising rules via meta-rules, i.e., what we mean when we establish the obligatoriness of enacting a certain rule. Put in this way, making a rule obligatory amounts to firing a meta-rule like the following:
\begin{gather*}
m\zeta\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
Rule $\gamma$ is a standard deontic rule as described in previous works \cite{GovernatoriOSRC16,GovernatoriORS13,GovernatoriR08}: if $\xseq{a}{n}$ are the case, then rule $\gamma$ allows us to derive that $c$ is obligatory. If a similar intuition is extended to meta-rules, then $m\zeta$ asserts that enacting rule `$\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c$' is obligatory.
We should now ask: For whom is `$\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c$' obligatory? There is no unique answer, since the question has different meanings, depending on which normative domain is considered. Suppose we work, again, in the legal domain. In that case, we can imagine that a legislative authority imposes the obligation to enact a certain norm over another authority, lower in the legislative hierarchy but competent in enacting such a norm. Concrete examples occur in the law, when, for instance, European authorities impose on member states the implementation of European directives or when Constitutional courts require national parliaments to amend, or to integrate, the legislative corpus.
In this investigation, we aim to unravel some important aspects of meta-norms, looking closely at the \textit{mechanism} that allows meta-rules to develop rules or prevent their existence. We are not concerned here with the complex aspects underlying the \textit{goals} (and potentially the \textit{intentions}) of these meta-rules: this is a different issue, which has been partly explored in the recent literature \cite{Cristani201739} but is not the focus of this study.
We remark that structures like $m\zeta$ do not lead to a meaningless iteration of deontic modalities, an issue explored in early deontic literature such as in \cite{Barcan:1966,Goble:1966}. First of all, the obligatory enactment of a prescriptive norm is not equivalent to a simple iteration of obligations. In addition, while it looks meaningless to have an expression like $\ensuremath{\mathsf{O}}\xspace \ensuremath{\mathsf{O}}\xspace c$ standing for ``It is obligatory that it is obligatory that everyone keeps their promises'', an expression like ``Parking on highways ought to be forbidden'' makes sense \cite{Barcan:1966}. The latter example suggests that a norm forbidding parking on highways is obligatory, thus assuming a conceptual distinction between norms, on the one side, and obligations (and permissions) on the other side \cite{alchourron71normative,makinson99on}. Our general approach, hence, clearly distinguishes norms from obligations: obligations are the effects (i.e., the conclusions) of the application of prescriptive norms. Under this reading, the application of rule $\gamma$ leads to the obligation $\ensuremath{\mathsf{O}}\xspace c$, while the application of meta-rule $m\zeta$ leads to stating that norm $\gamma$ ought to be the case.
What about the permission? The elusive character of permission also affects the case of permissive meta-rules. Consider the meta-rule
\begin{gather*}
m\xi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
Does the well-known distinction between \emph{weak}
and \emph{strong} permission \cite{vonwright:1963} apply to this case as well? In the standard scenario, the former type of permission corresponds to saying that something is allowed by a code precisely when that code does not prohibit it. This idea is preserved when meta-rules are considered. Indeed, one may simply argue that, if the legal system \emph{does not} support the derivation of
\begin{gather*}
\ensuremath{\mathsf{O}}\xspace\neg(\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\\
\ensuremath{\mathsf{O}}\xspace(\varphi\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c),
\end{gather*}
then one may also conclude that the following holds:
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c).
\end{gather*}
For similar reasons, since weak permission is the dual of obligation -- i.e., $\ensuremath{\mathsf{O}}\xspace A =_{\mathit{def}} \neg \ensuremath{\mathsf{O}}\xspace \neg A$ -- imposing consistency means $\ensuremath{\mathsf{O}}\xspace A \to \ensuremath{\mathsf{P}}\xspace A$, and it hence looks reasonable that from `$\ensuremath{\mathsf{O}}\xspace(\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)$' we can obtain `$\ensuremath{\mathsf{P}}\xspace (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)$'.
As is well-known, the concept of strong permission is more complicated, as it amounts to saying that some $a$ is permitted by a code iff the code explicitly states that $a$ is permitted. Various sub-types of permissions can be thus identified, such as the following ones \cite{makinson-torre:2003,boella-torre-icail:2003,stolpe-jal:2010,GovernatoriORS13}:
\begin{description}
\item[Static permission:] $X$ is a static permission, wrt a normative system, when $X$ is derived from a strong permission, i.e., from an explicit permissive norm.
\item[Dynamic permission:] $X$ is a dynamic permission, wrt a normative system, when it guides the legislator by describing the limits on what may be prohibited typically without violating static permissions in the system.
\item[Exemption:] $X$ is an exemption, wrt a normative system, when it is an exception of a prohibition contained in the system.
\end{description}
It seems reasonable that all these types may be extended to the case of meta-rules. The first case simply amounts to when a rule is explicitly permitted in the theory or is derived from a permissive meta-rule (such as $m\xi$). The third case is when, for instance, we have in the system two rules, such as
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (\gamma:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\\
\ensuremath{\mathsf{O}}\xspace (\varphi:\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c),
\end{gather*}
and we know that $\gamma$ is stronger than $\varphi$, or, at a meta-rule level, we have
\begin{gather*}
m\xi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c) \\
m\chi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (\varphi\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c)
\end{gather*}
and, similarly, $m\xi$ is stronger than $m\chi$. The idea of dynamic permission consists in preventing the theory from deriving any incompatible deontic rule, for example, by setting that the meta-rule $m\xi$ is stronger than any other meta-rule supporting any conflicting rule.
The discussion above shows how crucial it is to establish some logical properties of permitted rules and determine when modalised rules conflict. Accordingly, desirable basic properties are the following:
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \vdash \neg\ensuremath{\mathsf{O}}\xspace(\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \label{eq:reviewer1}\\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\Box} b) \vdash \neg\ensuremath{\mathsf{O}}\xspace \neg (\xseq{a}{n}\Rightarrow_{\Box} b), \label{eq:reviewer2}
\end{gather}
\sloppy where (\ref{eq:reviewer2}) is trivially desirable because, for a rule $\phi$, $\ensuremath{\mathsf{P}}\xspace \phi$ should imply $\neg \ensuremath{\mathsf{O}}\xspace \neg \phi$. Instead, if the rule in the scope of the permission is a permissive norm (i.e., `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} b$'), then principle (\ref{eq:reviewer1}) does not hold in general\footnote{Although we may have concrete examples where it seems a bit odd that the legislator explicitly issues \emph{both} `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} b$' and `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg b$', this is deontically possible: see below. Many thanks to a reviewer for commenting on this point.}. Suppose that the normative system (for instance, based on constitutional values) permits the enactment of a norm that allows for the temporary limitation of liberties due to public health reasons. If so, it would not be contradictory that the normative system prescribes that the lack of limitation is also permitted. The following rules are in fact not necessarily incompatible:
\begin{gather*}
\mathit{public\_health}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \mathit{limit\_liberties}\\
\mathit{public\_health}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg \mathit{limit\_liberties}
\end{gather*}
which, essentially, amount to saying that, under certain conditions, $\ensuremath{\mathsf{P}}\xspace \mathit{limit\_liberties}$ and $\ensuremath{\mathsf{P}}\xspace \neg \mathit{limit\_liberties}$ are deontically compatible.
Other properties depend on the extent to which we assume the legislator to be rational in a subtler way. For instance, the following principle
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \vdash \neg\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \label{eq:DPermRuleInternal}
\end{gather}
can be rational, since it does not make much sense for a rational legislator to permit that two deontically incompatible rules are the case.
Let us discuss this case further. If we accept (\ref{eq:DPermRuleInternal}), one may similarly argue that we should also adopt the following (where $b$ is a literal):
\begin{gather}
\ensuremath{\mathsf{P}}\xspace b \vdash \neg\ensuremath{\mathsf{P}}\xspace \neg b \label{eq:D_Perm}
\end{gather}
which, however, cannot be accepted. Why? Because the deontic facultativeness of a certain $b$ precisely amounts to stating $\ensuremath{\mathsf{P}}\xspace b \wedge \ensuremath{\mathsf{P}}\xspace \neg b$, so (\ref{eq:D_Perm}) would make facultativeness inconsistent.
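Spelling the clash out in one step, and assuming both (\ref{eq:D_Perm}) and the facultativeness of $b$:
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace b \wedge \ensuremath{\mathsf{P}}\xspace \neg b \quad \text{($b$ is facultative)}\\
\ensuremath{\mathsf{P}}\xspace b \vdash \neg\ensuremath{\mathsf{P}}\xspace \neg b \quad \text{(instance of (\ref{eq:D_Perm}))}
\end{gather*}
so that $\ensuremath{\mathsf{P}}\xspace \neg b$ and $\neg\ensuremath{\mathsf{P}}\xspace \neg b$ would hold at the same time.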
What about facultative norms? We may have two cases like:
\begin{gather}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \qquad \ensuremath{\mathsf{P}}\xspace \neg (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \label{eq:FacultativeRule1} \\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \qquad \ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b). \label{eq:FacultativeRule2}
\end{gather}
Clearly, (\ref{eq:FacultativeRule1}) and (\ref{eq:FacultativeRule2}) are deontically different. While the former states that
`$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$' is facultative, the latter states that two deontically incompatible and different rules are permitted.
What is facultative according to (\ref{eq:FacultativeRule2})? Certainly we cannot say that $\ensuremath{\mathsf{O}}\xspace b$ is facultative, since we would need to permit two rules, one supporting $\ensuremath{\mathsf{O}}\xspace b$ -- which is the case: `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$' -- and one supporting $\neg \ensuremath{\mathsf{O}}\xspace b$ -- which we do not have because `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b$' rather supports $\ensuremath{\mathsf{O}}\xspace \neg b$. Principle (\ref{eq:FacultativeRule2}) licenses that both $\ensuremath{\mathsf{O}}\xspace b$ and $\ensuremath{\mathsf{O}}\xspace \neg b$ are permitted given `$\xseq{a}{n}$'. Is it factually possible? Yes, it is. Is it deontically admissible? Perhaps, it is. Is it always deontically rational? We sometimes have arguments to answer: No.
Consider the idea of static permission. Assume we would like to avoid having two deontically incompatible rules in the system. Then, it would not be reasonable to explicitly permit that both rules are the case. As in sceptical defeasible reasoning, if we have arguments for deriving $b$ and $\neg b$, we refrain from concluding anything on the assumption that $b$ and $\neg b$ are in contradiction; we could reason similarly when two conflicting deontic rules like `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$' and `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b$' are taken into account.\footnote{One may argue that, if the conflict between `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$' and `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b$' cannot be solved, this would imply (but would not be equivalent in Defeasible Logic to) $b$ being weakly permitted. Therefore, we would make a weird use of static positive permissions to state that $b$ is weakly permitted.}
Consider instead the idea of dynamic permission: the aim here is limiting the legislator in dynamically adding prohibitions. If so, we would have that
\begin{gather*}
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b) \text{ prevents the derivation of }\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b\\
\ensuremath{\mathsf{P}}\xspace (\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b) \text{ prevents the derivation of }\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b.
\end{gather*}
If we have it (i.e., we prevent both derivations), it means that we want the system to be deontically indifferent with respect to $b$ whenever `$\xseq{a}{n}$' are the case. Suppose, however, that we also have two other norms like the following:
\begin{gather*}
\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c \\
\ensuremath{\mathsf{O}}\xspace c \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b.
\end{gather*}
Despite the fact that we prevent the derivation of both `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b$' and `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b$', can we still say that the system is deontically indifferent with respect to $b$ given `$\xseq{a}{n}$'?
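To make the worry concrete, the following sketch (a naive forward-chaining closure in Python; the encoding of norms as plain pairs and the procedure itself are our own illustrative assumptions, not the proof theory developed below) shows that even when both direct rules for $b$ are blocked, $\ensuremath{\mathsf{O}}\xspace b$ remains derivable through $c$:
\begin{verbatim}
# Minimal forward-chaining sketch (illustrative encoding, not the formal
# proof theory): the two dynamic permissions block the direct rules
# "a =>_O b" and "a =>_O ~b", so neither appears below. Nevertheless,
# Ob is still reachable through the remaining two norms.

facts = {"a"}                  # the antecedents a_1, ..., a_n, collapsed into "a"
rules = [
    ({"a"}, "Oc"),             # a_1, ..., a_n =>_O c
    ({"Oc"}, "Ob"),            # O c =>_O b
]

derived = set(facts)
changed = True
while changed:                 # naive closure: apply rules up to a fixpoint
    changed = False
    for body, head in rules:
        if body <= derived and head not in derived:
            derived.add(head)
            changed = True

print("Ob" in derived)         # True: the system is not indifferent about b
\end{verbatim}
So preventing the two direct rules does not, by itself, secure deontic indifference about $b$.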
Accordingly, under the reading above, one may prudentially establish that the following meta-rules are somehow incompatible:
\begin{gather}
m\xi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c) \label{eq:1} \\
m\chi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} (\varphi\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg c) \label{eq:-1}
\end{gather}
In the above discussion, we assume that $\ensuremath{\mathsf{P}}\xspace$ is not the dual of $\ensuremath{\mathsf{O}}\xspace$, but rather another $\Box$-operator. A logic for meta-rules assuming that (\ref{eq:1}) and (\ref{eq:-1}) \emph{are in conflict} is named \textbf{cautious}.
On the contrary, if we believe that the permission, as applied to rules, behaves exactly as when literals are modalised, then $m\xi$ and $m\chi$ are not in conflict, and this holds precisely because
\begin{gather}
\xi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} b\\
\psi\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg b
\end{gather}
are likewise compatible in virtue of the intuition that $\ensuremath{\mathsf{P}}\xspace b$ and $\ensuremath{\mathsf{P}}\xspace \neg b$ are consistent. A logic for meta-rules assuming that (\ref{eq:1}) and (\ref{eq:-1}) \emph{are not in conflict} is named \textbf{simple}.
Finally, let us comment on applying the $\otimes$ operator \cite{GovernatoriORS13} to rules. In the standard reading of this operator, a rule like `$\xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes c$' means that if `$\xseq{a}{n}$' are the case, then $b$ is obligatory; on the contrary, if the obligation $b$ is not fulfilled, then the obligation $c$ is activated, and becomes in force until it is satisfied or violated. Since we argued that a legislator $L_1$ can impose on another legislator $L_2$ to enact a norm $\mu$, we can imagine a scenario where $L_2$ violates $\ensuremath{\mathsf{O}}\xspace\mu$. Still, we can also imagine that $L_1$ has considered a sanction as the result of such a violation, or rather that another normative solution is advanced by $L_2$. Accordingly, expressions such as
\begin{gather}
m\alpha\colon \xseq{a}{n}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} (\gamma\colon\xseq{b}{m}\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c)\otimes (\zeta\colon \xseq{b}{m} \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} d)
\end{gather}
are admissible.
The $\otimes$ operator can also be seen as a preference operator, where the first element is the most preferred, and the last is the least of the acceptable options \cite{GovernatoriORS13,GovernatoriOSRC16}. According to this reading, meta-rule $m\alpha$ establishes that the underlying normative system should introduce rule $\gamma$ (if such a rule is not already in the system). Alternatively, a less preferable but still acceptable outcome is to impose $\zeta$. Generally, in real-life normative systems, the idea is to use this kind of structure to prescribe more and more stringent norms/policies.
\subsection{Conflicting Rules}
\label{sub:conflicts}
The richness of this extension makes the logic very expressive and leads to examining a potentially large variety of conflicts between meta-rules. Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta).
\end{gather*}
On the one hand, we can simply extend to rules the intuition covering literals: as literal $s$ conflicts with another literal $l$ whenever $s$ is the complement of $l$ (i.e., $s=\ensuremath{\mathcal{\sim }} l$), rule $\nu$ conflicts with rule $\zeta$ whenever $\beta=\ensuremath{\mathcal{\sim }} \alpha$ (thus, when $\alpha =\neg \beta$ or $\beta= \neg \alpha$).
What does it mean to apply $\neg$ to a rule? In our context -- normative reasoning -- if a rule $\alpha$ corresponds to a norm, then $\neg \alpha$ means that such a norm does not exist in the normative system or is removed.
\begin{example}\label{example:standard}
Consider this case from Italian Law. The Parliament issues the following Legislative Act (n. 124, 23 July 2008):
\begin{quote}
[{\bf Target of the modification}]
Except for the cases mentioned under Articles 90 and 96 of the Constitution, criminal proceedings against the President of the Republic, the President of the Senate, the President of the House of Representatives, and the Prime Minister, are suspended for the entire duration of tenure. [\dots]
\end{quote}
Suppose the Constitutional Court declares that the legislative act above is illegitimate and states its annulment. This means that such a norm no longer exists in the system. Hence, if issuing the norm corresponds to
\begin{gather*}
\zeta\colon \mathit{Parliament}, \mathit{Promulgation} \Rightarrow_C (\mathit{L.\; 124}\colon\mathit{Crime}, \mathit{Tenure} \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \mathit{Suspended})
\end{gather*}
then the annulment of $\mathit{L.\; 124}$ can be reconstructed by having, in the theory, a meta-rule such as
\begin{gather*}
\nu\colon \mathit{Constitutional\_Court} \Rightarrow_C \neg (\mathit{L.\; 124}\colon\mathit{Crime}, \mathit{Tenure} \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \mathit{Suspended}).
\end{gather*}
\end{example}
Clearly, $\nu$ conflicts with $\zeta$.
If conflicts between meta-rules occur only in cases like this, the resulting variant of the logic simply extends the standard intuitions of rule conflict in DDL to meta-rules. On the basis of the discussion in Section \ref{sec:conceptual_framework}, we call this variant \textit{simple}.
However, we can inspect the deontic meaning of rules occurring in the head of meta-rules, an inspection that allows us to introduce a second variant of the logic with meta-rules. Again based on the discussion in Section \ref{sec:conceptual_framework}, this variant, which we call \textit{cautious}, considers the following scenarios of when meta-rules conflict.
\paragraph{Case 1.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\Box} b)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\Box} \neg b)
\end{gather*}
The two meta-rules can be seen as incompatible (especially when $\Rightarrow_{\Box}$ is $\Rightarrow_{\ensuremath{\mathsf{O}}\xspace}$), since rules $\alpha$ and $\beta$ have exactly the same antecedent and are labelled by the same $\Box$-modality, but support complementary conclusions. A variant is
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_{\Box} (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b)\\
\nu\colon A(\nu) \Rightarrow_{\Box} (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \neg b)
\end{gather*}
where the modalities of $\alpha$ and $\beta$ are different (specifically, one is obligation and the other permission), but we have a conflict as well.
\paragraph{Case 2.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes c)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b\otimes d).
\end{gather*}
This is somehow similar to the previous case, but (i) it focuses on obligation rules, and (ii) the heads of the rules are $\otimes$-expressions. Again, $\alpha$ and $\beta$ have exactly the same antecedent, but, although the heads state consistent compensations, their primary obligations are incompatible. Therefore, $\zeta$ and $\nu$ are in conflict because they support the conclusions $\ensuremath{\mathsf{O}}\xspace b$ and $\ensuremath{\mathsf{O}}\xspace \neg b$.
\paragraph{Case 3.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes d)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c\otimes \ensuremath{\mathcal{\sim }} d).
\end{gather*}
The analysis follows that of Case 2 above, but for opposite reasons: the two meta-rules are not in conflict. Rules $\alpha$ and $\beta$ have exactly the same antecedent, but, although the heads state inconsistent compensations, their primary obligations are compatible. Ergo, $\zeta$ and $\nu$ are \emph{not} in conflict because they support the conclusions $\ensuremath{\mathsf{O}}\xspace b$ and $\ensuremath{\mathsf{O}}\xspace c$.
\paragraph{Case 4.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \neg b\otimes d).
\end{gather*}
This is similar to Case 2; hence $\zeta$ and $\nu$ are in conflict.
\paragraph{Case 5.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes d).
\end{gather*}
Strictly speaking, these meta-rules are not in conflict as they support the derivation of rules both supporting $\ensuremath{\mathsf{O}}\xspace b$. On a different level -- the compliance angle -- we can argue that a theory containing both meta-rules is somehow odd since $\beta$ specifies that the violation of $b$ is compensated by $d$; hence, a situation $s$ satisfying $a$, $\neg b$ and $d$ complies with $\beta$. On the contrary, $\alpha$ does not admit a compensation for the violation of $b$, and the situation $s$ contravenes $\alpha$. Therefore, a system where both $\alpha$ and $\beta$ are in force admits a situation that is, at the same time, compliant and non-compliant.
\paragraph{Case 6.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes d)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes \neg d)
\end{gather*}
Following the analysis of other cases, the two meta-rules conflict. Rules $\alpha$ and $\beta$ have exactly the same antecedent and prove the same primary obligation, but they also state inconsistent compensations. In fact, if $\ensuremath{\mathsf{O}}\xspace b$ is violated, $\alpha$ and $\beta$ support opposite conclusions: $\ensuremath{\mathsf{O}}\xspace d$ and $\ensuremath{\mathsf{O}}\xspace \neg d$.
\paragraph{Case 7.}
Consider the following meta-rules:
\begin{gather*}
\zeta\colon A(\zeta) \Rightarrow_\Box (\alpha\colon a\Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes c)\\
\nu\colon A(\nu) \Rightarrow_\Box (\beta\colon a \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} b\otimes \neg d).
\end{gather*}
The two meta-rules are not in conflict, even though they can be seen as deontically odd: the `apparent oddity' lies in the fact that the same violation of $\ensuremath{\mathsf{O}}\xspace b$ leads to different compensations.
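Before moving to a further legal example, the case analysis above can be condensed into a single test. The following Python sketch is only meant to summarise the intuitions of the cautious variant: the data encoding and the helper names are hypothetical, the $\Box$-modes are not compared (so the obligation/permission variant of Case 1 is left out), and the simple variant would instead treat two rules as conflicting only when one is the negation of the other.
\begin{verbatim}
# Sketch of the "cautious" conflict test distilled from Cases 1-7.
# A rule is modelled as (antecedent, mode, chain), where chain lists the
# literals of the head's (possibly unary) reparation chain. The encoding
# and helper names are our own simplifying assumptions.

def neg(lit):
    """Complement of a literal: b <-> ~b."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def conflict(rule1, rule2):
    """Same antecedent, and the heads clash at the first position where
    the chains differ (Cases 1, 2, 4, 6). Divergence on compatible
    elements (Cases 3, 7) and prefix extension (Case 5) do not clash."""
    ant1, _, chain1 = rule1
    ant2, _, chain2 = rule2
    if ant1 != ant2:
        return False
    for x, y in zip(chain1, chain2):
        if x == y:
            continue              # agreement so far: keep scanning the chains
        return x == neg(y)        # first divergence: conflict iff complementary
    return False                  # one chain extends the other (Case 5: odd, not a conflict)

a = frozenset({"a"})
cases = {
    "2": ((a, "O", ["b", "c"]), (a, "O", ["~b", "d"])),   # conflict
    "3": ((a, "O", ["b", "d"]), (a, "O", ["c", "~d"])),   # no conflict
    "6": ((a, "O", ["b", "d"]), (a, "O", ["b", "~d"])),   # conflict
    "7": ((a, "O", ["b", "c"]), (a, "O", ["b", "~d"])),   # no conflict
}
for name, (r1, r2) in cases.items():
    print("Case", name, "->", conflict(r1, r2))
\end{verbatim}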
Consider this second legal example to further illustrate the intuitions behind the cautious logic's perspective.
\begin{example}\label{ex:derogation1}
Consider Art. 3 of the Italian Constitution:
\begin{quote}
Article 3\\
All citizens have equal social status and are equal before the
law, without regard to their sex, race, language, religion, political
opinions, and personal or social conditions.
\end{quote}
Suppose the Constitution is amended by stating the following:
\begin{quote}
In derogation to the provisions set out in Article 3, paragraph 1, of
the Constitution, EU citizens may have different social statuses when Italy is no longer a member state of the EU.
\end{quote}
If Art.~3 can be represented as follows
\[
\mathit{Art.\; 3}\colon \mathit{Citizen} \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} \mathit{Equal\_status}
\]
its amendment corresponds to applying the following meta-rule:
\[
\begin{array}{l}
derog_{Art.\;3}\colon \neg\mathit{EU} \Rightarrow_C
(\mu\colon\mathit{EU\_Citizen} \Rightarrow_{\ensuremath{\mathsf{P}}\xspace} \ensuremath{\mathcal{\sim }} \mathit{Equal\_status}).
\end{array}
\]
The example corresponds to the second variant of Case 1 above. Similar examples can be elaborated to illustrate the other scenarios previously discussed.
\end{example}
\subsection{Basics}
\label{sec:commonV1V2}
The new language accommodates both rules and meta-rules, where a meta-rule is a rule whose elements are other rules. We now provide the formal machinery to capture such notions properly. The main idea is to re-use as much as possible the definitions and constructions given in Section~\ref{sec:Method} and to revise/extend them to include the new notions. In this way, the variants of the Defeasible Deontic Logic with meta-rules we are going to present in this section are conservative extensions of the logic of Section~\ref{sec:Method}. Accordingly, for instance, there is no need to give a new definition of what a derivation is or to adjust the conditions when literals and deontic literals are provable.
For the terminology used, we reserve the term \emph{standard rules} for expressions satisfying Definition~\ref{def:Rule} (rules containing only literals), and we call \emph{meta-rules} expressions containing other (standard) rules. Notice that we do not allow for the nesting of meta-rules.
In most cases, we do not need to distinguish between standard and meta-rules: Definition~\ref{def:RuleFinal} captures both into a single definition of what a rule is (when we speak of a generic rule, it can be either a standard rule or a meta-rule).
\begin{definition}[Rule Expressions]\label{def:RuleExpression}
Given a standard rule $\alpha$, $\alpha$ and $\neg \alpha$ are \emph{rule expressions}; we use \ensuremath{\mathrm{REx}}\xspace to denote the set of rule expressions.
If $\beta$ is a rule expression, then, for $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$, $\Box \beta$ and $\neg \Box \beta$ are \emph{deontic rule expressions}.
\end{definition}
We also extend the definition of $\otimes$-expression allowing every element to be either a plain literal or a rule expression.
Accordingly, the following is an $\otimes$-expression:
\[
a \otimes (\beta\colon b \Rightarrow_\ensuremath{\mathsf{O}}\xspace c) \otimes (\gamma\colon d,\ensuremath{\mathsf{O}}\xspace a\Rightarrow_\ensuremath{\mathsf{O}}\xspace e\otimes \neg f).
\]
Note that we can mix rules and literals in reparation chains. This allows us to represent situations where, for instance, an entity needs to enforce a particular policy and is subject to a sanction if it does not.
Consequently, we redefine the notion of rule as follows.
\begin{definition}[Rule]\label{def:RuleFinal}
A \emph{rule} is an expression $\alpha: A(\alpha) \hookrightarrow_\Box C(\alpha)$, where
\begin{enumerate}
\item $\alpha \in \ensuremath{\mathrm{Lab}}\xspace$ is the unique name of the rule.
\item $A(\alpha)$ is a (possibly empty) set of elements $a_1, \dots, a_n$, where each $a_i$ is either a literal, a rule expression, or a deontic rule expression;
\item $\hookrightarrow \in \set{\Rightarrow, \leadsto}$ with the same meaning as before;
\item $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$;
\item its consequent $C(\alpha)$, which is either
\begin{enumerate}
\item a single plain literal $l \in \ensuremath{\mathrm{PLit}}\xspace$, or a rule expression $\beta$, if either (i) $\hookrightarrow \equiv \leadsto$ or (ii) $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{P}}\xspace}$, or
\item an $\otimes$-expression, if $\Box \equiv \ensuremath{\mathsf{O}}\xspace$.
\end{enumerate}
\end{enumerate}
\end{definition}
We extend the convention used to represent the complement of a literal to the cases of (deontic) rule expressions: (i) if $\beta=\alpha$ then $\ensuremath{\mathcal{\sim }} \beta= \neg\alpha$, and (ii) if $\beta=\neg\alpha$ then $\ensuremath{\mathcal{\sim }}\beta=\alpha$.
Definition~\ref{def:RuleFinal} allows us to reuse the notation specified to identify particular sets of rules given in Section~\ref{sec:Method}. Moreover, we can continue to use the notion of defeasible deontic theory just as defined in Definition \ref{def:DeonticTheory}, but using the revised definition of rule we just gave. We extend the terminology and say that a rule $\alpha$ \emph{appears in a rule} $\beta$, if $\alpha\in A(\beta)\cup C(\beta)$, and that $\alpha$ \emph{appears in a set of rules} $S$, if either $\alpha\in S$, or $\exists\beta\in S$ such that $\alpha$ appears in $\beta$.
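To fix intuitions, here is a minimal sketch of how Definition~\ref{def:RuleFinal}, the complement convention, and the `appears in' terminology could be encoded; all class and field names are our own illustrative assumptions, not part of the formal language.
\begin{verbatim}
# A minimal Python rendering of the revised definition of rule and of the
# complement convention for rule expressions (illustrative names only).

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Rule:
    label: str            # the unique name alpha in Lab
    antecedent: tuple     # literals, rule expressions, deontic rule expressions
    arrow: str            # "=>" (defeasible) or "~>" (defeater)
    mode: str             # "C", "O" or "P"
    head: tuple           # a unary head, or a reparation chain when mode == "O"

@dataclass(frozen=True)
class NegRule:            # the rule expression ~alpha
    rule: Rule

RuleExpr = Union[Rule, NegRule]

def complement(beta: RuleExpr) -> RuleExpr:
    """If beta = alpha then ~beta = neg alpha; if beta = neg alpha then ~beta = alpha."""
    return beta.rule if isinstance(beta, NegRule) else NegRule(beta)

def appears_in(alpha: Rule, beta: Rule) -> bool:
    """alpha appears in beta iff it occurs in beta's antecedent or head."""
    return alpha in beta.antecedent or alpha in beta.head

# A meta-rule whose (unary) head is a standard deontic rule:
gamma = Rule("gamma", ("b1", "bm"), "=>", "O", ("c",))
m_zeta = Rule("m_zeta", ("a1", "an"), "=>", "O", (gamma,))
print(appears_in(gamma, m_zeta))   # True
\end{verbatim}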
As standard rules may now be conclusions of meta-rules, we have to update the definition of tagged modal formula given in Definition~\ref{def:TagFormula} as follows.
\begin{definition}[Tagged modal formula]\label{def:MetaConclusion}
An expression is a \emph{tagged modal formula} if satisfies Definition~\ref{def:TagFormula}, or has one of the following forms:
\begin{itemize}
\item $+\partial^m_\Box \alpha$: meaning that rule $\alpha$ is defeasibly provable with mode $\Box$, for $\Box\in\set{\ensuremath{\mathsf{C}}\xspace,\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$;
\item $-\partial^m_\Box \alpha$: meaning that rule $\alpha$ is defeasibly refuted with mode $\Box$, for $\Box\in\set{\ensuremath{\mathsf{C}}\xspace,\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$.
\end{itemize}
\end{definition}
The intuition here is the same as for literals. In a defeasible theory with meta-rules, the conclusion of a derivation can be either a (deontic) literal or a (deontic) rule. Accordingly, when a theory proves $+\partial^m_{\ensuremath{\mathsf{O}}\xspace}\alpha$, it is obligatory to have the rule $\alpha$; when we prove a rule with mode $\ensuremath{\mathsf{C}}\xspace$, the meaning is that the rule is in the system and can produce its conclusion; $-\partial^m_{\ensuremath{\mathsf{C}}\xspace}\alpha$, on the other hand, indicates that rule $\alpha$ is not present in the underlying normative system.
What about the case when we have the negation of a rule, $\neg\alpha$? For $+\partial^m_\ensuremath{\mathsf{C}}\xspace\neg\alpha$, two interpretations are possible: (i) an affirmative statement that $\alpha$ is not in the system, and (ii) that $\alpha$ has been removed from the rules of the system. The second reading was used in \cite{Governatori2009157} to model abrogation and annulment. For \ensuremath{\mathsf{O}}\xspace, a prohibition is defined as $\ensuremath{\mathsf{O}}\xspace\neg$. Accordingly, $+\partial^m_{\ensuremath{\mathsf{O}}\xspace}\neg\alpha$ specifies that, in the normative system, it is forbidden to have rule $\alpha$. For instance, Article 27 of the Italian Constitution makes capital punishment not admissible in the Italian Legal System.
The definition of Proof does not change wrt Definition~\ref{def:Proof}. However, we need to reformulate the definitions of a rule being applicable/discarded to accommodate the case when a standard rule (or, more properly, a rule expression) is in the antecedent of a meta-rule. A meta-rule is applicable when (i) the rule itself is provable and (ii) the rule expressions appearing in it are provable (or discarded) with the appropriate mode.
\begin{definition}[Applicability]\label{def:MetaApplicability}
Assume a deontic defeasible theory $D$ with $D = (\ensuremath{F}\xspace, R, >)$. We say that rule $\alpha \in R^\ensuremath{\mathsf{C}}\xspace \cup R^\ensuremath{\mathsf{P}}\xspace$ is \emph{applicable} at $P(n+1)$, iff $+\partial^{m}_{\ensuremath{\mathsf{C}}\xspace}\alpha\in P(1..n)$ and for all $a \in A(\alpha)$ Conditions~\ref{item:l}--\ref{item:Diamondl} of Definition~\ref{def:StandardApplicability} hold and
\begin{enumerate}
\item\label{item:Beta} if $a = \beta \in \ensuremath{\mathrm{REx}}\xspace$, then $+\partial_\ensuremath{\mathsf{C}}\xspace \beta \in P(1..n)$,
\item if $a = \Box \beta$, $\beta \in \ensuremath{\mathrm{REx}}\xspace$, then $+\partial_\Box \beta \in P(1..n)$, $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$,
\item\label{item:nonDiamondnonBeta} if $a = \neg\Box \beta$, $\beta \in \ensuremath{\mathrm{REx}}\xspace$, then $-\partial_\Box \beta \in P(1..n)$, $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$.
\end{enumerate}
We say that rule $\alpha \in R^\ensuremath{\mathsf{O}}\xspace$ is \emph{applicable at index} $i$ \emph{and} $P(n+1)$ iff $+\partial^{m}_{\ensuremath{\mathsf{C}}\xspace}\alpha\in P(1..n)$ and Conditions~\ref{item:l}--3 of Definition~\ref{def:StandardApplicability} and Conditions \ref{item:Beta}-\ref{item:nonDiamondnonBeta} above hold, and for all $c_j \in C(\alpha),\, j < i$
\begin{enumerate}[resume]
\item
\begin{enumerate}[label={\alph*.}]
\item if $c_j = l$, then $+\partial_\ensuremath{\mathsf{O}}\xspace l \in P(1..n)$ and $+\partial_\ensuremath{\mathsf{C}}\xspace \ensuremath{\mathcal{\sim }} l \in P(1..n)$, $l \in \ensuremath{\mathrm{PLit}}\xspace$;
\item if $c_j = \beta$, then $+\partial_\ensuremath{\mathsf{O}}\xspace \beta \in P(1..n)$ and $-\partial_\ensuremath{\mathsf{C}}\xspace \beta \in P(1..n)$.
\end{enumerate}
\end{enumerate}
\end{definition}
Note that, whilst Condition 4.a is the same as Condition 4 of Definition 4, when $c_j$ is a rule we require in Condition 4.b only $-\partial_\ensuremath{\mathsf{C}}\xspace \beta$ and not $+\partial_\ensuremath{\mathsf{C}}\xspace\ensuremath{\mathcal{\sim }}\beta$. Both conditions indicate that rule $\beta$ is not an effective rule. As discussed in Footnote~\ref{foot:violation}, the former means that there is no evidence that the rule is in the system, while the latter guarantees that the rule is removed from, or prevented from being in, the system. In a positivist view, a normative system explicitly gives the norms that hold in it. Consequently, the set of effective norms (the norms that can produce an effect) is constrained by the norms that appear in the normative system. We have argued that $-\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$ states that rule/norm $\alpha$ is not in the system (or that it is not effective). Therefore, this is enough to contravene the obligation for the norm to be effective in the normative system.
\begin{definition}[Discardability]\label{def:MetaDiscardability}
Assume a deontic defeasible theory $D$ with $D = (\ensuremath{F}\xspace, R, >)$. We say that rule $\alpha \in R^\ensuremath{\mathsf{C}}\xspace \cup R^\ensuremath{\mathsf{P}}\xspace$ is \emph{discarded} at $P(n+1)$, iff $-\partial^{m}_{\ensuremath{\mathsf{C}}\xspace}\alpha\in P(1..n)$ or at least one of the Conditions~\ref{item:notl}--\ref{item:notDiamondl} of Definition~\ref{def:StandardDiscardability} holds, or there exists $a \in A(\alpha)$ such that
\begin{enumerate}
\item\label{item:notBeta} if $a = \beta \in \ensuremath{\mathrm{REx}}\xspace$, then $-\partial_\ensuremath{\mathsf{C}}\xspace \beta \in P(1..n)$,
\item if $a = \Box \beta$, $\beta \in \ensuremath{\mathrm{REx}}\xspace$, then $-\partial_\Box \beta \in P(1..n)$, $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$
\item\label{item:notDiamondBeta} if $a = \neg\Box \beta$, $\beta \in \ensuremath{\mathrm{REx}}\xspace$, then $+\partial_\Box \beta \in P(1..n)$, $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$.
\end{enumerate}
We say that rule $\alpha \in R^\ensuremath{\mathsf{O}}\xspace$ is \emph{discarded at index} $i$ \emph{and} $P(n+1)$ iff $-\partial^{m}_{\ensuremath{\mathsf{C}}\xspace}\alpha\in P(1..n)$ or at least one of the Conditions~\ref{item:notl}--3 of Definition~\ref{def:StandardDiscardability} or of the Conditions~\ref{item:notBeta}--\ref{item:notDiamondBeta} above holds or there exists $c_j \in C(\alpha),\, j < i$ such that
\begin{enumerate}[resume]
\item
\begin{enumerate}[label={\alph*.}]
\item if $c_j = l$, then $-\partial_\ensuremath{\mathsf{O}}\xspace l \in P(1..n)$ or $-\partial_\ensuremath{\mathsf{C}}\xspace \ensuremath{\mathcal{\sim }} l \in P(1..n)$, $l \in \ensuremath{\mathrm{PLit}}\xspace$;
\item if $c_j = \beta$, then $-\partial_\ensuremath{\mathsf{O}}\xspace \beta \in P(1..n)$ or $+\partial_\ensuremath{\mathsf{C}}\xspace \beta \in P(1..n)$.
\end{enumerate}
\end{enumerate}
\end{definition}
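To make the mechanics of Definitions~\ref{def:MetaApplicability} and \ref{def:MetaDiscardability} concrete, the following Python sketch encodes rules as records and checks applicability against a set of already-derived tagged conclusions. The encoding is entirely ours and purely illustrative (the \texttt{Rule} record, the tag strings such as \texttt{'+dC'}, and the tuple encoding of deontic literals are assumptions, not part of the formal apparatus); the treatment of plain literals in the body is simplified, and discardability would be obtained analogously as the strong negation of the conditions below.
\begin{verbatim}
# Illustrative encoding (ours, not the paper's): literals are strings,
# '~' marks the complement, and a Rule records a label, a mode, an
# antecedent, and an otimes-chain as consequent.
from dataclasses import dataclass

def compl(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

@dataclass(frozen=True)
class Rule:
    label: str
    mode: str          # 'C', 'O' or 'P'
    body: tuple        # A(alpha): literals, Rules, or ('O'|'P', x, bool)
    head: tuple        # C(alpha): literals and/or Rules
    neg: bool = False  # True encodes the expression ~(alpha: body => head)

# `proved` collects the tagged conclusions derived so far, e.g.
# ('+dC', x), ('+dO', x), ('-dP', x), ('+dmC', rule).
def body_applicable(rule, proved):
    if ('+dmC', rule) not in proved:   # the (meta-)rule is itself provable
        return False
    for a in rule.body:
        if isinstance(a, Rule):        # Condition 1: a rule expression
            ok = ('+dC', a) in proved
        elif isinstance(a, tuple):     # Conditions 2-3: Box x / ~Box x
            box, x, positive = a
            ok = (('+d' + box if positive else '-d' + box), x) in proved
        else:                          # a plain literal (simplified)
            ok = ('+dC', a) in proved
        if not ok:
            return False
    return True

def applicable_at(rule, i, proved):
    # For obligation rules, every element preceding index i must be a
    # violated obligation (Conditions 4.a/4.b).
    if not body_applicable(rule, proved):
        return False
    for c in rule.head[:i]:
        if isinstance(c, Rule):
            if ('+dO', c) not in proved or ('-dC', c) not in proved:
                return False
        elif ('+dO', c) not in proved or ('+dC', compl(c)) not in proved:
            return False
    return True
\end{verbatim}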
Before moving to the presentation of the two variants of the logic, we introduce a key feature of our logical apparatus: the notion of \emph{conflict of rules}. The definition below formalises the intuitions discussed at the beginning of this section.
\begin{definition}[Simple Conflict]\label{def:SimpleConflict}
Given two rules or rule expressions $\alpha$ and $\beta$, we say that $\alpha$ and $\beta$ \emph{simply conflict} iff
\begin{enumerate}
\item
\begin{description}
\item $\alpha\colon A(\alpha) \hookrightarrow_{\Box} C(\alpha)$
\item $\beta$ is $\ensuremath{\mathcal{\sim }}(\gamma\colon A(\alpha) \hookrightarrow_{\Box} C(\alpha))$
\end{description}
\item there exist $\nu\in C(\alpha)$ at index $i$ and $\zeta\in C(\beta)$ at index $j$ such that $\nu$ and $\zeta$ simply conflict.
\end{enumerate}
\end{definition}
\begin{definition}[Cautious Conflict]\label{def:RationalConflict}
Given two rules or rule expressions $\alpha$ and $\beta$, we say that $\alpha$ and $\beta$ \emph{cautiously conflict} iff
\begin{enumerate}
\item
\begin{description}
\item $\alpha\colon A(\alpha) \hookrightarrow_{\Box} C(\alpha)$
\item $\beta$ is $\ensuremath{\mathcal{\sim }}(\gamma\colon A(\alpha) \hookrightarrow_{\Box} C(\alpha))$
\end{description}
\item
\begin{description}
\item $\alpha\colon A(\alpha) \hookrightarrow_{\Box} C(\alpha)$,
\item $\beta\colon A(\alpha) \hookrightarrow_\Box \ensuremath{\mathcal{\sim }} C(\alpha)$;
\end{description}
\item
\begin{description}
\item $\alpha\colon A(\alpha) \hookrightarrow_{\ensuremath{\mathsf{O}}\xspace} C(\alpha)$,
\item $\beta\colon A(\alpha) \hookrightarrow_{\ensuremath{\mathsf{P}}\xspace} \ensuremath{\mathcal{\sim }} C(\alpha)$;
\end{description}
\item
\begin{description}
\item $\alpha\colon A(\alpha) \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} c_1\otimes\dots\otimes c_m$,
\item $\beta\colon A(\alpha) \Rightarrow_{\ensuremath{\mathsf{O}}\xspace} d_1\otimes\dots\otimes d_n$, and
\begin{enumerate}
\item $\exists i \leq m, n,\ \forall k < i.\, c_k = d_k \wedge c_i = \ensuremath{\mathcal{\sim }} d_i$; or
\item $m < n \wedge \forall i \leq m$ $c_i=d_i$.
\end{enumerate}
\end{description}
\item there exist $\nu\in C(\alpha)$ at index $i$ and $\zeta\in C(\beta)$ at index $j$ such that $\nu$ and $\zeta$ cautiously conflict.
\end{enumerate}
\end{definition}
Given two rules or rule expressions $\alpha$ and $\beta$ that (simply/cautiously) conflict, we will say that $\alpha$ (simply/cautiously) conflicts with $\beta$ (and the other way around).
Notice that the definitions above apply to both standard rules and meta-rules; only the recursive conditions (Condition 2 of Definition~\ref{def:SimpleConflict} and Condition 5 of Definition~\ref{def:RationalConflict}) are limited to meta-rules.
It is easy to verify that every simple conflict is also a cautious conflict.
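The two conflict relations can also be read operationally. The following sketch, reusing the \texttt{Rule} encoding and \texttt{compl} introduced above, checks the clauses of Definitions~\ref{def:SimpleConflict} and \ref{def:RationalConflict}; for readability it restricts Cases 2 and 3 of the cautious variant to single-element consequents, and it is again merely our illustration, not a reference implementation.
\begin{verbatim}
def same_content(a, b):  # same mode, antecedent and consequent (labels free)
    return a.mode == b.mode and a.body == b.body and a.head == b.head

def heads_opposite(h1, h2):    # single-literal consequents, complementary
    return (len(h1) == len(h2) == 1 and isinstance(h1[0], str)
            and isinstance(h2[0], str) and h1[0] == compl(h2[0]))

def chains_conflict(c, d):     # Case 4: equal prefix then c_i = ~d_i,
    for ci, di in zip(c, d):   # or one chain a proper prefix of the other
        if ci == di:
            continue
        return isinstance(ci, str) and isinstance(di, str) and ci == compl(di)
    return len(c) != len(d)

def conflicts(a, b, cautious=False):
    if not (isinstance(a, Rule) and isinstance(b, Rule)):
        return False
    if same_content(a, b) and a.neg != b.neg:    # Case 1: a rule vs ~rule
        return True
    if cautious and not a.neg and not b.neg and a.body == b.body:
        if a.mode == b.mode and heads_opposite(a.head, b.head):
            return True                          # Case 2
        if {a.mode, b.mode} == {'O', 'P'} and heads_opposite(a.head, b.head):
            return True                          # Case 3
        if a.mode == b.mode == 'O' and chains_conflict(a.head, b.head):
            return True                          # Case 4
    for nu in a.head:                            # recursive Cases 2/5:
        for zeta in b.head:                      # conflicts inside chains
            if (isinstance(nu, Rule) and isinstance(zeta, Rule)
                    and conflicts(nu, zeta, cautious)):
                return True
    return False
\end{verbatim}
On this encoding, every pair reported as a simple conflict is reported as a cautious conflict as well, matching the remark above.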
\begin{example}
The most immediate example of a simple conflict is given by two rule expressions like
\[
\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b \qquad
\neg(\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b).
\]
Here the two expressions share the same rule label. However, we have a simple conflict even when the rule label is different. Thus,
\[
\dRule\beta:a,\ensuremath{\mathsf{O}}\xspace b=>\ensuremath{\mathsf{P}}\xspace c\qquad
\neg(\dRule\gamma:a,\ensuremath{\mathsf{O}}\xspace b=>\ensuremath{\mathsf{P}}\xspace c)
\]
simply conflict with each other.
Two meta-rules are in a simple conflict relation when their conclusions are in the same relation. Therefore,
\[ \dRule\epsilon:A(\epsilon)=>\ensuremath{\mathsf{O}}\xspace{a\otimes(\dRule\zeta:c=>\ensuremath{\mathsf{C}}\xspace d)}\qquad \dRule\eta:A(\eta)=>\ensuremath{\mathsf{O}}\xspace{a\otimes\neg(\dRule\theta:c=>\ensuremath{\mathsf{C}}\xspace d)}
\]
simply conflict with each other since the initial part of the $\otimes$-chain in their conclusion is the same, but the next elements are conflicting rule expressions.
For the notion of cautious conflict, the rules
\[
\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b \qquad
\dRule\kappa:a=>\ensuremath{\mathsf{C}}\xspace \neg b
\]
and the meta-rules
\[
\dRule\lambda:{(\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b),e}=>\ensuremath{\mathsf{O}}\xspace f
\qquad
\dRule\mu:{(\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b),e}=>\ensuremath{\mathsf{P}}\xspace\neg f
\]
conflict since, for each pair, they have the same antecedent and opposite conclusions (the meta-rules also show an $\ensuremath{\mathsf{O}}\xspace$/$\ensuremath{\mathsf{P}}\xspace$ conflict). The next pair of meta-rules
\begin{gather*}
\dRule\nu:\ensuremath{\mathsf{O}}\xspace({\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b)}=>\ensuremath{\mathsf{O}}\xspace{(\dRule\beta:a,\ensuremath{\mathsf{O}}\xspace b=>\ensuremath{\mathsf{P}}\xspace c)
\otimes
(\dRule\xi:f=>\ensuremath{\mathsf{O}}\xspace g\otimes h)}
\\
\dRule o:{\neg(\dRule\kappa:a=>\ensuremath{\mathsf{C}}\xspace \neg b)}=>\ensuremath{\mathsf{O}}\xspace{c\otimes d
\otimes
(\dRule\pi:f=>\ensuremath{\mathsf{O}}\xspace g\otimes \neg h)}
\end{gather*}
cautiously conflict because the rules $\xi$ and $\pi$ do: they have the same antecedent, the same element at index 1, but complementary literals at index 2. Notice that $\nu$ and $o$ can have different antecedents, and the conflicting rules in their conclusions can appear at different indices.
\end{example}
In the rest of the section, we shall introduce two variants of the logic, where in addition to proving literals we prove rules. The two variants share the proof conditions for literals given in Section~\ref{sec:Method} (Definitions \ref{def:StandardCostProof}--\ref{def:StandardPermProof}), whilst they differ from each other in the proof conditions to derive (standard) rules.
\subsection{Simple Conflict Defeasible Deontic Logic}\label{sec:Variante1}
We are now ready to present the proof conditions for the Simple Conflict variant of Defeasible Deontic Logic. In this variant, we focus on the case of applicable meta-rules for a standard rule and its negation. In other words, there are two meta-rules: one to introduce a rule in the system and one to remove it (or to prevent it from being inserted). To handle this case, we restrict the logic to the notion of simple conflicts.
\begin{definition}[Constitutive Meta-Proof Conditions -- Simple Variant]\label{def:ConstMetaPT-V1}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$ then\+\\
(1) $\alpha\in R$ or\\
(2) \=(1) $\forall \omega\in R$, $\omega$ does not simply conflict with $\alpha$, and
\+\\
(2) \=$\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{C}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c)]$ s.t.\+\\
(1) $\beta$ is applicable, and\\
(2) \=$\forall \gamma\in R^\ensuremath{\mathsf{C}}\xspace[\ensuremath{\mathcal{\sim }}(\varphi: A(\alpha) \hookrightarrow_\Box c)]$ either\+\\
(1) $\gamma$ is discarded, or \\
(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{C}}\xspace[(\chi : A(\alpha) \hookrightarrow_\Box c)]$ s.t.\+\\
(1) $\chi \in \set{\alpha, \varphi}$,\\
(2) $\zeta$ is applicable, and\\
(3) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$ then\+\\
(1) $\alpha\notin R$ and\\
(2) \=(1) $\exists\rho\in R$, $\rho$ simply conflicts with $\alpha$, or\+\\
(2) \= $\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{C}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c)]$ either \+\\
(1) $\beta$ is discarded, or\\
(2) \=$\exists \gamma\in R^\ensuremath{\mathsf{C}}\xspace[\ensuremath{\mathcal{\sim }}(\varphi: A(\alpha) \hookrightarrow_\Box c)]$, s.t. \+\\
(1) $\gamma$ is applicable \\
(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{C}}\xspace[(\chi : A(\alpha) \hookrightarrow_\Box c)]$ \+\\
(1) $\chi \notin \set{\alpha, \varphi}$, or\\
(2) $\zeta$ is discarded, or\\
(3) $\zeta \not> \gamma$.
\end{tabbing}
\end{definition}
For the explanation, we focus on the positive case. We first remark that the general structure of the proof conditions for rules is the same as that for literals. Either the conclusion is given in the theory, or there is an applicable rule for it and all possible attacks are either discarded or defeated. Ergo, the first option to prove a rule is to check whether it is one of the rules given in $R$. Notice that this step applies to both standard rules and meta-rules (Condition 1). If rule $\alpha$ is not in $R$, Condition (2.1) checks that no rule simply conflicting with $\alpha$ is in the set of the initial, given rules $R$; if this step succeeds, we then look for an applicable meta-rule whose conclusion is $\alpha$. It is worth noticing that, with the extended definition of applicability, $\beta$ itself has to be provable; this requirement equally applies to the proof conditions for literals. Next, we have to examine all meta-rules for the ``opposite'' of $\alpha$: here, we consider all rule expressions starting with a negation and having the same antecedent as well as the same consequent as $\alpha$; nevertheless, they can have a different label (we do not impose that label $\varphi$ must be $\alpha$). The reason is that it would be irrational to introduce a rule (with a given name) and, at the same time, to remove it (potentially under a different name). All attacking meta-rules have to be rebutted. As usual, there are two ways: either the attacking rule is discarded, or it is defeated. However, contrary to the attacking phase, the defeating meta-rules must be for either $\alpha$ or $\varphi$; in other words, we cannot reinstate a specific rule using other generic rules.
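Schematically, and at the price of ignoring the distinction between defeasible rules and defeaters, the positive condition can be rendered as follows. The sketch reuses the encoding of the previous listings; \texttt{applicable} and \texttt{discarded} are assumed oracles over the derivation so far, \texttt{sup} is the superiority relation as a set of label pairs, and \texttt{meta\_rules\_for} is our helper selecting the constitutive meta-rules that conclude a rule expression with a given content. It is only an illustration of Definition~\ref{def:ConstMetaPT-V1}, not an implementation of it.
\begin{verbatim}
def meta_rules_for(expr, R):
    # constitutive meta-rules whose (single) conclusion has expr's content
    return [r for r in R if r.mode == 'C' and len(r.head) == 1
            and isinstance(r.head[0], Rule)
            and same_content(r.head[0], expr)
            and r.head[0].neg == expr.neg]

def proves_rule_C_simple(alpha, R, sup, applicable, discarded):
    if alpha in R:                                  # Condition (1)
        return True
    if any(conflicts(w, alpha) for w in R):         # Condition (2.1)
        return False
    negated = Rule(alpha.label, alpha.mode, alpha.body, alpha.head, True)
    for beta in meta_rules_for(alpha, R):           # Condition (2.2)
        if not applicable(beta):
            continue
        def defeated(gamma):                        # Condition (2.2.2.2):
            phi = gamma.head[0].label               # reinstatement only via
            return any(applicable(zeta)             # the labels alpha or phi
                       and zeta.head[0].label in (alpha.label, phi)
                       and (zeta.label, gamma.label) in sup
                       for zeta in meta_rules_for(alpha, R))
        if all(discarded(gamma) or defeated(gamma)
               for gamma in meta_rules_for(negated, R)):
            return True                             # Condition (2.2.2)
    return False
\end{verbatim}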
\begin{example}\label{ex:MetaConstV1-1}
Consider the theory $D$ where $a\in F$ and the following rules are in $R$\footnote{From now on, we adopt the convention of writing $\dots$ for the antecedent of rules that are applicable, when the content of the antecedent is not relevant to the discussion.}
\begin{align*}
&\drule{\beta}{\dots}{(\drule{\alpha}{a}{b})} &
&\drule{\eta}{\dots}{c}&
&\Drule{\lambda}{\dots}{(\drule{\alpha}{a}{b})}\\
&\drule{\gamma}{c}{\neg(\drule{\epsilon}{a}{b})} &
&\drule{\theta}{\dots}{\neg(\drule{\eta}{\dots}{c})} &
&\drule{\mu}{d}{\neg(\drule{\alpha}{a}{b})}
\end{align*}
and $\lambda>\gamma$.
It is immediate to see that the meta-rules $\beta,\gamma,\theta,\lambda,\mu$ are provable, being in $R$. We also have $+\partial^m_\ensuremath{\mathsf{C}}\xspace\eta$: although there is an applicable meta-rule ($\theta$) for $\neg\eta$, $\eta\in R$ and, by Condition (1), this takes precedence; moreover, by the conjunction of Conditions (1) and (2.1) for $-\partial^m_\ensuremath{\mathsf{C}}\xspace$, we conclude $-\partial^m_\ensuremath{\mathsf{C}}\xspace\neg\eta$. At this stage, $\eta$ is applicable and, since there are no rules for $\neg c$, we derive $+\partial_\ensuremath{\mathsf{C}}\xspace c$; this makes $\gamma$ applicable. Now, we have an applicable meta-rule, $\beta$, for $\alpha$. We have to look for meta-rules for standard rules that simply conflict with $\alpha$; we have two candidates: $\gamma$ and $\mu$. There is no rule for $d$; hence $-\partial_\ensuremath{\mathsf{C}}\xspace d$ holds, and $\mu$ is discarded. On the other hand, as we just discussed, $\gamma$ is applicable. Thus we have to determine whether it is defeated or not. Again, two options: $\beta$ itself, which however is not stronger than $\gamma$; and $\lambda$, which, even though it is a defeater, succeeds in defeating $\gamma$. We hence derive $+\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$. Finally, since $\alpha$ is applicable and there are no rules for $\neg b$, $+\partial_\ensuremath{\mathsf{C}}\xspace b$ holds in $D$.
Suppose now that we extend the theory by inserting the rule $\rho$
\[
\drule[\ensuremath{\mathsf{O}}\xspace]{\rho}{(\drule{\alpha}{a}{b})}{e}
\]
in $R$. Since $\rho$ is in $R$, it is derivable. Given that we proved $+\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$, $\rho$ is applicable, and we can now prove $+\partial_\ensuremath{\mathsf{O}}\xspace e$. This shows that it is possible to have a meta-rule whose conclusion is not a rule but a literal. Suppose now that, in addition to updating $R$ with $\rho$, we also replace $\lambda$ with
\[
\drule{\nu}{\dots}{\neg(\drule{\sigma}{a}{b})}
\]
and include $\nu>\gamma$.
Given that $\sigma$ is neither $\alpha$ nor $\epsilon$, we cannot use $\nu$ to reinstate $\alpha$ (even if $\nu$ is stronger than $\gamma$). This means that now $D\vdash-\partial_\ensuremath{\mathsf{C}}\xspace^m\alpha$, and then $D\vdash-\partial_\ensuremath{\mathsf{O}}\xspace e$. However, we can use $\nu$ in both Conditions (2.2) and (2.2.2.2) to prove $+\partial^m_\ensuremath{\mathsf{C}}\xspace\sigma$; again we have an applicable undefeated rule for $b$, and $+\partial_\ensuremath{\mathsf{C}}\xspace b$ continues to hold.
\end{example}
\begin{definition}[Obligation Meta-Proof Conditions -- Simple Variant]\label{def:OblMetaPT-V1}
\begin{tabbing}
$+\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha$: \=If $P(n+1)=+\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha$ then\+\\
$\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{O}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c), i]$ s.t.\\
(1) $\beta$ is applicable at index $i$, and\\
(2) \=$\forall \gamma\in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }}(\psi: A(\alpha) \hookrightarrow_\Box c), j] \cup R^\ensuremath{\mathsf{P}}\xspace[\ensuremath{\mathcal{\sim }}(\psi: A(\alpha) \hookrightarrow_\Box c)]$ either\\
\>(1) $\gamma$ is discarded (at index $j$), or \\
\>(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{O}}\xspace[(\chi : A(\alpha) \hookrightarrow_\Box c), k]$ s.t.\\
\>\>(1) $\chi \in \set{\alpha, \psi}$,\\
\>\>(2) $\zeta$ is applicable at index $k$, and\\
\>\>(3) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha$: \=If $P(n+1)=-\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha$ then\+\\
$\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{O}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c), i]$ either\\
(1) $\beta$ is discarded at index $i$, or\\
(2) \=$\exists \gamma\in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }}(\psi: A(\alpha) \hookrightarrow_\Box c), j] \cup R^\ensuremath{\mathsf{P}}\xspace[\ensuremath{\mathcal{\sim }}(\psi: A(\alpha) \hookrightarrow_\Box c)]$, s.t.\\
\>(1) $\gamma$ is applicable (at index $j$), and \\
\>(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{O}}\xspace[(\chi : A(\alpha) \hookrightarrow_\Box c), k]$ either\\
\>\>(1) $\chi \not\in \set{\alpha, \psi}$, or\\
\>\>(2) $\zeta$ is discarded at index $k$, or\\
\>\>(3) $\zeta \not> \gamma$.
\end{tabbing}
\end{definition}
The conditions to derive when a rule is proved or refuted as an obligation combine: (i) the issues to derive a literal as an obligation, and (ii) the above elements for proving a rule. Note that a deontic rule expression can appear in the body of a rule but cannot stand on its own. Thus $F$ and $R$ cannot contain expressions like $\ensuremath{\mathsf{O}}\xspace\alpha$.
\begin{example}\label{ex:MetaObligation}
Let $D$ be a theory where $a,d\in\ensuremath{F}\xspace$, $R$ contains the following rules
\begin{align*}
&\dRule\alpha:\dots=>\ensuremath{\mathsf{O}}\xspace {
(\dRule\gamma:a=>\ensuremath{\mathsf{C}}\xspace c) \otimes
c \otimes
(\dRule\epsilon: d=>\ensuremath{\mathsf{O}}\xspace {e\otimes f})
} &
&\dRule\beta:\dots=>\ensuremath{\mathsf{C}}\xspace {(\dRule\eta: a=>\ensuremath{\mathsf{C}}\xspace {\neg c})}\\
&\dRule\theta:\ensuremath{\mathsf{P}}\xspace({\dRule\epsilon: d=>\ensuremath{\mathsf{O}}\xspace {e\otimes f}})=>\ensuremath{\mathsf{O}}\xspace {\neg(\dRule\kappa:\ensuremath{\mathsf{O}}\xspace e,\neg e=>\ensuremath{\mathsf{O}}\xspace h)} &
& \dRule\lambda:g=>\ensuremath{\mathsf{P}}\xspace {(\dRule\kappa:\ensuremath{\mathsf{O}}\xspace e,\neg e=>\ensuremath{\mathsf{O}}\xspace h)}
\end{align*}
and $\lambda>\theta$.
Let us consider the following derivation (where, for the sake of clarity, we removed trivial and irrelevant steps):
\begin{tabbing}xxx\=xxxxxxxxxxxxxxxxxxxxx\=\kill
1. \>$+\partial^m_\ensuremath{\mathsf{O}}\xspace (\dRule\gamma:a=>\ensuremath{\mathsf{C}}\xspace c)$ \> $\alpha$ applicable for $\gamma$ at index $1$, $R^\Diamond[\neg\alpha]=\emptyset$ \\
2. \>$-\partial^m_\ensuremath{\mathsf{C}}\xspace (\dRule\gamma:a=>\ensuremath{\mathsf{C}}\xspace c)$ \> $\gamma\notin R$ and $R^\ensuremath{\mathsf{C}}\xspace[\gamma]=\emptyset$ \\
3. \>$+\partial_\ensuremath{\mathsf{O}}\xspace c$ \> 1. and 2. $\alpha$ applicable for $c$ at index $2$, $R^\Diamond[\neg c]=\emptyset$\\
4. \>$+\partial^m_\ensuremath{\mathsf{C}}\xspace (\dRule\eta:a=>\ensuremath{\mathsf{C}}\xspace \neg c)$ \> $\beta$ applicable and $R^\ensuremath{\mathsf{C}}\xspace[\neg\eta]=\emptyset$\\
5. \>$+\partial_\ensuremath{\mathsf{C}}\xspace a$ \> $a\in\ensuremath{F}\xspace$\\
6. \>$+\partial_\ensuremath{\mathsf{C}}\xspace \neg c$ \> 4. and 5. $\eta$ applicable and $R^\ensuremath{\mathsf{C}}\xspace[\neg c]=\emptyset$\\
7. \>$+\partial^m_\ensuremath{\mathsf{O}}\xspace(\dRule\epsilon: d=>\ensuremath{\mathsf{O}}\xspace {e\otimes f})$ \> 3. and 6. $\alpha$ applicable for $\epsilon$ at index 3, $R^\Diamond[\neg\epsilon]=\emptyset$\\
8. \>$+\partial^m_\ensuremath{\mathsf{P}}\xspace(\dRule\epsilon: d=>\ensuremath{\mathsf{O}}\xspace {e\otimes f})$ \> $+\partial^m_\ensuremath{\mathsf{O}}\xspace(\epsilon)\in P(1..7)$, Condition (1) of $+\partial^m_\ensuremath{\mathsf{P}}\xspace$ of Def. \ref{def:PermMetaPT-V1}\\
9. \>$-\partial_\ensuremath{\mathsf{C}}\xspace g$ \> $g\notin\ensuremath{F}\xspace$ and $R^\ensuremath{\mathsf{C}}\xspace[g]=\emptyset$\\
10.\>$+\partial^m_\ensuremath{\mathsf{O}}\xspace \neg(\dRule\kappa:\ensuremath{\mathsf{O}}\xspace e,\neg e=>\ensuremath{\mathsf{O}}\xspace h)$ \> $\theta$ applicable (8.), $\lambda$ discarded (9.)\\
11. \>$+\partial_\ensuremath{\mathsf{C}}\xspace d$ \> $d\in\ensuremath{F}\xspace$\\
12. \>$+\partial_\ensuremath{\mathsf{O}}\xspace e$ \> 7. and 11. $\epsilon$ applicable for $e$ at index 1, $R^\Diamond[\neg e]=\emptyset$\\
13. \>$-\partial_\ensuremath{\mathsf{C}}\xspace \neg e$ \> $\neg e\notin\ensuremath{F}\xspace$ and $R^\ensuremath{\mathsf{C}}\xspace[\neg e]=\emptyset$\\
14. \>$-\partial_\ensuremath{\mathsf{O}}\xspace f$ \> 13. $\epsilon$ discarded for $f$ at index 2
\end{tabbing}
The derivation above mostly illustrates the proof conditions for $+\partial^m_\ensuremath{\mathsf{O}}\xspace$ with repeated use of Condition 5 of Definition~\ref{def:MetaApplicability}, and particularly the distinction between rules and literals appearing in $\otimes$-expressions: compare the combination of Steps 1. and 2. to obtain Step 3. with that of Steps 3. and 6. to conclude Step 7. Note that most of the discarded cases occur because there are no rules for the opposite conclusion.
Let us focus on the rules in the theory. First of all, rule $\theta$ shows that it is possible to have rule expressions in the antecedent of meta-rules. Second, let us analyse the meaning of rules $\theta$ and $\lambda$: $\lambda$ allows $\kappa$ to be an admissible norm in the normative system. In case $\epsilon$ is present and effective (produces the effects of its conclusion, $\ensuremath{\mathsf{O}}\xspace e$), $\kappa$ corresponds to a contrary-to-duty norm whose normative effect is in force in case of a violation. Typically, this kind of expression represents an accessory penalty (see \cite{icail2015:thou} for the distinction between \emph{compensatory obligations} and \emph{contrary-to-duty obligations}). On the other hand, $\epsilon$ establishes a compensatory condition for violating the obligation of $e$. Accordingly, $\theta$ prescribes that it is forbidden to have an accessory penalty if a compensatory condition is admissible.
\end{example}
\begin{definition}[Permission Meta-Proof Conditions -- Simple Variant]\label{def:PermMetaPT-V1}
\begin{tabbing}
$+\partial^m_\ensuremath{\mathsf{P}}\xspace \alpha$: \=If $P(n+1)=+\partial^m_\ensuremath{\mathsf{P}}\xspace \alpha$ then\+\\
(1) \= $+\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha \in P(1..n)$, or\\
(2) \= $\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c)]$ s.t.\+\\
(1) $\beta$ is applicable, and\\
(2) \=$\forall \gamma\in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }}(\varphi: A(\alpha) \hookrightarrow_\Box c), j]$ either\\
\>(1) $\gamma$ is discarded at index $j$, or \\
\>(2) \= $\exists \zeta \in R^\Diamond[(\chi : A(\alpha) \hookrightarrow_\Box c), k]$ s.t.\\
\>\>(1) $\chi \in \set{\alpha, \varphi}$,\\
\>\>(2) $\zeta$ is applicable (at index $k$), and\\
\>\>(3) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial^m_\ensuremath{\mathsf{P}}\xspace \alpha$: \=If $P(n+1)=-\partial^m_\ensuremath{\mathsf{P}}\xspace \alpha$ then\+\\
(1) \= $-\partial^m_\ensuremath{\mathsf{O}}\xspace \alpha \in P(1..n)$, and\\
(2) \= $\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[(\alpha: A(\alpha) \hookrightarrow_\Box c)]$ either \+\\
(1) $\beta$ is discarded, or\\
(2) \=$\exists \gamma\in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }}(\varphi: A(\alpha) \hookrightarrow_\Box c), j]$, s.t.\\
\>(1) $\gamma$ is applicable at index $j$, and \\
\>(2) \= $\forall \zeta \in R^\Diamond[(\chi : A(\alpha) \hookrightarrow_\Box c), k]$ either\\
\>\>(1) $\chi \not\in \set{\alpha, \varphi}$, or\\
\>\>(2) $\zeta$ is discarded (at index $k$), or\\
\>\>(3) $\zeta \not > \gamma$.
\end{tabbing}
\end{definition}
Similarly to $\partial_\ensuremath{\mathsf{O}}\xspace^{m}$, the conditions to determine the provability of rules as permission combine the elements of $\partial_\ensuremath{\mathsf{P}}\xspace$ and the new features included in $\partial^m_\ensuremath{\mathsf{C}}\xspace$. As illustrated by Example~\ref{ex:MetaObligation} above, Condition (1) corresponds to the `obligation implies permission principle' (axiom D), characteristic of Deontic Logic.
\begin{example}\label{ex:MetaPerm-V1}
Consider theory $D$ with the following four applicable rules
\begin{align*}
&\dRule\beta:\dots=>\ensuremath{\mathsf{P}}\xspace {(\dRule\alpha:a=>\ensuremath{\mathsf{P}}\xspace b)} &
&\dRule\gamma:\dots=>\ensuremath{\mathsf{C}}\xspace {(\dRule\alpha:a=>\ensuremath{\mathsf{P}}\xspace b)}\\
&\dRule\eta:\dots=>\ensuremath{\mathsf{P}}\xspace {\neg(\dRule\alpha:a=>\ensuremath{\mathsf{P}}\xspace b)} &
&\dRule\theta:\dots=>\ensuremath{\mathsf{C}}\xspace {\neg(\dRule\alpha:a=>\ensuremath{\mathsf{P}}\xspace b)}.
\end{align*}
It is immediate to see that $\beta$ and $\eta$ conflict with each other, and so do $\gamma$ and $\theta$ (in both cases, the conclusion of one rule is the complement of the other's). However, in Condition (2.2) of $+\partial^m_\ensuremath{\mathsf{P}}\xspace$, only obligation rules attack a permissive rule, and no obligation rules conflict with $\beta$ (and symmetrically $\eta$); therefore, Condition (2.2) is vacuously satisfied: we have $+\partial^m_\ensuremath{\mathsf{P}}\xspace\alpha$ and $+\partial^m_\ensuremath{\mathsf{P}}\xspace\neg\alpha$. Conversely, we have to evaluate $\theta$ according to Condition (2.2.2) of $+\partial^m_\ensuremath{\mathsf{C}}\xspace$ to determine whether $D\vdash+\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$. Since $\theta$ is neither discarded nor defeated, $\alpha$ is not defeasibly provable, and we can repeat the argument for $\neg\alpha$. Consequently, $D\vdash-\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$ and $D\vdash-\partial^m_\ensuremath{\mathsf{C}}\xspace\neg\alpha$. The example illustrates the case where a normative system can make a norm admissible and, at the same time, make it admissible not to have that norm. However, it is not possible to have a norm and not to have the same norm (or, alternatively, to include a norm and remove it at the same time).
\end{example}
We will use $D\vdash_S\pm\# c$ to indicate that there is a proof of $\pm\# c$ from $D$ using the proof conditions for Simple Conflict Defeasible Deontic Logic.
\subsection{Cautious Conflict Defeasible Deontic Logic}\label{sec:Variante2}
We now introduce a second variant of the logic with meta-rules. This variant considers a different idea of when meta-rules conflict. In the previous variant, the idea was that a conflict occurs when two statements fire concurrently, one affirming that a rule holds (is, or is inserted, in the system), and the second denying the rule (i.e., asserting that the rule is not in the system, has been removed). The simple variant took into account the specific label of the rule.
As discussed in the initial part of this section, the second variant (cautious) focuses on the ``content'' of the rules. It is partially inspired by some statutory interpretation principles, mostly the principle of non-redundancy of the prescriptions in a statute. Suppose that we have the rules
\begin{align*}
\dRule\alpha:a=>\ensuremath{\mathsf{C}}\xspace b && \dRule\beta:a=>\ensuremath{\mathsf{C}}\xspace\neg b.
\end{align*}
The logic of Section~\ref{sec:Method} is very well-equipped to handle such a case. If $a$ is derivable, both rules are applicable, and they are for opposite conclusions. The logic is sceptical, and it prevents the derivation of positive conclusions: it derives that both $b$ and $\neg b$ are refuted. Given that the two rules have exactly the same antecedent, it is \emph{not} possible that one of them is applicable and the other is not. Therefore, (i) the combination of the two rules introduces a normative gap: when $a$ holds, we are not able to determine whether $b$ or $\neg b$ is the case; (ii) suppose instead that the theory specifies that one rule is stronger than the other. In this case, we can derive the conclusion of the stronger rule. Given that the two rules have the same antecedent, and the weaker rule is always defeated, there is no situation where the weaker rule can produce its conclusion. Accordingly, the weaker rule is redundant.
The principle of non-redundancy in statutory interpretation states that every term in a statute has a purpose \cite{bowman}. Therefore, if a provision in a statute never produces an effect, then there is no purpose for such a provision, and the provision is redundant. The proof conditions we present in the rest of this section implement two key elements: (i) at every step, they look for a rule whose content is incompatible with the content of the rule we want to prove, and (ii) they extend the superiority relation mechanism to check not only if a meta-rule is stronger than another (meta-)rule, but also the superiority for the (standard) rules themselves.
We have now all the tools to reformulate the proof tags for proving/refuting rules in Definitions~\ref{def:ConstMetaPT-V1}--\ref{def:PermMetaPT-V1} to accommodate the new ideas of conflicts introduced at the beginning of Section~\ref{sec:Logic}.
\begin{definition}[Constitutive Meta-Proof Conditions -- Cautious Variant]\label{def:ConstMetaPT-V2}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$ then\+\\
(1) $\alpha\in R$ or\\
(2) \=(1) $\forall \omega\in R$, $\omega$ does not cautiously conflict with $\alpha$, and\+\\
(2) \= $\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{C}}\xspace[\alpha]$ s.t.\+\\
(1) $\beta$ is applicable, and\\
(2) \=$\forall \gamma \in R^\ensuremath{\mathsf{C}}\xspace$ s.t. $\gamma$ cautiously conflicts with $\beta$, either\+\\
(1) $\gamma$ is discarded, or \\
(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{C}}\xspace$ s.t. $\zeta$ cautiously conflicts with $\gamma$, and \+\\
(1) \= $\zeta$ is applicable, and \\
(2) \= (1) \=$\zeta > \gamma$, or\+\\
(2) $\gamma \not> \zeta$, and $C(\zeta) > C(\gamma)$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{C}}\xspace^m \alpha$ then\+\\
(1) $\alpha\notin R$ and\\
(2) \=(1) $\exists \omega\in R$, $\omega$ cautiously conflicts with $\alpha$, or\+\\
(2) \= $\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{C}}\xspace[\alpha]$ either\+\\
(1) $\beta$ is discarded, or\\
(2) \=$\exists \gamma \in R^\ensuremath{\mathsf{C}}\xspace$ s.t. $\gamma$ cautiously conflicts with $\beta$, s.t.\+\\
(1) $\gamma$ is applicable, or \\
(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{C}}\xspace$ s.t. $\zeta$ cautiously conflicts with $\gamma$, either\+\\
(1) \= $\zeta$ is discarded, or\\
(2) (1) $\zeta \not> \gamma$, and\+\\
(2) $\gamma> \zeta$ or $C(\zeta) \not> C(\gamma)$.
\end{tabbing}
\end{definition}
Notice that the only new parts wrt Definition~\ref{def:ConstMetaPT-V1} are: (i) Condition (2.1) uses cautious conflict instead of simple conflict, and (ii) Condition (2.2.2.2.2) verifies whether the meta-rule supporting the conclusion is stronger than the meta-rule attacking the conclusion ($\gamma$); if this is not the case, we can check whether there is an instance of the superiority relation for the standard rules themselves (i.e., $C(\zeta) > C(\gamma)$).
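The new defeat test can be isolated in a few lines. In the sketch below (same conventions as the earlier listings; \texttt{sup} is assumed to contain label pairs for both meta-rules and standard rules), a counterattacking meta-rule $\zeta$ defeats $\gamma$ either directly at the meta-level or, absent any meta-level preference, through the concluded standard rules:
\begin{verbatim}
def defeats(zeta, gamma, sup):
    if (zeta.label, gamma.label) in sup:             # Condition (2.2.2.2.1)
        return True
    return ((gamma.label, zeta.label) not in sup     # Condition (2.2.2.2.2)
            and (zeta.head[0].label, gamma.head[0].label) in sup)
\end{verbatim}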
\begin{example}\label{ex:MetaCost-v2}
Consider again the theory of Example~\ref{ex:MetaConstV1-1}, with $\lambda>\gamma$, and:
\begin{align*}
&\drule{\beta}{\dots}{(\drule{\alpha}{a}{b})} &
&\drule{\eta}{\dots}{c}&
&\Drule{\lambda}{\dots}{(\drule{\_}{a}{b})}\\
&\drule{\gamma}{c}{\neg(\drule{\epsilon}{a}{b})} &
&\drule{\theta}{\dots}{\neg(\drule{\eta}{\dots}{c})} &
&\drule{\mu}{d}{\neg(\drule{\alpha}{a}{b})}.
\end{align*}
All rules that were in conflict according to the Simple Conflict variant are in conflict in the Cautious Conflict version as well. Now, however, we do not have to ensure that the label of the standard rule that is the conclusion of $\lambda$ is either the label of the rule that is the conclusion of $\beta$ or that of the conclusion of $\gamma$: any rule conflicting with $\epsilon$ will do.
\end{example}
The following two definitions, which determine the proof conditions for obligations and permissions, are obtained directly from Definitions~\ref{def:OblMetaPT-V1} and \ref{def:PermMetaPT-V1} by applying the changes we just explained for Definition~\ref{def:ConstMetaPT-V2}.
\begin{definition}[Obligation Meta-Proof Conditions -- Cautious Variant]\label{def:OblMetaPT-V2}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{O}}\xspace^m \alpha$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{O}}\xspace^m \alpha$ then\+\\
$\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{O}}\xspace[\alpha, i]$ s.t. \\
(1) $\beta$ is applicable at index $i$, and\\
(2) \=$\forall \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\nu, j] \cup R^\ensuremath{\mathsf{P}}\xspace[\nu]$ s.t. $\gamma$ cautiously conflicts with $\beta$, either\\
\>(1) $\gamma$ is discarded (at index $j$), or \\
\>(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{O}}\xspace[\chi, k]$ s.t. $\zeta$ cautiously conflicts with $\gamma$\\
\>\> (1) \= $\zeta$ is applicable (at index $k$), and either\\
\>\> (2) (1) $\zeta > \gamma$, or\\
\>\>\> (2) $\gamma \not> \zeta$, and $C(\zeta) > C(\gamma)$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{O}}\xspace^m \alpha$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{O}}\xspace^m \alpha$ then\+\\
$\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{O}}\xspace[\alpha, i]$ either \\
(1) $\beta$ is discarded at index $i$, or\\
(2) \=$\exists \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\nu, j] \cup R^\ensuremath{\mathsf{P}}\xspace[\nu]$ s.t. $\gamma$ cautiously conflicts with $\beta$, s.t.\\
\>(1) $\gamma$ is applicable (at index $j$), and \\
\>(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{O}}\xspace[\chi, k]$ s.t. $\zeta$ cautiously conflicts with $\gamma$ either\\
\>\> (1) \= $\zeta$ is discarded (at index $k$), or\\
\>\> (2) (1) $\zeta \not> \gamma$, and\\
\>\>\> (2) $\gamma > \zeta$ or $C(\zeta) \not> C(\gamma)$.
\end{tabbing}
\end{definition}
\begin{definition}[Permission Meta-Proof Conditions -- Cautious Variant]\label{def:PermMetaPT-V2}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{P}}\xspace^m \alpha$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{P}}\xspace^m \alpha$ then\+\\
$\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[\alpha]$ s.t.\\
(1) $\beta$ is applicable, and\\
(2) \=$\forall \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\nu, j]\cup R^\ensuremath{\mathsf{P}}\xspace[\nu]$ s.t. $\gamma$ cautiously conflicts with $\beta$, either\\
\>(1) $\gamma$ is discarded at index $j$, or \\
\>(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{O}}\xspace[\chi, k] \cup R^\ensuremath{\mathsf{P}}\xspace[\chi]$ s.t. $\zeta$ cautiously conflicts with $\gamma$\\
\>\> (1) \= $\zeta$ is applicable (at index $k$), and either\\
\>\>(2) \=(1) $\zeta > \gamma$, or\\
\>\>\>(2) $\gamma \not > \zeta$, and $C(\zeta) > C(\gamma)$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{P}}\xspace^m \alpha$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{P}}\xspace^m \alpha$ then\+\\
$\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[\alpha]$ either\\
(1) $\beta$ is discarded, or\\
(2) \=$\exists \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\nu, j]\cup R^\ensuremath{\mathsf{P}}\xspace[\nu]$ s.t. $\gamma$ cautiously conflicts with $\beta$, s.t.\\
\>(1) $\gamma$ is applicable at index $j$, and \\
\>(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{O}}\xspace[\chi, k] \cup R^\ensuremath{\mathsf{P}}\xspace[\chi]$ s.t. $\zeta$ cautiously conflicts with $\gamma$ either\\
\>\> (1) \= $\zeta$ is discarded (at index $k$), or\\
\>\>(2) \=(1) $\zeta \not> \gamma$, and\\
\>\>\>(2) $\gamma > \zeta$ or $C(\zeta) \not> C(\gamma)$.
\end{tabbing}
\end{definition}
\begin{example}\label{ex:MetaV2-superiority}
Consider theory $D = (F = \set{c}, R, {>} = \set{(\eta,\theta)})$, with $R = \{$
\begin{align*}
&\dRule\alpha:\dots=>\ensuremath{\mathsf{C}}\xspace {(\dRule\beta:a=>\ensuremath{\mathsf{P}}\xspace b)} &
&\dRule\gamma:\dots=>\ensuremath{\mathsf{C}}\xspace {(\dRule\epsilon:a=>\ensuremath{\mathsf{P}}\xspace \neg b)}\\
&\dRule\eta: c=>\ensuremath{\mathsf{P}}\xspace d &
&\dRule\theta: c=>\ensuremath{\mathsf{P}}\xspace {\neg d}\}.
\end{align*}
Rules $\alpha$ and $\gamma$ conflict with one another since $\beta$ cautiously conflicts with $\epsilon$
(and the other way around). Rules $\alpha$ and $\gamma$ are both applicable (thus neither is discarded, and no stronger rule exists). Therefore, $\alpha$ satisfies Condition (2.2.2) of $-\partial^m_\ensuremath{\mathsf{C}}\xspace$ (Definition~\ref{def:ConstMetaPT-V2}) to derive $-\partial^m_\ensuremath{\mathsf{C}}\xspace\gamma$, and $\gamma$ does the same to prove $-\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$.
Let us focus on $\eta$ and $\theta$. These two rules are in $R$; thus, even if they are in a cautious conflict (Case 2 of Definition~\ref{def:RationalConflict}), we can prove them with the $\ensuremath{\mathsf{C}}\xspace$ modality. Their antecedent $c$ is a fact; hence, they are applicable. Can we prove $\ensuremath{\mathsf{P}}\xspace d$ and $\ensuremath{\mathsf{P}}\xspace\neg d$? The answer is positive. Condition (2.2) of $+\partial_\ensuremath{\mathsf{P}}\xspace$ (Definition~\ref{def:StandardPermProof}) specifies that only obligation rules can attack a permission rule. Ergo, the two rules are individually unopposed, and we conclude $+\partial_\ensuremath{\mathsf{P}}\xspace d$ and $+\partial_\ensuremath{\mathsf{P}}\xspace\neg d$. Rules $\beta$ and $\epsilon$ have the same rule structure as $\eta$ and $\theta$, and one might ask if preventing the inclusion of both rules is meaningful. Again, from the legal drafting point of view, where the aim is to include only provisions that add content that could not be drawn otherwise, a better alternative would be to adopt weak permission, where something is permitted if it is not possible to derive the obligation to the contrary. In this view, $d$ is permitted when we prove $-\partial_\ensuremath{\mathsf{O}}\xspace\ensuremath{\mathcal{\sim }} d$; for a detailed analysis of how to model weak permission in DDL, see \cite{GovernatoriORS13}.
Note that, even though the theory contains an instance of the superiority relation between $\eta$ and $\theta$, it plays no role whatsoever in establishing the provability of the two permissions. However, the superiority relation is relevant when the two permission rules are the conclusion of meta-rules. The proof conditions for the cautious variant of DDL allow us to consider two cases.
Case $\alpha>\gamma$. This is the same circumstance adopted by the simple conflict variant. The legal intuition is that meta-rules are norms from a higher order in a legal hierarchy which provides a method to solve the conflict. Here, we conclude $+\partial^m_\ensuremath{\mathsf{C}}\xspace\beta$ and $-\partial^m_\ensuremath{\mathsf{C}}\xspace\epsilon$, and if $a$ is derivable, we are entitled to conclude $+\partial_\ensuremath{\mathsf{P}}\xspace b$.
Case $\epsilon>\beta$.
Here, the normative system is equipped with a mechanism to solve the conflict between the (standard) rules, even if such a mechanism (as is the case for permission) turns out to be irrelevant for the direct solution. In this instance, we reverse the result of the case above, deriving $+\partial^m_\ensuremath{\mathsf{C}}\xspace\epsilon$ and $+\partial_\ensuremath{\mathsf{P}}\xspace\neg b$.
\end{example}
Note that the superiority relation $>$ is defined over the union of standard rules and
meta-rules; thus, it is possible to have an instance where a standard rule and a meta-rule are
involved. This is useful for cases with rules like:
\begin{align*}
&\dRule\alpha:a=>\ensuremath{\mathsf{O}}\xspace b &
&\dRule\beta:{(\dRule\gamma:c=>\ensuremath{\mathsf{C}}\xspace d)}=>\ensuremath{\mathsf{O}}\xspace\neg b.
\end{align*}
Here, it is meaningful to add an instance of the superiority relation, for instance $\alpha > \beta$, given that the
conclusions of the two rules are opposite. Moreover, the proof conditions for the cautious variant
allow us to use instances on meta-rules as well as instances on standard rules for the derivation of rules.
Also, it is possible to have opposing information about the strength of rules.
\begin{example}\label{ex:referee}
Consider a defeasible theory with the following applicable rules\footnote{We are very grateful to one of the anonymous referees for pointing out the example.}
\begin{align*}
&\dRule\alpha_1:A(\alpha_1)=>\ensuremath{\mathsf{C}}\xspace{(\dRule\gamma:a=>\ensuremath{\mathsf{C}}\xspace b)}&
&\dRule\alpha_2:A(\alpha_2)=>\ensuremath{\mathsf{C}}\xspace{(\dRule\gamma:a=>\ensuremath{\mathsf{C}}\xspace b)}\\
&\dRule\beta_1:A(\beta_1)=>\ensuremath{\mathsf{C}}\xspace{(\dRule\zeta:a=>\ensuremath{\mathsf{C}}\xspace\neg b)}&
&\dRule\beta_2:A(\beta_2)=>\ensuremath{\mathsf{C}}\xspace{(\dRule\zeta:a=>\ensuremath{\mathsf{C}}\xspace\neg b)}.
\end{align*}
According to Condition 2 of Definition~\ref{def:RationalConflict}, $\gamma$ and $\zeta$ cautiously conflict: they have the same antecedent and opposite conclusions. It is hence meaningful to establish that one prevails over the other; let us assume $\zeta > \gamma$. Given that $\gamma$ and $\zeta$ conflict, the conclusions of the $\alpha$ rules cautiously conflict with the conclusions of the $\beta$ rules (Condition 5 of Definition~\ref{def:RationalConflict}).
Therefore, again, it is meaningful to have instances of the superiority relation. Suppose we have $\alpha_1 > \beta_1$, $\beta_1 > \alpha_2$ and $\alpha_2 > \beta_2$, and that all meta-rules are body-applicable. Does $+\partial^m_\ensuremath{\mathsf{C}}\xspace\gamma$ hold? We have to check the conditions set in Definition~\ref{def:ConstMetaPT-V2}:
since $\alpha_1$ is applicable by Condition (2.2.1), we then have to verify Condition (2.2.2). The two $\beta$ rules are applicable as well, thus we have to check whether they are defeated by stronger, applicable (meta-)rules; they are, given $\alpha_1 > \beta_1$ and $\alpha_2 > \beta_2$. Accordingly, we can use Condition (2.2.2.2.1) in both cases to satisfy (2.2.2), and so conclude $+\partial^m_\ensuremath{\mathsf{C}}\xspace\gamma$.
We now turn our attention to $+\partial^m_\ensuremath{\mathsf{C}}\xspace\zeta$. Similarly to the other case, we have an applicable rule for it (any of the two $\beta$s will do), and again there exist applicable (meta-)rules for rules conflicting with $\zeta$ (the two $\alpha$s). We have to rebut them: (i) for $\alpha_2$ -- as for the case of $\gamma$ above -- we can apply Condition (2.2.2.2.1) and the instance of $>$ on the meta-rules $\beta_1 > \alpha_2$; (ii) for $\alpha_1$ we do not have $\alpha_1>\beta_2$, but we have $C(\beta_2)=\zeta>\gamma=C(\alpha_1)$, which satisfies Condition (2.2.2.2.2). Hence, we conclude $+\partial^m_\ensuremath{\mathsf{C}}\xspace\zeta$.
The intuition behind Condition (2.2.2.2) of Definition~\ref{def:ConstMetaPT-V2}, as well as behind Condition (2.2.2) of Definitions~\ref{def:OblMetaPT-V2} and \ref{def:PermMetaPT-V2}, is to provide a mechanism to solve conflicts over meta-rules. If a preference over meta-rules exists, we use it; otherwise, we check whether we have a preference over the conflicting (standard) rules. Accordingly, instances of the superiority relation on standard rules can fill gaps in the superiority relation over meta-rules. However, as the example illustrates, we can end up with a cycle in the extended superiority relation when we fill these gaps: the superiority relation over the standard rules prefers the $\beta$ rules to the $\alpha$ rules, while the superiority relation over the meta-rules indicates a preference for the $\alpha$ rules over the $\beta$ rules. Thus, in some sense, the theory contains some inconsistencies, and conflicting rules can be derived (see Theorem~\ref{th:SoundCompl} for the proper result).
\end{example}
We shall use $D\vdash_C\pm\# c$ to indicate that there is a proof of $\pm\# c$ from $D$ using the proof conditions for Cautious Conflict Defeasible Deontic Logic.
\subsection{Formal Properties of the Logical Apparatus}\label{subsec:Properties}
In this subsection, we prove two properties of the logic showing
that the variants of the logic do not produce inconsistent results
unless the theory we started with contains some inconsistency.
In addition, we refine the definitions of \textit{extension} and of \textit{theory equivalence}. These two notions play
an important role in proving the results of Section~\ref{sec:Algo}.
\begin{theorem}[Coherence]\label{prop:coerenza}
Let $D$ be a defeasible deontic theory. There is no pair of tagged modal formulas $+\# x$ and $-\# x$ such that $D\vdash_{L}+\# x$ and $D\vdash_{L}-\#x$, for $L\in\set{S,C}$.
\end{theorem}
\begin{proof}
The result follows from Theorem 1 of \cite{DBLP:journals/igpl/GovernatoriPRS09} proving that if the proof conditions for a positive proof tag and the corresponding negative are the strong negation of each other, then no theory can prove both $+\# x$ and $-\# x$ for any conclusion $x$. It is immediate to verify that all pairs of proof conditions defined in this paper obey the principle of strong negation.
\end{proof}
For the next results we have to provide an additional definition
(to capture the scenario depicted in Example~\ref{ex:referee}).
Given a defeasible deontic meta-theory $D$, the extended superiority
relation $\succ$ is defined as follows:
\begin{equation}
{\succ} = {>}\cup\set{(\alpha,\beta)| (\alpha,\beta) \notin {>} \text{ and } (C(\alpha),C(\beta))\in {>}}
\end{equation}
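Computationally, $\succ$ and the acyclicity side condition used below are straightforward. The following sketch (our notation: \texttt{sup} is $>$ as a set of label pairs, and each meta-rule is assumed to conclude a single rule expression, so that \texttt{head[0]} is a \texttt{Rule}) builds the extension and tests acyclicity via a transitive closure:
\begin{verbatim}
def extend_superiority(sup, meta_rules):
    # succ: > plus pairs inherited from the concluded standard rules
    inherited = {(a.label, b.label) for a in meta_rules for b in meta_rules
                 if (a.label, b.label) not in sup
                 and (a.head[0].label, b.head[0].label) in sup}
    return sup | inherited

def acyclic(rel):
    nodes = {x for pair in rel for x in pair}
    reach = {n: {b for (a, b) in rel if a == n} for n in nodes}
    changed = True
    while changed:                 # iterate to the transitive closure
        changed = False
        for n in nodes:
            new = {c for m in reach[n] for c in reach.get(m, ())}
            if not new <= reach[n]:
                reach[n] |= new
                changed = True
    return all(n not in reach[n] for n in nodes)
\end{verbatim}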
\begin{theorem}[Consistence]\label{prop:consistenza}
Let $D$ be a defeasible deontic theory such that the superiority relation $>$ is acyclic\footnote{A relation is acyclic if its transitive closure does not contain a cycle.}. For any literal $l$, and rules $\alpha$ and $\varphi$:
\begin{enumerate}
\item If $D\vdash_{L}+\partial_\ensuremath{\mathsf{C}}\xspace l$ and $D\vdash_{L}+\partial_\ensuremath{\mathsf{C}}\xspace\neg l$, then $l, \neg l\in F$, for $L\in\set{S,C}$;
\item It is not possible to prove both $D\vdash_{L}+\partial_\ensuremath{\mathsf{O}}\xspace l$ and $D\vdash_{L}+\partial_\ensuremath{\mathsf{O}}\xspace \neg l$, for $L\in\set{S,C}$;
\item If $D\vdash_{S}+\partial^m_\Box \alpha$ and $D\vdash_{S}+\partial^m_\Box\varphi$, such that $\alpha$ simply conflicts with $\varphi$, then $\alpha, \varphi\in R$.
\end{enumerate}
In addition if $\succ$ is acyclic:
\begin{enumerate}[resume]
\item If $D\vdash_{C}+\partial^m_\Box \alpha$ and $D\vdash_{C}+\partial^m_\Box\varphi$, such that $\alpha$ cautiously conflicts with $\varphi$, then $\alpha, \varphi\in R$.
\end{enumerate}
\end{theorem}
\begin{proof}
It is immediate to verify that, according to Definitions~\ref{def:MetaApplicability} and \ref{def:MetaDiscardability}, no rule is applicable and discarded (for the same literal)
at the same time. All proof conditions have the same structure, specifying that there is an applicable rule and that every non-discarded rule for a conflicting conclusion is defeated by a stronger applicable rule. Suppose that $l$ and $\neg l$ ($\alpha$ and $\beta$) are both provable for $\Box$. This means that rules $\beta\in R[l]$ and $\gamma\in R[\neg l]$, for $l$ and $\neg l$, exist and are applicable. Given that both $l$ and $\neg l$ are provable for $\Box$, then for every applicable rule $\beta$ for $l$, there is an applicable rule $\gamma'$ for $\neg l$ such that $\gamma' > \beta$, and the other way around. However, since the rule set is finite, this situation is possible if, and only if, there is a cycle in the superiority relation (see the proof of Theorem 3.5 of \cite{billington93}). A contradiction.
For the cases for $\ensuremath{\mathsf{C}}\xspace$, Condition (1) of the appropriate proof conditions allows us to conclude $+\partial_\ensuremath{\mathsf{C}}\xspace l$ and $+\partial^m_\ensuremath{\mathsf{C}}\xspace\alpha$ when $l\in F$ and $\alpha\in R$; however, when $l\in F$ ($\alpha\in R$), it is not possible to use Condition (2) to derive $+\partial_\ensuremath{\mathsf{C}}\xspace\neg l$ ($+\partial^m_\ensuremath{\mathsf{C}}\xspace\beta$), but Condition (1) applies, thus $\neg l\in F$ ($\beta\in R$). Note that, in addition to the situations captured by $>$ or $\succ$ being cyclic (for which we can replicate the argument of Theorem 3.5 of \cite{billington93} as in the previous case), Cases 3. and 4. are not possible when $\Box$ is either $\ensuremath{\mathsf{O}}\xspace$ or $\ensuremath{\mathsf{P}}\xspace$, since we do not allow deontic literals in $\ensuremath{F}\xspace$ and there are no deontic rule expressions in $R$.
\end{proof}
As the theorem shows, and as illustrated in Example~\ref{ex:referee}, it is still possible to derive conflicting rules (in the cautious variant). However, the derivation of one literal and its complement is impossible, unless they are given as facts or there is a cycle in the superiority relation, both cases where the theory we started with already contains some inconsistency.
We can now adapt the notion of extension of Definition~\ref{def:StandardExtension} to the notions discussed above. First, we revise the Herbrand Base to include rule labels; hence
$\mathit{HB}(D)=\set{l,\ensuremath{\mathcal{\sim }} l \in\ensuremath{\mathrm{PLit}}\xspace|\,l \text{ appears in }D}\cup\set{\alpha,\neg\alpha\in\ensuremath{\mathrm{Lab}}\xspace|\, \alpha \text{ appears in }D}$. Second, we define the so-called modal Herbrand Base to close the elements of $\mathit{HB}(D)$ under the various modalities; thus $\mathit{MHB}(D)=\set{\Box e|\; e\in\mathit{HB}(D) \wedge \Box\in\set{\ensuremath{\mathsf{C}}\xspace,\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}}$.
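In code, with literals and rule labels as strings and \texttt{compl} as in the earlier sketches (so that $\neg\alpha$ is rendered by prefixing \texttt{'\~{}'} to the label), the two bases are a few lines; the function names and the pair encoding of modalised elements are, as before, our own choices:
\begin{verbatim}
def herbrand_base(lits, labels):   # HB(D): literals, labels, complements
    return ({x for l in lits for x in (l, compl(l))}
            | {x for a in labels for x in (a, compl(a))})

def modal_herbrand_base(HB):       # MHB(D): close HB under C, O, P
    return {(box, e) for box in ('C', 'O', 'P') for e in HB}
\end{verbatim}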
\begin{definition}[Extension]\label{def:Extension}
Given a defeasible deontic theory $D$, we define the \emph{extension} of $D$ according to variant $L$, $L\in\set{S,C}$, of the logic as $E_L(D) = (+\partial_\Box, -\partial_\Box, +\partial^m_\Box, -\partial^{m}_\Box)$, where $\pm\partial_\Box = \set{l\in\ensuremath{\mathrm{PLit}}\xspace |\, l\in\mathit{HB}(D)\wedge D\vdash_L \pm\partial_\Box l}$ and $\pm\partial^m_\Box = \set{\alpha\in\ensuremath{\mathrm{Lab}}\xspace |\, \alpha\in\mathit{HB}(D)\wedge D\vdash_L \pm\partial^m_\Box \alpha}$, with $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$.
\end{definition}
We say that two theories are equivalent when they draw the same conclusions, that is, when they have the same extension. Formally:
\begin{definition}[Theory Equivalence]\label{def:Equivalence}
Two defeasible deontic theories $D$ and $D'$ are \emph{equivalent}, notationally $D \equiv D'$, iff $E_L(D) = E_L(D')$, for $L\in\set{S,C}$.
\end{definition}
The notion of theory equivalence will play a key role in proving that our algorithms are sound and complete.
\section{Defeasible Deontic Logic}\label{sec:Method}
Defeasible Logic (DL) \cite{nute,DBLP:journals/tocl/AntoniouBGM01} is a simple, flexible, and efficient rule-based non-monotonic formalism. Its strength lies in its constructive proof theory, which allows it to draw meaningful conclusions from a (potentially) conflicting and incomplete knowledge base. In non-monotonic systems, more accurate conclusions can be obtained as more pieces of information become available.
Many variants of DL have been proposed for the logical modelling of different application areas, specifically agents \cite{kravari2015,GovernatoriOSRC16,prima07:contextual}, legal reasoning \cite{Governatori2009157,Cristani201739}, and workflows from a business process compliance perspective \cite{DBLP:conf/ruleml/GovernatoriOSC11,Olivieri2015603}.
In this research we focus on the Defeasible Deontic Logic (DDL) framework \cite{GovernatoriORS13} that allows us to determine what prescriptive behaviours are in force in a given situation.
We start by defining the language of a defeasible deontic theory.
Let $\ensuremath{\mathrm{PROP}}\xspace$ be a set of propositional atoms, and $\ensuremath{\mathrm{Lab}}\xspace$ be a set of arbitrary labels (the names of the rules). We use lower-case Roman letters to denote literals and lower-case Greek letters to denote rules.
Accordingly, $\ensuremath{\mathrm{PLit}}\xspace = \ensuremath{\mathrm{PROP}}\xspace \cup \set{\neg l\, |\, l \in \ensuremath{\mathrm{PROP}}\xspace}$ is the set of \emph{plain literals}. The set of \emph{deontic literals} $\ensuremath{\mathrm{ModLit}}\xspace = \set{\Box l, \neg \Box l\ |\ l \in \ensuremath{\mathrm{PLit}}\xspace \wedge \Box \in \set{\ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}}$. Finally, the set of \emph{literals} is $\ensuremath{\mathrm{Lit}}\xspace = \ensuremath{\mathrm{PLit}}\xspace \cup \ensuremath{\mathrm{ModLit}}\xspace$. The \emph{complement} of a literal $l$ is denoted by $\ensuremath{\mathcal{\sim }} l$: if $l$ is a positive literal $p$ then $\ensuremath{\mathcal{\sim }} l$ is $\neg p$, and if $l$ is a negative literal $\neg p$ then $\ensuremath{\mathcal{\sim }} l$ is $p$. Note that we will not have specific rules nor modality for prohibitions, as we will treat them according to the standard duality that something is forbidden iff the opposite is obligatory (i.e., $\ensuremath{\mathsf{O}}\xspace\neg p$).
\begin{definition}[Defeasible Deontic Theory]\label{def:DeonticTheory}
A \emph{defeasible deontic theory} $D$ is a tuple $(\ensuremath{F}\xspace, R, >)$, where $\ensuremath{F}\xspace$ is the set of facts, $R$ is the set of rules, and $>$ is a binary relation over $R$ (called superiority relation).
\end{definition}
Specifically, the set of facts $\ensuremath{F}\xspace \subseteq \ensuremath{\mathrm{PLit}}\xspace$ denotes simple pieces of information that are always considered to be true, like ``Sylvester is a cat'', formally $cat(Sylvester)$.
In this paper, we subscribe to the distinction between the notions of obligations and permissions, and that of norms, where the norms in the system determine the obligations and permissions in force in a normative system. A Defeasible Deontic Theory is meant to represent a normative system, where the rules encode the norms of the systems, and the set of facts corresponds to a case. As we will see below, the rules are used to conclude the institutional facts, obligations and permissions that hold in a case.
Accordingly, we do not admit obligations and permissions as facts of the theory.
The set of rules $R$ contains three \emph{types} of rules: \emph{strict rules}, \emph{defeasible rules}, and \emph{defeaters}. Rules are also of two \emph{kinds}:
\begin{itemize}
\item \emph{Constitutive rules} (non-deontic rules) $R^\ensuremath{\mathsf{C}}\xspace$ model constitutive statements (count-as rules);
\item \emph{Deontic rules} to model prescriptive behaviours, which are either \emph{obligation rules} $R^\ensuremath{\mathsf{O}}\xspace$ which determine when and which obligations are in force, or \emph{permission rules} which represent \emph{strong} (or \emph{explicit}) permissions $R^\ensuremath{\mathsf{P}}\xspace$.
\end{itemize}
Lastly, ${>} \subseteq R \times R$ is the \emph{superiority} (or \emph{preference}) relation, which is used to solve conflicts in case of potentially conflicting information.
A theory is \emph{finite} if the set of facts and rules are so.
A strict (constitutive) rule is a rule in the classical sense: whenever the premises are indisputable, so is the conclusion. The statement ``All computing scientists are humans'' is hence formulated through the strict rule\footnote{Here, we introduce informally the symbols representing the different types of rules, which are formally defined below in Definition~\ref{def:Rule}: $\rightarrow$ denotes a strict rule, $\Rightarrow$ a defeasible rule, and $\leadsto$ a defeater.}
\[
\mathit{CScientist}(X) \rightarrow_\ensuremath{\mathsf{C}}\xspace \mathit{human}(X),
\]
as there is no exception to it%
\footnote{Like in \cite{DBLP:journals/tocl/AntoniouBGM01}, we consider only a propositional version of this logic, and we do not take into account function symbols. Every expression with variables represents the finite set of its variable-free instances.}.
Defeasible rules, on the other hand, are used to conclude statements that can be defeated by contrary evidence, whereas defeaters are special rules whose only purpose is to prevent the derivation of the opposite conclusion. Accordingly, we can represent the statement ``Computing scientists travel to the city of the conference'' through a defeasible rule, and ``During a pandemic travels might be prohibited'' through a defeater, like
\[
\mathit{CScientist}, \mathit{PaperAccepted} \Rightarrow_\ensuremath{\mathsf{C}}\xspace \mathit{TravelConference}
\]
\[
\mathit{Pandemic} \leadsto_\ensuremath{\mathsf{C}}\xspace \neg \mathit{TravelConference}.
\]
On the other hand, a prescriptive behaviour like ``At traffic lights it is forbidden to perform a U-turn unless there is a `U-turn Permitted' sign'' can be formalised via the general obligation rule
\[
\mathit{AtTrafficLight} \Rightarrow_\ensuremath{\mathsf{O}}\xspace \neg \mathit{UTurn}
\]
and the exception through the permissive rule
\[
\mathit{UTurnSign} \Rightarrow_\ensuremath{\mathsf{P}}\xspace \mathit{UTurn}.
\]
While \cite{GovernatoriORS13} discusses how to integrate strong and weak permission in
defeasible deontic logic, in this paper, we restrict our attention to the notion of strong permission, namely, when permissions are explicitly stated using permissive rules, i.e., rules whose conclusion is to be asserted as a permission.
Following the ideas of \cite{ajl:ctd}, obligation rules gain more expressiveness with the \emph{compensation operator} $\otimes$, which models reparative chains of obligations. Intuitively, $a \otimes b$ means that $a$ is the primary obligation, but if for some reason we fail to obtain, i.e., to comply with, $a$ (by either not being able to prove $a$, or by proving $\ensuremath{\mathcal{\sim }} a$), then $b$ becomes the new obligation in force. This operator is used to build chains of preferences, called $\otimes$-expressions.
The formation rules for $\otimes$-expressions are: (i) every plain literal is an $\otimes$-expression, (ii) if $A$ is an $\otimes$-expression and $b$ is a plain literal then $A \otimes b$ is an $\otimes$-expression \cite{GovernatoriORS13}.
In general, an $\otimes$-expression has the form `$c_1 \otimes c_2 \otimes \dots \otimes c_m$', and it appears as the consequent of a rule `$A(\alpha) \hookrightarrow_\ensuremath{\mathsf{O}}\xspace C(\alpha)$' where $C(\alpha)=c_1 \otimes c_2 \otimes \dots \otimes c_m$; the meaning of the $\otimes$-expression is: if the rule is allowed to draw its conclusion, then $c_1$ is the obligation in force, and only when $c_1$ is violated does $c_2$ become the new obligation in force, and so on for the rest of the elements in the chain. In this setting, $c_m$ stands for the last chance to comply with the prescriptive behaviour enforced by $\alpha$; in case $c_m$ is violated as well, we end up in a non-compliant situation.
For instance, the previous prohibition to perform a U-turn can foresee a compensatory fine, like
\[
\mathit{AtTrafficLight} \Rightarrow_\ensuremath{\mathsf{O}}\xspace \neg \mathit{UTurn} \otimes \mathit{PayFine}
\]
that has to be paid in case someone does perform an illegal U-turn.
It is worth noticing that we admit $\otimes$-expressions with only one element. The intuition, in this case, is that the obligatory condition does not admit compensatory measures or, in other words, that it is impossible to recover from its violation.
In this paper, we focus exclusively on the defeasible part of the logic, ignoring the monotonic component given by the strict rules; consequently, we limit the language to the cases where the rules are either defeasible rules or defeaters. From a practical point of view, this restriction does not effectively limit the expressive power of the logic: a defeasible rule for which there are no rules for the opposite conclusion, or for which all rules for the opposite conclusion are weaker, effectively behaves like a strict rule. Formally, a rule is defined as follows.
\begin{definition}[Rule]\label{def:Rule}
A \emph{rule} is an expression of the form $\alpha\colon A(\alpha) \hookrightarrow_\Box C(\alpha)$, where
\begin{enumerate}
\item $\alpha \in \ensuremath{\mathrm{Lab}}\xspace$ is the unique name of the rule;
\item $A(\alpha) \subseteq \ensuremath{\mathrm{Lit}}\xspace$ is the set of antecedents;
\item An arrow ${\hookrightarrow} \in \set{\Rightarrow, \leadsto}$ denoting, respectively, defeasible rules, and defeaters;
\item $\Box \in \{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace \}$;
\item its consequent $C(\alpha)$, which is either
\begin{enumerate}
\item a single plain literal $l \in \ensuremath{\mathrm{PLit}}\xspace$, if either (i) ${\hookrightarrow} \equiv {\leadsto}$ or (ii) $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{P}}\xspace}$, or
\item an $\otimes$-expression, if $\Box \equiv \ensuremath{\mathsf{O}}\xspace$.
\end{enumerate}
\end{enumerate}
\end{definition}
If $\Box = \ensuremath{\mathsf{C}}\xspace$ then the rule is used to derive non-deontic literals (constitutive statements), whilst if $\Box$ is $\ensuremath{\mathsf{O}}\xspace$ or $\ensuremath{\mathsf{P}}\xspace$ then the rule is used to derive deontic conclusions (prescriptive statements). The conclusion $C(\alpha)$ is, as before, a single literal in case $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{P}}\xspace}$; in case $\Box = \ensuremath{\mathsf{O}}\xspace$, the conclusion is an $\otimes$-expression.
Note that $\otimes$-expressions can only occur in prescriptive rules, though we do not admit them in defeaters (Condition 5.(a).i); see \cite{GovernatoriORS13} for a detailed explanation.
We use some abbreviations on sets of rules. The set of defeasible rules in $R$ is $R_\Rightarrow$, the set of defeaters is $R_\ensuremath{\mathrm{dft}}\xspace$. $R^\Box[l]$ is the rule set appearing in $R$ with head $l$ and modality $\Box$, while $R^\ensuremath{\mathsf{O}}\xspace[l,i]$ denotes the set of obligation rules where $l$ is the $i$-th element in the $\otimes$-expression. Given that the consequent of a rule is either a single literal or an $\otimes$-expression (that, due to the associative property, can be understood as a sequence of elements, and then as an ordered set), in what follows we are going to abuse the notation and use $l\in C(\alpha)$. $R^\Box$ is the set of rules $\alpha\colon A(\alpha)\hookrightarrow_{\Box} C(\alpha)$ such that $\alpha$ \emph{appears in} $R$. For a theory as determined by Definitions~\ref{def:DeonticTheory} and \ref{def:Rule}, $\alpha$ appears in $R$ means that $\alpha\in R$; thus $R^\ensuremath{\mathsf{P}}\xspace$ is the set of permissive rules appearing in $R$. We use $R^\Diamond$ and $R^\Diamond[l]$ as shorthands for $R^\ensuremath{\mathsf{O}}\xspace\cup R^\ensuremath{\mathsf{P}}\xspace$ and $R^\ensuremath{\mathsf{O}}\xspace[l]\cup R^\ensuremath{\mathsf{P}}\xspace[l]$ respectively. The abbreviations can be combined. Finally, a literal $l$ appears in a theory $D$, if there is a rule $\alpha\in R$ such that $l\in A(\alpha)\cup C(\alpha)$.
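The rule format and the selector notation above map directly onto a small data structure. The sketch below, which reuses the illustrative \texttt{Lit} encoding from the earlier fragment, shows one possible (hypothetical) representation of rules together with the sets $R^\Box[l]$ and $R^\ensuremath{\mathsf{O}}\xspace[l,i]$:
\begin{verbatim}
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Rule:
    label: str        # unique name alpha
    antecedent: Tuple # A(alpha): plain and/or deontic literals
    arrow: str        # "=>" defeasible, "~>" defeater
    mode: str         # "C", "O" or "P"
    consequent: Tuple # one literal, or an otimes-chain if mode == "O"

def rules_for(R, l, mode):
    # R^mode[l]: rules with the given mode whose consequent contains l
    return [r for r in R if r.mode == mode and l in r.consequent]

def obl_rules_at(R, l, i):
    # R^O[l, i]: obligation rules where l is the i-th element (1-based)
    return [r for r in R if r.mode == "O"
            and i <= len(r.consequent) and r.consequent[i - 1] == l]
\end{verbatim}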
\begin{definition}[Tagged modal formula]\label{def:TagFormula}
A \emph{tagged modal formula} is an expression of the form
$\pm\partial_\Box l$, with the following meanings
\begin{itemize}
\item $+\partial_\Box l$: $l$ is \emph{defeasibly provable} (or simply provable) with mode $\Box$,
\item $-\partial_\Box l$: $l$ is \emph{defeasibly refuted} (or simply refuted) with mode $\Box$;
\end{itemize}
\end{definition}
Accordingly, the meaning of $+\partial_\ensuremath{\mathsf{O}}\xspace p$ is that
$p$ is provable as an obligation, and that of $-\partial_\ensuremath{\mathsf{P}}\xspace\neg p$ is that
we have a refutation of the permission of $\neg p$. Similarly for the other combinations.
As we will shortly see (Definitions~\ref{def:StandardApplicability} and \ref{def:StandardDiscardability}), one of the key ideas of Defeasible Deontic Logic is that we use tagged modal formulas to determine what formulas are (defeasibly) provable or rejected given a theory and a set of facts (used as input for the theory). Therefore, when we have asserted the tagged modal formula $+\partial_\ensuremath{\mathsf{O}}\xspace l$ in a derivation (see Definition~\ref{def:Proof} below), we can conclude that the obligation of $l$ ($\ensuremath{\mathsf{O}}\xspace l$) follows from the rules and the facts and that we used a prescriptive rule to derive $l$; similarly for permission (using a permissive rule). However, the $\ensuremath{\mathsf{C}}\xspace$ modality is silent, meaning that we do not put the literal in the scope of the $\ensuremath{\mathsf{C}}\xspace$ modal operator, thus for $+\partial_\ensuremath{\mathsf{C}}\xspace l$, the derivation simply asserts that $l$ holds (and not that $\ensuremath{\mathsf{C}}\xspace l$ holds, even if the two have the same meaning). For the negative cases (i.e., $-\partial_\Box l$), the interpretation is that it is not possible to derive $l$ with a given mode. Accordingly, we read $-\partial_\ensuremath{\mathsf{O}}\xspace l$ as it is impossible to derive $l$ as an obligation. For $\Box\in\set{\ensuremath{\mathsf{O}}\xspace,\ensuremath{\mathsf{P}}\xspace}$ we are allowed to infer $\neg\Box l$, giving a constructive interpretation of the deontic modal operators. Notice that this is not the case for $\ensuremath{\mathsf{C}}\xspace$, where we cannot assert that $\ensuremath{\mathcal{\sim }} l$ holds (this would require $+\partial_\ensuremath{\mathsf{C}}\xspace\ensuremath{\mathcal{\sim }} l$); in the logic, failing to prove $l$ does not equate to proving $\neg l$.
We will use the term \emph{conclusions} and tagged modal formulas interchangeably.
The definition of proof is also standard in DDL.
\begin{definition}[Proof]\label{def:Proof}
Given a defeasible deontic theory $D$, a proof $P$ of length $m$ in $D$ is a finite sequence $P(1), P(2),\dots,P(m)$ of tagged modal formulas, where the proof conditions defined in the rest of this paper hold.
\end{definition}
Hereafter, $P(1..n)$ denotes the first $n$ steps of $P$, and we also use the notational convention $D\vdash \pm\partial_\Box l$, meaning that there is a proof $P$ for $\pm\partial_\Box l$ in $D$.
Core notions in DL are those of \emph{applicability/discardability}. As knowledge in a defeasible theory is circumstantial, given a defeasible rule like `$\alpha\colon a, b \Rightarrow_\Box c$', there are four possible scenarios: the theory defeasibly proves both $a$ and $b$, it proves neither, or it proves one but not the other (two symmetric cases). Naturally, only in the first case, where both $a$ and $b$ are proved, can we use $\alpha$ to \emph{support/try to conclude} $\Box c$. Briefly, we say that a rule is \emph{applicable} when every antecedent's literal has been proved at a previous derivation step. Symmetrically, a rule is \emph{discarded} when one of such literals has been previously refuted. Formally:
\begin{definition}[Applicability]\label{def:StandardApplicability}
Assume a defeasible deontic theory $D = (\ensuremath{F}\xspace, R, >)$. We say that rule $\alpha \in R^\ensuremath{\mathsf{C}}\xspace \cup R^\ensuremath{\mathsf{P}}\xspace$ is \emph{applicable} at $P(n+1)$, iff for all $a \in A(\alpha)$
\begin{enumerate}
\item\label{item:l} if $a \in \ensuremath{\mathrm{PLit}}\xspace$, then $+\partial_\ensuremath{\mathsf{C}}\xspace a \in P(1..n)$,
\item if $a = \Box q$, then $+\partial_\Box q \in P(1..n)$, with $\Box \in\set{\ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$,
\item\label{item:Diamondl} if $a = \neg \Box q$, then $-\partial_\Box q \in P(1..n)$, with $\Box \in \set{\ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$.
\end{enumerate}
We say that rule $\alpha \in R^\ensuremath{\mathsf{O}}\xspace$ is \emph{applicable at index} $i$ \emph{and} $P(n+1)$ iff Conditions~\ref{item:l}--\ref{item:Diamondl} above hold and
\begin{enumerate}[resume]
\item\label{item:cj} $\forall c_j \in C(\alpha)$ such that $j < i$, $+\partial_\ensuremath{\mathsf{O}}\xspace c_j \in P(1..n)$ and $+\partial_\ensuremath{\mathsf{C}}\xspace \ensuremath{\mathcal{\sim }} c_j \in P(1..n)$\footnote{\label{foot:violation}As discussed above, we are allowed to move to the next element of an $\otimes$-expression when the current element is violated. To have a violation, we need (i) the obligation to be in force, and (ii) that its content does not hold. $+\partial_\ensuremath{\mathsf{O}}\xspace c_j$ indicates that the obligation is in force. For the second part we have two options. The former, $+\partial_\ensuremath{\mathsf{C}}\xspace\ensuremath{\mathcal{\sim }} c_j$, means that we have ``evidence'' that the opposite of the content of the obligation holds. The latter would be to have $-\partial_\ensuremath{\mathsf{C}}\xspace c_j \in P(1..n)$, corresponding to the intuition that we failed to provide evidence that the obligation has been satisfied. It is worth noting that the former option implies the latter one. For a deeper discussion on the issue, see \cite{jurix2015burden}.}.
\end{enumerate}
\end{definition}
\begin{definition}[Discardability]\label{def:StandardDiscardability}
Assume a defeasible deontic theory $D = (\ensuremath{F}\xspace, R, >)$. We say that rule $\alpha \in R^\ensuremath{\mathsf{C}}\xspace \cup R^\ensuremath{\mathsf{P}}\xspace$ is \emph{discarded} at $P(n+1)$, iff there exists $a \in A(\alpha)$ such that
\begin{enumerate}
\item\label{item:notl} if $a \in \ensuremath{\mathrm{PLit}}\xspace$, then $-\partial_\ensuremath{\mathsf{C}}\xspace a \in P(1..n)$, or
\item if $a = \Box q$, then $-\partial_\Box q \in P(1..n)$, with $\Box \in \set{\ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$, or
\item\label{item:notDiamondl} if $a = \neg \Box q$, then $+\partial_\Box q \in P(1..n)$, with $\Box \in \set{\ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$.
\end{enumerate}
We say that rule $\alpha \in R^\ensuremath{\mathsf{O}}\xspace$ is \emph{discarded at index } $i$ \emph{and} $P(n+1)$ iff either at least one of the Conditions~\ref{item:notl}--\ref{item:notDiamondl} above hold, or
\begin{enumerate}[resume]
\item\label{item:notcj} $\exists c_j \in C(\alpha),\, j < i$ such that $-\partial_\ensuremath{\mathsf{O}}\xspace c_j \in P(1..n)$, or $-\partial_\ensuremath{\mathsf{C}}\xspace \ensuremath{\mathcal{\sim }} c_j \in P(1..n)$.
\end{enumerate}
\end{definition}
Note that discardability is obtained by applying the principle of \emph{strong negation} to the definition of applicability. The strong negation principle applies the function that simplifies a formula by moving all negations to an innermost position in the resulting formula, replacing the positive tags with the respective negative tags, and the other way around; see \cite{DBLP:journals/igpl/GovernatoriPRS09}.
Positive proof tags ensure that there are effective decidable procedures to build proofs; the strong negation principle guarantees that the negative conditions provide a constructive and exhaustive method to verify that a derivation of the given conclusion is not possible. Accordingly, condition 3 of Definition~\ref{def:StandardApplicability} allows us to state that
$\neg\Box p$ holds when we have a (constructive) failure to prove $p$ with mode $\Box$ (for obligation or permission), thus it corresponds to a constructive version of negation as failure.
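Read operationally, Definitions~\ref{def:StandardApplicability} and \ref{def:StandardDiscardability} are executable checks, and discardability is literally the strong negation of applicability. The following Python fragment is a minimal illustrative transcription for constitutive and permissive rules (the extra index conditions for $\otimes$-chains are analogous and omitted); it reuses the toy encoding of literals introduced earlier, and \texttt{proved}/\texttt{refuted} are assumed to map each mode to the literals already proved or refuted in $P(1..n)$:
\begin{verbatim}
def applicable(rule, proved, refuted):
    for a in rule.antecedent:
        if isinstance(a, Lit):                # plain literal
            if a not in proved["C"]:
                return False
        elif a.positive:                      # a = Box q
            if a.lit not in proved[a.mode]:
                return False
        else:                                 # a = neg Box q
            if a.lit not in refuted[a.mode]:
                return False
    return True

def discarded(rule, proved, refuted):
    # strong negation of applicability: some antecedent provably fails
    for a in rule.antecedent:
        if isinstance(a, Lit):
            if a in refuted["C"]:
                return True
        elif a.positive:
            if a.lit in refuted[a.mode]:
                return True
        else:
            if a.lit in proved[a.mode]:
                return True
    return False
\end{verbatim}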
We are finally ready to formalise the proof conditions, which are the standard conditions of DDL \cite{GovernatoriORS13}. We start with positive proof conditions for constitutive statements. In the following, we shall omit the explanations for negative proof conditions, when trivial, reminding the reader that they are obtained through the application of the strong negation principle to the positive counterparts.
\begin{definition}[Constitutive Proof Conditions]\label{def:StandardCostProof}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{C}}\xspace l$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{C}}\xspace l$ then\+\\
(1) \= $l \in \ensuremath{F}\xspace$, or\\
(2) \> (1) \= $\ensuremath{\mathcal{\sim }} l \not\in \ensuremath{F}\xspace$, and\\
\> (2) \= $\exists \beta \in R^\ensuremath{\mathsf{C}}\xspace_\Rightarrow[l]$ s.t. $\beta$ is applicable, and\\
\> (3) \=$\forall \gamma\in R^\ensuremath{\mathsf{C}}\xspace[\ensuremath{\mathcal{\sim }} l]$ either\\
\>\>(1) \=$\gamma$ is discarded, or \\
\>\>(2) $\exists \zeta \in R^\ensuremath{\mathsf{C}}\xspace[l]$ s.t. \\
\>\>\> (1) $\zeta$ is applicable and \\
\>\>\> (2) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{C}}\xspace l$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{C}}\xspace l$ then\+\\
(1) \= $l \not\in \ensuremath{F}\xspace$ and either\\
(2) \= (1) \= $\ensuremath{\mathcal{\sim }} l \in \ensuremath{F}\xspace$, or\\
\> (2) $\forall \beta \in R^\ensuremath{\mathsf{C}}\xspace_\Rightarrow[l]$, either $\beta$ is discarded, or\\
\> (3) \=$\exists \gamma\in R^\ensuremath{\mathsf{C}}\xspace[\ensuremath{\mathcal{\sim }} l]$ such that\+\+\\
\=(1) $\gamma$ is applicable, and\\
\=(2) $\forall \zeta \in R^\ensuremath{\mathsf{C}}\xspace[l]$, either $\zeta$ is discarded, or $\zeta \not > \gamma$.
\end{tabbing}
\end{definition}
A literal is defeasibly proved if: it is a fact, or there exists an applicable defeasible rule supporting it (such a rule cannot be a defeater), and all rules for the opposite conclusion are either discarded or defeated. To prove a conclusion, not all the work has to be done by a stand-alone (applicable) rule (the rule witnessing Condition (2.2)): all the applicable rules for the same conclusion (may) contribute to defeating the applicable rules for the opposite conclusion. Note that both $\gamma$ and $\zeta$ may be defeaters.
\begin{example}\label{ex:Standard}
Let $D = (F = \set{a, b, c, d, e}, R, {>} = \set{(\alpha, \varphi), (\beta, \psi)})$ be a theory such that
\begin{align*}
R = \{&\alpha\colon a \Rightarrow_\ensuremath{\mathsf{C}}\xspace l &&\beta\colon b \Rightarrow_\ensuremath{\mathsf{C}}\xspace l && \gamma\colon c \Rightarrow_\ensuremath{\mathsf{C}}\xspace l\\
&\varphi\colon d \Rightarrow_\ensuremath{\mathsf{C}}\xspace \neg l && \psi\colon e \Rightarrow_\ensuremath{\mathsf{C}}\xspace \neg l && \chi\colon g \Rightarrow_\ensuremath{\mathsf{C}}\xspace \neg l \}.
\end{align*}
\end{example}
Here, $D \vdash +\partial_\ensuremath{\mathsf{C}}\xspace f_i$ for each $f_i \in \ensuremath{F}\xspace$, by Condition (1) of $+\partial_\ensuremath{\mathsf{C}}\xspace$. Therefore, all rules but $\chi$ are applicable: $\chi$ is indeed discarded since no rule has $g$ as its consequent, nor is $g$ a fact. The team supporting $l$ consists of $\alpha$, $\beta$, and $\gamma$, whereas the team supporting $\neg l$ consists of $\varphi$ and $\psi$. Given that $\alpha$ defeats $\varphi$ and $\beta$ defeats $\psi$, by team defeat we conclude that $D\vdash +\partial_\ensuremath{\mathsf{C}}\xspace l$. Note that, despite being applicable, $\gamma$ does not effectively contribute to proving $+\partial_\ensuremath{\mathsf{C}}\xspace l$, i.e., $D$ without $\gamma$ would still prove $+\partial_\ensuremath{\mathsf{C}}\xspace l$.
Suppose we change $D$ so that both $\alpha$ and $\beta$ are defeaters. Even if $\gamma$ defeats neither $\varphi$ nor $\psi$, $\gamma$ is now needed to prove $+\partial_\ensuremath{\mathsf{C}}\xspace l$, as Condition (2.2) requires that at least one applicable rule be a defeasible rule.
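For readers who prefer an operational reading, the fragment below is a rough sketch of the team-defeat check behind $+\partial_\ensuremath{\mathsf{C}}\xspace$, under the simplifying assumption that applicability, discardability, and the relevant rule sets have already been computed; it builds on the illustrative \texttt{Lit}/\texttt{complement} fragment above, and the function name and the encoding of the superiority relation are our own:
\begin{verbatim}
def plus_dC(l, facts, supporters, attackers, beats):
    """supporters: applicable rules for l as (label, is_defeasible);
    attackers: labels of applicable (not discarded) rules for ~l;
    beats: superiority relation as (stronger, weaker) label pairs."""
    if l in facts:
        return True
    if complement(l) in facts:
        return False
    if not any(defeasible for _, defeasible in supporters):
        return False  # condition (2.2): a defeasible supporter is needed
    team = [lab for lab, _ in supporters]
    return all(any((z, g) in beats for z in team) for g in attackers)

# Example ex:Standard (chi is discarded, hence not among the attackers):
sup = [("alpha", True), ("beta", True), ("gamma", True)]
att = ["phi", "psi"]
beats = {("alpha", "phi"), ("beta", "psi")}
print(plus_dC(Lit("l"), set(), sup, att, beats))   # True
\end{verbatim}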
Below we present the proof conditions for obligations.
\begin{definition}[Obligation Proof Conditions]\label{def:StandardOblProof}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{O}}\xspace l$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{O}}\xspace l$ then\+\\
$\exists \beta \in R^\ensuremath{\mathsf{O}}\xspace_\Rightarrow[l,i]$ s.t.\\
(1) \=$\beta$ is applicable at index $i$ and\\
(2) $\forall \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }} l, j] \cup R^\ensuremath{\mathsf{P}}\xspace[\ensuremath{\mathcal{\sim }} l]$ either \+\\
(1) \= $\gamma$ is discarded (at index $j$), or\\
(2) \= $\exists \zeta \in R^\ensuremath{\mathsf{O}}\xspace[l, k]$ s.t.\+\\
\= (1) $\zeta$ is applicable at index $k$ and\\
\= (2) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{O}}\xspace l$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{O}}\xspace l$ then\+\\
$\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{O}}\xspace[l,i]$ either\\
(1) $\beta$ is discarded at index $i$, or\\
(2) \= $\exists \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }} l, j] \cup R^\ensuremath{\mathsf{P}}\xspace[\ensuremath{\mathcal{\sim }} l]$ s.t. \+\\
(1) \= $\gamma$ is applicable (at index $j$), and\\
(2) \= $\forall \zeta \in R^\ensuremath{\mathsf{O}}\xspace[l, k]$ either\+\\
\= (1) $\zeta$ is discarded at index $k$, or\\
\= (2) $\zeta \not > \gamma$.
\end{tabbing}
\end{definition}
Note that: (i) in Condition (2) $\gamma$ can be a permission rule, as explicit, opposite permissions represent exceptions to obligations, whereas $\zeta$ (Condition (2.2)) must be an obligation rule, as a permission rule cannot reinstate an obligation; and (ii) $l$ may appear at different positions (indices $i, j,$ and $k$) within the three $\otimes$-chains. The example below supports the intuition behind the restriction to obligation rules in Condition (2.2).
\begin{example}
Suppose that the medical guidelines of a hospital forbid the use of opioids to sedate patients with an addiction history. At the same time, the guidelines mandate the same kind of drugs for terminal patients with cancer. Moreover, physicians are permitted to refuse to treat patients with opioids based on moral ground objections.
\begin{gather*}
\alpha\colon \mathit{AddictionHistory} \Rightarrow_\ensuremath{\mathsf{O}}\xspace \neg\mathit{Opioids}\\
\beta\colon \mathit{TerminalCancer} \Rightarrow_\ensuremath{\mathsf{O}}\xspace \mathit{Opioids}\\
\gamma\colon \mathit{MoralGround} \Rightarrow_\ensuremath{\mathsf{P}}\xspace \neg\mathit{Opioids}
\end{gather*}
where $\gamma >\beta$ and $\beta >\alpha$. Is it forbidden to use opioids for a terminally ill patient with an addiction history when moral ground objections apply? Here, rule $\gamma$ establishes an exemption from the obligation to prescribe opioids; however, the opposite course of action remains admissible, and the use of opioids for terminally ill cancer patients appears not to be forbidden (with or without moral ground objections).
\end{example}
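To connect the example with the illustrative encoding sketched earlier, the three norms can be written down directly (a hypothetical transcription, not an implementation of the proof theory):
\begin{verbatim}
AddictionHistory = Lit("AddictionHistory")
TerminalCancer   = Lit("TerminalCancer")
MoralGround      = Lit("MoralGround")
Opioids          = Lit("Opioids")

alpha = Rule("alpha", (AddictionHistory,), "=>", "O",
             (complement(Opioids),))
beta  = Rule("beta",  (TerminalCancer,),  "=>", "O", (Opioids,))
gamma = Rule("gamma", (MoralGround,),     "=>", "P",
             (complement(Opioids),))
beats = {("gamma", "beta"), ("beta", "alpha")}

# With all three rules applicable, O(Opioids) is not provable (gamma
# attacks beta, and no obligation rule for Opioids is stronger than
# gamma), and O(not Opioids) is not provable either (beta > alpha).
\end{verbatim}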
\noindent Below, we introduce the proof conditions for permissions.
\begin{definition}[Permission Proof Conditions]\label{def:StandardPermProof}
\begin{tabbing}
$+\partial_\ensuremath{\mathsf{P}}\xspace l$: \=If $P(n+1)=+\partial_\ensuremath{\mathsf{P}}\xspace l$ then\+\\
(1) \= $+\partial_\ensuremath{\mathsf{O}}\xspace l \in P(1..n)$, or\\
(2) \= $\exists \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[l]$ s.t.\+\\
(1) $\beta$ is applicable and\\
(2) \= $\forall \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }} l, j]$ either \+\\
(1) \=$\gamma$ is discarded at index $j$, or\\
(2) \=$\exists \zeta \in R^\ensuremath{\mathsf{P}}\xspace[l] \cup R^\ensuremath{\mathsf{O}}\xspace[l,k]$ s.t.\+\\
(1) \= $\zeta$ is applicable (at index $k$) and\\
(2) $\zeta > \gamma$.
\end{tabbing}
\begin{tabbing}
$-\partial_\ensuremath{\mathsf{P}}\xspace l$: \=If $P(n+1)=-\partial_\ensuremath{\mathsf{P}}\xspace l$ then\+\\
(1) \= $-\partial_\ensuremath{\mathsf{O}}\xspace l \in P(1..n)$, and\\
(2) \= $\forall \beta \in R_\Rightarrow^\ensuremath{\mathsf{P}}\xspace[l]$ either\+\\
(1) $\beta$ is discarded or\\
(2) \= $\exists \gamma \in R^\ensuremath{\mathsf{O}}\xspace[\ensuremath{\mathcal{\sim }} l, j]$ s.t. \+\\
(1) \=$\gamma$ is applicable at index $j$ and\\
(2) \=$\forall \zeta \in R^\ensuremath{\mathsf{P}}\xspace[l] \cup R^\ensuremath{\mathsf{O}}\xspace[l,k]$ either\+\\
(1) \= $\zeta$ is discarded (at index $k$), or\\
(2) $\zeta \not > \gamma$.
\end{tabbing}
\end{definition}
Condition (1) allows us to derive a permission from the corresponding obligation; it thus corresponds to the axiom $\ensuremath{\mathsf{O}}\xspace a\to\ensuremath{\mathsf{P}}\xspace a$ of Deontic Logic. Condition (2.2) considers as possible counter-arguments \emph{only} obligation rules, since situations where both $\ensuremath{\mathsf{P}}\xspace l$ and $\ensuremath{\mathsf{P}}\xspace \neg l$ hold are allowed.
We refer the readers interested in a deeper discussion on how to model permissions and obligations in DDL to \cite{GovernatoriORS13}.
Hereafter, whenever the applicability conditions of a given rule are not relevant for the example, we will set the corresponding set of antecedents to empty as such rules are vacuously applicable.
\begin{example}\label{ex:StandardDeontic}
Assume the theory of Example~\ref{ex:Standard}, where we extend the rule set and the superiority relation as follows
\begin{align*}
R &\cup \set{\zeta\colon \emptyset \Rightarrow_\ensuremath{\mathsf{O}}\xspace \neg l \otimes p\quad \eta\colon \emptyset \Rightarrow_\ensuremath{\mathsf{P}}\xspace l \quad \nu\colon \neg\ensuremath{\mathsf{O}}\xspace l \Rightarrow_\ensuremath{\mathsf{C}}\xspace q }\\
> &\cup \set{(\zeta, \eta)}.
\end{align*}
\end{example}
Since $\zeta > \eta$, we conclude $D\vdash +\partial_\ensuremath{\mathsf{O}}\xspace \neg l$; moreover, $D\vdash -\partial_\ensuremath{\mathsf{O}}\xspace l$, as there are no obligation rules supporting $l$ and the conditions of $-\partial_\ensuremath{\mathsf{O}}\xspace$ are thus vacuously satisfied, and $D\vdash -\partial_\ensuremath{\mathsf{P}}\xspace l$ due to Condition (2.2) of $-\partial_\ensuremath{\mathsf{P}}\xspace$ ($\zeta$ is applicable and $\eta \not> \zeta$). Condition (3) of Definition~\ref{def:StandardApplicability} is satisfied, which makes $\nu$ applicable and hence $D\vdash +\partial_\ensuremath{\mathsf{C}}\xspace q$.
Given that, from Example~\ref{ex:Standard}, $D\vdash +\partial_\ensuremath{\mathsf{C}}\xspace l$, Condition (4) of Definition~\ref{def:StandardApplicability} is satisfied, which makes $\zeta$ applicable at index 2 for $p$: since there are no deontic rules supporting either $\ensuremath{\mathsf{O}}\xspace\neg p$ or $\ensuremath{\mathsf{P}}\xspace\neg p$, we conclude that $D\vdash +\partial_\ensuremath{\mathsf{O}}\xspace p$.
The set of positive and negative conclusions of a theory is called \emph{extension}. The extension of a theory is computed based on the literals that appear in it; more precisely, the literals in the Herbrand Base of the theory $\mathit{HB}(D)=\set{l,\ensuremath{\mathcal{\sim }} l \in\ensuremath{\mathrm{PLit}}\xspace|\,l \text{ appears in }D}$.
\begin{definition}[Extension]\label{def:StandardExtension}
Given a defeasible deontic theory $D$, we define the \emph{extension} of $D$ as
\[E(D) = (+\partial_\ensuremath{\mathsf{C}}\xspace, -\partial_\ensuremath{\mathsf{C}}\xspace,+\partial_\ensuremath{\mathsf{O}}\xspace, -\partial_\ensuremath{\mathsf{O}}\xspace,+\partial_\ensuremath{\mathsf{P}}\xspace, -\partial_\ensuremath{\mathsf{P}}\xspace),\]
where $\pm\partial_\Box = \set{l \in \mathit{HB}(D) |\, D\vdash \pm\partial_\Box l}$, with $\Box \in \set{\ensuremath{\mathsf{C}}\xspace, \ensuremath{\mathsf{O}}\xspace, \ensuremath{\mathsf{P}}\xspace}$.
\end{definition}
In Example~\ref{ex:StandardDeontic}, $E(D)$ consists of the following sets:
\begin{align*}
+\partial_\ensuremath{\mathsf{C}}\xspace&=\set{l, q}\cup \ensuremath{F}\xspace,
&-\partial_\ensuremath{\mathsf{C}}\xspace &= \set{\neg l, p, \neg p, \neg q, g, \neg g}\cup\set{\neg f_i|f_i\in\ensuremath{F}\xspace},\\
+\partial_\ensuremath{\mathsf{O}}\xspace &= \set{\neg l, p},
&-\partial_\ensuremath{\mathsf{O}}\xspace &= \set{l, \neg p, q, \neg q, g, \neg g}
\cup \ensuremath{F}\xspace\cup\set{\neg f_i|f_i\in\ensuremath{F}\xspace},
\\
+\partial_\ensuremath{\mathsf{P}}\xspace&= \set{\neg l, p},
&-\partial_\ensuremath{\mathsf{P}}\xspace &= \set{l, \neg p, q, \neg q, g, \neg g}\cup \ensuremath{F}\xspace\cup\set{\neg f_i|f_i\in\ensuremath{F}\xspace}.
\end{align*}
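Operationally, Definition~\ref{def:StandardExtension} suggests a naive fixpoint computation. The sketch below is illustrative only: \texttt{prove} and \texttt{refute} stand for implementations of the $\pm\partial_\Box$ proof conditions of this section, and no claim of efficiency is made; the algorithms actually used are the subject of Section~\ref{sec:Algo}.
\begin{verbatim}
def extension(HB, prove, refute):
    """Saturate positive/negative conclusions per mode until a
    fixpoint: prove/refute implement +d / -d for one literal,
    given the conclusions derived so far."""
    proved  = {m: set() for m in "COP"}
    refuted = {m: set() for m in "COP"}
    changed = True
    while changed:
        changed = False
        for mode in "COP":
            for l in HB:
                if l not in proved[mode] and \
                        prove(mode, l, proved, refuted):
                    proved[mode].add(l)
                    changed = True
                if l not in refuted[mode] and \
                        refute(mode, l, proved, refuted):
                    refuted[mode].add(l)
                    changed = True
    return proved, refuted
\end{verbatim}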
\section{Introduction and Background}\label{sec:Intro}
\input{Intro2}
\input{Methodology}
\section{Defeasible Deontic Logic with Meta-Rules}\label{sec:Logic}
\input{Logic}
\section{Algorithms}\label{sec:Algo}
\input{Algorithms}
\input{relatedworkmetalogic}
\section{Conclusions and Further Developments}\label{sec:Conc}
\input{Conclusions}
\bibliographystyle{splncs04}
\section{Related Work}\label{sec:RelatedWork}
The philosophical issue of admitting meta-rules in the logical language was critically discussed for a long time in conditional logics \cite{nute:1980}. At the end of the 1960s, such logics were studied to give a formal account of linguistic structures such as indicative conditionals and subjunctive conditionals expressing counterfactuals. Later on, conditional logics were also used to deal with non-monotonic reasoning \cite{Delgrande:87,Farinas:94}. While some, like \cite{adams:1975}, rejected nested rules when a probabilistic treatment of conditionals is considered, others admitted this possibility, even though it was argued that the intuitive meaning of such formulas is not clear \cite{Lewis:1973,nute:1980,Delgrande:87}; meaningful uses of nested conditionals were advanced in \cite{Boutilier:92}.
In mathematical logic and theoretical computer science, meta-logic and meta-reasoning have been recurring themes over the years. A historical perspective shows that the topic has been dealt with, from different angles, since 1989, when Paulson \cite{Paulson1989363} presented the issues related to the construction of a generic theorem prover and showed that reasoning with the notion of \emph{relevance}, as a means to choose the most useful subset of axioms, can yield a significant speed-up of the process itself.
The discussion then remained open for a few years. Numerous scholars borrowed the notion of meta-reasoning from the original presentation by Tarski \cite{Tarski1986143}, where the notion of \emph{metamathematics} is applied to logical frameworks. This was employed in the foundational work by Russell and Wefald \cite{Russell1991361}. Their level of discussion was still far from concrete cases, but it has the great merit of providing a reference for the basic motivation for introducing meta-reasoning systems, namely \emph{bounded rationality}. More concretely, it became possible to employ these notions in many-valued logics \cite{Murray1994237}, in image processing (a field that is not relevant to this investigation), and in knowledge-based systems by Rowe \emph{et al.} \cite{Rowe19921}.
Almost immediately, significant attention was devoted to the definition of meta-logic methods in logic programming. This line of investigation started with the pioneering study by Costantini and Lanzarone \cite{Costantini1994239}, was inspired by the early investigation by Lloyd and Topor \cite{Lloyd1984225}, and was further pursued by Brogi \emph{et al.} \cite{Brogi19997,Brogi1998123}, Lifschitz \emph{et al.} \cite{Lifschitz1999369}, and many others.
Subsequently, a very important novelty in meta-logical systems was introduced by Grundy in 1996 \cite{Grundy1996}: the notion of hierarchical reasoning, whose underlying idea is that decision-making does not depend on one single level, but on more than one. Numerous applications have been devised, and in many of these hierarchical reasoning has proven decisive. Papadias \emph{et al.} \cite{Papadias1997251} provided specific algorithms for spatial reasoning, and Seidel \emph{et al.} \cite{Seidel1997239} applied it to probabilistic Constraint Satisfaction Problems.
Unlike hierarchical reasoning, where different layers of logical frameworks are employed to represent a hierarchy of decisions, meta-reasoning can be based on higher-order logics, where reasoning becomes an element of the logic itself. This approach inspired the methods for the construction of systems with nested rules that we introduced as a basis for this research in Section~\ref{sec:Intro}. The original investigations on higher-order logic and meta-reasoning are due to McDowell \emph{et al.} \cite{Mcdowell200280} and, independently, to Momigliano \emph{et al.} \cite{Momigliano2003375}.
Meta-reasoning has also motivated investigations on a more theoretical level. On the one hand, the notion of continual computation has been deeply investigated as a basis for machine reasoning by Horvitz \cite{Horvitz2001159}. This notion has proven to be the basis of many unified models of reasoning for humans and machines, as widely discussed by Gershman \emph{et al.} in their impactful study on the notion of computational rationality \cite{Gershman2015273}.
Another aspect of meta-logic that has been dealt with is the \emph{temporal} one, which has been studied for rewriting methods by Clavel and Meseguer \cite{Clavel2002245} and by Baldan \emph{et al.} \cite{Baldan20021}.
Coming to meta-reasoning and meta-logic in non-monotonic systems, this work is an extensive revision and substantial extension of the research by Olivieri \emph{et al.} \cite{Olivieri202169}. Their investigation provided a framework for reasoning with meta-rules as a generalisation of the approach employed by Governatori and Rotolo \cite{Governatori2009157} and by Cristani \emph{et al.} \cite{Cristani201739}. These studies mainly focused on the need for specific meta-rules for the revision of norms, and on accommodating, in the form of meta-reasoning, the issues related to the effects of rules over time. Concerning business process compliance, some studies have been developed by Governatori \emph{et al.} \cite{GovernatoriOSRC16,GovernatoriORS13,DBLP:conf/ruleml/GovernatoriOSC11,Olivieri2015603}, where some of the issues that we solve here first emerged.
The specific issue of developing methods for reasoning with rules as the object of an argument has been the focus of the study by Modgil \emph{et al.} \cite{Modgil2011959}. This research focuses mainly on meta-arguments, which have been investigated as the basis for a general theory of argumentation structures. Finocchiaro explored these issues critically \cite{Finocchiaro2007253} and interpreted the notion of meta-argument from a general point of view. Similarly, Hovhannisyan and Djijian \cite{Hovhannisyan2017345} recently developed a general theory of meta-argumentation.
On the other hand, Boella \emph{et al.} have defined when a meta-argument is acceptable \cite{Boella2009259}.
\chapter*{}
\vfill
{\footnotesize
\begin{flushright}
\emph{}\\
\emph{To my family.\\}
\begin{CJK*}{UTF8}{gbsn}
\emph{致我的家人。}
\end{CJK*}
\end{flushright}
}
\vfill
\cleardoublepage
\frontmatter
\pagestyle{headings}
\input{04_abstract.tex}
\cleardoublepage
\begin{otherlanguage*}{german}
\input{05_zusammenfassung.tex}
\end{otherlanguage*}
\cleardoublepage
\input{06_acknowledgments.tex}
\cleardoublepage
\phantomsection\pdfbookmark[0]{Contents}{Contents}
{\hypersetup{linkcolor=black} \tableofcontents}
\cleardoublepage
\phantomsection\addcontentsline{toc}{chapter}{List of Figures}
{\hypersetup{linkcolor=black} \listoffigures}
\cleardoublepage
\phantomsection\addcontentsline{toc}{chapter}{List of Tables}
{\hypersetup{linkcolor=black} \listoftables}
\cleardoublepage
\mainmatter
\pagenumbering{arabic}
\setcounter{page}{1}
\pagestyle{headings}
\input{10_introduction}
\input{20_inference}
\input{30_adaptation}
\input{40_learning}
\input{50_edgeserver}
\input{60_conclusion}
\cleardoublepage
\def\bibname{Bibliography}
\bibliographystyle{alphaabbr}
\phantomsection\addcontentsline{toc}{chapter}{Bibliography}
{\small
\chapter[Abstract]{Abstract}
Deep neural networks (DNNs) have succeeded in many different perception tasks, e.g.,\xspace computer vision, natural language processing, reinforcement learning, etc.
High-performing DNNs heavily rely on intensive resource consumption.
For example, training a DNN requires a large dynamic memory, a large-scale dataset, and a large number of computations (i.e.,\xspace a long training time); even inference with a DNN demands a large amount of static storage, computations (i.e.,\xspace a long inference time), and energy.
Therefore, state-of-the-art DNNs are often deployed on a cloud server with a large number of super-computers, a high-bandwidth communication bus, a shared storage infrastructure, and a high power supply.
Recently, some new emerging intelligent applications, e.g.,\xspace AR/VR, mobile assistants, Internet of Things, require us to deploy DNNs on resource-constrained edge devices.
Compared to a cloud server, edge devices often have a rather small amount of resources.
To deploy DNNs on edge devices, we need to reduce the size of DNNs, i.e.,\xspace we target a better trade-off between the resource consumption and the model accuracy.
In this thesis, we study four edge intelligent scenarios and develop different methodologies to enable deep learning in each scenario.
Since current DNNs are often over-parameterized, our goal is to find and reduce the redundancy of the DNNs in each scenario.
We summarize the four studied scenarios as follows,
\begin{itemize}
\item \fakeparagraph{Inference on Edge Devices}
Firstly, we enable efficient inference of DNNs given the fixed resource constraints on edge devices.
Compared to cloud inference, inference on edge devices avoids transmitting the data to the cloud server, which can achieve a more stable, fast, and energy-efficient inference.
Regarding the main resource constraints from storing a large number of weights and from computation during inference, we propose an Adaptive Loss-aware Quantization (ALQ\xspace) for multi-bit networks.
ALQ\xspace reduces the redundancy on the quantization bitwidth.
The direct optimization objective (i.e.,\xspace the loss) and the learned adaptive bitwidth assignment allow ALQ\xspace to acquire extremely low-bit networks with an average bitwidth below 1-bit while yielding a higher accuracy than state-of-the-art binary networks.
\item \fakeparagraph{Adaptation on Edge Devices}
Secondly, we enable efficient adaptation of DNNs when the resource constraints on the target edge devices dynamically change during runtime, e.g.,\xspace the allowed execution time and the allocatable RAM.
To maximize the model accuracy during on-device inference, we develop a new synthesis approach, Dynamic REal-time Sparse Subnets (DRESS\xspace) that can sample and execute sub-networks with different resource demands from a backbone network.
DRESS\xspace reduces the redundancy among multiple sub-networks by weight sharing and architecture sharing, resulting in storage efficiency and re-configuration efficiency, respectively.
The generated sub-networks have different sparsity, and thus can be fetched and executed under varying resource constraints by utilizing sparse tensor computations.
\item \fakeparagraph{Learning on Edge Devices}
Thirdly, we enable efficient learning of DNNs when facing unseen environments or users on edge devices.
On-device learning requires both data- and memory-efficiency.
We thus propose a new meta learning method p-Meta\xspace to enable memory-efficient learning with only a few samples of unseen tasks.
p-Meta\xspace reduces the updating redundancy by identifying and updating only structurewise adaptation-critical weights, which saves the memory consumption needed for the updated weights.
\item \fakeparagraph{Edge-Server System}
Finally, we enable efficient inference and efficient updating on edge-server systems.
In an edge-server system, several resource-constrained edge devices are connected to a resource-sufficient server with a constrained communication bus.
Due to the limited relevant training data available beforehand, pretrained DNNs may be significantly improved after the initial deployment.
On such an edge-server system, on-device inference is preferred over cloud inference, since it can achieve a fast and stable inference with less energy consumption.
Yet retraining on the cloud server is preferred over on-device retraining (or federated learning) due to the limited memory and computing power on edge devices.
We propose a novel pipeline, Deep Partial Updating (DPU\xspace), to iteratively update the deployed inference model.
Particularly, when newly collected data samples from edge devices or from other sources are available at the server, the server smartly selects only a subset of critical weights to update and send to each edge device.
This weightwise partial updating reduces redundant updating by reusing the pretrained weights, and achieves a similar accuracy as full updating yet with a significantly lower communication cost.
\end{itemize}
\chapter[Zusammenfassung]{Zusammenfassung}
Deep neural networks (DNNs) have proven successful in many different perception tasks, e.g.,\xspace computer vision, natural language processing, reinforcement learning, etc.
High-performing DNNs rely heavily on intensive resource consumption.
For example, training a DNN requires a large dynamic memory, a large dataset, and a large number of computations (a long training time); even inference with a DNN also demands a large amount of static memory, computations (a long inference time), and energy.
Therefore, modern DNNs are often deployed on a cloud server with a large number of supercomputers, a high-bandwidth communication bus, a shared storage infrastructure, and a high power supply.
Recently, some newly emerging intelligent applications, e.g.,\xspace AR/VR, mobile assistants, Internet of Things, require the deployment of DNNs on resource-constrained edge devices.
Compared to a cloud server, edge devices often have a rather small amount of resources.
To deploy DNNs on edge devices, we have to reduce the size of DNNs, i.e.,\xspace we aim at a better trade-off between resource consumption and model accuracy.
In this doctoral thesis, we investigate four intelligent edge scenarios and develop different methods to enable deep learning in each scenario.
Since current DNNs are often over-parameterized, our goal is to find and reduce the redundancy of the DNNs in each scenario.
We summarize the four investigated scenarios as follows,
\begin{itemize}
\item \fakeparagraph{Inference on Edge Devices}
First, we enable efficient inference of DNNs given the fixed resource constraints on edge devices.
Compared to cloud inference, inference on edge devices avoids transmitting the data to the cloud server, whereby a more stable, faster, and more energy-efficient inference can be achieved.
Regarding the main resource constraints that arise from storing a large number of weights and from the computations during inference, we have proposed an Adaptive Loss-aware Quantization (ALQ\xspace) for multi-bit networks.
ALQ\xspace reduces the redundancy in the quantization bitwidth.
The direct optimization objective (i.e.,\xspace the loss) and the learned adaptive bitwidth assignment allow ALQ\xspace to acquire extremely low-bit networks with an average bitwidth below 1-bit, while achieving a higher accuracy than state-of-the-art binary networks.
\item \fakeparagraph{Adaptation on Edge Devices}
Second, we enable efficient adaptation of DNNs when the resource constraints on the target edge devices change dynamically at runtime, e.g.,\xspace the allowed execution time and the allocatable RAM.
To maximize the model accuracy during on-device inference, we develop a new synthesis approach, Dynamic REal-time Sparse Subnets (DRESS\xspace), which can sample sub-networks with different resource demands from a backbone network and execute them.
DRESS\xspace reduces the redundancy across multiple sub-networks through weight sharing and architecture sharing, which leads to storage efficiency and re-configuration efficiency, respectively.
The generated sub-networks exhibit different sparsity and can therefore be fetched to infer under varying resource constraints by using sparse tensor computations.
\item \fakeparagraph{Learning on Edge Devices}
Third, we enable efficient learning of DNNs when facing unseen environments or users on edge devices.
On-device learning requires both data efficiency and memory efficiency.
We therefore propose a new meta learning method, p-Meta\xspace, which enables memory-efficient learning with only a few data samples of unseen tasks.
p-Meta\xspace reduces the updating redundancy by identifying and updating only structurewise adaptation-critical weights, which saves the necessary memory consumption for the updated weights.
\item \fakeparagraph{Edge-Server System}
Finally, we enable efficient inference and efficient updating on edge-server systems.
In an edge-server system, several resource-constrained edge devices are connected to a resource-sufficient server with a constrained communication bus.
Due to the limited amount of relevant training data available in advance, pretrained DNNs can be significantly improved after the initial deployment.
On such an edge-server system, on-device inference is preferred over cloud inference, since it can achieve a fast and stable inference with less energy consumption.
However, due to the limited memory and computing power on edge devices, retraining on the cloud server is preferred over on-device retraining (or federated learning).
We have proposed a novel pipeline, Deep Partial Updating (DPU\xspace), to iteratively update the deployed inference model.
In particular, when newly collected data samples from edge devices or from other sources are available at the server, the server smartly selects only a subset of critical weights to update and send to each edge device.
This weightwise partial updating reduces redundant updating by reusing the pretrained weights, which achieves a similar accuracy as full updating yet with a significantly lower communication cost.
\end{itemize}
\chapter[Acknowledgements]{\vspace{-2cm}Acknowledgements}
I was born in Nanyang, China. Nanyang is a typical third-tier city in China, with a large number of citizens yet a rather low growth rate. It is not easy for children like me to finally receive a doctoral degree from a top-tier university. The entire study career was full of chance and uncertainty. There were many critical steps where you only had a small chance, and the only thing you could do was to try your best and submit to the will of god. But for the ones who are struggling like me in the past and happen to read my thesis (only a few ;-)), I would encourage you with a sentence: \textit{I strive to run, just to catch up with those who have been given high hopes of their own} (originally from the show Total Soccer).
I am very lucky that at every crossroad, I could always wait for the best option, which I believe was often far beyond what I deserved. Therefore, I try my best to remember and appreciate all the people who recognized me, guided me, criticized me, supported me, and accompanied me during my 24 years of study. I also appreciate the younger me, who had the courage to choose the path that is full of difficulties but also led me to explore unknown beauties.
My interest in engineering was first inspired by my physics teacher in middle school. In high school, I started to systematically learn physics, and this unforgettable period with my friends built the foundation of my mathematical logic. In my first year of bachelor study at Tongji University, I had a serious ankle fracture in a football game. My peers provided me with countless help in both study and life. During my master study at TU Munich, I received kind supervision when I wrote my master theses in \textit{Computer Vision Group} and \textit{Robotics \& Artificial Intelligence Group}. These pleasant research experiences finally motivated and also helped me to proceed with my academic career at ETH Zurich as a doctoral student.
The PhD study at ETH is my most memorable and enriching period. The four-year study taught me to view a new problem with unprecedented breadth and depth, which I believe is even more beneficial to my future life than the knowledge gained from research. I sincerely thank Prof. Lothar Thiele for offering me this great opportunity, and for your supervision and guidance. Thanks for revising my work from the early morning until the late night. I cannot imagine a better advisor than you. I wish you all the best in your retirement life. I appreciate my ETH colleagues in the \textit{Computer Engineering Group}, e.g.,\xspace Prof. Rehan Ahmed, Prof. Jan Beutel, Andreas Biri, Dr. Yun Cheng, Reto Da Forno, Dr. Stefan Drašković, Tonio Gsell, Dr. Xiaoxi He, Dr. Romain Jacob, Prof. Cong Liu, Dr. Balz Maag, Dr. Matthias Meyer, Dr. Philipp Miedl, Dr. Lukas Sigrist, Naomi Stricker, Dr. Roman Trüb, etc. for the interesting discussions and happy daily working hours. Many thanks to Susann Arreghini and Beat Futterknecht for getting me familiar with Swiss life. I also appreciate all other collaborators for your patient discussions and constructive suggestions, e.g.,\xspace Hu Cao, Prof. Guang Chen, Xin Dong, Junfeng Guo, Lennart Heim, Dr. Shu Liu, Zhao Meng, Prof. Yongxin Tong, Prof. Ye Wang, etc. Particular gratitude goes to Prof. Zimu Zhou. You played a major role in leading me into academic research when I was at the beginning of my PhD study. I hope our numerous audio calls on weekends were not too annoying for you. I also appreciate the team members, particularly my supervisor Dr. Syed Shakib Sarwar and Dr. Barbara De Salvo as well as my peers Dominika Przewłocka-Rus and Dr. Peter Liu at \textit{Facebook Reality Labs}, for providing me with a remote yet productive internship during the COVID pandemic.
In addition, I want to thank Prof. Olga Saukh for being the examiner of my doctoral defense.
I also would like to thank my family and my friends in my personal life. I appreciate my mom and my dad for their guidance and support since my childhood, which shaped my body and my soul.
Last but not least, I could not have completed this journey without my wife, who tolerated, encouraged, and accompanied me through countless days and nights, and countless ups and downs.
\chapter{Introduction}
\label{ch1:introduction}
Deep learning is a new disruptive technology that has dramatically driven the development of artificial intelligence.
Deep neural networks (DNNs) are widely used in deep learning, which can make predictions according to the given inputs.
A DNN consists of a large number of cascaded layers, where each layer often comprises (\textit{i}) trainable weights that perform a matrix multiplication on the layer's input to output extracted features, and (\textit{ii}) a non-linear function that introduces non-linear behavior.
DNNs can often achieve performance superior to prior computational models, or even to human beings, in many areas, e.g.,\xspace computer vision, natural language processing, mathematics, biochemistry, etc.
In image classification, AlexNet \cite{bib:NIPS12:Krizhevsky} automatically learns the features by training a deep convolutional neural network with GPUs, and the competition results on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) show that AlexNet surpasses the prior classifiers built on hand-crafted features, e.g.,\xspace random forests and support vector machines, by a large margin (over 10\% accuracy gain).
AlphaGo Zero \cite{bib:Nature16:Silver} reinforce-learns a deep policy model to predict the movement on the Go board via playing games against itself, and the learned model can even defeat a human world champion of Go games.
BERT \cite{bib:ACL19:Devlin} pretrains deep bidirectional representations from unlabeled text and then fine-tunes the pretrained model, which exhibits better performance than humans in language understanding on the SQuAD test.
Recently, graph convolutional neural networks \cite{bib:Nature19:Eraslan} have also been applied to many biological and chemical problems e.g.,\xspace predicting protein function, predicting binarized gene expression, etc.
As a result, DNNs not only can conduct intelligent tasks in our daily life that previously had to rely on cumbersome human effort, but also may bring new scientific inspirations that are less explored in long-term human history.
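To make the layer structure just described concrete, the following self-contained toy sketch (with arbitrary sizes and random weights, for illustration only) implements each layer as a matrix multiplication with trainable weights followed by a non-linear function:
\begin{verbatim}
import numpy as np

def dense_layer(x, W, b):
    # trainable weights W, b perform a matrix multiplication on the
    # input; the elementwise max with 0 (ReLU) adds the non-linearity
    return np.maximum(W @ x + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)                   # input features
for n_out, n_in in [(32, 16), (32, 32), (10, 32)]:
    W = rng.standard_normal((n_out, n_in)) * 0.1
    b = np.zeros(n_out)
    x = dense_layer(x, W, b)                  # three cascaded layers
print(x.shape)                                # (10,)
\end{verbatim}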
\section{High Resource Demands of DNNs}
\label{ch1-sec:recource}
The high performance of state-of-the-art DNNs benefits from the intensive resource consumption during both the \textit{training} phase and the \textit{inference} phase.
During the training phase, current DNNs are often optimized on high-performance cloud servers with a large-scale dataset over a long time, which may take (\textit{i}) a large amount of human labor to prepare the dataset or the training implementation, (\textit{ii}) a large amount of time and money, and (\textit{iii}) remarkable CO2 emissions \cite{bib:ICLR20:Cai}.
For example, the widely-used DNN ResNet50 \cite{bib:CVPR16:He} needs to be trained with ImageNet dataset which contains $1.2$ million well-labeled internet images collected from 1000 balanced fine classes;
the GPT-3 model published by OpenAI \cite{bib:arXiv20:Brown} takes $3.14\times 10^{23}$ floating-point multiply-accumulate operations (FLOPs) for a single training run, which is equivalent to 355 GPU-years and 4.6M US dollars, according to the theoretical $2.8\times 10^{13}$ FLOPs per second of the high-performance Nvidia V100 GPU and the lowest 3-year reserved cloud pricing we could find \cite{bib:GPT-3}.
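For transparency, the GPU-year figure can be reproduced from the numbers just quoted; the only constant we add is the approximate number of seconds per year ($\approx 3.15\times 10^{7}$):
\[
\frac{3.14\times 10^{23}\ \text{FLOPs}}{2.8\times 10^{13}\ \text{FLOPs/s}}
\approx 1.12\times 10^{10}\ \text{s}
\approx 355\ \text{GPU-years}.
\]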
Even during the inference phase, pretrained DNNs still demand a rather significant amount of computing resources, e.g.,\xspace memory, computation, latency, and energy.
For example, the Faster-RCNN model \cite{bib:NIPS15:Ren} requires several hundreds of GFLOPs for a single inference, thus can only achieve around 5 frames per second for object detection on a state-of-the-art GPU;
the current language models \cite{bib:arXiv20:Brown} contain billions of parameters, which often require several GPUs with GB-level memory on a cloud server with a high-bandwidth communication bus for parallelism during inference.
Note that a state-of-the-art GPU often has a minimum power requirement of around 500W for operation \cite{bib:GPU_power}.
All the examples mentioned above indicate the inherent resource-intensive characteristics of DNNs.
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch1/DNN.png}
\caption[The number of parameters in different DNNs is exponentially increased along the years.]{The number of parameters in different DNNs is exponentially increased along the years. Note that the two outlying nodes (pink) are AlexNet and VGG16, now considered over-parameterized. The figure is originally from \cite{bib:Report21:Bernstein}.}
\label{ch1-fig:dnn}
\end{figure}
However, the resource demands of DNNs still keep growing.
As noted in \cite{bib:Report21:Bernstein}, although state-of-the-art DNNs continuously improve the accuracy level, the number of parameters (as well as the number of FLOPs) in these DNNs also increases over the years, even at an \textit{exponential} rate, as shown in \figref{ch1-fig:dnn}.
On the other hand, the research and development of hardware often require a long cycle and a high investment.
As a result, the growth rate of the model size is far larger than that of the computing power of state-of-the-art high-performance computers, e.g.,\xspace GPUs.
For example, the number of parameters has increased more than 2000 times from AlexNet in 2012 \cite{bib:NIPS12:Krizhevsky} to GPT-3 in 2020 \cite{bib:arXiv20:Brown}, whereas at the same time, the memory of Nvidia GPU has only increased 22 times from Geforce GTX 660 to Geforce RTX 3090, and the computing power (FLOPs/second) has increased around 17 times \cite{bib:wiki_Nvidia}.
\section{Cloud Intelligence}
\label{ch1-sec:cloud_intelligence}
As mentioned above, there exists a large gap between the computing power of available hardware and the resource demands of DNNs.
The common solution to such a conflict is to gather multiple high-performance computers and build a cluster-based server in the cloud, also known as cloud computing \cite{bib:wiki_cloud}.
A cloud server is a group of two or more computers that can share computing resources, communicate with each other, and distribute the workload of the same task according to a predefined scheduling system \cite{bib:cluster}.
Some commercial cloud servers include Amazon Web Services (AWS), Google Cloud, Microsoft Azure, etc.
These cloud servers may contain high-performance processors such as CPUs, GPUs, and TPUs, a communication bus with high bandwidth, on-demand shared storage infrastructure, and a high-capacity power supply.
Particularly, a DNN can be deployed on a cloud server to perform some resource-intensive intelligent applications e.g.,\xspace gradient-based training, machine translation, question answering systems, etc.
The high resource demands from these applications can be delegated to multiple computers, and if necessary the results from these computers are aggregated afterwards.
Cloud intelligence has become a prevailing solution for many intelligent services, which require a large amount of resources (e.g.,\xspace memory, computation) that a single computer is often unable to provide.
\section{Edge Intelligence}
\label{ch1-sec:edge_intelligence}
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch1/edge_intelligent.png}
\caption[Example edge intelligence applications.]{Example edge intelligence applications. From Left to Right: Augmented/Virtual Reality, Mobile Assistants, Internet of Things, Autonomous Driving. The images are from Google.}
\label{ch1-fig:edge_intelligent}
\end{figure}
In addition to cloud intelligence, some new emerging edge intelligent applications further require us to deploy DNNs on \textit{edge devices}.
The term edge refers to an entry point \cite{bib:wiki_edge}.
Accordingly, the collected data (at the entry point) are processed by DNNs locally, i.e.,\xspace on devices.
Edge devices have a large variety, including mobile phones, wearable devices, sensor nodes, etc.
Some example edge intelligence applications (see \figref{ch1-fig:edge_intelligent}) include but are not limited to,
\begin{itemize}
\item
\fakeparagraph{Augmented/Virtual Reality} Augmented/virtual reality (AR/VR) visualizes digital information as (part of) the real world via wearable devices, e.g.,\xspace glasses \cite{bib:wiki_ar,bib:wiki_vr}.
To bridge the gap between the physical world and the virtual environment, many AR/VR tasks, e.g.,\xspace hand detection, eye tracking, digital humans, require deep learning methods to provide high-quality interaction.
\item
\fakeparagraph{Mobile Assistants} Mobile assistants are software agents that can perform tasks or services on mobile platforms for an individual based on commands or questions \cite{bib:wiki_assistant}.
Individual users can input voice, images, or text to mobile assistants.
Given the inputs from users, DNNs are utilized to recognize, understand, and communicate with users.
\item
\fakeparagraph{Internet of Things} Internet of Things (IoT) describes physical objects with sensors, processing ability, software, and other technologies that connect with other devices over communication networks \cite{bib:wiki_iot}.
IoT applications use DNNs for automatic sensing and reasoning, e.g.,\xspace detecting intruders in a ``smart home'' monitor system.
\item
\fakeparagraph{Autonomous Driving} Autonomous cars can sense their surroundings and move safely with little or no human inputs \cite{bib:wiki_autonomous}.
Thanks to the rapid development of deep learning, many DNNs in computer vision tasks, e.g.,\xspace object detection, 3D localization, semantic segmentation, have been widely adopted to interpret sensory information and identify appropriate navigation paths.
\end{itemize}
In comparison to cloud intelligent applications, edge intelligent applications have the following advantages: (\textit{i}) they avoid privacy issues and can be used on sensitive/confidential data, as the data are processed locally; (\textit{ii}) they reduce the reliance on the cloud server and can achieve a stable inference even with congested/interrupted communication channels; (\textit{iii}) they can realize a real-time inference even if the communication bandwidth is limited; (\textit{iv}) they can save energy by avoiding transferring data to the cloud server, which often costs significantly more energy than sensing and computation \cite{bib:Book19:Warden,bib:arXiv18:Guo,bib:arXiv19:Lee}.
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch1/cloud_edge.png}
\caption[Comparison between deep learning on the cloud server and deep learning on edge devices.]{Comparison between deep learning on cloud and deep learning on edge. The figure is originally from \cite{bib:arXiv21:Soro}.}
\label{ch1-fig:cloud_edge}
\end{figure}
Unfortunately, deploying DNNs on edge devices is not trivial, as current DNNs contradict the \textit{resource-constrained} nature of edge devices.
Unlike plenty of high-performance computers (e.g.,\xspace GPUs and TPUs) in the cloud server, the processors on edge devices are commonly mobile SoCs, NPUs, or even MCUs, which have a rather small amount of resources and limited scalability.
We compare the difference between deep learning on the cloud server and deep learning on edge devices in \figref{ch1-fig:cloud_edge}.
The edge devices often run on battery power and have only several KB to MB of allocatable RAM.
Their parallel computing capabilities are also relatively low due to the small number of computing cores.
In addition, the amount of user data collected on edge devices is also limited in comparison to the large-scale datasets used in cloud training.
To deploy DNNs on these edge devices, the complexity of DNNs needs to be trimmed down to fit the limited resource budget.
\section{Thesis Outline}
\label{ch1-sec:outline}
In this thesis, we will study how to \textit{enable deep learning on edge devices in different scenarios}.
Deploying DNNs on edge devices always targets a trade-off between \textit{the resource demands} and \textit{the model accuracy}.
Since DNNs often consume a large amount of resources, we hypothesize that there exists redundancy in the DNNs.
Our goal is to identify and reduce the redundancy according to the main resource constraints in different scenarios.
This thesis is partitioned into four separate scenarios.
In each scenario, we will (\textit{i}) analyze its main resource constraints, (\textit{ii}) review the drawbacks in the currently available solutions, (\textit{iii}) propose our solution to reduce the redundancy in the DNNs, and (\textit{iv}) verify the effectiveness of our solution experimentally or theoretically.
The four studied scenarios are summarized as follows.
\subsection{Inference on Edge Devices (\chref{ch2:inference})}
\label{ch1-sec:inference}
\fakeparagraph{Scenario}
We first enable an efficient inference on edge devices.
Inference on edge devices does not rely on the connection to the cloud server, thus it is especially preferred if the communication is highly constrained, or a stable and fast inference is required.
The main resource constraints of inference on edge devices are the limited static storage and the limited computational ability, as DNNs often contain a large number of parameters to be stored and require a large number of FLOPs for inference.
In this scenario, according to the given resource constraints on edge devices, we train a compressed DNN on a cloud server with a large-scale dataset collected beforehand.
The well-trained compressed network is then deployed on the edge devices and is able to conduct inference with limited resources.
\fakeparagraph{Related Work}
To reduce the storage cost and the computation cost, plenty of works propose to (\textit{i}) design efficient network architectures manually \cite{bib:arXiv17:Howard,bib:CVPR18:Sandler} or automatically using neural architecture search methods \cite{bib:ICLR20:Cai,bib:arXiv19:Yu,bib:ECCV20:Yu}; (\textit{ii}) quantize weights into lower bitwidth to use cheaper operations and reduce the storage consumption \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari,bib:ECCV18:Zhang}; (\textit{iii}) prune unimportant weights to zero in a structured \cite{bib:ICCV19:Liu,bib:ECCV20:Li} or unstructured \cite{bib:ICLR16:Han,bib:ICLR20:Renda,bib:ICML21:Evci} manner to reduce the number of operations and the number of nonzero weights.
We focus on quantizing a pretrained DNN into multi-bit form among others for the following reasons, (\textit{i}) it utilizes the cheaper operations of bitwise \texttt{xnor} and \texttt{popcount} to replace expensive FLOPs; (\textit{ii}) it achieves a high compression ratio without introducing irregular computations; (\textit{iii}) it explores the lower bound of quantized networks.
The state-of-the-art multi-bit networks \cite{bib:arXiv14:Gong,bib:CVPR17:Guo,bib:AAAI18:Hu,bib:NIPS17:Lin,bib:ICLR18:Xu,bib:ECCV18:Zhang} first assign an empirical global bitwidth across layers and then are optimized by minimizing the reconstruction error to the full precision weights, which often results in a subpar performance.
\fakeparagraph{Our Solution}
To resolve the above drawbacks, we propose an adaptive loss-aware trained quantizer for multi-bit quantization, that (\textit{i}) allocates an adaptive bitwidth to different weights w.r.t. the loss, (\textit{ii}) optimizes the multi-bit quantizer by directly minimizing the loss.
We aim at reducing the \textit{redundant quantization bitwidth} of the weights that are less critical to the loss, to achieve a better trade-off between the model accuracy and the resource demands.
\subsection{Adaptation on Edge Devices (\chref{ch3:adaptation})}
\label{ch1-sec:adaptation}
\fakeparagraph{Scenario}
The compressed DNNs trained with the methods in \chref{ch2:inference} can achieve an efficient inference, if the available resources on edge devices are fixed and provided before training on the cloud server.
However, the resource constraints on the target edge devices may dynamically change during runtime e.g.,\xspace the allowed execution time, the allocatable RAM, and the battery energy.
To maximize the model accuracy during on-device inference, the deployed DNN should maintain a dynamic capacity, such that the DNN can be adapted and executed under varying resource constraints.
In order to quantify the varying resource constraints mentioned earlier, we choose two proxies, (\textit{i}) the storage of weights, which affects the amount of memory fetching and static memory consumption, and (\textit{ii}) the number of operations for inference, which is relevant to the computing energy and the inference latency.
\fakeparagraph{Related Work}
The most straightforward solution would be, for example, to deploy multiple individually compressed DNNs with different resource demands on edge devices, yet this consumes several times more storage than a single DNN.
Some prior works \cite{bib:ICLR18:Huang,bib:arXiv17:Hu,bib:ICLR19:Yu,bib:ICCV19:Yu,bib:ICLR20:Cai,bib:RTAS20:Lee} proposed to optimize a backbone network (a.k.a. supernet), such that different candidate sub-networks can be sampled from the backbone network while reaching a similar accuracy level as training them individually. However, these works often sample sub-networks along hand-crafted structured dimensions, e.g.,\xspace kernel size, width, depth, thus the generated sub-networks have different network architectures. This not only results in a sub-optimal performance but also leads to extra re-configuration overhead for storing multiple compiled network architectures.
\fakeparagraph{Our Solution}
We overcome the above disadvantages through sampling sub-networks in a row-based unstructured manner, and propose a novel compressed sparse row (CSR) format to efficiently execute different sub-networks on edge devices.
Our solution reduces \textit{the architecture redundancy} by reusing a single compiled network architecture among multiple sparse sub-networks, achieving re-configuration efficiency.
In addition, we also reduce \textit{the weight redundancy} by imposing nonzero weight sharing among sub-networks, achieving storage efficiency.
\subsection{Learning on Edge Devices (\chref{ch4:learning})}
\label{ch1-sec:learning}
\fakeparagraph{Scenario}
In \chref{ch2:inference} and \chref{ch3:adaptation}, we train a compressed DNN on a cloud server with a large number of available data samples, such that this pretrained DNN can be deployed on edge devices to conduct inference under \textit{fixed} and \textit{varying} resource constraints, respectively.
However, the pretrained DNN may not achieve satisfactory performance when the inference environments on edge devices have a large variance in comparison to the prior environments used to collect data samples for cloud training.
In other words, when facing unseen environments or users on edge devices, it is crucial to adapt the pretrained DNN to deliver consistent performance and customized services.
New data samples collected by edge devices are often private and have a large diversity across users/devices.
Hence, on-device learning is preferred over uploading the data to the cloud server.
Compared to the number of data samples used in cloud training, the number of collected data on each edge device is significantly smaller (a.k.a. few-shot) due to the limited labor resources.
Furthermore, training a DNN, i.e.,\xspace optimizing its weights, requires storing all the intermediate values of each layer, which often consumes several orders of magnitude more peak memory than inference.
Thus, in this scenario, we target memory-efficient and data-efficient on-device learning.
\fakeparagraph{Related Work}
Meta learning is a prevailing solution to few-shot learning \cite{bib:arXiv20:Hospedales}, where the meta-trained model can learn an unseen task from a few training samples, i.e.,\xspace data-efficient learning.
However, most meta learning algorithms \cite{bib:ICLR19:Antreas, bib:ICML17:Finn, bib:NIPS21:Oswald} optimize the backbone network for better generalization yet ignore the workload if the meta-trained backbone is deployed on low-resource edge platforms for few-shot learning.
Existing memory-efficient training schemes include, for example, low-precision training \cite{bib:ICLR20:Cambier, bib:NIPS18:Wang} and trading memory for computation \cite{bib:arXiv16:Chen, bib:NIPS16:Gruslys}.
However, they are mainly designed for high-throughput cloud training on large-scale datasets, which are not suitable for on-device learning with only a few data samples.
\fakeparagraph{Our Solution}
We ground our work (i.e.,\xspace memory-efficient few-shot learning) on gradient-based meta learning methods for their wide applicability in various tasks.
To avoid the high dynamic memory cost in few-shot learning, we focus on reducing \textit{the updating redundancy}.
In other words, we hypothesize that not all weights in the learner are equally critical for adaptation.
Thus, we propose to meta-train a selection mechanism, which can identify and update adaptation-critical weights only during few-shot learning.
This way, only the relevant subset of the intermediate values needs to be stored, leading to memory efficiency.
\subsection{Edge-Server-System (\chref{ch5:edgeserver})}
\label{ch1-sec:edgeserver}
\fakeparagraph{Scenario}
In \chref{ch2:inference}, \chref{ch3:adaptation} and \chref{ch4:learning}, we explored enabling deep learning on a single edge platform in three different scenarios.
In addition to a single edge device, the edge-server system is another commonly used infrastructure for edge intelligent applications.
In an edge-server system, several edge devices are connected to a remote server, and some information is allowed to be communicated between the edge devices and the server.
In \chref{ch5:edgeserver}, we design a new pipeline to enable efficient inference and efficient updating for such an edge-server system.
On such an edge-server system, on-device inference is preferred over cloud inference, since it can achieve a fast and stable inference with less energy consumption.
Due to a possible lack of relevant training data at the initial deployment, pretrained DNNs may fail to perform satisfactorily, or their performance may be significantly improved after the initial deployment.
However, the resources on edge devices are often limited e.g.,\xspace memory, computing power, and energy; the wireless communication is also constrained, e.g.,\xspace limited bandwidth.
An efficient updating/learning that satisfies the resource constraints mentioned above is needed.
\fakeparagraph{Related Work}
Communication-efficient federated learning \cite{bib:ICLR18:Lin,bib:arXiv19:Kairouz,bib:arXiv20:Li} studies how to compress multiple gradients (to be communicated to the server) calculated on different sets of non-\textit{i.i.d.} local data, such that the aggregation of these (compressed) gradients could result in a similar convergence performance as centralized training on all data.
However, federated learning (as well as other on-device retraining methods) has the following main shortcomings: (\textit{i}) it conducts resource-intensive gradient calculation on edge devices; (\textit{ii}) the collected data are continuously accumulated on memory-constrained edge devices; (\textit{iii}) it needs to label a large number of samples on edge devices.
\fakeparagraph{Our Solution}
We propose a two-stage iterative process for a continuous improvement of the deployed model's accuracy, (\textit{i}) at each round, edge devices collect new data samples and send them to the server, and (\textit{ii}) the server retrains the network using all collected data, and then sends the updates to each edge device.
An essential challenge herein is that the transmissions in the server-to-edge stage are highly constrained by the limited communication resource (e.g.,\xspace bandwidth, energy) in comparison to the edge-to-server stage for the following reasons.
(\textit{i}) A batch of samples that can lead to reasonable updates is relatively smaller in size than the DNN model, especially for the low-resource data type used on edge devices; (\textit{ii}) the server may also receive data from other sources, e.g.,\xspace through data augmentation or new data collection campaigns.
We reduce the communication cost in the server-to-edge stage by distinguishing \textit{the redundant updated weights} given newly collected samples.
In our proposed solution, the server only selects and updates a small subset of critical weights that have a large contribution to the loss reduction during the retraining.
In the rest of this thesis, we first present our four scenarios of enabling deep learning on edge devices, i.e.,\xspace inference on edge devices in \chref{ch2:inference}, adaptation on edge devices in \chref{ch3:adaptation}, learning on edge devices in \chref{ch4:learning}, edge-server-system in \chref{ch5:edgeserver}, respectively; finally conclude and discuss the future work in \chref{ch6:conclusion}.
\chapter[Inference on Edge Devices]{Inference on Edge Devices}
\label{ch2:inference}
We attempt to enable an efficient inference of DNNs on resource-constrained edge devices in this chapter.
Particularly, we focus on quantizing a pretrained DNN to fit the given resource constraints on edge devices while with the minimal accuracy drop.
\fakeparagraph{Main Resource Constraints}
State-of-the-art DNNs often contain a large number of floating-point weights and require a significant amount of floating-point multiply-accumulate operations, which are essential for conducting accurate inference.
However, edge devices have neither powerful computational ability nor enormous storage.
Thus, for inference on edge devices, we consider that the main resource constraints are the \textit{limited static storage} and \textit{the limited computing power}.
\fakeparagraph{Principles}
Unlike prior quantized networks that (\textit{i}) often assign an empirical global bitwidth across layers, (\textit{ii}) train the quantizer by minimizing the reconstruction error to the full precision weights,
we propose an adaptive loss-aware trained quantizer for multi-bit quantization, that (\textit{i}) allocates an adaptive bitwidth to different weights w.r.t. the loss, (\textit{ii}) optimizes the multi-bit quantizer by minimizing the loss.
The adaptive bitwidth assignment and the direct optimization objective allow our methods to find and remove more redundant bitwidth than prior works, thus achieving both storage efficiency and computation efficiency.
The contents of this chapter are established mainly based on the paper ``Adaptive Loss-aware Quantization for Multi-bit Networks'' that is published on IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 \cite{bib:CVPR20:Qu}.
\section{Introduction}
\label{ch2-sec:introduction}
To take advantage of the various pretrained models for efficient inference on resource-constrained edge devices, it is common to compress the pretrained models via pruning \cite{bib:ICLR16:Han}, quantization \cite{bib:arXiv14:Gong,bib:CVPR17:Guo,bib:NIPS17:Lin,bib:ICLR18:Xu,bib:ECCV18:Zhang}, among others.
We focus on \emph{quantization}, especially quantizing both the full precision weights and activations of a deep neural network into binary encodes and the corresponding scaling factors \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari}, which are also interpreted as binary basis vectors and floating-point coordinates from a geometric viewpoint \cite{bib:CVPR17:Guo}.
Neural networks quantized with binary encodes replace expensive floating-point operations by bitwise operations, which are supported even by microprocessors and often result in small memory footprints \cite{bib:ICLR18:Mishra}.
Since the space spanned by only a one-bit binary basis and one coordinate is too sparse to optimize, many researchers suggest a multi-bit network (MBN) \cite{bib:arXiv14:Gong,bib:CVPR17:Guo,bib:AAAI18:Hu,bib:NIPS17:Lin,bib:ICLR18:Xu,bib:ECCV18:Zhang}, which allows one to obtain a small model size without notable accuracy loss while still leveraging bitwise operations.
An MBN is usually obtained via quantization-aware training.
Recent studies~\cite{bib:ICLR18:Pedersoli} leverage bit-packing and bitwise computations for efficiently deploying binary networks on a wide range of general devices, which also provides more flexibility in designing multi-bit/binary networks.
\fakeparagraph{Challenges}
Most MBN quantization schemes~\cite{bib:arXiv14:Gong,bib:CVPR17:Guo,bib:AAAI18:Hu,bib:NIPS17:Lin,bib:ICLR18:Xu,bib:ECCV18:Zhang} predetermine a global bitwidth, and learn a quantizer to transform the full precision parameters into binary bases and coordinates such that the quantized models do not incur a significant accuracy loss.
However, these approaches have the following drawbacks:
\begin{itemize}
\item
A \textit{global bitwidth} may be sub-optimal.
Recent studies on fixed-point quantization \cite{bib:ICLR18:Khoram,bib:ICML16:Lin} show that the optimal bitwidth varies across layers.
\item
Previous efforts \cite{bib:NIPS17:Lin,bib:ICLR18:Xu,bib:ECCV18:Zhang} retain inference accuracy by minimizing \textit{the weight reconstruction error} rather than the loss function.
Such an indirect optimization objective may lead to a notable loss in accuracy.
Furthermore, they rely on approximated gradients, e.g.,\xspace straight-through estimators (STE) to propagate gradients through quantization functions during training.
\item
Many quantization schemes~\cite{bib:ECCV16:Rastegari,bib:ECCV18:Zhang} keep \textit{the first and last layer in full precision empirically}, because quantizing these layers to low bitwidth tends to dramatically decrease the inference accuracy \cite{bib:ECCV18:Wan,bib:ICLR18:Mishra2}.
However, these two full precision layers can be a significant storage overhead compared to other low-bit layers (see \secref{ch2-sec:experiment_imagenet}).
Also, floating-point operations in both layers can take up the majority of computation in quantized networks~\cite{bib:ICLR19:Louizos}.
\end{itemize}
We overcome the above challenges and drawbacks via a novel \textbf{A}daptive \textbf{L}oss-aware \textbf{Q}uantization scheme (ALQ\xspace).
Instead of using a uniform bitwidth, ALQ\xspace assigns an adaptive, possibly different bitwidth to each group of weights.
More importantly, ALQ\xspace directly minimizes the loss function w.r.t. the quantized weights, by iteratively learning a quantizer that (\textit{i}) smoothly reduces the number of binary bases (also the quantization bitwidth) and (\textit{ii}) alternatively optimizes the remaining binary bases and the corresponding coordinates.
\section{Related Work}
\label{ch2-sec:related}
ALQ\xspace follows the trend to quantize the DNNs using discrete bases with lower bitwidth to reduce expensive floating-point operations as well as the static storage consumption.
Commonly used bases include fixed-point~\cite{bib:arXiv16:Zhou}, power of two \cite{bib:JMLR17:Hubara,bib:ICLR17:Zhou}, and $\{-1,0,+1\}$ \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari}.
We focus on quantization with binary bases i.e.,\xspace $\{-1,+1\}$ among others for the following considerations.
(\textit{i})
If both weights and activations are quantized with the same binary basis, it is possible to evaluate 32 floating-point multiply-accumulate operations (FLOPs) with only 3 instructions on a 32-bit microprocessor, i.e.,\xspace bitwise $\texttt{xnor}$, $\texttt{popcount}$, and accumulation.
This will significantly speed up the \texttt{conv} operations \cite{bib:JMLR17:Hubara,bib:ICLR18:Pedersoli}.
(\textit{ii})
Multi-bit quantization can be considered as the non-uniform counter-part of fixed-point (integer) quantization.
A network quantized to fixed-point requires specialized integer arithmetic units and/or specialized integer storage units with various bitwidths for efficient computing~\cite{bib:MICRO17:Albericio,bib:ICLR18:Khoram}, whereas a network quantized with multiple binary bases adopts the same operations mentioned before as binary networks.
Therefore, multi-bit networks may also achieve a higher hardware efficiency than fixed-point networks in adaptive bitwidth quantization.
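To make the bitwise trick in point (\textit{i}) above concrete, the following is a minimal Python sketch (with illustrative bit-packed inputs) of computing the dot product of two $\{-1,+1\}$ vectors via \texttt{xnor} and \texttt{popcount}: if $m$ bit positions match, the dot product equals $2m-n$.
\begin{verbatim}
# Minimal sketch: dot product of two {-1,+1} vectors of length n,
# packed into integers (bit = 1 encodes +1), via xnor + popcount.
import random

n = 32
a_bits = random.getrandbits(n)
b_bits = random.getrandbits(n)

xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # bitwise xnor on n bits
dot_bitwise = 2 * bin(xnor).count("1") - n   # popcount, then rescale

# Reference computation on the unpacked {-1,+1} vectors.
a = [1 if (a_bits >> i) & 1 else -1 for i in range(n)]
b = [1 if (b_bits >> i) & 1 else -1 for i in range(n)]
assert dot_bitwise == sum(x * y for x, y in zip(a, b))
\end{verbatim}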
Popular networks quantized with binary bases include \textit{Binary Networks} and \textit{Multi-bit Networks}.
\subsection{Quantization for Binary Networks}
BNN \cite{bib:NIPS15:Courbariaux} is the first network with both binarized weights and activations.
It dramatically reduces the memory and computation but often with notable accuracy loss.
To resume the accuracy degradation from binarization, XNOR-Net \cite{bib:ECCV16:Rastegari} introduces a layerwise full precision scaling factor into BNN.
However, XNOR-Net leaves the first and last layers unquantized, which consumes more memory.
SYQ \cite{bib:CVPR18:Faraone} studies the efficiency of different structures during binarization/ternarization.
LAB \cite{bib:ICLR17:Hou} is the first loss-aware quantization scheme which optimizes the weights by directly minimizing the loss function.
ALQ\xspace is inspired by recent loss-aware binary networks such as LAB \cite{bib:ICLR17:Hou}.
Loss-aware quantization has also been extended to fixed-point networks in \cite{bib:ICLR18:Hou}.
However, existing loss-aware quantization schemes proposed for binary and ternary networks \cite{bib:ICLR17:Hou,bib:ICLR18:Hou,bib:CVPR18:Zhou} are inapplicable for MBNs.
This is because multiple binary bases dramatically extend the optimization space with the same bitwidth (i.e.,\xspace an optimal set of binary bases rather than a single basis), which may be intractable.
Some proposals \cite{bib:ICLR17:Hou,bib:ICLR18:Hou,bib:CVPR18:Zhou} still require full-precision weights and gradient approximation (backward STE and forward loss-aware projection), introducing undesirable errors when minimizing the loss.
In contrast, ALQ\xspace is free from gradient approximation.
\subsection{Quantization for Multi-bit Networks}
MBNs denote networks that use multiple binary bases to trade-off storage and accuracy.
Gong et~al.\@\xspace propose a residual quantization process, which greedily searches the next binary basis by minimizing the residual reconstruction error~\cite{bib:arXiv14:Gong}.
Guo et~al.\@\xspace improve the greedy search with a least square refinement~\cite{bib:CVPR17:Guo}.
Xu et~al.\@\xspace~\cite{bib:ICLR18:Xu} separate this search into two alternating steps: fixing the coordinates and exhaustively searching for the optimal bases, then fixing the bases and refining the coordinates using the method in \cite{bib:CVPR17:Guo}.
LQ-Net~\cite{bib:ECCV18:Zhang} extends the scheme of~\cite{bib:ICLR18:Xu} with a moving average updating, which jointly quantizes weights and activations.
However, similar to XNOR-Net \cite{bib:ECCV16:Rastegari}, LQ-Net~\cite{bib:ECCV18:Zhang} does not quantize the first and last layers.
ABC-Net~\cite{bib:NIPS17:Lin} leverages the statistical information of all weights to construct the binary bases as a whole for all layers.
All the state-of-the-art MBN quantization schemes minimize the weight reconstruction error rather than the loss function of the network.
They also rely on the gradient approximation such as STE when back propagating the quantization function.
In addition, they all predetermine a uniform bitwidth for all parameters.
The indirect objective, the approximated gradient, and the global bitwidth lead to a sub-optimal quantization.
ALQ\xspace is the first scheme to explicitly optimize the loss function and incrementally train an adaptive bitwidth, while being free from gradient approximation.
\section{Preliminaries and Notations}
\label{ch2-sec:notations}
We aim at multi-bit quantization with an adaptive bitwidth on a DNN consisting of $L$ convolutional (\texttt{conv}) layers or fully connected (\texttt{fc}) layers.
To simplify the notation, we start the discussion with a single layer and extend to the entire network with $L$ layers in the implementation section \secref{ch2-sec:implementation}.
For a \texttt{conv}/\texttt{fc} layer, its weights dominate the resource consumption of storage and computation in comparison to other parameters, e.g.,\xspace bias, batch normalization.
We thus judiciously focus on quantizing the weight tensor of the \texttt{conv}/\texttt{fc} layer $l$.
To allow an adaptive bitwidth, we structure the weight tensor of the layer $l$ in \textit{disjoint groups}.
The weights in a single group will be quantized into the same bitwidth, whereas different groups may have different, adaptively chosen bitwidths.
Specifically, for the \textit{vectorized} weight tensor $\bm{w}_l\in\mathbb{R}^{N}$ of layer $l$, we divide $\bm{w}_l$ into $G$ disjoint groups.
For simplicity, we omit the subscript $l$ in the following discussion.
Each group of weights is denoted by $\bm{w}_{g}$, where $\bm{w}_{g}\in\mathbb{R}^{n}$ and $N = n \times G$.
In other words, the overall $N$ weights in layer $l$ are evenly partitioned into $G$ groups, see more details in \secref{ch2-sec:experiment_group}.
Then the multi-bit quantized weights $\hat{\bm{w}}_{g}$ of group $g$ are formulated as,
\begin{equation}
\bm{\hat{w}}_g = \sum_{i=1}^{I_g}\alpha_i\bm{\beta}_i=\bm{B}_g\bm{\alpha}_g
\label{ch2-eq:multi_bit}
\end{equation}
where $\bm{\beta}_i\in\{-1,+1\}^{n\times 1}$ and $\alpha_i\in\mathbb{R}_+$ are the $i$-th binary basis and the corresponding coordinate; $I_g$ represents the quantization bitwidth, i.e.,\xspace the number of binary bases, of group $g$.
$\bm{B}_g\in\{-1,+1\}^{n\times I_g}$ and $\bm{\alpha}_g\in\mathbb{R}_+^{I_g\times1}$ are the matrix forms of the binary bases and the coordinates.
We further denote $\bm{\alpha}=\bm{\alpha}_{1:G}$ as vectorized coordinates $\bm{\alpha}_g$ of all weight groups, and $\bm{B}=\bm{B}_{1:G}$ as concatenated binary bases $\bm{B}_g$ of all weight groups.
A layer $l$ quantized as above yields an average bitwidth
\begin{equation}
I = \frac{1}{G}\sum_{g = 1}^G I_g
\label{ch2-eq:avg_bit}
\end{equation}
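A minimal numpy sketch of this notation is given below; the group size, the number of groups, and the per-group bitwidths are illustrative assumptions. It reconstructs $\bm{\hat{w}}_g=\bm{B}_g\bm{\alpha}_g$ of \equref{ch2-eq:multi_bit} for each group and evaluates the average bitwidth of \equref{ch2-eq:avg_bit}.
\begin{verbatim}
# Minimal sketch of the multi-bit notation (dummy values throughout).
import numpy as np

n, G = 8, 4                          # group size and number of groups
bitwidths = [3, 1, 2, 2]             # adaptive bitwidth I_g per group

groups = []
for I_g in bitwidths:
    B_g = np.random.choice([-1, 1], size=(n, I_g))  # binary bases
    alpha_g = np.sort(np.random.rand(I_g))[::-1]    # positive coordinates
    groups.append((B_g, alpha_g))

# Quantized weights w_hat_g = B_g @ alpha_g, concatenated over groups.
w_hat = np.concatenate([B_g @ alpha_g for B_g, alpha_g in groups])
avg_bitwidth = sum(bitwidths) / G    # average bitwidth I of the layer
print(w_hat.shape, avg_bitwidth)     # (32,) 2.0
\end{verbatim}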
\section{Adaptive Loss-Aware Quantization}
\label{ch2-sec:method}
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch2/alq.png}
\caption[The overall approach of ALQ\xspace.]{The figure depicts the overall approach of ALQ\xspace. In Initialization Step, the pretrained full precision weights are separated into disjoint groups and then are quantized into an 8-bit multi-bit form. In Pruning Step, we search an adaptive different bitwidth for each group of weights by removing the unimportant $\alpha$'s w.r.t. the loss. Based on the searched bitwidth assignment, we further conduct an Optimization step to train the remaining binary bases $\bm{B}_g$ and coordinates $\bm{\alpha}_g$. Both Pruning Step and Optimization Step are conducted iteratively.}
\label{ch2-fig:approach}
\end{figure}
\subsection{Weight Quantization Overview}
\label{ch2-sec:overview}
\fakeparagraph{Problem Formulation}
ALQ\xspace quantizes weights by directly minimizing the loss function rather than the reconstruction error.
For layer $l$, the process can be formulated as the following optimization problem.
\begin{eqnarray}
\min_{\bm{\hat{w}}_{1:G}} & & \ell\left(\bm{\hat{w}}_{1:G}\right) \label{ch2-eq:objective} \\
\text{s.t.} & & \bm{\hat{w}}_g = \sum_{i=1}^{I_g}\alpha_i\bm{\beta}_i = \bm{B}_g\bm{\alpha}_g \quad \forall g\in 1,...,G\label{ch2-eq:weights} \\
& & \mathrm{card}(\bm{\alpha}) = I\times G \leq I_\mathrm{min}\times G \label{ch2-eq:sum_Ig}
\end{eqnarray}
where $\ell$ is the loss; $\mathrm{card}(.)$ denotes the cardinality of the set, i.e.,\xspace the total number of elements in $\bm{\alpha}$; $I_\mathrm{min}$ is the desirable average bitwidth, which is determined by the storage constraints on edge devices.
Since the group size $n$ is the same in one layer, $\mathrm{card}(\bm{\alpha})$ is proportional to the storage consumption.
\fakeparagraph{Solution Pipeline}
The constrained domains of \equref{ch2-eq:weights} and \equref{ch2-eq:sum_Ig} are both discrete and non-convex.
Directly conducting an exhaustive search is NP-hard and infeasible for current DNNs.
Therefore, we propose to narrow down the search space and disentangle the constraints into two sub-problems.
Particularly, our ALQ\xspace solves the optimization problem in \equref{ch2-eq:objective}-\equref{ch2-eq:sum_Ig} by three steps.
The overall approach is shown in \figref{ch2-fig:approach}.
The pseudocode of the entire pipeline is illustrated in \algoref{ch2-alg:pipeline} in \secref{ch2-sec:implementation_pipeline}.
\begin{itemize}
\item
\underline{Initialization Step:} \textbf{Structured Sketching} (\secref{ch2-sec:implementation_initialization}).
In this step, we adapt the network sketching in~\cite{bib:CVPR17:Guo}, and propose a structured sketching algorithm.
It first partitions the pretrained full precision weights $\bm{w}$ into $G$ groups; then quantizes each $\bm{w}_{g}$ into its 8-bit multi-bit form $\bm{\hat{w}}_{g}$ by greedily searching the optimal binary basis vector $\bm{\beta}_i$ and the optimal scaling factor $\alpha_i$.
This step not only provides a good initial point for the following steps, but also restricts each group to a maximal 8-bit to reduce the search space.
\item
\underline{Pruning Step:} \textbf{Pruning in $\bm{\alpha}$ Domain} (\secref{ch2-sec:pruning} and \secref{ch2-sec:implementation_pruning}).
This step starts from the initialized 8-bit network obtained in Initialization Step, and then progressively reduces the average bitwidth $I$ by pruning the least important (w.r.t. the loss) coordinates in $\bm{\alpha}$ domain.
Note that removing an element $\alpha_i$ will also lead to the removal of the binary basis $\bm{\beta}_i$, which in effect results in a smaller bitwidth $I_g$ for group $g$.
This way, no sparse tensor is introduced.
Note that sparse tensors could lead to detrimental irregular computation.
Since the importance of each weight group differs, the resulting $I_g$ varies across groups, and thus contributes to an adaptive bitwidth $I_g$ for each group.
In this step, we only set some elements of $\bm{\alpha}$ to zero (also remove them from $\bm{\alpha}$ leading to a reduced $I_g$) without changing the others.
The sub-problem for Pruning Step is:
\begin{eqnarray}
\min_{\bm{\alpha}} & & \ell\left(\bm{\alpha}\right) \label{ch2-eq:pruning_objective} \\
\text{s.t.} & & \mathrm{card}(\bm{\alpha}) \leq I_\mathrm{min}\times G \label{ch2-eq:pruning_constraint}
\end{eqnarray}
\item
\underline{Optimization Step:} \textbf{Optimizing Binary Bases $\bm{B}_g$ and Coordinates $\bm{\alpha}_g$} (\secref{ch2-sec:optimization} and \secref{ch2-sec:implementaion_optimization}).
In this step, we retrain the remaining binary bases and coordinates to recover the accuracy degradation induced by the bitwidth reduction.
Similar to~\cite{bib:ICLR18:Xu}, we take an alternative approach for better accuracy recovery.
Specifically, we first search for a new set of binary bases w.r.t. the loss given fixed coordinates.
Then we optimize the coordinates by fixing the binary bases.
The sub-problem for Optimization Step is:
\begin{eqnarray}
\min_{\bm{\hat{w}}_{1:G}} & & \ell\left(\bm{\hat{w}}_{1:G}\right) \label{ch2-eq:optimization_objective} \\
\text{s.t.} & & \bm{\hat{w}}_g = \sum_{i=1}^{I_g}\alpha_i\bm{\beta}_i=\bm{B}_g\bm{\alpha}_g \quad \forall g \in 1,...,G \label{ch2-eq:optimization_constraint}
\end{eqnarray}
\end{itemize}
\noindent
For a higher accuracy, state-of-the-art unstructured pruning methods \cite{bib:ICLR16:Han,bib:ICLR19:Frankle} often conduct pruning and sparse fine-tuning iteratively rather than in a one-shot manner.
Similarly, we also conduct our Pruning Step and our Optimization Step \textit{iteratively} until the average bitwidth reaches the desired bitwidth.
Namely, the original problem of \equref{ch2-eq:objective}-\equref{ch2-eq:sum_Ig} is decoupled into two sub-problems of \equref{ch2-eq:pruning_objective}-\equref{ch2-eq:pruning_constraint} and \equref{ch2-eq:optimization_objective}-\equref{ch2-eq:optimization_constraint}, and the two sub-problems are solved iteratively.
\fakeparagraph{Optimizer Framework}
We consider both Pruning Step and Optimization Step above as an optimization problem with \textit{domain constraints}, and solve them using the same optimization framework: subgradient methods with projection update \cite{bib:JMLR11:Duchi}.
The optimization problem in \equref{ch2-eq:optimization_objective}-\equref{ch2-eq:optimization_constraint} imposes domain constraints on $\bm{B}_g$ because they can only be discrete binary bases.
The optimization problem in \equref{ch2-eq:pruning_objective}-\equref{ch2-eq:pruning_constraint} can be considered as having a trivial domain constraint: the output $\bm{\alpha}$ should be a subset (subvector) of the input $\bm{\alpha}$.
Furthermore, the feasible sets for both $\bm{B}_g$ and $\bm{\alpha}$ are bounded.
Subgradient methods with projection update are effective to solve problems in the form of $\min_{\bm{\theta}}(\ell(\bm{\theta}))$ s.t. $\bm{\theta}\in\Theta$ \cite{bib:JMLR11:Duchi}.
We apply AMSGrad~\cite{bib:ICLR18:Reddi}, an adaptive stochastic subgradient method with projection update, as the common optimizer framework in Pruning Step and Optimization Step.
At training iteration $s$, AMSGrad generates the next update as,
\begin{equation}
\begin{split}
\bm{\theta}^{s+1} & = \Pi_{\Theta,\sqrt{\bm{\hat{V}}^s}}(\bm{\theta}^s-a^s\bm{m}^s/\sqrt{\bm{\hat{v}}^s}) \\
& = \underset{{\bm{\theta}\in\Theta}}{\mathrm{argmin}}~\|(\sqrt{\bm{\hat{V}}^s})^{1/2}(\bm{\theta}-(\bm{\theta}^s-\frac{a^s\bm{m}^s}{\sqrt{\bm{\hat{v}}^s}}))\|
\end{split}
\label{ch2-eq:amsgrad_theta}
\end{equation}
where $\Pi$ is a projection operator; $\Theta$ is the feasible domain of $\bm{\theta}$; $a^s$ is the learning rate; $\bm{m}^s$ is the (unbiased) first momentum; $\bm{\hat{v}}^s$ is the (unbiased) maximum second momentum; and $\bm{\hat{V}}^s$ is the diagonal matrix of $\bm{\hat{v}}^s$.
In our context, \equref{ch2-eq:amsgrad_theta} can be written as,
\begin{equation}
\bm{\hat{w}}_g^{s+1} = \underset{\bm{\hat{w}}_g\in\mathbb{W}}{\mathrm{argmin}} f^s(\bm{\hat{w}}_g)
\label{ch2-eq:amsgrad_w1}
\end{equation}
\begin{equation}
f^s=(a^s\bm{m}^s)^{\mathrm{T}}(\bm{\hat{w}}_g-\bm{\hat{w}}_g^s)+\frac{1}{2}(\bm{\hat{w}}_g-\bm{\hat{w}}_g^s)^{\mathrm{T}}\sqrt{\bm{\hat{V}}^s}(\bm{\hat{w}}_g-\bm{\hat{w}}_g^s)
\label{ch2-eq:amsgrad_w2}
\end{equation}
where $\mathbb{W}$ is the feasible domain of $\bm{\hat{w}}_g$.
Pruning Step and Optimization Step have different feasible domains of $\mathbb{W}$ according to their objective (see details in~\secref{ch2-sec:pruning} and~\secref{ch2-sec:optimization}).
\equref{ch2-eq:amsgrad_w2} approximates the loss increment incurred by $\bm{\hat{w}}_g$ around the current point $\bm{\hat{w}}_g^s$ as a quadratic model function under domain constraints \cite{bib:NIPS15:Dauphin,bib:JMLR11:Duchi,bib:ICLR18:Reddi}.
For simplicity, we replace $a^s\bm{m}^s$ with $\bm{g}^s$ and replace $\sqrt{\bm{\hat{V}}^s}$ with $\bm{H}^s$.
$\bm{g}^s$ and $\bm{H}^s$ are updated by the loss gradient of $\bm{\hat{w}}_g^s$.
Thus, the required input of each AMSGrad step is $\partial\ell^s/\partial {\bm{\hat{w}}_g}^s$.
It can be directly obtained during the backward propagation, since $\bm{\hat{w}}_g^s$ is used as an intermediate value during the forward propagation.
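For concreteness, a minimal Python sketch of one AMSGrad step is given below, assuming the trivial case where the projection $\Pi$ is the identity; the hyperparameters are illustrative. The quantities $a^s\bm{m}^s$ and $\sqrt{\bm{\hat{V}}^s}$ computed here are exactly the $\bm{g}^s$ and $\bm{H}^s$ that define the quadratic model $f^s$.
\begin{verbatim}
# Minimal sketch of one AMSGrad step (identity projection, dummy values).
import numpy as np

def amsgrad_step(theta, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    state["m"] = b1 * state["m"] + (1 - b1) * grad           # first momentum
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2      # second momentum
    state["v_hat"] = np.maximum(state["v_hat"], state["v"])  # max second momentum
    # g = lr * m and H = diag(sqrt(v_hat)) define the quadratic model f.
    return theta - lr * state["m"] / (np.sqrt(state["v_hat"]) + eps)

theta = np.zeros(4)
state = {"m": np.zeros(4), "v": np.zeros(4), "v_hat": np.zeros(4)}
theta = amsgrad_step(theta, grad=np.ones(4), state=state)
\end{verbatim}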
\subsection{Pruning in $\bm{\alpha}$ Domain}
\label{ch2-sec:pruning}
As introduced in \secref{ch2-sec:overview}, we reduce the average bitwidth $I$ by pruning the elements in $\bm{\alpha}$ w.r.t. the resulting loss.
If one element $\alpha_i$ in $\bm{\alpha}$ is pruned, the corresponding dimension $\bm{\beta}_i$ is also removed from $\bm{B}$.
Now we explain how to instantiate the optimizer in \equref{ch2-eq:amsgrad_w1} to solve \equref{ch2-eq:pruning_objective}-\equref{ch2-eq:pruning_constraint} of Pruning Step.
As discussed above, pruning in $\bm{\alpha}$ domain is regarded as an optimization problem solved in multiple training iterations.
Thus, the cardinality of the chosen subset (i.e.,\xspace proportional to the average bitwidth) is uniformly reduced over the training iterations.
For example, assume there are $T$ training iterations in total, the initial average bitwidth is $I^0$ and the desired average bitwidth after $T$ iterations $I^{T}$ is $I_\mathrm{min}$.
Then at each iteration $t$, $M_p = (I^{0}-I_\mathrm{min})\times G/T$ of the $\alpha_i^t$'s are pruned.
This way, the cardinality after $T$ iterations will not exceed $I_\mathrm{min}\times G$.
When pruning in the $\bm{\alpha}$ domain, $\bm{B}$ is considered as invariant.
Hence \equref{ch2-eq:amsgrad_w1} and \equref{ch2-eq:amsgrad_w2} become,
\begin{equation}
\bm{\alpha}^{t+1} = \underset{\bm{\alpha}\in\mathbb{A}}{\mathrm{argmin}}~f_{\bm{\alpha}}^t(\bm{\alpha})
\label{ch2-eq:amsgrad_alpha1}
\end{equation}
\begin{equation}
f_{\bm{\alpha}}^t=(\bm{g}_{\bm{\alpha}}^t)^{\mathrm{T}}(\bm{\alpha}-\bm{\alpha}^t)+\frac{1}{2}(\bm{\alpha}-\bm{\alpha}^t)^{\mathrm{T}}\bm{H_\alpha}^t(\bm{\alpha}-\bm{\alpha}^t)
\label{ch2-eq:amsgrad_alpha2}
\end{equation}
where $\bm{g}_{\bm{\alpha}}^t$ and $\bm{H_\alpha}^t$ are similar to the ones in \equref{ch2-eq:amsgrad_w2} but are in the $\bm{\alpha}$ domain.
If $\alpha_i^t$ is pruned, the $i$-th element in $\bm{\alpha}$ is set to $0$ in the above~\equref{ch2-eq:amsgrad_alpha1} and~\equref{ch2-eq:amsgrad_alpha2}.
Thus, the constrained domain $\mathbb{A}$ is taken as all possible vectors with $M_p$ zero elements in $\bm{\alpha}^t$.
AMSGrad uses a diagonal matrix of $\bm{H_\alpha}^t$ in the quadratic model function, which decouples each element in $\bm{\alpha}^t$.
This means the loss increment caused by several $\alpha_i^t$ equals the sum of the increments caused by them individually, which are calculated as,
\begin{equation}
f_{\bm{\alpha},i}^t = -g_{\bm{\alpha},i}^t~\alpha_i^t+\frac{1}{2}~H_{\bm{\alpha},{ii}}^t~({\alpha_i^t})^2
\label{ch2-eq:taylor_pruning}
\end{equation}
All items $f_{\bm{\alpha},i}^t$ are sorted in ascending order.
Then the first $M_p$ items (i.e.,\xspace their $\alpha_i^t$) in the sorted list are removed from $\bm{\alpha}^t$, resulting in a smaller cardinality $I^{t}\times G$.
The input of the AMSGrad step in $\bm{\alpha}$ domain is the loss gradient of $\bm{\alpha}_g^t$, which can be computed with the chain rule,
\begin{equation}
\frac{\partial\ell^t}{\partial\bm{\alpha}_g^t}={\bm{B}_g^t}^{\mathrm{T}} \frac{\partial\ell^t}{\partial {\bm{\hat{w}}_g}^t}
\label{ch2-eq:gradient_pruning}
\end{equation}
\begin{equation}
\bm{\hat{w}}_g^t=\bm{B}_g^t \bm{\alpha}_g^t
\end{equation}
Our pipeline allows us to reduce the bitwidth smoothly, since the average bitwidth can be a floating-point number.
In ALQ\xspace, since different layers have a similar group size (see \secref{ch2-sec:experiment_group}), the loss increment caused by pruning is sorted among all layers, such that only a global pruning number needs to be determined.
More details are explained in \secref{ch2-sec:implementation_pipeline}.
This Pruning Step not only provides a loss-aware adaptive bitwidth, but also seeks a better initialization for the successive Optimization Step, since low-bit quantized weights may be relatively far from their original full precision values.
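The following is a minimal numpy sketch of this pruning criterion over all (flattened) coordinates; all tensors are dummy values, and the removal of the corresponding bases is only indicated.
\begin{verbatim}
# Minimal sketch of Pruning Step scoring (dummy values throughout).
import numpy as np

alpha = np.random.rand(12)             # all coordinates, flattened
g = np.random.randn(12)                # loss gradient w.r.t. alpha
H_diag = np.abs(np.random.randn(12))   # diagonal of sqrt(V_hat)

# Modeled loss increment per alpha_i: f_i = -g_i*alpha_i + 0.5*H_ii*alpha_i^2
scores = -g * alpha + 0.5 * H_diag * alpha ** 2

M_p = 3                                # coordinates pruned in this iteration
keep = np.sort(np.argsort(scores)[M_p:])
alpha = alpha[keep]                    # the bases beta_i are removed alongside
\end{verbatim}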
\subsection{Optimizing Binary Bases and Coordinates}
\label{ch2-sec:optimization}
After pruning, the loss degradation needs to be recovered.
Following~\equref{ch2-eq:amsgrad_w1}, the objective in Optimization Step is
\begin{equation}
\bm{\hat{w}}_g^{s+1} = \underset{\bm{\hat{w}}_g\in\mathbb{W}}{\mathrm{argmin}}~f^s(\bm{\hat{w}}_g)
\end{equation}
The constrained domain $\mathbb{W}$ is determined by both the binary bases and the full precision coordinates.
Hence directly searching for the optimal $\bm{\hat{w}}_g$ is NP-hard.
Instead, we optimize $\bm{B}_g$ and $\bm{\alpha}_g$ in an alternative manner, as prior multi-bit quantization works \cite{bib:ICLR18:Xu,bib:ECCV18:Zhang} that minimize the reconstruction error.
\fakeparagraph{Optimizing $\bm{B}_g$}
We directly search for the optimal bases with AMSGrad.
In each training iteration $q$, we fix $\bm{\alpha}_g^q$, and update $\bm{B}_g^q$.
We find the optimal increment for each group of weights, such that it converts to a new set of binary bases, $\bm{B}_g^{q+1}$.
This Optimization Step searches a new space spanned by $\bm{B}_g^{q+1}$ based on the loss reduction, which prevents the pruned space from always being a subspace of the previous one.
According to~\equref{ch2-eq:amsgrad_w1} and~\equref{ch2-eq:amsgrad_w2}, the optimal $\bm{B}_g$ w.r.t. the loss is updated by,
\begin{equation}
\bm{B}_g^{q+1} = \underset{\bm{B}_g\in\{-1,+1\}^{n\times I_g}}{\mathrm{argmin}}~f^q(\bm{B}_g)
\label{ch2-eq:amsgrad_B1}
\end{equation}
\begin{equation}
f^q=(\bm{g}^q)^{\mathrm{T}}(\bm{B}_g\bm{\alpha}_g^{q}-\bm{\hat{w}}_g^q)+\frac{1}{2}(\bm{B}_g\bm{\alpha}_g^{q}-\bm{\hat{w}}_g^q)^{\mathrm{T}}\bm{H}^q (\bm{B}_g\bm{\alpha}_g^{q}-\bm{\hat{w}}_g^q)
\label{ch2-eq:amsgrad_B2}
\end{equation}
where $\bm{\hat{w}}_g^q = \bm{B}_g^{q}\bm{\alpha}_g^{q}$.
Recall that $\bm{B}_g^q\in\{-1,+1\}^{n\times I_g}$.
Since $\bm{H}^q$ is diagonal in AMSGrad, each row vector in $\bm{B}_g^{q+1}$ can be independently determined.
For example, the $j$-th row is computed as,
\begin{equation}
\bm{B}_{g,j}^{q+1} = \underset{\bm{B}_{g,j}}{\mathrm{argmin}}~\|\bm{B}_{g,j}\bm{\alpha}_{g}^q-(\hat{w}_{g,j}^q-g^q_j/H_{jj}^q)\|,\quad j \in 1,...,n
\label{ch2-eq:row}
\end{equation}
Since in general $n\gg I_g$, to reduce the computation complexity, we first compute all $2^{I_g}$ possible values of
\begin{equation}
\bm{b}^{\mathrm{T}}\bm{\alpha}_{g}^q~,~~~ \bm{b}^{\mathrm{T}}\in\{-1,+1\}^{1\times I_g}
\label{ch2-eq:comb}
\end{equation}
Then each row vector $\bm{B}_{g,j}^{q+1}$ can be directly substituted with the optimal $\bm{b}^{\mathrm{T}}$ through an exhaustive search over these $2^{I_g}$ values.
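A minimal numpy sketch of this row-wise search is given below; all tensors are dummy values. Since $\bm{H}^q$ is diagonal, the per-row target is simply $\hat{w}_{g,j}^q-g^q_j/H_{jj}^q$.
\begin{verbatim}
# Minimal sketch of the exhaustive row-wise base search (dummy values).
import numpy as np
from itertools import product

n, I_g = 16, 3
alpha_g = np.sort(np.random.rand(I_g))[::-1]  # fixed coordinates
target = np.random.randn(n)                   # w_hat_g - g / H_diag, per row

combos = np.array(list(product([-1, 1], repeat=I_g)))  # all 2^I_g rows b^T
values = combos @ alpha_g                              # all b^T alpha_g
best = np.abs(values[None, :] - target[:, None]).argmin(axis=1)
B_new = combos[best]                                   # updated (n, I_g) bases
\end{verbatim}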
\fakeparagraph{Optimizing $\bm{\alpha}_g$}
The above obtained set of binary bases $\bm{B}_g$ spans a new $I_g$-dimensional linear space, which is a subspace of the original $n$-dimensional full space.
The current $\bm{\alpha}_g$ is unlikely to be the optimal point in this $I_g$-dimensional space, so we now optimize $\bm{\alpha}_g$.
Since $\bm{\alpha}_g$ is in full precision, i.e.,\xspace $\bm{\alpha}_g\in\mathbb{R}^{I_g\times1}$, there is no domain constraint and thus no need for projection updating.
Similar to optimizing full precision $\bm{w}_g$, conventional training strategies can be directly used to optimize $\bm{\alpha}_g$.
Similar to~\equref{ch2-eq:amsgrad_alpha1} and~\equref{ch2-eq:amsgrad_alpha2}, we use AMSGrad optimizer in $\bm{\alpha}$ domain without projection updating, for each group in the $p$-th training iteration as,
\begin{equation}
\bm{\alpha}_g^{p+1} = \bm{\alpha}_g^p-a_{\bm{\alpha}}^p\bm{m}_{\bm{\alpha}}^p/\sqrt{\bm{\hat{v}_\alpha}^p}
\label{ch2-eq:optimizing_alpha}
\end{equation}
We also add an L2-norm regularization on $\bm{\alpha}_g$ to enforce unimportant coordinates to zero.
If there is a negative value in $\bm{\alpha}_{g}$, the corresponding basis is set to its negative complement, to keep $\bm{\alpha}_{g}$ nonnegative.
Note that optimizing $\bm{B}_g$ and $\bm{\alpha}_g$ does not influence the number of binary bases $I_g$.
\fakeparagraph{Optimization Speedup}
Since $\bm{\alpha}_g$ is full precision, updating $\bm{\alpha}_g^q$ is much cheaper than exhaustively searching for $\bm{B}_g^{q+1}$.
Although the main purpose of the first step in Optimization Step is optimizing the bases, we thus also add an updating process for $\bm{\alpha}_g^q$ in each training iteration $q$.
We fix $\bm{B}_{g}^{q+1}$, and update $\bm{\alpha}_{g}^{q}$.
The overall increment of quantized weights from both updating processes is,
\begin{equation}
\bm{\hat{w}}^{q+1}_g - \bm{\hat{w}}^q_{g} = \bm{B}_{g}^{q+1}\bm{\alpha}_{g}^{q+1}-\bm{B}_{g}^{q}\bm{\alpha}_{g}^{q}
\label{ch2-eq:W}
\end{equation}
Substituting~\equref{ch2-eq:W} into~\equref{ch2-eq:amsgrad_w1} and~\equref{ch2-eq:amsgrad_w2}, we have,
\begin{equation}
\bm{\alpha}_{g}^{q+1}=-((\bm{B}_{g}^{q+1})^{\mathrm{T}} \bm{H}^q \bm{B}_{g}^{q+1})^{-1}\times((\bm{B}_{g}^{q+1})^{\mathrm{T}}(\bm{g}^q-\bm{H}^q\bm{B}^q_{g}\bm{\alpha}_{g}^{q}))
\label{ch2-eq:alpha}
\end{equation}
To ensure the inverse in~\equref{ch2-eq:alpha} exists, we add a small diagonal matrix $\lambda \mathbf{I}$ to \equref{ch2-eq:alpha},
\begin{equation}
\bm{\alpha}_{g}^{q+1}=-((\bm{B}_{g}^{q+1})^{\mathrm{T}} \bm{H}^q \bm{B}_{g}^{q+1}+\lambda \mathbf{I})^{-1}\times((\bm{B}_{g}^{q+1})^{\mathrm{T}}(\bm{g}^q-\bm{H}^q\bm{B}^q_{g}\bm{\alpha}_{g}^{q}))
\label{ch2-eq:alpha_lambda}
\end{equation}
where $\lambda=10^{-6}$.
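Below is a minimal numpy sketch of this regularized closed-form coordinate update; all tensors are dummy values, and $\bm{H}^q$ is kept as an explicit diagonal matrix for readability.
\begin{verbatim}
# Minimal sketch of the closed-form alpha update with ridge term (dummies).
import numpy as np

n, I_g, lam = 16, 3, 1e-6
B_new = np.random.choice([-1, 1], size=(n, I_g))  # B_g^{q+1}
B_old = np.random.choice([-1, 1], size=(n, I_g))  # B_g^q
alpha_old = np.random.rand(I_g)                   # alpha_g^q
g = np.random.randn(n)                            # g^q = a^q m^q
H = np.diag(np.abs(np.random.randn(n)))           # H^q = sqrt(V_hat^q)

lhs = B_new.T @ H @ B_new + lam * np.eye(I_g)
rhs = B_new.T @ (g - H @ (B_old @ alpha_old))
alpha_new = -np.linalg.solve(lhs, rhs)            # alpha_g^{q+1}
\end{verbatim}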
\subsection{Implementation}
\label{ch2-sec:implementation}
In this section, we discuss the detailed implementation of ALQ\xspace.
We elaborate the pseudocodes of three steps and analyze their complexity.
Note that the discussion in this section is extended to the entire network with $L$ layers, thus we reintroduce the layer index $l$ for clarity.
\subsubsection{Implementation of Initialization Step}
\label{ch2-sec:implementation_initialization}
We adapt the network sketching in~\cite{bib:CVPR17:Guo}, and propose a structured sketching algorithm for Initialization Step, see \algoref{ch2-alg:sketching}\footnote{The circled operator in \algoref{ch2-alg:sketching} denotes an elementwise operation.}.
This algorithm partitions the pretrained full precision weights $\bm{w}_l$ of the $l$-th layer into $G_l$ groups.
We study the different structures of grouping in \secref{ch2-sec:experiment_group}.
The vectorized weights $\bm{w}_{l,g}$ of each group are quantized with $I_{l,g}$ linearly independent binary bases (i.e.,\xspace column vectors in $\bm{B}_{l,g}$) and corresponding coordinates $\bm{\alpha}_{l,g}$ to minimize the reconstruction error.
This algorithm initializes the matrix of binary bases $\bm{B}_{l,g}$, the vector of floating-point coordinates $\bm{\alpha}_{l,g}$, and the scalar of integer bitwidth $I_{l,g}$ in each group across layers.
The initial reconstruction error is upper bounded by a threshold $\sigma$.
In addition, a maximum bitwidth of each group is defined as $I_\mathrm{max}$.
Both of these two parameters determine the initial bitwidth $I_{l,g}$.
We discuss the choice of group size $n$, and the maximum bitwidth $I_\mathrm{max}$ in \secref{ch2-sec:experiment_initialization}.
\begin{algorithm}[!htbp]
\caption{Structured sketching of weights}\label{ch2-alg:sketching}
\KwIn{$\bm{w}_{1:L}$, $G_{1:L}$, $I_\mathrm{max}$, $\sigma$}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^{L}$}
\For {$l\leftarrow 1$ \KwTo $L$} {
\For {$g \leftarrow 1$ \KwTo $G_l$} {
Fetch and vectorize $\bm{w}_{l,g}$ from $\bm{w}_l$\;
Initialize $\bm{\epsilon} = \bm{w}_{l,g}$, $i=0$\;
$\bm{B}_{l,g} = [~]$\;
\While{$\|\bm{\epsilon}\oslash\bm{w}_{l,g}\|_2^2>\sigma$ \texttt{\textup{and}} $i<I_\mathrm{max}$} {
$i = i+1$\;
$\bm{\beta}_{i} = \mathrm{sign}(\bm{\epsilon})$\;
$\bm{B}_{l,g} = [\bm{B}_{l,g}, \bm{\beta}_{i}]$\;
\tcc{Find the optimal point spanned by $\bm{B}_{l,g}$}
$\bm{\alpha}_{l,g} = (\bm{B}_{l,g}^\mathrm{T}\bm{B}_{l,g})^{-1}\bm{B}_{l,g}^\mathrm{T}\bm{w}_{l,g}$\;
\tcc{Update the residual reconstruction error}
$\bm{\epsilon} = \bm{w}_{l,g}-\bm{B}_{l,g}\bm{\alpha}_{l,g}$\;
}
$I_{l,g}=i$\;
}
}
\end{algorithm}
\begin{theorem}
The column vectors in $\bm{B}_{l,g}$ are linearly independent.
\end{theorem}
\begin{proof}
The instruction $\bm{\alpha}_{l,g} = (\bm{B}_{l,g}^\mathrm{T}\bm{B}_{l,g})^{-1}\bm{B}_{l,g}^\mathrm{T}\bm{w}_{l,g}$ ensures $\bm{\alpha}_{l,g}$ is the optimal point in $\mathrm{span}(\bm{B}_{l,g})$ regarding the least square reconstruction error $\bm{\epsilon}$.
Thus, $\bm{\epsilon}$ is orthogonal to $\mathrm{span}(\bm{B}_{l,g})$.
The new basis is computed from the next iteration by $\bm{\beta}_{i}= \mathrm{sign}(\bm{\epsilon})$.
Since $\mathrm{sign}(\bm{\epsilon})^{\mathrm{T}}\bm{\epsilon}>0, \forall\bm{\epsilon}\ne\bm{0}$, we have $\bm{\beta}_{i}\notin \mathrm{span}(\bm{B}_{l,g})$.
Thus, the iteratively generated column vectors in $\bm{B}_{l,g}$ are linearly independent.
This also means the square matrix of $\bm{B}_{l,g}^\mathrm{T}\bm{B}_{l,g}$ is invertible.
\end{proof}
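To make the greedy loop of \algoref{ch2-alg:sketching} concrete, the following is a minimal numpy sketch for a single group; the inputs are dummy values, the stopping parameters are illustrative, and all entries of the input weights are assumed nonzero.
\begin{verbatim}
# Minimal sketch of the greedy structured sketching for one group (dummies).
import numpy as np

def sketch_group(w, I_max=8, sigma=1e-2):
    eps, B, alpha = w.copy(), np.empty((len(w), 0)), np.empty(0)
    while np.sum((eps / w) ** 2) > sigma and B.shape[1] < I_max:
        B = np.hstack([B, np.sign(eps)[:, None]])   # new binary basis
        alpha = np.linalg.solve(B.T @ B, B.T @ w)   # optimal coordinates
        eps = w - B @ alpha                         # residual error
    return B, alpha, B.shape[1]                     # bases, coords, I_{l,g}

B, alpha, I_g = sketch_group(np.random.randn(32))
\end{verbatim}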
\subsubsection{Implementation of Pruning Step}
\label{ch2-sec:implementation_pruning}
As discussed in \secref{ch2-sec:pruning}, $\alpha_i$'s are pruned iteratively in mini-batches.
During each Pruning Step, for example, $30\%$ of $\alpha_i$'s are iteratively pruned in one epoch.
Due to the high complexity of sorting all $f_{\bm{\alpha},i}$, sorting is first executed within each layer, and the top-$k\%$ $f_{\bm{\alpha}_l,i}$ of the $l$-th layer are selected and resorted again for pruning.
Recall that $l$ stands for the layer index.
$k$ is generally small, e.g.,\xspace $1$ or $0.5$, which ensures that the pruned $\alpha_i$'s in one iteration do not always come from a single layer.
There are $n_l$ weights in each group, and $G_l$ groups in the $l$-th layer.
The sorting complexity mainly depends on the sorting in the most critical layer that has the largest $\mathrm{card}(\bm{\alpha}_l)$.
The Pruning Step is elaborated in \algoref{ch2-alg:pruning}.
Here, assume that there are altogether $T$ pruning (training) iterations in each execution of Pruning Step; the total number of $\alpha_i$'s across all layers is $M_0$ before pruning, i.e.,\xspace
\begin{equation}
M_0 = \underset{l}{\sum}{\underset{g}{\sum}{\mathrm{card}(\bm{\alpha}_{l,g})}}
\label{ch2-eq:m0}
\end{equation}
and the desired total number of $\alpha_i$'s after pruning is $M_T$.
\begin{algorithm}[tbp!]
\caption{Pruning in $\alpha$ domain}\label{ch2-alg:pruning}
\KwIn{$T$, $M_T$, $k$, $\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$, training dataset}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$}
Compute $M_0$ with \equref{ch2-eq:m0}\;
Compute the pruning number per iteration $M_p = \mathrm{round}(\frac{M_0-M_T}{T})$\;
\For {$t \leftarrow 1$ \KwTo $T$} {
\For {$l\leftarrow 1$ \KwTo $L$} {
Update $\bm{\hat{w}}_{l,g}^t = \bm{B}_{l,g}^t\bm{\alpha}_{l,g}^t$\;
Forward propagate\;
}
Compute the loss $\ell^t$\;
\For {$l\leftarrow L$ \KwTo $1$} {
Backward propagate gradient $\partial\ell^t/\partial\bm{\hat{w}}_{l,g}^t$\;
Compute $\partial\ell^t/\partial\bm{\alpha}_{l,g}^t$ with \equref{ch2-eq:gradient_pruning}\;
Update momentums of AMSGrad in $\bm{\alpha}$ domain\;
\For {$\alpha_{l,i}^t$ \textup{in} $\bm{\alpha}_l^t$} {
Compute $f_{\bm{\alpha}_l,i}^t$ with \equref{ch2-eq:taylor_pruning}\;
}
Sort and select Top-$k\%$ $f_{\bm{\alpha}_l,i}^t$ in ascending order\;
}
Resort the selected $\{f_{\bm{\alpha}_l,i}^t\}_{l=1}^{L}$ in ascending order\;
Remove Top-$M_p$ $\alpha_{l,i}^t$ and their binary bases\;
Update $\{\{\bm{\alpha}_{l,g}^{t+1},\bm{B}_{l,g}^{t+1}, I_{l,g}^{t+1}\}_{g=1}^{G_l}\}_{l=1}^L$\;
}
\end{algorithm}
\subsubsection{Implementation of Optimization Step}
\label{ch2-sec:implementaion_optimization}
Optimization Step is also executed in batch training.
Since $\bm{\alpha}_g$ takes floating-point values, the complexity of optimizing $\bm{\alpha}_g$ is the same as that of conventional full precision optimization (see \algoref{ch2-alg:coordinates}).
Assume that there are altogether $P$ training iterations.
It is worth noting that both the bitwidth $I_{l,g}$ and the binary bases $\bm{B}_{l,g}$ do not change in this step; only the coordinates $\bm{\alpha}_{l,g}$ are updated over $P$ iterations.
\begin{algorithm}[tbp!]
\caption{Optimizing $\bm{\alpha}_g$}\label{ch2-alg:coordinates}
\KwIn{$P$, $\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$, training dataset}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$}
\For {$p \leftarrow 1$ \KwTo $P$} {
\For {$l\leftarrow 1$ \KwTo $L$} {
Update $\bm{\hat{w}}_{l,g}^p = \bm{B}_{l,g}\bm{\alpha}_{l,g}^p$\;
Forward propagate\;
}
Compute the loss $\ell^p$\;
\For {$l\leftarrow L$ \KwTo $1$} {
Backward propagate gradient $\partial\ell^p/\partial\bm{\hat{w}}_{l,g}^p$\;
Compute $\partial\ell^p/\partial\bm{\alpha}_{l,g}^p$ with \equref{ch2-eq:gradient_pruning}\;
Update momentums of AMSGrad in $\bm{\alpha}$ domain\;
\For {$g \leftarrow 1$ \KwTo $G_l$} {
Update $\bm{\alpha}_{l,g}^{p+1}$ with \equref{ch2-eq:optimizing_alpha}\;
}
}
}
\end{algorithm}
Optimizing $\bm{B}_g$ with speedup is presented in \algoref{ch2-alg:bases}.
Assume that there are altogether $Q$ training iterations.
It is worth noting that the bitwidth $I_{l,g}$ does not change in this step; only the binary bases $\bm{B}_{l,g}$ and the coordinates $\bm{\alpha}_{l,g}$ are updated over $Q$ iterations.
\begin{algorithm}[tbp!]
\caption{Optimizing $\bm{B}_g$ with speedup}\label{ch2-alg:bases}
\KwIn{$Q$, $\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$, training dataset}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$}
\For {$q \leftarrow 1$ \KwTo $Q$} {
\For {$l\leftarrow 1$ \KwTo $L$} {
Update $\bm{\hat{w}}_{l,g}^q = \bm{B}_{l,g}^q\bm{\alpha}_{l,g}^q$\;
Forward propagate\;
}
Compute the loss $\ell^q$ \;
\For {$l\leftarrow L$ \KwTo $1$} {
Backward propagate gradient $\partial\ell^q/\partial\bm{\hat{w}}_{l,g}^q$\;
Update momentums of AMSGrad\;
\For {$g \leftarrow 1$ \KwTo $G_l$} {
Compute all values of \equref{ch2-eq:comb}\;
\For {$j \leftarrow 1$ \KwTo $n_l$} {
Update $\bm{B}_{l,g,j}^{q+1}$ with \equref{ch2-eq:row}\;
}
Update $\bm{\alpha}_{l,g}^{q+1}$ with \equref{ch2-eq:alpha_lambda}\;
}
}
}
\end{algorithm}
The extra complexity related to the original AMSGrad mainly comes from two parts, \equref{ch2-eq:row} and \equref{ch2-eq:alpha_lambda}.
\equref{ch2-eq:row} is also the most resource-hungry step of the whole pipeline, since it requires an exhaustive search.
For each group, \equref{ch2-eq:row} takes both time and storage complexities of $O(n\cdot2^{I_g})$, and in general $n\gg I_g\geq1$.
Since $\bm{H}^q$ is a diagonal matrix, most of the matrix-matrix multiplications in \equref{ch2-eq:alpha_lambda} are avoided and replaced by matrix-vector multiplications and multiplications with a diagonal matrix.
Thus, the time complexity reduces to $O(nI_g+nI_g^2+I_g^3+nI_g+n+n+nI_g+I_g^2) \doteq O(n(I_g^2+3I_g+2))$.
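To make both sources of extra complexity concrete, we sketch them below in Python. The exhaustive search of \equref{ch2-eq:row} enumerates all $2^{I_g}$ binary rows; the speedup exploits the diagonal structure of $\bm{H}^q$ so that no dense matrix-matrix product with $\bm{H}^q$ is ever formed. The sketch assumes an update of the form $\bm{\alpha}_g=-(\bm{B}_g^\mathrm{T}\bm{H}\bm{B}_g+\lambda\mathbf{I})^{-1}\bm{B}_g^\mathrm{T}(\bm{g}-\bm{H}\bm{w}_g)$ (cf. the STE variant in \equref{ch2-eq:ste_alpha}); all names are illustrative.
\begin{verbatim}
import itertools
import torch

def candidate_rows(I_g):
    """All 2^I_g candidate binary rows for the exhaustive search."""
    return torch.tensor(list(
        itertools.product((-1.0, 1.0), repeat=I_g)))

def alpha_update_diag_h(B, h, g, w, lam):
    """alpha = -(B^T H B + lam*I)^{-1} B^T (g - H w) with H = diag(h),
    using only matrix-vector and diagonal-scaling products."""
    I_g = B.shape[1]
    HB = B * h.unsqueeze(1)                 # diag(h) @ B, O(n*I_g)
    A = B.t() @ HB + lam * torch.eye(I_g)   # O(n*I_g^2)
    rhs = -B.t() @ (g - h * w)              # O(n*I_g)
    return torch.linalg.solve(A, rhs)       # I_g x I_g system, O(I_g^3)
\end{verbatim}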
\subsubsection{Implementation of the Pipeline}
\label{ch2-sec:implementation_pipeline}
The entire pipeline of ALQ\xspace is demonstrated in \algoref{ch2-alg:pipeline}.
For Initialization Step, the pretrained full precision weights $\bm{w}_{1:L}$ are required.
Then, we need to specify the structure used in each layer, i.e.,\xspace the structure of grouping $G_{1:L}$.
In addition, a maximum bitwidth $I_\mathrm{max}$ and a threshold $\sigma$ for the residual reconstruction error also need to be determined (see more details in \secref{ch2-sec:implementation_initialization}).
After initialization, we might need to retrain the model with several epochs of \algoref{ch2-alg:bases} to recover the accuracy degradation caused by the initialization.
Then, we need to determine the number of outer iterations $R$, i.e.,\xspace how many times the Pruning Step is executed.
A pruning schedule $M^{1:R}$ is also required.
$M^r$ determines the total number of remaining $\alpha_i$'s (across all layers) after the $r$-th Pruning Step, which is also taken as the input $M_T$ in \algoref{ch2-alg:pruning}.
For example, we can build this schedule by pruning $30\%$ of $\alpha_i$'s during each execution of Pruning Step, as,
\begin{equation}
M^{r+1} = M^{r}\times(1-0.3)
\label{ch2-eq:mr}
\end{equation}
with $r\in \{0,1,2,...,R-1\}$. $M^0$ represents the total number of $\alpha_i$'s (across all layers) after initialization.
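As a concrete illustration, such a geometric schedule following \equref{ch2-eq:mr} can be generated as below (a minimal sketch; the function name is illustrative).
\begin{verbatim}
def pruning_schedule(m0, r_steps, ratio=0.3):
    """Geometric schedule: M^{r+1} = round(M^r * (1 - ratio))."""
    schedule, m = [], m0
    for _ in range(r_steps):
        m = round(m * (1.0 - ratio))
        schedule.append(m)
    return schedule

# e.g., pruning_schedule(10000, 4) -> [7000, 4900, 3430, 2401]
\end{verbatim}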
For Pruning Step, other individual inputs include the total number of iterations $T$ and the selected percentage $k$ for sorting (see \algoref{ch2-alg:pruning}).
For Optimization Step, the individual inputs include the total number of iterations $Q$ in optimizing $\bm{B}_g$ (see \algoref{ch2-alg:bases}), and the total number of iterations $P$ in optimizing $\bm{\alpha}_g$ (see \algoref{ch2-alg:coordinates}).
\begin{algorithm}[htbp!]
\caption{Adaptive Loss-aware Quantization for multi-bit networks} \label{ch2-alg:pipeline}
\KwIn{Pretrained full precision weights $\bm{w}_{1:L},$ structures $G_{1:L}$, $I_\mathrm{max}$, $\sigma$, $T$, pruning schedule $M^{1:R}$, $k$, $P$, $Q$, $R$, training dataset}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$}
\tcc{Initialization Step: }
Initialize $\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$ with \algoref{ch2-alg:sketching}\;
\For {$r \leftarrow 1$ \KwTo $R$} {
\tcc{Pruning Step: }
Assign $M^r$ to the input $M_T$ of \algoref{ch2-alg:pruning}\;
Prune in $\bm{\alpha}$ domain with \algoref{ch2-alg:pruning}\;
\tcc{Optimization Step: }
Optimize binary bases with \algoref{ch2-alg:bases}\;
Optimize coordinates with \algoref{ch2-alg:coordinates}\;
}
\end{algorithm}
\section{Activation Quantization}
\label{ch2-sec:activation}
To leverage bitwise operations for speedup, the inputs of each layer (i.e.,\xspace the activation output of the last layer) also need to be quantized into the multi-bit form.
We quantize activations with the same binary basis (i.e.,\xspace $\{-1,+1\}$) as the aforementioned weight quantization.
Our activation quantization follows the idea proposed in \cite{bib:arXiv18:Choi}, i.e.,\xspace a parameterized clipping for fixed-point activation quantization, but it is adapted to the multi-bit form.
Specifically, we replace ReLU with a step activation function.
The vectorized activation $\bm{x}$ of the $l$-th layer is quantized as,
\begin{equation}
\bm{x}\doteq\bm{\hat{x}}=x_{\mathrm{ref}}+\bm{D}\bm{\gamma}=\bm{D}'\bm{\gamma}'
\label{ch2-eq:act}
\end{equation}
where $\bm{D}\in\{-1,+1\}^{N_x\times I_x}$, and $\bm{\gamma}\in\mathbb{R}_+^{I_x\times1}$.
$\bm{\gamma}'$ is a column vector formed by $[x_{\mathrm{ref}},\bm{\gamma}^\mathrm{T}]^{\mathrm{T}}$; $\bm{D}'$ is a matrix formed by $[\bm{1}^{{N_x\times 1}}, \bm{D}]$.
$N_x$ is the dimension of $\bm{x}$, and $I_x$ is the quantization bitwidth for activations.
$x_{\mathrm{ref}}$ is the introduced layerwise (positive floating-point) reference to fit the output range of ReLU.
During inference, $x_{\mathrm{ref}}$ is convolved with the weights of the next layer and added to the bias.
Hence the introduction of $x_{\mathrm{ref}}$ does not lead to extra computations.
The output of the last layer is not quantized, as it is not involved in any further computation.
For other settings, we mainly follow the ones used in \cite{bib:ECCV18:Zhang}.
$\bm{\gamma}$ and $x_{\mathrm{ref}}$ are updated during the forward propagation with a running average to minimize the squared reconstruction error as,
\begin{equation}
\bm{\gamma}'_{\text{new}} = (\bm{D'}^{\mathrm{T}}\bm{D}')^{-1}\bm{D'}^{\mathrm{T}}\bm{x}
\end{equation}
\begin{equation}
\bm{\gamma}' = 0.9\bm{\gamma}'+(1-0.9)\bm{\gamma}'_{\text{new}}
\end{equation}
The (quantized) weights are also further fine-tuned with our optimizer to recover the accuracy drop.
Here, we only set a global bitwidth for all layers in activation quantization.
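For clarity, the activation quantizer above can be sketched as below. Note that the text only prescribes the least-squares update of $\bm{\gamma}'$ with a running average; the greedy sign-based construction of $\bm{D}$ from the running residual is an assumption of this sketch, and all names are illustrative.
\begin{verbatim}
import torch

def quantize_activation(x, gamma_prime, I_x,
                        momentum=0.9, training=True):
    """Multi-bit activation quantization x_hat = D' @ gamma', with
    gamma_prime = [x_ref, gamma_1, ..., gamma_{I_x}]. D is built
    greedily from the sign of the residual (an assumption here)."""
    cols = [torch.ones_like(x)]          # first column carries x_ref
    residual = x - gamma_prime[0]
    for i in range(I_x):
        d = torch.where(residual >= 0, 1.0, -1.0)
        cols.append(d)
        residual = residual - gamma_prime[i + 1] * d
    D_prime = torch.stack(cols, dim=1)   # (N_x, I_x + 1)
    if training:   # least-squares gamma', then running average
        new_gp = torch.linalg.lstsq(
            D_prime, x.unsqueeze(1)).solution.squeeze(1)
        gamma_prime.mul_(momentum).add_((1.0 - momentum) * new_gp)
    return D_prime @ gamma_prime         # quantized activation x_hat
\end{verbatim}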
\section{Experiments}
\label{ch2-sec:experiment}
In this section, we implement ALQ\xspace with Pytorch~\cite{bib:NIPSWorkshop17:Paszke}, and evaluate its performance on MNIST~\cite{bib:MNIST}, CIFAR10~\cite{bib:CIFAR}, and ImageNet~\cite{bib:ILSVRC15} using LeNet5~\cite{bib:PIEEE98:LeCun}, VGGNet~\cite{bib:ICLR17:Hou,bib:ECCV16:Rastegari}, and ResNet18/34~\cite{bib:CVPR16:He}, respectively.
We report the Top-1 test accuracy at the epoch where the validation accuracy is the highest during training.
We first conduct the experiments on Initialization Step (\secref{ch2-sec:experiment_initialization}), Pruning Step (\secref{ch2-sec:experiment_adaptive}) and Optimization Step (\secref{ch2-sec:experiment_convergence}) individually to study their impacts.
Then, we benchmark ALQ\xspace on different datasets and compare ALQ\xspace with different state-of-the-art network compression methods.
\subsection{Benchmarking Details}
\label{ch2-sec:experiment_benchmark}
\fakeparagraph{LeNet5 on MNIST}
The MNIST dataset~\cite{bib:MNIST} consists of $28\times28$ grayscale images from 10 digit classes.
We use 50000 samples in the training set for training, the remaining 10000 for validation, and the 10000 samples in the test set for testing.
We use a mini-batch size of 128.
We use the default hyperparameters proposed in~\cite{bib:torchLeNet5} to train LeNet5 for 100 epochs as the baseline of the full precision version.
The network architecture is presented as, 20C5 - MP2 - 50C5 - MP2 - 500FC - 10SVM.
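For reference, this architecture can be written in Pytorch as below. The notation leaves the activation functions implicit, so the ReLU placement is an assumption of this sketch; the \texttt{10SVM} output denotes a final linear layer trained with a multi-class hinge (SVM) loss.
\begin{verbatim}
import torch.nn as nn

class LeNet5(nn.Module):
    """20C5 - MP2 - 50C5 - MP2 - 500FC - 10SVM
    (ReLU placement assumed)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(20, 50, 5), nn.MaxPool2d(2), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(50 * 4 * 4, 500), nn.ReLU(),
            nn.Linear(500, 10))  # trained with a hinge (SVM) loss

    def forward(self, x):
        return self.classifier(self.features(x))
\end{verbatim}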
\fakeparagraph{VGGNet on CIFAR10}
The CIFAR-10 dataset~\cite{bib:CIFAR} consists of 60000 $32\times32$ color images in 10 object classes.
We use 45000 samples in the training set for training, the remaining 5000 for validation, and the 10000 samples in the test set for testing.
We use a mini-batch size of 128.
We use the default Adam optimizer provided by Pytorch to train the full precision parameters for 200 epochs as the baseline of the full precision version.
The initial learning rate is $0.01$, and it decays by a factor of 0.2 every $30$ epochs.
The network architecture is presented as, 2$\times$128C3 - MP2 - 2$\times$256C3 - MP2 - 2$\times$512C3 - MP2 - 2$\times$1024FC - 10SVM.
\fakeparagraph{ResNet18/34 on ImageNet}
The ImageNet dataset~\cite{bib:ILSVRC15} consists of $1.28$ million high-resolution images for classifying in 1000 object classes.
The validation set contains 50k images, which are used to report the accuracy level.
We use a mini-batch size of 256.
We use the ResNet18/34 from~\cite{bib:CVPR16:He} as provided by Pytorch as the baseline of the full precision version.
The network architecture is the same as ``resnet18/resnet34'' in~\cite{bib:torchResNet}.
\subsection{Experiments on Initialization}
\label{ch2-sec:experiment_initialization}
As mentioned in \secref{ch2-sec:implementation_initialization}, we propose a structured sketching for Initialization Step.
Some important parameters in \algoref{ch2-alg:sketching} are discussed below.
\subsubsection{Group Size $n$}
\label{ch2-sec:experiment_group}
Researchers have proposed different structures, e.g.,\xspace layerwise, channelwise, to partition weights, and then quantize the weights in one structured group with the same bitwidth.
To explore the redundancy among weights, we conduct experiments on the different structures of grouping.
Certainly, the weights in one layer can be arbitrarily selected to form a group.
However, due to the extra indexing cost, the weights are often sliced along the tensor dimensions and uniformly grouped.
According to~\cite{bib:CVPR17:Guo}, the squared reconstruction error of a single group decays with~\equref{ch2-eq:error_decay}, where $\lambda\ge0$.
\begin{equation}
\|\bm{\epsilon}\|_2^2 \le \|\bm{w}_{g}\|_2^2 (1-\frac{1}{n-\lambda})^{I_g}
\label{ch2-eq:error_decay}
\end{equation}
If full precision values are stored in floating-point, i.e.,\xspace $32$-bit, the storage compression ratio in one layer can be written as,
\begin{equation}
r_s = \frac{N\times32}{I\times N+I\times32\times \frac{N}{n}}
\label{ch2-eq:r_s}
\end{equation}
where $N$ is the total number of weights in one layer; $n$ is the number of weights in each group, i.e.,\xspace $n = N/G$; $I$ is the average bitwidth, $I = \frac{1}{G}\sum_{g = 1}^G I_g$.
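For instance, the ratio in \equref{ch2-eq:r_s} can be evaluated as below (a minimal sketch).
\begin{verbatim}
def storage_compression_ratio(N, n, I):
    """Eq. (r_s): each group stores I binary bases of n bits plus
    I floating-point (32-bit) coordinates; there are N/n groups."""
    return (N * 32.0) / (I * N + I * 32.0 * (N / n))

# e.g., the last conv layer of AlexNet (N = 256*256*3*3) with
# pointwise groups (n = 256) and I = 2 gives a ratio of about 14.2
\end{verbatim}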
We analyze the trade-off between the reconstruction error and the storage compression ratio for different group sizes $n$.
We choose the pretrained AlexNet~\cite{bib:NIPS12:Krizhevsky} and VGGNet~\cite{bib:ICLR15:Simonyan}, and plot the curves of the average (per weight) reconstruction error related to the storage compression ratio of each layer under different sliced structures.
We also randomly shuffle the weights in each layer, then partition them into groups with different sizes.
We select one example plot which comes from the last \texttt{conv} layer ($256\times256\times3\times3$) of AlexNet~\cite{bib:NIPS12:Krizhevsky} (see~\figref{ch2-fig:conv_alexnet}).
The pretrained full precision weights are provided by Pytorch~\cite{bib:NIPSWorkshop17:Paszke}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.75\textwidth]{./figs/ch2/conv_alexnet.pdf}
\caption[The curves of the average reconstruction error for different group sizes] {The curves of the logarithmic L2-norm of the average reconstruction error $\mathrm{log}(\|\bm{\epsilon}\|_2^2)$ versus the reciprocal of the storage compression ratio $1/r_s$. The pretrained full precision weights are from the last \texttt{conv} layer of AlexNet. The legend demonstrates the corresponding group sizes. `k' stands for kernelwise; `p' stands for pointwise; `c' stands for channelwise. }
\label{ch2-fig:conv_alexnet}
\end{figure}
We found that there is no significant difference between random groups and sliced groups along tensor dimensions.
Only the group size influences the trade-off.
We believe the reason is that one layer always contains thousands of groups, such that the points represented by these groups are roughly uniformly scattered in the $n$-dim space.
Furthermore, regarding the deployment on a 32-bit general microprocessor, the group size should be larger than 32 for efficient computation.
In short, a group size from $32$ to $512$ achieves a relatively good trade-off between the weight reconstruction error and the storage compression ratio.
Accordingly, for \texttt{conv} layers, grouping in channelwise ($\bm{w}_{c,:,:,:}$), kernelwise ($\bm{w}_{c,d,:,:}$), and pointwise ($\bm{w}_{c,:,h,w}$) appears to be appropriate.
Channelwise $\bm{w}_{c,:}$ and subchannelwise $\bm{w}_{c,d:d+n}$ grouping are suited for \texttt{fc} layers.
For example, if each channel is sliced into 2 groups with the same size, we denote it as subchannelwise(2).
In addition, the most frequently used structures in this chapter are pointwise (\texttt{conv} layers) and (sub)channelwise (\texttt{fc} layers), which align with the bit-packing approach in~\cite{bib:ICLR18:Pedersoli}, and could result in a more efficient deployment.
Since many network architectures choose an integer multiple of 32 as the number of output channels in each layer, pointwise and (sub)channelwise are also efficient for the current storage format in 32-bit microprocessors.
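The grouping structures above amount to different reshapings of the weight tensor; a minimal sketch for \texttt{conv} layers is given below (names are illustrative).
\begin{verbatim}
import torch

def group_conv_weights(w, structure):
    """Slice a conv weight tensor of shape (C_out, C_in, H, W) into
    uniform groups along tensor dimensions; returns a (G, n) tensor."""
    c_out, c_in, kh, kw = w.shape
    if structure == "channelwise":   # w_{c,:,:,:}, n = C_in*H*W
        return w.reshape(c_out, -1)
    if structure == "kernelwise":    # w_{c,d,:,:}, n = H*W
        return w.reshape(c_out * c_in, kh * kw)
    if structure == "pointwise":     # w_{c,:,h,w}, n = C_in
        return w.permute(0, 2, 3, 1).reshape(c_out * kh * kw, c_in)
    raise ValueError(structure)
\end{verbatim}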
\subsubsection{Maximum Bitwidth $I_\mathrm{max}$}
\label{ch2-sec:experiment_max_bit}
The initial $I_g$ is decided by a predefined initial reconstruction precision or a maximum bitwidth.
We notice that the accuracy degradation caused by the initialization can be fully recovered after several optimization epochs of \algoref{ch2-alg:bases}, if the maximum bitwidth is $8$.
For example, ResNet18 on ImageNet after such an initialization can be retrained to a Top-1/5 accuracy of $70.3\%$/$89.4\%$, even higher than its full precision counterpart ($69.8\%$/$89.1\%$).
For smaller networks, e.g.,\xspace VGGNet on CIFAR10, a maximum bitwidth of $6$ is already sufficient.
\subsection{Convergence Analysis of Optimization Step}
\label{ch2-sec:experiment_convergence}
In this section, we conduct the ablation studies on our Optimization Step in \secref{ch2-sec:optimization}.
We show the advantages of our optimizer in terms of convergence.
We mainly study the convergence performance of \algoref{ch2-alg:bases} (i.e.,\xspace optimizing $\bm{B}_g$ with speedup) for two reasons: (\textit{i}) it involves the domain constraints of binarization and takes the majority of the computation complexity; (\textit{ii}) it conducts a similar alternating process as prior works~\cite{bib:ICLR18:Xu,bib:ECCV18:Zhang}.
Recall that our optimizer in \algoref{ch2-alg:bases} (\textit{i}) has no gradient approximation and (\textit{ii}) directly minimizes the loss.
We developed the following two baselines for comparison.
\begin{itemize}
\item
\textit{STE with rec. error:}
This baseline quantizes the maintained full precision weights by minimizing the reconstruction error (rather than the loss) during forward and approximates gradients via STE during backward.
This approach is adopted in some of the best-performing quantization schemes such as LQ-Net \cite{bib:ECCV18:Zhang}.
\item
\textit{STE with loss-aware:}
This baseline approximates gradients via STE but performs a loss-aware projection updating (adapted from our ALQ\xspace).
It can be considered as a multi-bit extension of prior loss-aware quantizers for binary and ternary networks \cite{bib:ICLR17:Hou,bib:ICLR18:Hou}.
See \secref{ch2-sec:lossaware_ste} below for more details.
\end{itemize}
\subsubsection{The Optimizer of ``STE with Loss-Aware''}
\label{ch2-sec:lossaware_ste}
In this section, we provide the details of the proposed \textit{STE with loss-aware} optimizer.
The training scheme of \textit{STE with loss-aware} is similar to \algoref{ch2-alg:bases}, except that it maintains the full precision weights $\bm{w}_g$.
See the pseudocode of \textit{STE with loss-aware} in \algoref{ch2-alg:ste}.
\begin{algorithm}[t!]
\caption{STE with loss-aware}\label{ch2-alg:ste}
\KwIn{$Q$, $\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$, training dataset}
\KwOut{$\{\{\bm{\alpha}_{l,g},\bm{B}_{l,g}, I_{l,g}\}_{g=1}^{G_l}\}_{l=1}^L$}
\For {$q \leftarrow 1$ \KwTo $Q$} {
\For {$l\leftarrow 1$ \KwTo $L$} {
Update $\bm{\hat{w}}_{l,g}^q = \bm{B}_{l,g}^q\bm{\alpha}_{l,g}^q$\;
Forward propagate\;
}
Compute the loss $\ell^q$ \;
\For {$l\leftarrow L$ \KwTo $1$} {
Backward propagate gradient $\partial\ell^q/\partial\bm{\hat{w}}_{l,g}^q$\;
Directly approximate $\partial\ell^q/\partial\bm{w}_{l,g}^q$ with $\partial\ell^q/\partial {\bm{\hat{w}}_{l,g}}^q$\;
Update momentums of AMSGrad\;
\For {$g \leftarrow 1$ \KwTo $G_l$} {
Update $\bm{w}_{l,g}^{q+1}$ with \equref{ch2-eq:ste_W}\;
Compute all values of \equref{ch2-eq:comb}\;
\For {$j \leftarrow 1$ \KwTo $n_l$} {
Update $\bm{B}_{l,g,j}^{q+1}$ with \equref{ch2-eq:ste_row}\;
}
Update $\bm{\alpha}_{l,g}^{q+1}$ with \equref{ch2-eq:ste_alpha}\;
}
}
}
\end{algorithm}
For layer $l$, the quantized weights $\bm{\hat{w}}_g$ are used during forward propagation.
During backward propagation, the loss gradients to the full precision weights $\partial\ell/\partial\bm{w}_{g}$ are directly approximated with $\partial\ell/\partial {\bm{\hat{w}}_{g}}$, i.e.,\xspace via STE in the $q$-th training iteration as,
\begin{equation}
\frac{\partial\ell^q}{\partial\bm{w}_g^q}=\frac{\partial\ell^q}{\partial {\bm{\hat{w}}_g}^q}
\end{equation}
Then the first and second momentums in AMSGrad are updated with $\partial\ell^q/\partial\bm{w}_{g}^q$.
Accordingly, the loss increment around $\bm{w}_g^q$ is modeled as,
\begin{equation}
f_{\text{ste}}^q=(\bm{g}^q)^{\mathrm{T}}(\bm{w}_g-\bm{w}_g^q)+\frac{1}{2} (\bm{w}_g-\bm{w}_g^q)^{\mathrm{T}} \bm{H}^q (\bm{w}_g-\bm{w}_g^q)
\label{ch2-eq:ste_B}
\end{equation}
Since $\bm{w}_g$ is full precision, $\bm{w}_g^{q+1}$ can be directly obtained through the above AMSGrad step without projection updating,
\begin{equation}
\bm{w}_g^{q+1} = \bm{w}_g^q-({\bm{H}^q})^{-1}\bm{g}^q = \bm{w}_g^q-a^q\bm{m}^q/\sqrt{\bm{\hat{v}}^q}
\label{ch2-eq:ste_W}
\end{equation}
Similarly, the loss increment caused by $\bm{B}_g$ (see \equref{ch2-eq:amsgrad_B1} and \equref{ch2-eq:amsgrad_B2}) is formulated as,
\begin{equation}
f_{\text{ste},\bm{B}}^q=(\bm{g}^q)^{\mathrm{T}}(\bm{B}_g\bm{\alpha}_g^{q}-\bm{w}_g^q)+\frac{1}{2}(\bm{B}_g\bm{\alpha}_g^{q}-\bm{w}_g^q)^{\mathrm{T}}\bm{H}^q (\bm{B}_g\bm{\alpha}_g^{q}-\bm{w}_g^q)
\label{ch2-eq:ste_B2}
\end{equation}
Thus, the $j$-th row in $\bm{B}_g^{q+1}$ is updated by,
\begin{equation}
\bm{B}_{g,j}^{q+1} = \underset{\bm{B}_{g,j}}{\mathrm{argmin}}~\|\bm{B}_{g,j}\bm{\alpha}_{g}^q-(w_{g,j}^q-g^q_j/H_{jj}^q)\|
\label{ch2-eq:ste_row}
\end{equation}
In addition, the speedup of \equref{ch2-eq:alpha_lambda} is changed accordingly as,
\begin{equation}
\bm{\alpha}_{g}^{q+1}=-((\bm{B}_{g}^{q+1})^{\mathrm{T}}\bm{H}^q\bm{B}_{g}^{q+1}+\lambda \mathbf{I})^{-1}\times((\bm{B}_{g}^{q+1})^{\mathrm{T}}(\bm{g}^q-\bm{H}^q\bm{w}^q_{g}))
\label{ch2-eq:ste_alpha}
\end{equation}
So far, the quantized weights are updated in a loss-aware manner as,
\begin{equation}
\bm{\hat{w}}_{g}^{q+1} = \bm{B}_{g}^{q+1}\bm{\alpha}_{g}^{q+1}
\end{equation}
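For completeness, the STE approximation at the core of this baseline can be sketched as a custom autograd function (a generic illustration, not our exact implementation).
\begin{verbatim}
import torch

class QuantizeSTE(torch.autograd.Function):
    """Forward uses the quantized weights w_hat = B @ alpha; backward
    passes the gradient through unchanged, i.e., dl/dw := dl/dw_hat."""
    @staticmethod
    def forward(ctx, w, B, alpha):
        return (B @ alpha).reshape(w.shape)

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through: gradient w.r.t. w copies that of w_hat;
        # B and alpha are updated separately by the projection steps
        return grad_output, None, None
\end{verbatim}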
\subsubsection{Ablation Results}
\label{ch2-sec:experiment_convergence_results}
\fakeparagraph{Settings}
To show the convergence performance of our Optimization Step, we compare \algoref{ch2-alg:bases} with the two baselines \textit{STE with rec. error} and \textit{STE with loss-aware} introduced above.
The three optimizers are used to train the networks quantized with a uniform bitwidth.
We use AMSGrad\footnote{AMSGrad can also optimize full precision parameters.} as the optimization framework for all optimizers and adopt a learning rate of 0.001.
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.95\textwidth]{./figs/ch2/conv.pdf}
\caption[Validation accuracy trained with ALQ\xspace and other STE-based baselines.]{Validation accuracy trained with ALQ\xspace and other STE-based baselines along the training epochs.}
\label{ch2-fig:convergence}
\end{figure}
\fakeparagraph{Results}
\figref{ch2-fig:convergence} shows the Top-1 validation accuracy of different optimizers, with increasing epochs on uniform bitwidth MBNs.
ALQ\xspace exhibits not only a more stable and faster convergence, but also a higher accuracy.
The exception is 2-bit ResNet18.
ALQ\xspace converges faster, but the validation accuracy trained with STE gradually exceeds ALQ\xspace after about 20 epochs.
For training a large network with a bitwidth of $\leq2$, the positive effect brought by the high precision trace may compensate for certain negative effects caused by the gradient approximation.
In this case, keeping full precision parameters helps calibrate some aggressive quantization steps, resulting in a slowly oscillating convergence to a better local optimum.
This also encourages us to add several epochs of STE based optimization (e.g.,\xspace \textit{STE with loss-aware}) after low bitwidth quantization to further regain the accuracy.
\begin{table}[tbp!]
\centering
\caption[Comparison between uniform bitwidth and adaptive bitwidth in ALQ\xspace.]{Comparison between uniform bitwidth and adaptive bitwidth in ALQ\xspace.}
\label{ch2-tab:adapt}
\small
\begin{tabular}{ccc}
\toprule
Method & $I_W$ & Top-1 \\ \hline
Baseline VGGNet (uniform) & 1 & 91.8\% \\
\textbf{ALQ\xspace VGGNet} & \textbf{0.66} & \textbf{92.0}\% \\
Baseline ResNet18 (uniform) & 2 & 66.2\% \\
\textbf{ALQ\xspace ResNet18} & \textbf{2.00} & \textbf{68.9}\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Studies on Adaptive Bitwidth}
\label{ch2-sec:experiment_adaptive}
\fakeparagraph{Settings}
This experiment demonstrates the performance of incrementally trained adaptive bitwidth in ALQ\xspace, i.e.,\xspace our Pruning Step in \secref{ch2-sec:pruning}.
Uniform bitwidth quantization (an equal bitwidth allocation across all groups in all layers) is taken as the baseline.
The baseline is trained with the same number of epochs as the sum of all epochs during the bitwidth reduction.
Both ALQ\xspace and the baseline are trained with the same learning rate decay schedule.
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch2/adapt.pdf}
\caption[Distribution of the average bitwidth and the number of weights across layers.]{Distribution of the average bitwidth and the number of weights across layers.}
\label{ch2-fig:adapt}
\end{figure}
\fakeparagraph{Results}
\tabref{ch2-tab:adapt} shows that there is a large Top-1 accuracy gap between an adaptive bitwidth trained with ALQ\xspace and a uniform bitwidth.
In addition to the overall average bitwidth, we also plot the distribution of the average bitwidth and the number of weights across layers (both models in \tabref{ch2-tab:adapt}) in \figref{ch2-fig:adapt}.
Generally, the first several layers and the last layer are more sensitive to the loss, and thus require a higher bitwidth.
The shortcut layers in the ResNet architecture (e.g.,\xspace the $8$-th, $13$-th, $18$-th layers in ResNet18) also need a higher bitwidth.
We attribute this to the fact that the shortcut pass helps the information propagate forward/backward through the blocks.
Since the average of adaptive bitwidth can have a decimal part, ALQ\xspace can achieve a compression ratio with a much higher resolution than a uniform bitwidth, which not only controls a more precise trade-off between storage and accuracy, but also benefits our incremental bitwidth reduction scheme.
It is worth noting that both the Optimization Step and the Pruning Step in ALQ\xspace follow the same metric, i.e.,\xspace the loss increment modeled by a quadratic function, allowing them to work in synergy.
We replace the step of optimizing $\bm{B}_g$ in ALQ\xspace with an STE step (with the reconstruction-based forward, see \secref{ch2-sec:experiment_convergence}), and keep other steps unchanged in the pipeline.
When the VGGNet model is reduced to an average bitwidth of $0.66$-bit, the simple combination of an STE step with our Pruning Step can only reach $90.7\%$ Top-1 accuracy, which is significantly worse than ALQ\xspace's $92.0\%$.
\subsection{Comparison with State-of-the-Art Methods}
\label{ch2-sec:experiment_comparison}
\subsubsection{Unstructured Pruning on MNIST}
\label{ch2-sec:experiment_mnist}
\fakeparagraph{Settings}
Since ALQ\xspace can be considered a structured pruning scheme (i.e.,\xspace pruning in $\bm{\alpha}$ domain), we first compare ALQ\xspace with two widely used unstructured pruning schemes: Deep Compression (DC) \cite{bib:ICLR16:Han} and ADMM-Pruning (ADMM) \cite{bib:ECCV18:Zhang2}, i.e.,\xspace pruning in the original $\bm{w}$ domain.
For a fair comparison, we implement a modified LeNet5 model as in \cite{bib:ICLR16:Han,bib:ECCV18:Zhang2} on MNIST dataset~\cite{bib:MNIST} and compare the Top-1 prediction accuracy and the compression ratio.
The structures of each layer chosen for ALQ\xspace are kernelwise, kernelwise, subchannelwise(2), channelwise, respectively.
After each pruning, the network is retrained to recover the accuracy degradation with 20 epochs of optimizing $\bm{B}_g$ and 10 epochs of optimizing $\bm{\alpha}_g$.
The pruning ratio is 80\%, and the Pruning Step is executed 4 times after initialization in the experiment reported in \tabref{ch2-tab:lenet5}.
After the last Pruning Step, we conduct 50 epochs of Optimization Step to further increase the final accuracy (also applied in the following experiments of VGGNet and ResNet18/34).
ALQ\xspace converges fast during training.
However, we observed that even after convergence, the accuracy still continues to increase slowly along the training, which is similar to the behavior of STE-based optimizers.
During the Optimization Step after each Pruning Step, as long as the training loss has almost converged within a few epochs, we can proceed with the next Pruning Step.
We found that the final accuracy level is approximately the same whether or not we add plenty of epochs each time to slowly recover the accuracy to the original level.
Thus, we choose a fixed modest number of retraining epochs after each Pruning Step to save the overall training time.
In fact, this benefits from a key feature of ALQ\xspace: it leverages the true gradient w.r.t. the loss, resulting in a fast and stable convergence.
The final added 50 training epochs aim to further slowly regain the final accuracy level, where we use a gradually decayed learning rate, e.g.,\xspace $10^{-4}$ decays with 0.98 in each epoch.
Note that the storage consumption only counts the weights, since the weights take the vast majority of the storage (even after quantization) in comparison to others, e.g.,\xspace bias, activation quantizer, batch normalization, etc.\@\xspace
The storage consumption of weights in ALQ\xspace includes the look-up-table for the resulting $I_g$ in each group.
\begin{table}[tbp!]
\centering
\caption[Comparison with unstructured pruning methods (LeNet5 on MNIST)]{Comparison with state-of-the-art unstructured pruning methods (LeNet5 on MNIST). ``FP'' denotes the full precision baseline. ``CR'' denotes the compression ratio related to full precision. }
\label{ch2-tab:lenet5}
\small
\begin{tabular}{ccc}
\toprule
Method & Weights~(CR) & Top-1 \\ \hline
FP & 1720KB~(1$\times$ ) & 99.19\% \\
DC~\cite{bib:ICLR16:Han} & 44.0KB~(39$\times$) & \textbf{99.26\%} \\
ADMM~\cite{bib:ECCV18:Zhang2} & 24.2KB~(71$\times$) & 99.20\% \\
\textbf{ALQ\xspace} & \textbf{22.7KB}~(\textbf{76}$\bm{\times}$) & 99.12\% \\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Results}
ALQ\xspace shows the highest compression ratio (\textbf{76}$\bm{\times}$) while keeping acceptable Top-1 accuracy compared to the two other pruning methods (see \tabref{ch2-tab:lenet5}).
FP stands for full precision, and the weights in the original full precision LeNet5 consume $1720$KB \cite{bib:ICLR16:Han}.
CR denotes the compression ratio of static weight storage.
Note that both DC \cite{bib:ICLR16:Han} and ADMM \cite{bib:ECCV18:Zhang2} rely on sparse tensors, which need special libraries or hardware for efficient execution \cite{bib:ICLR17:Li}.
Their operands (the shared quantized values) are still floating-point.
Hence they hardly utilize bitwise operations for speedup.
In contrast, ALQ\xspace achieves a higher compression ratio without sparse tensors, which is more suited for general off-the-shelf platforms.
The average bitwidth of ALQ\xspace is below $1.0$-bit ($1.0$-bit corresponds to a compression ratio slightly below $32$), indicating some groups are fully removed.
In fact, this process leads to a new network architecture containing fewer output channels in each layer, and thus the corresponding input channels of the next layer can be safely removed.
The original configuration $20-50-500-10$ is now $18-45-231-10$.
\subsubsection{Binary Networks on CIFAR10}
\label{ch2-sec:experiment_cifar10}
\fakeparagraph{Settings}
In this experiment, we compare the performance of ALQ\xspace with state-of-the-art binary networks \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari,bib:ICLR17:Hou}.
A binary network is an MBN with the lowest bitwidth, i.e.,\xspace single-bit.
Thus, the storage consumption of a binary network can be regarded as the lower bound of a (uniform) quantized network.
We implement a small version of VGGNet from~\cite{bib:ICLR15:Simonyan} on CIFAR10 dataset~\cite{bib:CIFAR}, as in many state-of-the-art binary networks~\cite{bib:NIPS15:Courbariaux,bib:ICLR17:Hou,bib:ECCV16:Rastegari}.
The structures of each layer chosen for ALQ\xspace are channelwise, pointwise, pointwise, pointwise, pointwise, pointwise, subchannelwise(16), subchannelwise(2), subchannelwise(2) respectively.
After each pruning, the network is retrained to recover the accuracy degradation with 20 epochs of optimizing $\bm{B}_g$ and 10 epochs of optimizing $\bm{\alpha}_g$.
The pruning ratio is 40\%, and the Pruning Step is executed 5 or 6 times after initialization in the reported experiments (\tabref{ch2-tab:cifar10}).
\begin{table}[tbp!]
\centering
\caption[Comparison with binary networks (VGGNet on CIFAR10)]{Comparison with state-of-the-art binary networks (VGGNet on CIFAR10). ``FP'' denotes the full precision baseline. ``CR'' denotes the compression ratio related to full precision. $I_W$ denotes the average bitwidth of weights. }
\label{ch2-tab:cifar10}
\small
\begin{tabular}{cccc}
\toprule
Method & $I_W$ & Weights~(CR) & Top-1 \\ \hline
FP & 32 & 56.09MB~(1$\times$) & 92.8\% \\
BC~\cite{bib:NIPS15:Courbariaux} & 1 & 1.75MB~(32$\times$) & 90.1\% \\
BWN~\cite{bib:ECCV16:Rastegari}* & 1 & 1.82MB~(31$\times$) & 90.1\% \\
LAB~\cite{bib:ICLR17:Hou} & 1 & 1.77MB~(32$\times$) & 89.5\% \\
AQ~\cite{bib:ICLR18:Khoram} & 0.27 & 1.60MB~(35$\times$) & 90.9\% \\
\textbf{ALQ\xspace} & \textbf{0.66} & \textbf{1.29MB}~(\textbf{43$\times$}) & \textbf{92.0\%} \\
\textbf{ALQ\xspace} & \textbf{0.40} & \textbf{0.82MB}~(\textbf{68$\times$}) & \textbf{90.9\%} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item
*: both first and last layers are unquantized.
\end{tablenotes}
\end{table}
\fakeparagraph{Results}
\tabref{ch2-tab:cifar10} shows the performance comparison to popular binary networks.
$I_W$ stands for the quantization bitwidth for weights.
Since ALQ\xspace has an adaptive quantization bitwidth, the reported bitwidth of ALQ\xspace is an average bitwidth of all weights.
ALQ\xspace allows compressing the network to below $1$-bit, which remarkably reduces the storage and computation.
ALQ\xspace achieves the smallest weight storage and the highest accuracy compared to all weight binarization methods BC~\cite{bib:NIPS15:Courbariaux}, BWN~\cite{bib:ECCV16:Rastegari}, LAB~\cite{bib:ICLR17:Hou}.
Similar to results on LeNet5, ALQ\xspace generates a new network architecture with fewer output channels per layer, which further reduces our models in \tabref{ch2-tab:cifar10} to $1.01$MB ($0.66$-bit) or even $0.62$MB ($0.40$-bit).
The computation and the run-time memory can also decrease.
Furthermore, we also compare with AQ~\cite{bib:ICLR18:Khoram}, the state-of-the-art adaptive fixed-point quantizer.
It assigns a different bitwidth for each parameter based on its sensitivity, and also realizes a pruning for 0-bit parameters.
Our ALQ\xspace not only consumes less storage, but also acquires a higher accuracy than AQ~\cite{bib:ICLR18:Khoram}.
Besides, the non-standard quantization bitwidth in AQ cannot efficiently run on general hardware due to the irregularity~\cite{bib:ICLR18:Khoram}, which is not the case for ALQ\xspace.
In order to demonstrate the effects of the different steps in ALQ\xspace, we plot the training loss curve of quantizing VGGNet on CIFAR10 with ALQ\xspace.
Different steps in ALQ\xspace are marked with different colors, see~\figref{ch2-fig:loss}.
The results show that (\textit{i}) Initialization Step does not bring any performance drop; (\textit{ii}) Optimization Step converges fast within a few epochs and may recover the performance drop from Pruning Step as long as the average bitwidth is not extremely low.
\begin{figure}[tbp!]
\centering
\includegraphics[width=0.5\textwidth]{./figs/ch2/single_loss.pdf}
\caption[The training loss curves of different steps in ALQ\xspace.]{The training loss curves of different steps in ALQ\xspace (VGGNet on CIFAR10). ``Magenta'' stands for Initialization Step; ``Green'' stands for optimizing $\bm{B}_g$ with speedup; ``Blue'' stands for optimizing $\bm{\alpha}_g$; ``Red'' stands for Pruning Step. Please see this figure in color.}
\label{ch2-fig:loss}
\end{figure}
\subsubsection{MBNs on ImageNet}
\label{ch2-sec:experiment_imagenet}
\fakeparagraph{Settings}
We quantize both the weights and the activations of ResNet18/34~\cite{bib:CVPR16:He} with a low bitwidth ($\leq2$-bit) on ImageNet dataset~\cite{bib:ILSVRC15}, and compare our results with state-of-the-art multi-bit networks.
The results for the full precision version are provided by Pytorch~\cite{bib:NIPSWorkshop17:Paszke}.
We choose ResNet18, as it is a popular model on ImageNet used in the previous quantization schemes.
ResNet34 is a deeper network used more in recent quantization papers.
The structures of each layer chosen for ALQ\xspace are all pointwise except for the first layer (kernelwise) and the last layer (subchannelwise(2)).
After each pruning, the network is retrained to recover the accuracy degradation with 10 epochs of optimizing $\bm{B}_g$ and 5 epochs of optimizing $\bm{\alpha}_g$.
The pruning ratio is 15\%.
\begin{table}[tbp!]
\centering
\caption[Comparison with quantized networks (ResNet18/34 on ImageNet).] {Comparison with state-of-the-art quantized networks (ResNet18/34 on ImageNet). ``FP'' denotes the full precision baseline. ``CR'' denotes the compression ratio related to full precision. $I_W$ denotes the average bitwidth of weights. $I_A$ denotes the bitwidth of activations. }
\label{ch2-tab:ResNet}
\footnotesize
\begin{tabular}{cccc}
\toprule
Method & $I_W$/$I_A$ & Weights & Top-1 \\ \hline
\multicolumn{4}{c}{ResNet18} \\ \hdashline
FP~\cite{bib:NIPSWorkshop17:Paszke} & 32/32 & 46.72MB & 69.8\% \\
TWN~\cite{bib:NIPS16:Li} & 2/32 & 2.97MB & 61.8\% \\
LR~\cite{bib:ICLR18:Shayer} & 2/32 & 4.84MB & 63.5\% \\
LQ~\cite{bib:ECCV18:Zhang}* & 2/32 & 4.91MB & 68.0\% \\
QIL~\cite{bib:CVPR19:Jung}* & 2/32 & 4.88MB & 68.1\% \\
INQ~\cite{bib:ICLR17:Zhou} & 3/32 & 4.38MB & 68.1\% \\
ABC~\cite{bib:NIPS17:Lin} & 5/32 & 7.41MB & 68.3\% \\
\textbf{ALQ\xspace} & \textbf{2.00/32} & \textbf{3.44MB} & \textbf{68.9\%} \\
\textbf{ALQ\xspace}$^\mathrm{e}$ & \textbf{2.00/32} & \textbf{3.44MB} & \textbf{70.0\%} \\
BWN~\cite{bib:ECCV16:Rastegari}* & 1/32 & 3.50MB & 60.8\% \\
LR~\cite{bib:ICLR18:Shayer}* & 1/32 & 3.48MB & 59.9\% \\
DSQ~\cite{bib:ICCV19:Gong}* & 1/32 & 3.48MB & 63.7\% \\
\textbf{ALQ\xspace} & \textbf{1.01/32} & \textbf{1.77MB} & \textbf{65.6\%} \\
\textbf{ALQ\xspace}$^\mathrm{e}$ & \textbf{1.01/32} & \textbf{1.77MB} & \textbf{67.7\%} \\
LQ~\cite{bib:ECCV18:Zhang}* & 2/2 & 4.91MB & 64.9\% \\
PACT~\cite{bib:arXiv18:Choi}* & 2/2 & 4.88MB & 64.4\% \\
QIL~\cite{bib:CVPR19:Jung}* & 2/2 & 4.88MB & 65.7\% \\
DSQ~\cite{bib:ICCV19:Gong}* & 2/2 & 4.88MB & 65.2\% \\
GroupNet~\cite{bib:CVPR19:Zhuang}* & 4/1 & 7.67MB & 66.3\% \\
RQ~\cite{bib:ICLR19:Louizos} & 4/4 & 5.93MB & 62.5\% \\
ABC~\cite{bib:NIPS17:Lin} & 5/5 & 7.41MB & 65.0\% \\
\textbf{ALQ\xspace} & \textbf{2.00/2} & \textbf{3.44MB} & \textbf{66.4\%} \\
SYQ~\cite{bib:CVPR18:Faraone}* & 1/8 & 3.48MB & 62.9\% \\
LQ~\cite{bib:ECCV18:Zhang}* & 1/2 & 3.50MB & 62.6\% \\
PACT~\cite{bib:arXiv18:Choi}* & 1/2 & 3.48MB & 62.9\% \\
\textbf{ALQ\xspace} & \textbf{1.01/2} & \textbf{1.77MB} & \textbf{63.2\%} \\ \hline
\multicolumn{4}{c}{ResNet34} \\ \hdashline
FP~\cite{bib:NIPSWorkshop17:Paszke} & 32/32 & 87.12MB & 73.3\% \\
\textbf{ALQ\xspace}$^\mathrm{e}$ & \textbf{2.00/32} & \textbf{6.37MB} & \textbf{73.6\%} \\
\textbf{ALQ\xspace}$^\mathrm{e}$ & \textbf{1.00/32} & \textbf{3.29MB} & \textbf{72.5\%} \\
LQ~\cite{bib:ECCV18:Zhang}* & 2/2 & 7.47MB & 69.8\% \\
QIL~\cite{bib:CVPR19:Jung}* & 2/2 & 7.40MB & 70.6\% \\
DSQ~\cite{bib:ICCV19:Gong}* & 2/2 & 7.40MB & 70.0\% \\
GroupNet~\cite{bib:CVPR19:Zhuang}* & 5/1 & 12.71MB & 70.5\% \\
ABC~\cite{bib:NIPS17:Lin} & 5/5 & 13.80MB & 68.4\% \\
\textbf{ALQ\xspace} & \textbf{2.00/2} & \textbf{6.37MB} & \textbf{71.0\%} \\
TBN~\cite{bib:ECCV18:Wan}* & 1/2 & 4.78MB & 58.2\% \\
LQ~\cite{bib:ECCV18:Zhang}* & 1/2 & 4.78MB & 66.6\% \\
\textbf{ALQ\xspace} & \textbf{1.00/2} & \textbf{3.29MB} & \textbf{67.4\%} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item
*: both first and last layers are unquantized.
\item
$^\mathrm{e}$: adding extra epochs of \textit{STE with loss-aware} in the end.
\end{tablenotes}
\end{table}
\fakeparagraph{Results}
\tabref{ch2-tab:ResNet} shows that ALQ\xspace obtains the highest accuracy with the smallest network size on ResNet18/34, in comparison with other weight and weight+activation quantization approaches.
$I_W$ and $I_A$ are the quantization bitwidth for weights and activations respectively.
Several schemes (marked with *) are not able to quantize the first and last layers, since quantizing these two layers like the others would cause a huge accuracy degradation \cite{bib:ECCV18:Wan,bib:ICLR18:Mishra2}.
It is worth noting that the first and last layers with floating-point values occupy $2.09$MB storage in ResNet18/34, which is still a significant storage consumption on such a low-bit network.
For example, we can observe this enormous difference between TWN~\cite{bib:NIPS16:Li} and LQ-Net~\cite{bib:ECCV18:Zhang} in~\tabref{ch2-tab:ResNet}.
The involved floating-point computations in both layers can hardly be accelerated with bitwise operations either.
For the reported ALQ\xspace models in~\tabref{ch2-tab:ResNet}, as several layers have already been pruned to an average bitwidth below $1.0$-bit (e.g.,\xspace see \figref{ch2-fig:adapt}), we add an extra 50 epochs of our \textit{STE with loss-aware} in the end, as discussed in \secref{ch2-sec:experiment_convergence}.
The learning rate is $10^{-4}$, and gradually decays by $0.98$ per epoch.
The final accuracy is further boosted by around $1\%\sim2\%$; see the results marked with $^\mathrm{e}$.
With such an extremely low bitwidth, maintained full precision weights help to calibrate some aggressive steps of quantization, which slowly converges to a local optimum with a higher accuracy for a large network.
Recall that maintaining full precision parameters means STE is required to approximate the gradients, since the true-gradients only relate to the quantized parameters used in the forward propagation.
However, for a quantization bitwidth higher than two ($>2.0$-bit), the quantizer can take smooth steps, and the gradient approximation due to STE inevitably harms the training.
Thus in this case, the true-gradient optimizer, i.e.,\xspace \algoref{ch2-alg:bases}, can converge to a better local optimum, faster and more stably.
ALQ\xspace can quantize ResNet18/34 with 2.00-bit (across all layers) \textit{without any accuracy loss}.
To the best of our knowledge, this is the first time that the 2-bit weight-quantized ResNet18/34 can achieve the accuracy level of its full precision version, even if some prior schemes keep the first and last layers unquantized.
These results further demonstrate the high-performance of the pipeline in ALQ\xspace.
\section{Summary}
\label{ch2-sec:summary}
In this chapter, we propose ALQ\xspace, an adaptive loss-aware trained quantizer for multi-bit networks.
ALQ\xspace enables efficient inference on edge devices.
ALQ\xspace reduces the redundancy in the quantization bitwidth to achieve both storage efficiency and computation efficiency.
Unlike prior quantized networks that (\textit{i}) often assign an empirical global bitwidth across layers and (\textit{ii}) train the quantizer by minimizing the reconstruction error to the full precision weights,
ALQ\xspace (\textit{i}) allocates an adaptive bitwidth to different weights w.r.t. the loss, and (\textit{ii}) optimizes the multi-bit quantizer by minimizing the loss as well.
The adaptive bitwidth assignment and the direct optimization objective allow ALQ\xspace to find and remove more redundant bitwidth, thus achieving a better trade-off between the resource constraints and the model accuracy.
The main contributions are summarized as follows,
\begin{itemize}
\item
ALQ\xspace introduces a multi-bit network with adaptive quantization bitwidth across different groups of weights.
Such an adaptive multi-bit network not only achieves a high compression ratio on static weight storage by only assigning a high bitwidth to loss-critical weights,
but also replaces the expensive floating-point operations with a single set of cheaper operations from \texttt{xnor}, \texttt{popcount} and accumulations.
\item
ALQ\xspace trains the multi-bit quantized weights by directly minimizing the loss function.
This loss-aware quantization results in a faster convergence rate as well as a higher final accuracy than state-of-the-art STE-based quantization training that minimizes the reconstruction error.
\item
Via entirely pruned groups (i.e.,\xspace 0-bit weights in some groups), ALQ\xspace enables extremely low-bit networks with an average bitwidth below 1-bit yet with \textit{dense tensor form}.
It breaks the traditional lower bound of the quantized network, i.e.,\xspace binary network, thus providing more visions and possibilities for the network compression.
Experiments on CIFAR10 show that ALQ\xspace can compress VGGNet to an average bitwidth of $0.4$-bit, while yielding a higher accuracy than other binary networks~\cite{bib:ECCV16:Rastegari,bib:NIPS15:Courbariaux}.
\item
ALQ\xspace is the first loss-aware quantization scheme for multi-bit networks and eliminates the need for approximating gradients and retaining full precision weights.
ALQ\xspace is also able to quantize the first and last layers without incurring a notable accuracy loss.
\end{itemize}
This chapter studied how to compress the network for efficient inference given the fixed on-device resource constraints.
In the next chapter, we will further study how to adapt the network on edge devices when the resource constraint is varied along the lifetime.
Although we may deploy multiple ALQ\xspace-quantized multi-bit networks with different average bitwidths to execute under different resource budgets, this naive solution can only result in a subpar performance, as it requires several times more storage consumption in comparison to a single (multi-bit) network.
However, the solution proposed in the next chapter can meet the varying resource constraints without incurring extra storage overhead.
\chapter[Adaptation on Edge Devices]{Adaptation on Edge Devices}
\label{ch3:adaptation}
In \chref{ch2:inference}, we explored how DNNs can be compressed while respecting resource constraints.
However, the resource constraints on the target edge devices may change dynamically during runtime.
To maximize model accuracy during on-device inference, in this chapter we deploy a DNN that can adapt to the different resource constraints on the edge device.
\fakeparagraph{Main Resource Constraints}
The different resource constraints during on-device inference may be due to for example the available battery power or the allowed inference time.
Similar to \chref{ch2:inference}, we mainly adopt two widely used proxies to quantify the (varying) resource consumption, (\textit{i}) \textit{the storage of weights}, which affects the amount of memory fetching and static memory consumption, and (\textit{ii}) \textit{the number of operations for inference}, which is relevant to the computing energy and the inference latency.
\fakeparagraph{Principles}
Faced with the varying resource constraints on edge devices, existing synthesis methods require either deploying multiple individual networks with different resource demands or sampling sub-networks along structured dimensions, which leads to poor performance.
However, we propose to sample sub-networks from the backbone network through row-based unstructured sparsity, and propose a novel compressed sparse row (CSR) format for efficient sparse inference.
Our synthesis methods reduce redundancy among multiple sub-networks through weight sharing and architecture sharing, resulting in storage efficiency and re-configuration efficiency.
The contents of this chapter are established mainly based on the paper ``DRESS: Dynamic REal-time Sparse Subnets'' that is published on Efficient Deep Learning for Computer Vision CVPRWorkshop (ECV), 2022 \cite{bib:CVPRWorkshop22:Qu}.\footnote{This work was done when Zhongnan Qu was a research intern at Meta, and it was collaborated with the colleagues at Meta Reality Labs Research. }
\section{Introduction}
\label{ch3-sec:introduction}
Extensive synthesis works \cite{bib:ICLR16:Han,bib:ECCV16:Rastegari,bib:ACMTrans21:Sunny,bib:ACMTrans21:Oh} have proposed to first compress a pretrained model according to the given resource constraints, and then compile the compressed model to deploy on target edge devices.
However, the time constraints of many practical embedded systems may dynamically change at run-time.
For example, when detecting hand positions on a workbench in real-time, the allowed inference time varies during the entire manipulation.
Compared to general movement, engineers slow down their hand movement when performing critical tasks, e.g.,\xspace grasping objects, which grants the DNN a longer execution time when higher perceptive precision is required.
Some similar scenarios also include autonomous vehicles' reaction time on city roads and highways due to different operating speeds.
On the other hand, the available resources on the target edge device may also vary along the lifetime, e.g.,\xspace the battery energy, the allocatable RAM.
All considerations mentioned above indicate that the deployed inference model should maintain a dynamic capacity, such that the model can be adapted and executed under different resource constraints.
\fakeparagraph{Challenges}
Making DNNs adaptable on resource-constrained devices is even more challenging.
Existing synthesis methods either fail to compile DNNs that can adapt to varying resource constraints, or result in subpar performance.
Traditional compression techniques, e.g.,\xspace pruning, quantization, only result in a static inference model.
Although the compressed model is mapped onto the target device, it cannot meet various resource requirements.
As an alternative, we may compile for example multiple networks with different sparsity levels, which however need several times more storage consumption in comparison to a single sparse network.
Recent works \cite{bib:ICLR19:Yu,bib:ICLR20:Cai} show that sub-networks from a pretrained backbone network can reach a decent performance compared to the sub-networks trained individually from scratch.
Nevertheless, they only sample sub-network architectures along hand-crafted structured dimensions, e.g.,\xspace width, kernel size, which leads to sub-optimal results.
Switching among multiple compiled architectures on edge devices may also cause extra re-configuration overhead.
In this chapter, we propose a novel synthesis technique, \textbf{D}ynamic \textbf{RE}al-time \textbf{S}parse \textbf{S}ubnets (DRESS).
DRESS\xspace samples sub-networks from the backbone network through row-based unstructured sparsity, while ensuring that nonzero weights of the higher sparsity networks are reused by the lower sparsity networks.
This way, the overall memory consumption is bounded by the network with the lowest sparsity and does not depend on the number of networks, resulting in \textit{memory efficiency}; all sparse sub-networks leverage the same architecture as the backbone network, leading to \textit{re-configuration efficiency}.
The sub-network with a higher sparsity (i.e.,\xspace fewer nonzero weights) needs a smaller amount of on-device memory fetching and fewer FLOPs, and thus shall be adopted for inference under more severe resource constraints, e.g.,\xspace a lower energy budget or limited inference time.
Specifically, we (\textit{i}) sample weights w.r.t. their magnitudes in a row-based unstructured manner; (\textit{ii}) train all sampled sparse sub-networks with weighted loss in parallel;
(\textit{iii}) further fine-tune batch normalization for each sub-network individually.
\section{Related Work}
\label{ch3-sec:related}
\subsection{Network Compression \& Deployment}
\label{ch3-sec:related_compression}
Network compression focuses on trimming down the DNN model size with negligible performance degradation.
Commonly used compression techniques can be divided into three categories, (\textit{i}) designing efficient network architectures manually \cite{bib:arXiv17:Howard,bib:CVPR18:Sandler} or automatically using neural architecture search \cite{bib:ICLR20:Cai,bib:arXiv19:Yu,bib:ECCV20:Yu,bib:ACMTrans21:Mendis}; (\textit{ii}) quantizing weight values into lower bitwidth to use cheaper operations and reduce the storage consumption \cite{bib:ECCV16:Rastegari,bib:ACMTrans19:Yu,bib:ACMTrans21:Sunny}; (\textit{iii}) structured \cite{bib:ECCV20:Li,bib:IEEETrans20:Li,bib:IEEETrans20:Wu}/unstructured \cite{bib:ICLR16:Han,bib:ICLR20:Renda,bib:ICML21:Evci,bib:NIPS21:Peste,bib:ACMTrans21:Oh,bib:IEEETrans20:Ahmad} pruning unimportant weights as zeros to reduce the number of operations and the number of nonzero weights.
The compressed model is further optimized by some compilation libraries in order to speed up inference on target edge platforms, e.g.,\xspace CMSIS-NN for Arm Cortex-M CPUs \cite{bib:CMSIS-NN}, XNNPACK for Arm64 and ArmV7 CPUs \cite{bib:XNNPACK}, Vela for Ethos-U NPU \cite{bib:Vela}.
Note that the compiled model often only supports a static computation graph due to the limited resources on edge devices \cite{bib:CMSIS-NN,bib:XNNPACK,bib:Vela}.
In this chapter, we focus on unstructured pruning among others, since (\textit{i}) it often yields a high compression ratio \cite{bib:ICLR20:Renda}; (\textit{ii}) the networks with different unstructured sparsity may share the same network architecture, i.e.,\xspace the same compiled computation graph.
Furthermore, some recent libraries e.g.,\xspace XNNPACK include fast kernels for sparse matrix-dense matrix multiplication, which enables sparse DNN acceleration on edge platforms \cite{bib:CVPR20:Elsen,bib:IEEETrans20:Li2}.
\subsection{Dynamic Networks}
\label{ch3-sec:related_dynamic}
Dynamic networks aim at a better trade-off between inference accuracy and average inference efficiency, by adapting network structures or network parameters according to the inputs during inference \cite{bib:PAMI21:Han}.
Among them, some works propose allocating less computation to canonical data samples, by skipping layers \cite{bib:ICLR18:Huang}, pruning unimportant channels \cite{bib:CVPR21:Li,bib:IEEETrans20:Wu}, or selecting a subset of salient pixels \cite{bib:CVPR20:Verelst}.
Although these sample-wise dynamic networks may achieve a smaller inference cost averaged over different samples, they cannot adapt the model to fit different resource budgets.
In addition, to achieve data-dependent adaptiveness, they often incur an additional computation burden, e.g.,\xspace hard attention or gating modules \cite{bib:PAMI21:Han}.
\subsection{Anytime Networks (Sub-networks)}
\label{ch3-sec:related_anytime}
Anytime networks refer to networks whose sub-networks can be executed separately with less resource consumption while achieving a satisfactory performance.
DRESS\xspace falls into the same scope of anytime networks.
MSDNet \cite{bib:ICLR18:Huang} densely connects multiple convolutional layers in both depth direction and scale direction, such that the computation can be saved by early-exiting from a certain layer.
\cite{bib:arXiv17:Hu} introduces an adaptive weighted loss to optimize the network with various depths.
Slimmable networks \cite{bib:ICLR19:Yu,bib:ICCV19:Yu} propose to train a single model which supports multiple width multipliers (i.e.,\xspace number of channels) in each layer.
\cite{bib:ICLR20:Cai} suggests searching network architectures with different kernel sizes, depths, and widths in a single pretrained once-for-all network.
Subflow \cite{bib:RTAS20:Lee} executes only a sub-graph of the full DNN by activating partial neurons given the varying time constraints.
State-of-the-art anytime networks always sample sub-networks from the backbone network along hand-crafted structured dimensions, e.g.,\xspace depth, width, kernel size, neuron.
As zero weights have no effects on the calculation, anytime networks actually perform structured pruning on the backbone network, which could yield a subpar performance in comparison to unstructured sampling.
In addition, the resulting sub-networks often have different network architectures, e.g.,\xspace different kernel sizes.
When adopting these sub-networks on edge devices, the re-configuration of the computation graph may bring extra overhead.
On the other hand, SP-Net \cite{bib:arXiv20:Guerra} suggests adjusting the quantization bitwidth on demand, which however requires specialized integer arithmetic units for efficient computing.
\subsection{Weight Sharing}
\label{ch3-sec:related_sharing}
Sub-networks rely on weight reusing (sharing).
Except for sub-networks, weight sharing among different networks is also widely used in other settings.
Multi-task learning \cite{bib:NIPS18:Sener} reuses partial weights of networks performing diverse tasks to reduce memory consumption. However, these methods are inapplicable in our scenarios, which target a single task with varying resource constraints.
Neural architecture search (NAS) applied in \cite{bib:ICLR20:Cai,bib:arXiv19:Yu,bib:ECCV20:Yu,bib:ICCV21:Chu} maintains a single set of shared weights (also known as \textit{supernet}) when searching different architectures to reduce the training effort.
Note that NAS is orthogonal to our method since the searched optimal architecture can be used as our backbone network.
\section{Dynamic Real-time Sparse Subnets}
\label{ch3-sec:method}
\subsection{Problem Definition}
\label{ch3-sec:problem}
We aim at sampling multiple subnets from a backbone network.
The backbone network is a conventional DNN consisting of $L$ convolutional (\texttt{conv}) layers or fully connected (\texttt{fc}) layers.
These subnets have different resource demands and thus can be adapted to different resource availabilities.
Since the subnets have the same architecture as the backbone network, they can share a single compiled architecture to achieve re-configuration efficiency;
the nonzero weights of the subnet with a higher sparsity are reused by the subnet with a lower sparsity to achieve memory efficiency.
This way, we only need to store a single table for the lowest sparsity subnet, including its nonzero weights sorted w.r.t. importance and their corresponding indices.
Accordingly, the other subnets can be built from the most important weights according to a pre-defined sparsity, together with the compiled architecture.
Assume that we sample $K$ sparse subnets, then the preliminary problem is defined as,
\begin{eqnarray}
\min_{\bm{w},\bm{m}_k} & \ell(\bm{w}\odot\bm{m}_k) & \forall k \in 1,...,K \label{ch3-eq:loss} \\
\text{s.t.} & \|\bm{m}_k\|_0 = (1-s_k) \cdot I & \forall k \in 1,...,K \label{ch3-eq:constraints1} \\
& \bm{m}_i \odot \bm{m}_j = \bm{m}_j & \forall 1 \le i<j \le K \label{ch3-eq:constraints2}
\end{eqnarray}
where $\bm{w}$ stands for the weights of the (dense) backbone network; $\bm{m}_k$ stands for the binary mask of the $k$-th subnet; $s_k$ stands for the pre-defined sparsity level.
$\ell(.)$ denotes the loss function, $\|.\|_0$ denotes the L0-norm, $\odot$ denotes the element-wise multiplication.
Note that $\bm{w}\in\mathbb{R}^I$, $\bm{m}_k\in\{0,1\}^I$, where $I$ is the total number of weights.
Clearly, we have $0<s_1<s_2<...<s_K<1$, i.e.,\xspace the first subnet bounds the overall static storage consumption.
We denote by $\bm{w}_k$ the nonzero weights of the $k$-th sparse subnet, i.e.,\xspace $\bm{w}_k=\bm{w}\odot\bm{m}_k$.
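For illustration, the following minimal PyTorch sketch (with toy sizes and mask values chosen by us) shows how the nested masks of \equref{ch3-eq:constraints1}-\equref{ch3-eq:constraints2} carve multiple subnets out of one shared weight tensor:
\begin{verbatim}
import torch

# Toy example with I = 8 weights and K = 2 nested masks
# (sparsity s_1 = 0.5 and s_2 = 0.75; values are illustrative).
w  = torch.randn(8)                                  # backbone weights
m1 = torch.tensor([1., 1., 0., 1., 0., 1., 0., 0.])  # ||m1||_0 = 4
m2 = torch.tensor([1., 0., 0., 1., 0., 0., 0., 0.])  # ||m2||_0 = 2

# Nesting constraint: m1 * m2 == m2, i.e., the nonzeros of the
# sparser subnet are a subset of those of the denser one.
assert torch.equal(m1 * m2, m2)

w1 = w * m1  # subnet with sparsity 0.5
w2 = w * m2  # subnet with sparsity 0.75, reusing w1's nonzero weights
\end{verbatim}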
\begin{algorithm}[!ht]
\caption{Dynamic REal-time Sparse Subnets}\label{ch3-alg:DRESS}
\KwIn{Initial random weights $\bm{w}$, training dataset $\mathcal{D}_\text{tr}$, validation dataset $\mathcal{D}_\text{val}$, overall sparsity $\{s_k\}_{k=1}^K$, normalized loss weights $\{\pi_k\}_{k=1}^K$}
\KwOut{Optimized weights $\bm{w}$, binary masks $\{\bm{m}_k\}_{k=1}^K$}
\tcc{Dense pre-training}
Train dense network $\bm{w}$ with traditional optimizer\;
\tcc{DRESS training}
Allocate layer-wise sparsity $\{s_{k,l}\}_{l=1}^L$ for each $s_k$\;
Initiate $\bm{w}^0=\bm{w}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
\tcp{The $q$-th training iteration}
Fetch mini-batch from $\mathcal{D}_\text{tr}$\;
Initialize backbone-net gradient $\bm{g}(\bm{w}^{q-1})=\bm{0}$\;
\For {$k \leftarrow 1$ \KwTo $K$} {
Sample a subnet with sparsity $\{s_{k,l}\}_{l=1}^L$ and get its mask $\bm{m}_k$\;
Get sparse subnet $\bm{w}^{q-1}_k=\bm{w}^{q-1}\odot\bm{m}_k$\;
Back-propagate subnet gradient $\bm{g}(\bm{w}^{q-1}_k)=\pi_k\cdot\frac{\partial\ell(\bm{w}^{q-1}_k)}{\partial\bm{w}^{q-1}_k}$\;
Accumulate backbone-net gradient $\bm{g}(\bm{w}^{q-1})=\bm{g}(\bm{w}^{q-1})+\bm{g}(\bm{w}^{q-1}_k)\odot\bm{m}_k$\;
}
Compute optimization step $\Delta\bm{w}^{q}$ with $\bm{g}(\bm{w}^{q-1})$\;
Update $\bm{w}^{q} = \bm{w}^{q-1} + \Delta\bm{w}^{q}$\;
\uIf{Higher average epoch accuracy on $\mathcal{D}_\text{val}$}
{Save $\bm{w}=\bm{w}^q$ and $\{\bm{m}_k\}_{k=1}^K$\;}
\Else{Re-allocate layer-wise sparsity $\{s_{k,l}\}_{l=1}^L$ for each $s_k$\;}
}
\tcc{Post-training on batch normalization (BN)}
\For {$k \leftarrow 1$ \KwTo $K$} {
Load $\bm{w}$ and $\bm{m}_k$\;
Fine-tune BN layers of subnet $\bm{w}\odot\bm{m}_k$\;
}
\end{algorithm}
In the following sections, we detail how to solve \equref{ch3-eq:loss}-\equref{ch3-eq:constraints2} in our DRESS\xspace synthesis approach.
DRESS\xspace consists of three training stages as discussed below.
The overall pipeline is shown in \algoref{ch3-alg:DRESS}.
\begin{itemize}
\item
\underline{Dense Pre-Training:} The backbone network is trained from scratch with a traditional optimizer to provide a good initialization for the following sparse training.
\item
\underline{DRESS\xspace Training:} The multiple sparse subnets are sampled from the backbone network (\secref{ch3-sec:sample}, \secref{ch3-sec:store}) and are jointly trained in parallel with weighted loss (\secref{ch3-sec:optimize}).
\item
\underline{Post-Training on Batch Normalization:} Batch normalization (BN) layers are further optimized individually for each subnet to better reveal the statistical information (\secref{ch3-sec:boost}).
\end{itemize}
\subsection{How to Sample Sparse Subnets}
\label{ch3-sec:sample}
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch3/ForwardBackward.png}
\caption[The computation graph of DRESS\xspace.]{The computation graph used in parallel training multiple subnets.
The orange block stands for the leaf variable to be optimized; the blue blocks stand for intermediate variables; the green blocks stand for computation units.}
\label{ch3-fig:approach}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/cosine.pdf}
\caption[The cosine similarity between the loss gradients of different subnets.]{The cosine similarity between the loss gradients of 5 subnets (with sparsity 0.8,0.9,0.95,0.98,0.99) and that of the lowest sparsity subnet (with sparsity 0.8) along the training iterations.
We show two typical layers in ResNet20: the last \texttt{conv} layer of the first block and the \texttt{fc} layer.}
\label{ch3-fig:cosine}
\end{figure}
Unlike traditional anytime networks that sample subnets along structured dimensions, DRESS\xspace samples subnets weight-wise, which dramatically enlarges the sampling space.
Recall that we introduce $K$ binary masks $\bm{m}_{1:K}$ to indicate if the weight is selected in each subnet.
A naive approach could iteratively sample the $K$ subnets, where each iteration exhaustively searches for the best-performing subnet inside the current subnet.
Yet this naive approach is computationally prohibitive: the exhaustive search could only be conducted rarely, if at all.
To reduce the complexity, we propose to greedily sample the subnet based on the importance of weights.
Following the prior pruning works \cite{bib:ICLR16:Han,bib:ICLR19:Frankle,bib:ICLR20:Renda}, the importance is measured by the weight magnitudes.
Given an overall sparsity level $s_k$, the $(1-s_k)\cdot I$ weights with the largest magnitudes will be sampled and are used to build the subnet.
However, it is still infeasible to conduct such a global sorting across all layers in each training iteration.
Instead, the weights are only sorted and sampled inside each layer according to a layer-wise sparsity $s_{k,l}$, where $l$ denotes the layer index.
The global sorting, i.e.,\xspace the (re-)allocation of layer-wise sparsity, is conducted only when the average accuracy of the subnets no longer improves, see \algoref{ch3-alg:DRESS}.
During (re-)allocation, the weights of all \texttt{conv} and \texttt{fc} layers with the largest magnitudes will be selected in sequence until reaching the overall sparsity $s_k$, and the layer-wise sparsity can be then calculated accordingly.
Note that the (dense) backbone network is maintained and continuously updated when training sampled subnets.
In comparison to traditional pruning, where only the nonzero weights at fixed locations are fine-tuned, our sparse subnets are re-sampled from the backbone network in each training iteration.
This flexible mechanism is crucial to acquire multiple high-performed subnets, see the ablation results in \secref{ch3-sec:ablation_sampling}.
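As an illustration of the layer-wise sampling described above, the following PyTorch sketch (function name and signature are ours) keeps the top-magnitude weights of a single layer:
\begin{verbatim}
import torch

def sample_layer_mask(w, layer_sparsity):
    # Greedy magnitude-based sampling inside one layer: keep the
    # (1 - layer_sparsity) fraction of weights with largest magnitudes.
    n_keep = int(round((1.0 - layer_sparsity) * w.numel()))
    idx = torch.topk(w.abs().flatten(), n_keep).indices
    mask = torch.zeros(w.numel(), device=w.device, dtype=w.dtype)
    mask[idx] = 1.0
    return mask.view_as(w)
\end{verbatim}
Calling this function for every layer with its allocated sparsity $s_{k,l}$ yields the mask $\bm{m}_k$ of the $k$-th subnet.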
\subsection{How to Optimize Subnets}
\label{ch3-sec:optimize}
With sampled binary masks, we can now build and train the subnets.
Our concept for optimizing subnets is based on the key insight: \textit{in comparison to iterative training of subnets in progressively decreased/increased sparsity, parallel training allows multiple subnets to be sampled and optimized jointly thus yields higher performance.}
Experimental results in \secref{ch3-sec:ablation_iterative} show that parallel training multiple subnets always yields a higher prediction accuracy than iterative training.
As a possible explanation, the optimizer may get stuck in a bad local optimum around the previous subnet during iterative training, whereas parallel training searches multiple subnets jointly.
We thus adopt parallel training in DRESS\xspace as in \algoref{ch3-alg:DRESS}.
In parallel training, \equref{ch3-eq:loss} can be re-written as,
\begin{equation}
\min_{\bm{w},\bm{m}_k}~~~\sum_{k=1}^K \pi_k \cdot \ell(\bm{w}\odot\bm{m}_k)
\label{ch3-eq:loss_parallel}
\end{equation}
where $\pi_k$ is the normalized scale ($\sum_{k=1}^K \pi_k = 1$) used to weight $K$ loss items, which will be discussed later.
In fact, this process determines a threshold $t_k$ for the $k$-th sparsity level: the mask value $m_{k,i}=1$ if $|w_i|\ge t_k$, and 0 otherwise, $\forall i\in1,...,I$.
$t_k$ is set to the value such that $(1-s_k)$ of weights have a larger absolute value than $t_k$.
Clearly, we have $t_1<t_2<...<t_K$ due to the constraints of \equref{ch3-eq:constraints1}-\equref{ch3-eq:constraints2}.
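In code, $t_k$ amounts to a quantile of the weight magnitudes; a one-function PyTorch sketch (assuming the backbone weights are available as a single flattened tensor):
\begin{verbatim}
import torch

def threshold(w, s_k):
    # t_k is (up to ties and interpolation) the s_k-quantile of the
    # magnitudes: roughly a (1 - s_k) fraction of weights exceeds it.
    return torch.quantile(w.abs().flatten(), s_k)
\end{verbatim}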
In each training iteration, we sample $K$ sparse subnets $\bm{w}_{1:K}$ from the backbone network $\bm{w}$.
Each subnet's loss function is weighted by $\pi_k$ and summed together.
This weighted sum is to be minimized and thus is used to compute the gradients of $\bm{w}$, see the optimization graph in \figref{ch3-fig:approach}.
When parallel training multiple subnets, the gradients of the backbone network are accumulated by the (weighted) loss gradients back-propagated through all $K$ sparse subnets, as,
\begin{equation}
\bm{g}(\bm{w}) = \sum_{k=1}^K \pi_k \frac{\partial\ell(\bm{w}_k)}{\partial\bm{w}_k} \odot \bm{m}_k
\end{equation}
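The following simplified PyTorch sketch illustrates one such iteration; the flat weight layout and \texttt{forward\_loss} (a function that evaluates the network under the given masked weights on the current mini-batch) are assumptions of this sketch rather than the exact implementation:
\begin{verbatim}
import torch

def parallel_step(w, masks, pi, forward_loss, optimizer):
    # w: flat backbone weights (leaf tensor, requires_grad=True)
    # masks: K binary masks m_k; pi: K normalized loss weights pi_k
    optimizer.zero_grad()
    loss = sum(pi_k * forward_loss(w * m_k)
               for m_k, pi_k in zip(masks, pi))
    # Since d(w * m_k)/dw = m_k, autograd accumulates exactly
    # g(w) = sum_k pi_k * (dl(w_k)/dw_k) * m_k into w.grad.
    loss.backward()
    optimizer.step()  # the dense backbone absorbs the accumulated gradient
\end{verbatim}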
We parallelly train 5 subnets of ResNet20 \cite{bib:CVPR16:He} with sparsity $s_{1:5}=0.8,0.9,0.95,0.98,0.99$ on CIFAR10, and weight the 5 loss items equally, i.e.,\xspace $\pi_{1:5}=0.2$.
We plot the cosine similarity between the loss gradients of 5 subnets (i.e.,\xspace $(\partial\ell(\bm{w}_k)/\partial\bm{w}_k) \odot \bm{m}_k$ with $k=1,...,5$) and that of the lowest sparsity subnet (i.e.,\xspace $(\partial\ell(\bm{w}_1)/\partial\bm{w}_1) \odot \bm{m}_1$) along with the training iterations, see in \figref{ch3-fig:cosine}.
It shows that the loss gradients of different subnets are always positively correlated with each other.
The results also verify that multiple subnets are jointly trained towards the optimal point in the loss landscape.
Because of \equref{ch3-eq:constraints2}, the nonzero weights in higher sparsity subnets (e.g.,\xspace $\bm{w}_5$) are also selected by other subnets, which means these weights are optimized with a larger step size than other weights.
To balance the step size, the subnet with a higher sparsity (fewer trainable weights) shall be assigned a smaller weight $\pi_k$ on its loss.
We propose to weight the loss items by the ratio of trainable weights (i.e.,\xspace $1-s_k$) together with a correction factor $\gamma$,
\begin{equation}
\alpha_k = (1-s_k)^\gamma
\label{ch3-eq:gamma}
\end{equation}
\begin{equation}
\pi_k = \alpha_k/\sum_{k=1}^K\alpha_k
\label{ch3-eq:alpha}
\end{equation}
The normalized weights $\pi_k$ provide control over the significance of subnets in parallel training.
$\gamma=0$ means weighting loss items equally.
Experimentally, we find that $\gamma\in[0.5,1]$ often yields a satisfactory performance, see \secref{ch3-sec:ablation_gamma}.
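The mapping from $\gamma$ to the normalized weights is a two-liner; the helper below is a sketch of ours and reproduces the worked values used in \secref{ch3-sec:ablation_gamma}:
\begin{verbatim}
def loss_weights(sparsities, gamma=0.5):
    # pi_k = (1 - s_k)^gamma / sum_j (1 - s_j)^gamma
    alpha = [(1.0 - s) ** gamma for s in sparsities]
    return [a / sum(alpha) for a in alpha]

# loss_weights([0.8, 0.9, 0.95, 0.98, 0.99], gamma=0.5)
# -> approximately [0.36, 0.26, 0.18, 0.12, 0.08]
\end{verbatim}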
\subsection{How to Store Subnets}
\label{ch3-sec:store}
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\textwidth]{./figs/ch3/csr1.png}
\caption[Traditional CSR format of unstructured sparse tensor.]{Traditional CSR format of unstructured sparse tensor.
The example weight tensor is from a $1\times1$ \texttt{conv} layer with 8 input channels and 4 output channels.
The sparse tensor has a row dimension along each output channel, i.e.,\xspace each \texttt{conv} filter.
There are 3 sparse subnets with sparsity 0.5, 0.75, and 0.875.
Each subnet corresponds to a threshold value of 0.7, 1.3, and 1.7, respectively.
}
\label{ch3-fig:csr1}
\end{figure}
To realize efficient storage and computation, current compilation libraries often encode sparse tensors in compressed sparse row (CSR) format (or some similar formats e.g.,\xspace Block-CSR) \cite{bib:XNNPACK,bib:CVPR20:Elsen,bib:IEEETrans20:Li2}.
An example CSR format of sparse tensor for a \texttt{conv} layer is depicted in \figref{ch3-fig:csr1}.
When adopting traditional CSR format to store subnets generated by DRESS\xspace, we need to store (\textit{i}) the subnet with the lowest sparsity including the row indices, the column indices, and the nonzero values, (\textit{ii}) $K$ threshold values $t_{1:K}$.
However, when selecting the $k$-th subnet for inference, all nonzero weights need to be fetched and compared with $t_k$.
Although we may build specialized indexing for every subnet individually, it in turn results in more memory cost depending on the number of subnets.
To achieve an efficient inference on different sparse subnets while without extra memory overhead, we adopt a \textit{row-based unstructured} sparsity (a.k.a. \textit{N:M fine-grained structure sparsity} \cite{bib:ICLR21:Zhou,bib:NIPS21:Hubara,bib:NIPS21:Sun}), where different rows leverage the same sparsity level.
We denote $N$ as the row size, also the number of weights in each row.
Especially, for sparsity $s_k$, all rows have exactly $(1-s_k)\cdot N$ nonzero weights.
In comparison to conventional unstructured sparsity, this kind of sparsity can also be accelerated with the sparse tensor cores of Nvidia A100 GPUs \cite{bib:arXiv21:Mishra} for both training and inference, and has thus become prevalent recently.
\textit{To our best knowledge, this is the first work that builds multiple sub-networks via fine-grained structure of weight sharing.}
The column indices are stored according to the descending order of the importance (also weight magnitudes) in a two-dimensional table.
The nonzero weights are stored in another table with the same order as the column indices.
This DRESS\xspace CSR format needs to store (\textit{i}) the subnet with the lowest sparsity including the table of the column indices and the table of nonzero weights, (\textit{ii}) $K$ integers $\{(1-s_k)\cdot N\}_{k=1}^K$.
It has overall a similar memory cost as traditional CSR format.
When adopting the $k$-th subnet, we fetch the first $(1-s_k)\cdot N$ columns from both tables as shown in \figref{ch3-fig:csr2}.
The row indices can be reconstructed directly from $(1-s_k)\cdot N$, since every row holds the same number of nonzero weights.
Note that all fetched subnets follow the CSR format and utilize the same compiled network architecture, which allows us to leverage available libraries to achieve a fast inference without re-configuration overhead.
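A minimal sketch of this fetching step, assuming the two tables of the lowest-sparsity subnet are stored as $G\times(1-s_1)N$ arrays sorted per row by descending importance:
\begin{verbatim}
def fetch_subnet(values, col_idx, n_nz):
    # n_nz = (1 - s_k) * N selects the k-th subnet; since every row
    # keeps exactly n_nz entries, the CSR row pointers are implicit:
    # 0, n_nz, 2 * n_nz, ...
    return values[:, :n_nz], col_idx[:, :n_nz]
\end{verbatim}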
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch3/csr2.png}
\caption[DRESS\xspace CSR format of row-based unstructured sparse tensor.]{DRESS\xspace CSR format of row-based unstructured sparse tensor.
The example weight tensor has the same size as \figref{ch3-fig:csr1}.
Each row has 8 weights in total, also the row size $N=8$.
There are 3 sparse subnets with sparsity 0.5, 0.75, and 0.875, as \figref{ch3-fig:csr1}.
Each subnet has 4, 2, and 1 nonzero weights per row, respectively.}
\label{ch3-fig:csr2}
\end{figure}
To obtain DRESS\xspace CSR format, the sampling process needs to be adjusted accordingly.
Especially, for layer $l$, we first pre-define a row size $N_l$ and reshape the weight tensor into rows.
For example, each \texttt{conv} filter corresponds to one row in accordance with the compilation libraries \cite{bib:XNNPACK,bib:CVPR20:Elsen}.
When sampling a subnet in layer $l$, we conduct unstructured sampling in each row individually, while each row has the same sparsity $s_{k,l}$ as in \figref{ch3-fig:csr2}.
Note that when $N_l$ equals the total number of weights, i.e.,\xspace a single row in DRESS\xspace CSR, it turns back into the original unstructured sampling discussed in \secref{ch3-sec:sample}.
In this case, unstructured sampling is conducted in the entire tensor.
Although the resulting sparse tensor can still be stored in the traditional CSR format as in \figref{ch3-fig:csr1}, it does not allow efficient inference due to the extra comparison computation discussed above.
The algorithm for generating the mask with row-based unstructured sampling is presented in \algoref{ch3-alg:rowsampling}.
We focus on sampling in a weight tensor $\bm{w}$ with a predefined row size $N$.
Note that the total number of weights in $\bm{w}$ must be divisible by $N$.
The weight tensor $\bm{w}$ is then reshaped into the form of $\mathbb{R}^{G \times N}$, i.e.,\xspace $N$ weights per row and $G$ rows in total.
Given $K$ sparsity levels $s_{1:K}$, $K$ binary masks with the form of $\{0,1\}^{G \times N}$ are generated.
Binary masks can be reshaped into the original form of the weight tensor accordingly.
\begin{algorithm}[!t]
\caption{Row-based unstructured sampling}\label{ch3-alg:rowsampling}
\KwIn{Weight tensor $\bm{w}\in\mathbb{R}^{G \times N}$, row size $N$, sparsity $\{s_k\}_{k=1}^K$}
\KwOut{Binary masks $\{\bm{m}_k\}_{k=1}^K$}
\For {$k \leftarrow 1$ \KwTo $K$} {
Initiate binary mask $\bm{m}_k=0^{G \times N}$\;
Get the number of nonzero weights per row $N_k^\text{nz}=N\cdot (1-s_k)$\;
}
\For {$g \leftarrow 1$ \KwTo $G$} {
Sort the weight magnitudes of row $\bm{w}_{g,:}$ in descending order\;
\For {$k \leftarrow 1$ \KwTo $K$} {
Set the mask values of $\bm{m}_{k,g,:}$ as 1 for Top-$N_k^\text{nz}$ indices\;
}
}
\end{algorithm}
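\algoref{ch3-alg:rowsampling} can be realized in a few lines of PyTorch; the vectorized function below is a sketch of ours, not the exact implementation:
\begin{verbatim}
import torch

def row_based_sampling(w, sparsities):
    # w has shape (G, N): G rows of N weights each.
    G, N = w.shape
    order = torch.argsort(w.abs(), dim=1, descending=True)
    ranks = torch.argsort(order, dim=1)  # per-row magnitude rank
    masks = []
    for s in sparsities:
        n_nz = int(round(N * (1.0 - s)))  # nonzero weights per row
        masks.append((ranks < n_nz).to(w.dtype))
    return masks
\end{verbatim}
Since all masks are derived from the same per-row ranking, the nesting constraint of \equref{ch3-eq:constraints2} holds by construction.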
\subsection{How to Further Boost Subnets}
\label{ch3-sec:boost}
Batch normalization (BN) layers are critical for the stable training of state-of-the-art DNNs.
Previous synthesis works \cite{bib:ICLR19:Yu,bib:ICCV19:Yu} find that subnets with different widths may cause accumulated errors in batch statistics, and propose to switch BN layers for different subnets.
Multiple subnets in DRESS\xspace share a single architecture, and thus are capable of being optimized in synergy with a shared BN layer.
However, post-training BN layers for each subnet can better calibrate the running statistics, which in turn increases the accuracy.
As BN layers often require a rather small amount of memory and computation in comparison to \texttt{conv}/\texttt{fc} layers, we propose to further fine-tune BN layers for each subnet individually after parallel training, as shown in \algoref{ch3-alg:DRESS}.
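A PyTorch sketch of this post-training stage (assuming affine BN layers; the helper name is ours):
\begin{verbatim}
import torch.nn as nn

def bn_only_parameters(model):
    # Freeze all weights, then re-enable only the BN affine parameters;
    # running statistics are re-estimated on the subnet in train() mode.
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.weight.requires_grad = True
            m.bias.requires_grad = True
            m.reset_running_stats()
    return [p for p in model.parameters() if p.requires_grad]
\end{verbatim}
An optimizer constructed over \texttt{bn\_only\_parameters(model)} then fine-tunes the $k$-th subnet for a few epochs.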
\section{Evaluation}
\label{ch3-sec:experiments}
With the design flow described above, we can now synthesize networks to map onto resource-constrained edge platforms.
To better understand the effectiveness of our algorithm, we first evaluate our algorithm on widely used vision benchmarks in this section.
Then, we compile and deploy the generated subnets on an edge platform in \secref{ch3-sec:deployment} to see the actual performance of the entire synthesis.
\subsection{Benchmarking Details}
\label{ch3-sec:experiment_benchmark}
We implement our algorithm with Pytorch \cite{bib:NIPSWorkshop17:Paszke}, and evaluate on image classification and object detection/instance segmentation tasks.
As prior works \cite{bib:ICLR18:Huang,bib:ICLR19:Yu,bib:ICCV19:Yu,bib:ICLR20:Renda,bib:NIPS21:Peste,bib:ICLR21:Zhou,bib:NIPS21:Sun}, for image classification, we benchmark VGGNet \cite{bib:ICLR15:Simonyan} and ResNet20 \cite{bib:CVPR16:He} on CIFAR10 \cite{bib:CIFAR}, and benchmark ResNet50 \cite{bib:CVPR16:He} and MobileNetV1/V2 \cite{bib:arXiv17:Howard,bib:CVPR18:Sandler} on ImageNet \cite{bib:ILSVRC15}; for object detection, we benchmark Faster-RCNN with ResNet50-FPN on COCO \cite{bib:COCO}; for instance segmentation, we benchmark Mask-RCNN with ResNet50-FPN on COCO \cite{bib:COCO}.
We use Nesterov SGD optimizer with the cosine schedule for learning rate decay.
We report the Top-1 test accuracy of the subnets from the epoch in which the highest average accuracy over all subnets is achieved on the validation dataset.
For all pre-processing and random initialization, we apply the tools provided in Pytorch.
In our experiments, the row-based unstructured sampling is conducted in all \texttt{conv}/\texttt{fc} layers, except for the depthwise \texttt{conv} layers in MobileNetV1/V2.
We found that sparse depthwise \texttt{conv} layers lead to substantially lower accuracy.
As depthwise \texttt{conv} layers only consume a rather small amount of memory and computation \cite{bib:ICML21:Evci,bib:ACMTrans21:Cho}, different subnets share the same dense depthwise \texttt{conv} layers in DRESS\xspace.
In addition, we keep BN layers dense as in \cite{bib:ICLR19:Yu,bib:ICCV19:Yu}.
We set the overall sparsity levels $s_{1:5}=0.95,0.98,0.99,0.995,0.998$ for VGGNet, $s_{1:5}=0.8,0.9,0.95,0.98,0.99$ for ResNet20, $s_{1:4}=0.5,0.8,0.9,0.95$ for ResNet50 and MobileNetV1/V2.
The sparsity levels are averaged over all \texttt{conv}/\texttt{fc} layers.
\subsubsection{VGGNet/ResNet20 on CIFAR10}
\label{ch3-sec:experiment_cifar}
CIFAR10 \cite{bib:CIFAR} is an image classification dataset, which consists of $32\times32$ color images in 10 object classes.
We use the original training dataset with 50000 samples for training, randomly select 2000 samples from the original test dataset (10000 samples in total) for validation, and use the remaining 8000 samples for testing.
We train on 1 Nvidia V100 GPU with a batch size of 128.
\fakeparagraph{VGGNet}
The VGGNet used here is widely adopted in many previous compression works \cite{bib:NIPS15:Courbariaux,bib:ICLR17:Hou,bib:ECCV16:Rastegari} and is a modified version of the original VGG~\cite{bib:ICLR15:Simonyan}.
Its architecture is 2$\times$128C3 - MP2 - 2$\times$256C3 - MP2 - 2$\times$512C3 - MP2 - 2$\times$1024FC - 10SVM/100SVM.
The initial learning rate is set as 0.1; the momentum is set as 0.9; the weight decay is set as 0.0005; the number of training epochs is set as 100.
Note that we use the same training hyperparameters for all three stages in \algoref{ch3-alg:DRESS}.
This also holds true for the following experiments.
\fakeparagraph{ResNet20}
The network architecture is the same as ResNet-20 in the original paper~\cite{bib:CVPR16:He}.
The initial learning rate is set as 0.1; the momentum is set as 0.9; the weight decay is set as 0.0005; the number of training epochs is set as 100.
\subsubsection{ResNet50/MobileNetV1/MobileNetV2 on ImageNet}
\label{ch3-sec:experiment_imagenet}
ImageNet \cite{bib:ILSVRC15} is a large-scale image classification dataset, which consists of high-resolution color images in 1000 object classes.
We use the original training dataset with 1.28 million samples for training, randomly select 10000 samples from the original validation dataset (50000 samples in total) for validation, and use the remaining 40000 samples for testing.
We train on 4 Nvidia V100 GPUs with a batch size of 1024.
\fakeparagraph{ResNet50}
We use the PyTorch-style ResNet50, which is slightly different from the original ResNet-50~\cite{bib:CVPR16:He}.
The down-sampling (stride=2) is conducted in the $3\times3$ \texttt{conv} layer instead of the $1\times1$ \texttt{conv} layer.
The network architecture is the same as ``resnet50'' in~\cite{bib:torchResNet}.
The initial learning rate is set as 0.5; the momentum is set as 0.9; the weight decay is set as 0.0001; the number of training epochs is set as 100.
\fakeparagraph{MobileNetV1}
The network architecture is the same as $1.0\times$ MobileNet-224 in the original paper~\cite{bib:arXiv17:Howard}.
The initial learning rate is set as 0.5; the momentum is set as 0.9; the weight decay is set as 0.00001; the number of training epochs is set as 150.
\fakeparagraph{MobileNetV2}
The network architecture is the same as $1.0\times$ MobileNetV2 in the original paper~\cite{bib:CVPR18:Sandler}.
The initial learning rate is set as 0.1; the momentum is set as 0.9; the weight decay is set as 0.00004; the number of training epochs is set as 300.
\subsubsection{ResNet50-FPN on COCO}
\label{ch3-sec:supplementary_coco}
MS COCO \cite{bib:COCO} is an object detection, segmentation, key-point detection, and captioning dataset.
We use COCO 2017 dataset, which consists of high-resolution annotated images in 80 object classes.
It contains a training dataset with 118000 annotated samples, and a validation dataset with 5000 data samples.
We focus on object detection and instance segmentation.
We report the standard COCO metrics, average precision (AP), which is averaged over Intersection-over-Union (IoU) thresholds $\in0.5:0.05:0.95$.
The bounding box level AP and the mask level AP are adopted in object detection and instance segmentation, respectively.
We follow the official reference training scripts provided by Pytorch \cite{bib:torchdetection} to set up our experiments.
We conduct distributed training on 8 Nvidia V100 GPUs with a batch size of 16 (2 per GPU).
The final AP is reported on the validation dataset after the entire training.
\fakeparagraph{ResNet50-FPN}
We adopt Faster-RCNN \cite{bib:NIPS15:Ren} in object detection and Mask-RCNN \cite{bib:ICCV17:He} in instance segmentation.
The overall network architecture consists of two parts, the basic network\footnote{To avoid confusion, we use \textit{the basic model} to refer to \textit{the backbone network} mentioned in the Faster-RCNN and Mask-RCNN papers \cite{bib:NIPS15:Ren,bib:ICCV17:He}.
The backbone network only stands for the original dense network in DRESS\xspace in this chapter.} and the head architecture.
We use ResNet50 pretrained on ImageNet dataset as the basic network.
As suggested by \cite{bib:ICCV17:He}, the feature extractor, a feature pyramid network (FPN) \cite{bib:CVPR17:Lin}, is laterally connected to ResNet50.
The bounding-box head and the mask head will then use the extracted feature to detect objects and segment instances.
Especially, our network architectures are the same as the ones provided in the Pytorch reference training scripts \cite{bib:torchdetection}.
As the batch size used in Faster-RCNN training and Mask-RCNN training is relatively small, we freeze the BN layers of ResNet50 as in \cite{bib:ICCV17:He,bib:detectron2018}.
Following \cite{bib:ICLR19:Yu}, we first pretrain ResNet50 with \algoref{ch3-alg:DRESS} on ImageNet, i.e.,\xspace obtain 4 subnets of ResNet50 with sparsity $0.5,0.8,0.9,0.95$ as in \figref{ch3-fig:benchmark}.
The lateral FPN and the head architecture are added into ResNet50.
We then train the overall network on COCO dataset with \algoref{ch3-alg:DRESS} while fixing BN layers for each subnet.
\subsection{Ablation Studies}
\label{ch3-sec:ablation}
We first implement a set of ablation experiments to study the effect of different components/parameters in DRESS\xspace.
The ablation experiments are mainly conducted with ResNet20 on CIFAR10 and MobileNetV1 on ImageNet.
\subsubsection{Row Size $N$}
\label{ch3-sec:ablation_rowsize}
\fakeparagraph{Settings}
In \secref{ch3-sec:store}, we restrict different rows in a CSR sparse tensor to have the same number of nonzero weights, and subnets are sampled in a row-based unstructured manner.
To study the impact of row size $N$, we select three methodical ways to reshape the weight tensor for row-based sampling,
(\textit{i}) unstructured, where unstructured sampling is conducted in the entire weight tensor, i.e.,\xspace each layer only contains a single row in DRESS\xspace CSR format as discussed in \secref{ch3-sec:store};
(\textit{ii}) filterwise, where unstructured sampling is conducted in each filter for \texttt{conv} layers or in each output-neuron for \texttt{fc} layers;
(\textit{iii}) 256, where each row contains 256 elements in a \texttt{conv} filter, or the entire filter if the filter has fewer than 256 weights.
$\gamma$ is set as 1 in these experiments.
The row size $N$ used in CSR format is tightly related to the memory cost to store the column indices of nonzero weights, e.g.,\xspace 256 means 8-bit for each column index.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/rowsize.pdf}
\caption[Ablation studies on different row sizes.]{Ablation studies on different row sizes. ``BN'' means further fine-tuning BN layers for each subnet.}
\label{ch3-fig:rowsize}
\end{figure}
\fakeparagraph{Results}
The results in \figref{ch3-fig:rowsize} show that when choosing a relatively large row size e.g.,\xspace filterwise or 256, our proposed row-based unstructured sampling can yield a similar accuracy as totally unstructured sampling in the entire tensor.
Especially, for both ResNet20 and MobileNetV1, the accuracy difference between ``Unstructured'' and ``Filterwise'' is less than 0.5\% on average.
In the following experiments, we mainly adopt filterwise unstructured sampling due to its high accuracy and efficient DRESS\xspace CSR format.
The dashed curves and the solid curves with the same marker in \figref{ch3-fig:rowsize} can be viewed as the ablation study of the third training stage, i.e.,\xspace further fine-tuning BN layers for each subnet (see \secref{ch3-sec:boost}).
Particularly, fine-tuning BN layers can calibrate the statistic discrepancy between different subnets and thus consistently improves the performance of subnets.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.7\textwidth]{./figs/ch3/bn.pdf}
\caption[The BN statistics of different subnets across layers.]{The BN statistics of different subnets across layers. The subnets of MobileNetV1 with different sparsity levels are plotted with different colors.}
\label{ch3-fig:bn}
\end{figure}
The BN statistic information of 4 subnets of MobileNetV1 is shown in \figref{ch3-fig:bn}.
For each layer in subnets, we plot the average value of ``running mean'', ``running variance'', ``weight'', and ``bias'' over all channels after the third training stage.
The results show that the BN statistic information is close among different subnets, which allows multiple subnets to be optimized in synergy in the second training stage (DRESS\xspace training) of \algoref{ch3-alg:DRESS}.
On the other hand, the third training stage can calibrate the small discrepancy between different subnets, which in turn improves the accuracy of each subnet.
\subsubsection{With/Without Sampling}
\label{ch3-sec:ablation_sampling}
\fakeparagraph{Settings}
To explore the efficacy of our sampling process, we compare DRESS\xspace with sampling (i.e.,\xspace \algoref{ch3-alg:DRESS}) and DRESS\xspace without sampling.
DRESS\xspace without sampling has a similar process as traditional unstructured magnitude pruning \cite{bib:ICLR16:Han,bib:ICLR20:Renda}, where $K$ binary masks are built after the dense pre-training and then fixed, and only the nonzero weights (with mask value equal to 1) are sparsely fine-tuned.
In other words, the subnets will not be re-sampled in the DRESS\xspace training stage of \algoref{ch3-alg:DRESS}.
We set $\gamma$ as 1 and use filterwise unstructured sampling in these experiments.
\fakeparagraph{Results}
As shown in \figref{ch3-fig:sampling}, our (re-)sampling process improves the accuracy of subnets by a large margin compared to without sampling, especially under a high sparsity, e.g.,\xspace increasing by around 7.4\% on average (up to 16.1\%) on MobileNetV1.
Re-sampling provides more flexibility to re-select the sparse subnets that are abandoned before the parallel training.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/sampling.pdf}
\caption[Comparison between DRESS\xspace with sampling and DRESS\xspace without sampling.]{Comparison between DRESS\xspace with sampling and DRESS\xspace without sampling.}
\label{ch3-fig:sampling}
\end{figure}
\subsubsection{Iterative vs. Parallel}
\label{ch3-sec:ablation_iterative}
In this part, we compare DRESS\xspace (parallel training) with iterative training multiple subnets.
We first elaborate the iterative training methods with progressively increased/decreased sparsity mentioned in \secref{ch3-sec:optimize}.
Recall that there are $K$ sparsity levels, and $0<s_1<s_2<...<s_K<1$.
In iterative training, each subnet is optimized separately, also altogether $K$ iterations.
In each iteration, we mainly adopt the idea of traditional unstructured pruning \cite{bib:ICLR20:Renda}, which is the current best-performing pruning method aiming at the trade-off between the model accuracy and the number of zero weights.
\cite{bib:ICLR20:Renda} conducts iterative pruning with a pruning scheduler $p^{1:R}$.
The network progressively reaches the desired sparsity $s$ by the $R$-th pruning iteration.
We choose $p^{1:5}=0.5,0.8,0.9,0.95,1$.
Accordingly, the sparsity is set to $0.5s,0.8s,0.9s,0.95s,s$ in 5 pruning iterations, respectively.
During each pruning iteration, the network is pruned with the corresponding sparsity, and the remaining nonzero weights are sparsely fine-tuned with learning rate rewinding.
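For concreteness, the scheduler maps the target sparsity $s$ to per-iteration sparsity levels as in the sketch below (the helper name is ours):
\begin{verbatim}
def sparsity_schedule(s, p=(0.5, 0.8, 0.9, 0.95, 1.0)):
    # The target sparsity s is approached over R = 5 pruning iterations.
    return [s * p_r for p_r in p]

# e.g., sparsity_schedule(0.9) -> [0.45, 0.72, 0.81, 0.855, 0.9]
\end{verbatim}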
\begin{algorithm}[tbp!]
\caption{Iterative training with increased sparsity}\label{ch3-alg:increased}
\KwIn{Initial random weights $\bm{w}$, training dataset $\mathcal{D}_\text{tr}$, validation dataset $\mathcal{D}_\text{val}$, sparsity $\{s_k\}_{k=1}^K$, pruning scheduler $\{p^r\}_{r=1}^{R}$}
\KwOut{Optimized weights $\bm{w}$, binary masks $\{\bm{m}_k\}_{k=1}^K$}
\tcc{Dense pre-training}
Train dense network $\bm{w}$ with traditional optimizer\;
\tcc{Traditional pruning, also k=1}
\For {$r \leftarrow 1$ \KwTo $R$} {
\tcp{The $r$-th pruning iteration}
Prune with sparsity $s_1\cdot p^r$ and get mask $\bm{m}_1^r$\;
Sparsely fine-tune nonzero weights $\bm{w}\odot\bm{m}_1^r$ on $\mathcal{D}_\text{tr}$\;
}
Get mask $\bm{m}_1 = \bm{m}_1^R$\;
\tcc{Iterative (training)}
\For {$k \leftarrow 2$ \KwTo $K$} {
Get the previous subnet $\bm{w}_{k-1} = \bm{w}\odot\bm{m}_{k-1}$\;
Sample a subnet from $\bm{w}_{k-1}$ with sparsity $s_k$ and get mask $\bm{m}_k$\;
\tcp{Note that no training here.}
}
\end{algorithm}
The pseudocode of training subnets iteratively with increased sparsity is shown in \algoref{ch3-alg:increased}.
With progressively increased sparsity (from $s_1$ to $s_K$), the first optimized subnet of $\bm{w}\odot\bm{m}_1$ already contains all subsequent subnets with higher sparsity due to \equref{ch3-eq:constraints2}.
The first sparse subnet $\bm{w}\odot\bm{m}_1$ is trained by unstructured pruning \cite{bib:ICLR20:Renda} as discussed above.
In the following iteration $k$ ($k\in2,...,K$), the subnet with sparsity $s_k$ is directly sampled from the previous subnet without any retraining, due to the constraint of \equref{ch3-eq:constraints2}.
The pseudocode of training multiple subnets iteratively with decreased sparsity is shown in \algoref{ch3-alg:decreased}.
For progressively decreased sparsity (from $s_K$ to $s_1$), the sampling and training process only happen in the complementary part of the previous subnet due to \equref{ch3-eq:constraints2}.
Particularly, in iteration $k$ ($k\in K,...,1$), we should (\textit{i}) sample the new subnet from the backbone network with sparsity $s_k$ that contains the subnet of $\bm{w}\odot\bm{m}_{k+1}$; (\textit{ii}) freeze the subnet of $\bm{w}\odot\bm{m}_{k+1}$ and only update the other weights.
We still adopt the iterative pruning when training each subnet, i.e.,\xspace the sparsity of the $k$-th subnet gradually approaches the target sparsity $s_k$.
Note that the dense backbone network is maintained and updated during the training.
Note that the (re-)sampling process is only conducted in each pruning iteration instead of each training iteration.
\begin{algorithm}[tbp!]
\caption{Iterative training with decreased sparsity}\label{ch3-alg:decreased}
\KwIn{Initial random weights $\bm{w}$, training dataset $\mathcal{D}_\text{tr}$, validation dataset $\mathcal{D}_\text{val}$, sparsity $\{s_k\}_{k=1}^K$, pruning scheduler $\{p^r\}_{r=1}^{R}$}
\KwOut{Optimized weights $\bm{w}$, binary masks $\{\bm{m}_k\}_{k=1}^K$}
\tcc{Dense pre-training}
Train dense network $\bm{w}$ with traditional optimizer\;
\tcc{Iterative training}
Set $s_{K+1}=1$ and $\bm{m}_{K+1}=\bm{0}$\;
\For {$k \leftarrow K$ \KwTo $1$} {
Get the complementary subnet $\bm{w}^\text{cs}=\bm{w}\odot(1-\bm{m}_{k+1})$\;
\For {$r \leftarrow 1$ \KwTo $R$} {
\tcp{The $r$-th pruning iteration}
Sample a subnet from $\bm{w}^\text{cs}$ with sparsity $(1-(s_{k+1}-s_k))\cdot p^r$ and get mask $\bm{m}_k^{\text{cs},r}$\;
Merge mask $\bm{m}_k^{r}=\bm{m}_{k+1}+\bm{m}_k^{\text{cs},r}$\;
Initiate $\bm{w}^0=\bm{w}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
\tcp{The $q$-th training iteration}
Fetch mini-batch from $\mathcal{D}_\text{tr}$\;
Get sparse subnet $\bm{w}^{r,q-1}_k=\bm{w}^{q-1}\odot\bm{m}_k^{r}$\;
Back-propagate subnet gradient $\bm{g}(\bm{w}^{r,q-1}_k)=\frac{\partial\ell(\bm{w}^{r,q-1}_k)}{\partial\bm{w}^{r,q-1}_k}$\;
Compute optimization step $\Delta\bm{w}^{q}$ with $\bm{g}(\bm{w}^{r,q-1}_k)\odot\bm{m}_k^{\text{cs},r}$\;
Update $\bm{w}^{q} = \bm{w}^{q-1} + \Delta\bm{w}^{q}$\;
}
Save $\bm{w}=\bm{w}^Q$\;
}
Save mask $\bm{m}_k=\bm{m}_k^R$\;
}
\end{algorithm}
\fakeparagraph{Settings}
We implement the two iterative training methods of \algoref{ch3-alg:increased} and \algoref{ch3-alg:decreased}.
The loss of each subnet is optimized separately in iterative training.
Thus for a fair comparison, we do not re-weight loss in the parallel training of DRESS\xspace, i.e.,\xspace $\gamma=0$.
Also in all experiments, we conduct unstructured sampling in the entire tensor, and allow BN layers to be fine-tuned individually for each subnet to avoid other side effects.
\fakeparagraph{Results}
The comparison results are plotted in \figref{ch3-fig:parallel} Left.
Parallel training substantially outperforms iterative training.
Iterating over increased sparsity does not provide any space to optimize subnets with higher sparsity.
Therefore, the accuracy drops quickly along iterations.
Although iterating over decreased sparsity may yield a well-performed high sparsity network, the accuracy does not improve significantly afterwards.
We argue this is due to the fact that iterative training causes the optimizer to end in a hard-to-escape region around the previous subnet in the loss landscape.
On the contrary, parallel training allows multiple subnets to be sampled and optimized jointly, which may especially benefit highly sparse networks, see \figref{ch3-fig:parallel} Left.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/parallel.pdf}
\caption[Left: Comparing parallel training with iterative training. Right: Ablation studies on the correction factor $\gamma$.]{Left: Comparing parallel training with iterative training. Right: Ablation studies on the correction factor $\gamma$.}
\label{ch3-fig:parallel}
\end{figure}
\subsubsection{Correction Factor $\gamma$}
\label{ch3-sec:ablation_gamma}
\fakeparagraph{Settings}
The loss weights $\pi_k$ used in the parallel training may influence the final accuracy of different subnets.
In \secref{ch3-sec:optimize}, we introduce a correction factor $\gamma$ to control $\pi_k$ (see \equref{ch3-eq:gamma} and \equref{ch3-eq:alpha}).
We thus conduct a set of experiments with different $\gamma$.
$\gamma=0$ means all loss items are weighted equally; $\gamma>0$ means the loss of the lower sparsity subnets is weighted more heavily, and vice versa.
For example, for ResNet20 with $s_{1:5}=0.8,0.9,0.95,0.98,0.99$, when $\gamma=0.5$, $\pi_{1:5}\approx0.36,0.26,0.18,0.12,0.08$; $\gamma=-1.0$, $\pi_{1:5}\approx0.03,0.05,0.11,0.27,0.54$.
\fakeparagraph{Results}
The results in \figref{ch3-fig:parallel} Right show that the high sparsity subnets generally yield a higher final accuracy with a smaller $\gamma$.
This is intuitive since a smaller $\gamma$ assigns a larger weight on the high sparsity subnets.
However, the downside is that the most powerful subnet (with the lowest sparsity) cannot reach its top accuracy.
Note that the most powerful subnet is often adopted either in critical cases requiring high accuracy or in the commonly used scenario with standard resource constraints, see \secref{ch3-sec:introduction}.
Also as discussed in \secref{ch3-sec:optimize}, low sparsity subnets should be weighted more, since they are implicitly optimized with a smaller step size.
Experimentally, we find that $\gamma\in[0.5,1]$ in parallel training allows us to train a group of subnets where the most powerful subnet can reach a similar accuracy as when trained separately.
We set $\gamma=0.5$ in the following experiments.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.75\textwidth]{./figs/ch3/benchmark.pdf}
\caption[Comparing DRESS\xspace with other baselines on image classification.]{Comparing DRESS\xspace with other baselines on image classification. The methods that do not involve weight sharing among different networks are plotted with dotted curves. The memory cost and the number of MFLOPs of the original backbone networks are reported in the parentheses in titles; their accuracy is shown as red horizontal lines. The sparsity levels of DRESS\xspace and ``Pruning'' are the same as the ones discussed in \secref{ch3-sec:experiments}.}
\label{ch3-fig:benchmark}
\end{figure}
\subsection{Evaluation on Image Classification}
\label{ch3-sec:experiment_classification}
\fakeparagraph{Settings}
In this section, we benchmark DRESS\xspace on public image classification datasets including CIFAR10 and ImageNet with different backbone networks discussed earlier in \secref{ch3-sec:experiments}.
We compare the performance of the subnets generated by DRESS\xspace with various methods, including (\textit{i}) anytime networks \cite{bib:ICLR18:Huang,bib:ICLR19:Yu,bib:ICCV19:Yu}, where the sub-networks with different width or depth can be cropped from the backbone network;
(\textit{ii}) unstructured pruning \cite{bib:ICLR20:Renda,bib:NIPS21:Peste}, where \cite{bib:ICLR20:Renda} is re-implemented under our settings for a fair comparison;
(\textit{iii}) N:M fine-grained structure pruning \cite{bib:ICLR21:Zhou,bib:NIPS21:Sun}.
We choose two metrics for comparison, the memory cost of parameters and the number of MFLOPs ($10^6$ FLOPs).
Both metrics are widely used proxies of resource consumption.
FLOPs dominate the entire computation burden, thus fewer FLOPs can (but do not necessarily) result in a smaller computation time.
The memory cost of parameters not only represents the static storage consumption but also relates to the amount of memory fetching when on-device inference with different (sub-)networks \cite{bib:IEEETrans20:Ahmad}.
Note that memory access often consumes more time and more energy than computation \cite{bib:ISSCC14:Horowitz}.
Assume that each parameter is stored as a 32-bit floating point value.
DRESS\xspace, (\textit{ii}), and (\textit{iii}) generate sparse tensors, thus their memory cost also includes the indices of nonzero weights.
Following the suggestions of \cite{bib:tensorflow,bib:XNNPACK}, each index of nonzero weights is encoded into 8-bit in DRESS\xspace and (\textit{ii}), whereas the binary mask is stored for indexing in \cite{bib:ICLR21:Zhou,bib:NIPS21:Sun}.
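As a rough back-of-the-envelope sketch (ignoring BN layers and the dense depthwise \texttt{conv} layers), the storage of one sparse tensor under this encoding can be estimated as:
\begin{verbatim}
def sparse_storage_mb(n_params, sparsity):
    # 32-bit value + 8-bit column index per nonzero weight.
    nonzeros = n_params * (1.0 - sparsity)
    return nonzeros * (4 + 1) / 2**20

# e.g., MobileNetV1 (~4.2M parameters) at its lowest sparsity 0.5:
# sparse_storage_mb(4.2e6, 0.5) -> ~10.0 MB, on the order of the
# DRESS entry reported in the storage table.
\end{verbatim}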
\begin{table*}[tbp!]
\centering
\caption{The average test accuracy over all sub-networks, the theoretical storage (MB) required by all sub-networks and the average number of theoretical MFLOPs over all sub-networks.}
\label{ch3-tab:storage}
\footnotesize
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{*}{Model} & \multicolumn{2}{c}{Average Accuracy} & \multicolumn{2}{c}{Overall Storage (MB)} & \multicolumn{2}{c}{Average MFLOPs} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7}
& DRESS\xspace & Pruning & DRESS\xspace & Pruning & DRESS\xspace & Pruning \\ \hline
VGGNet & \textbf{91.4\%} & 91.1\% & \textbf{3.54} & 6.27 & \textbf{32} & 33 \\
ResNet20 & \textbf{86.1\%} & 85.9\% & \textbf{0.28} & 0.55 & \textbf{4} & \textbf{4} \\
ResNet50 & 74.5\% & \textbf{74.6\%} & \textbf{63.97} & 109.25 & \textbf{887} & 994 \\
MobileNetV1 & 65.6\% & \textbf{65.9\%} & \textbf{10.72} & 18.96 & \textbf{146} & 152 \\
MobileNetV2 & 61.5\% & \textbf{62.4\%} & \textbf{8.98} & 16.32 & \textbf{95} & 101 \\
\bottomrule
\end{tabular}
\end{table*}
\fakeparagraph{Results}
The results are plotted in \figref{ch3-fig:benchmark}.
In comparison to other anytime networks, the subnets generated by DRESS\xspace require significantly less memory fetching and fewer FLOPs under the same accuracy level.
In addition, the sub-networks of conventional anytime networks \cite{bib:ICLR18:Huang,bib:ICLR19:Yu,bib:ICCV19:Yu} have different network architectures, while current compilation libraries (e.g.,\xspace TensorFlowLite) may not support adopting a dynamic architecture on-device.
Extra re-configuration overhead, e.g.,\xspace storing various compiled architectures, could be necessary for on-device inference. This is avoided in DRESS\xspace, since different subnets of DRESS\xspace leverage the same architecture as the backbone network.
Like traditional unstructured pruning, DRESS\xspace does not explicitly reduce the number of operations, i.e.,\xspace the networks with the same sparsity can require different numbers of FLOPs to perform inference as shown in \figref{ch3-fig:benchmark}.
Thanks to the weight sharing, the static storage is only determined by the largest network for both DRESS\xspace and anytime networks \cite{bib:ICLR18:Huang,bib:ICLR19:Yu,bib:ICCV19:Yu}.
The methods of (\textit{ii})-(\textit{iii}) do not involve weight sharing, thus they need more memory to store all networks separately.
We further compare DRESS\xspace with the unstructured pruning method \cite{bib:ICLR20:Renda}, in terms of the test accuracy, the theoretical storage, and the theoretical FLOPs, see \tabref{ch3-tab:storage}.
DRESS\xspace reaches a similar average accuracy and computation complexity while only requiring 50\%-60\% of the storage needed by pruning.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/benchmark_appendix.pdf}
\caption[Comparing DRESS\xspace with other baselines on object detection and instance segmentation.]{Comparing DRESS\xspace with other baselines on object detection and instance segmentation. The methods that do not involve weight sharing among different networks are plotted with dotted curves. The memory cost and the number of GFLOPs of the original backbone networks are reported in the parentheses in titles; their average precision is shown as red horizontal lines.}
\label{ch3-fig:benchmark_appendix}
\end{figure}
\subsection{Evaluation on Object Detection/Instance Segmentation}
\label{ch3-sec:experiment_object}
\fakeparagraph{Settings}
To show the versatility of our synthesis technique, we further benchmark DRESS\xspace on other vision tasks, object detection and instance segmentation.
We compare DRESS\xspace with other baselines mentioned in \secref{ch3-sec:experiment_classification} on MS COCO 2017 dataset.
We adopt Faster-RCNN with ResNet50-FPN \cite{bib:NIPS15:Ren} in object detection and Mask-RCNN \cite{bib:ICCV17:He} with ResNet50-FPN in instance segmentation.
\fakeparagraph{Results}
Since the number of FLOPs for Faster-RCNN and Mask-RCNN depends on the number of proposals in each image \cite{bib:arXiv20:Carion}, we report the average number of FLOPs over 100 randomly selected images from the COCO 2017 validation dataset.
We compute the FLOPs with the tool \texttt{flop\_count\_operators} from Detectron2 \cite{bib:detectron2019}.
For Faster-RCNN, we report its bounding box AP; for Mask-RCNN, we report its bounding box AP and its mask AP.
The results are plotted in \figref{ch3-fig:benchmark_appendix}.
Similar to the results in \figref{ch3-fig:benchmark}, the subnets generated by DRESS\xspace require a significantly lower memory cost and fewer GFLOPs ($10^9$ FLOPs) than other anytime networks \cite{bib:ICLR19:Yu}.
In addition, in comparison to the unstructured pruning \cite{bib:ICLR20:Renda} that does not involve weight sharing, DRESS\xspace can also achieve a similar precision level.
\begin{figure}[t!]
\centering
\includegraphics[width=0.78\textwidth]{./figs/ch3/sparsity_resnet20.pdf}
\caption[Comparing DRESS\xspace with traditional pruning on ResNet20 in terms of the layerwise sparsity.]{Comparing DRESS\xspace with traditional pruning on ResNet20 (CIFAR10) in terms of the layerwise sparsity. The (sub-)networks with different overall sparsity levels ($0.8,0.9,0.95,0.98,0.99$) are plotted in different subplots.}
\label{ch3-fig:sparsity_resnet20}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch3/sparsity_mobilenetv2.pdf}
\caption[Comparing DRESS\xspace with traditional pruning on MobileNetV2 in terms of the layerwise sparsity.]{Comparing DRESS\xspace with traditional pruning on MobileNetV2 (ImageNet) in terms of the layerwise sparsity. The (sub-)networks with different overall sparsity levels $0.5,0.8,0.9,0.95$ are plotted in different subplots.}
\label{ch3-fig:sparsity_mobilenetv2}
\end{figure}
\subsection{Sparsity across Layers}
\label{ch3-sec:sparsity}
To further explore the different impact from DRESS\xspace and traditional pruning, we compare their layerwise sparsity.
Recall that the main differences between DRESS\xspace and traditional pruning are, (\textit{i}) the nonzero weights of the higher sparsity subnets are reused by the lower sparsity subnets in DRESS\xspace, whereas different sparse networks generated by traditional pruning are independent; (\textit{ii}) DRESS\xspace maintains an unstructured sparse pattern in a row-based manner (i.e.,\xspace fine-grained structure sparsity \cite{bib:ICLR21:Zhou,bib:NIPS21:Hubara,bib:NIPS21:Sun}), whereas traditional pruning yields an unstructured sparse pattern in the entire tensor.
We plot the layerwise sparsity of the sparse (sub-)networks generated by DRESS\xspace and traditional pruning \cite{bib:ICLR20:Renda}, for ResNet20 on CIFAR10 in \figref{ch3-fig:sparsity_resnet20} and for MobileNetV2 on ImageNet in \figref{ch3-fig:sparsity_mobilenetv2}.
In general, both methods have a similar layerwise sparsity in each subplot.
However, a divergence exists under a low sparsity level, e.g.,\xspace MobileNetV2 with sparsity 0.5.
Joint training with weight sharing in DRESS\xspace forces the low sparsity network to be optimized towards a region that is less explored by the individual training of pruning, as the low sparsity network often has relatively looser constraints.
\section{Deployments}
\label{ch3-sec:deployment}
To measure the actual performance and compare it to the benchmark evaluation, we deploy DRESS\xspace-generated subnets on a RaspberryPi 4 edge platform (with off-the-shelf Arm Cortex-A72 quad-core CPUs) for on-device inference.
The optimized Pytorch model is compiled by TensorFlow Lite \cite{bib:tensorflow} with XNNPACK \cite{bib:XNNPACK} delegate for deployment.
We use multi-threading with 4 threads for acceleration.
The inference latency when adopting different subnets of MobileNetV1 and MobileNetV2 on RaspberryPi 4 is reported in \tabref{ch3-tab:raspberry}.
The original dense models and the sparse models generated from unstructured pruning methods \cite{bib:ICLR20:Renda} are also added in \tabref{ch3-tab:raspberry} for comparison.
The reported latency is averaged over 100 randomly selected samples from ImageNet dataset.
By using the fast kernels for sparse matrix-dense matrix multiplication provided by XNNPACK, DRESS\xspace can dynamically select its subnets to satisfy the various inference latency constraints.
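A latency measurement of this kind can be scripted as follows (a sketch; the model file name is hypothetical, and XNNPACK is assumed to be enabled in the TFLite build, as it is the default CPU delegate in recent releases):
\begin{verbatim}
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="dress_subnet_s80.tflite",
                                  num_threads=4)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.rand(*inp["shape"]).astype(np.float32)

latencies = []
for _ in range(100):
    interpreter.set_tensor(inp["index"], x)
    t0 = time.perf_counter()
    interpreter.invoke()
    latencies.append((time.perf_counter() - t0) * 1e3)
print(f"average latency: {np.mean(latencies):.1f} ms")
\end{verbatim}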
Note that the sparse (sub-)networks of DRESS\xspace and pruning \cite{bib:ICLR20:Renda} have a similar number of theoretical FLOPs (see \figref{ch3-fig:benchmark} and \tabref{ch3-tab:storage}), yet DRESS\xspace often yields a lower inference time.
This is due to the fact that row-based unstructured sparsity leads to regular computation among different rows, which speeds up inference \cite{bib:ICLR21:Zhou}.
\begin{table}[tbp!]
\centering
\caption[The average inference time (ms) on RaspberryPi 4.]{The average inference time (ms) on RaspberryPi 4.}
\label{ch3-tab:raspberry}
\footnotesize
\begin{tabular}{cccccc}
\toprule
& Sparsity & \multicolumn{2}{c}{MobileNetV1} & \multicolumn{2}{c}{MobileNetV2} \\ \hline
Model & & \multicolumn{2}{c}{Dense} & \multicolumn{2}{c}{Dense} \\
Time (ms) & 0\% & \multicolumn{2}{c}{83} & \multicolumn{2}{c}{52} \\ \hdashline
Model & & DRESS\xspace & Pruning & DRESS\xspace & Pruning \\
\multirow{4}{*}{Time (ms)} & 50\% & \textbf{77} & 80 & \textbf{47} & 48 \\
& 80\% & \textbf{45} & 55 & \textbf{36} & 41 \\
& 90\% & \textbf{31} & 35 & \textbf{29} & 32 \\
& 95\% & \textbf{25} & 27 & \textbf{26} & \textbf{26} \\
\bottomrule
\end{tabular}
\end{table}
Note also that although the inference time decreases when adopting the subnets with a higher sparsity, the realistic speedup of sparse inference is not proportional to the reduction in theoretical FLOPs.
For example, the theoretical FLOPs decrease by a factor of 6.4 when the sparsity of DRESS\xspace MobileNetV1 subnets increases from 50\% to 95\%, while the inference is only accelerated by a factor of 3.1.
A similar phenomenon can also be observed in MobileNetV2 and pruned models.
We suspect that the reason is that the sparse computational kernels of XNNPACK suffer a larger fraction of cache misses at a higher sparsity level, see also \cite{bib:CVPR20:Elsen}.
\section{Summary}
\label{ch3-sec:summary}
This chapter develops a novel synthesis approach DRESS\xspace that can adapt the sub-networks for on-device inference to maximize the model performance with different resource budgets.
DRESS\xspace enables efficient adaptation on edge devices under varying resource constraints.
Prior synthesis methods either require deploying multiple individual networks, or sample sub-network architectures along structured dimensions, leading to subpar performance.
In contrast, DRESS\xspace utilizes nonzero-weight sharing and architecture sharing to reduce the redundancy among multiple unstructured sub-networks, resulting in both storage efficiency and re-configuration efficiency.
The main contributions of DRESS\xspace are summarized as follows,
\begin{itemize}
\item
DRESS\xspace can adapt different sub-networks sampled from the backbone network on edge devices.
These optimized sub-networks have different sparsity, and thus can infer under various resource constraints, e.g.,\xspace the inference latency, and the battery energy.
\item
DRESS\xspace samples sub-networks in a row-based unstructured sparsity (a.k.a. fine-grained structure sparsity) and introduces a novel compressed sparse row (CSR) format for storing the sub-networks.
This way, multiple sub-networks can be efficiently fetched and executed for on-device inference, by using the fast kernels of sparse tensor computation provided by recent compilation libraries.
To our best knowledge, this is the first work that builds multiple sub-networks via a fine-grained structure of weight sharing.
\item
DRESS\xspace enables weight sharing and architecture sharing among multiple sub-networks, resulting in (static) storage efficiency and re-configuration efficiency, respectively.
\item
Experimental results show that DRESS\xspace reaches a similar accuracy while only requiring 50\%-60\% of the static storage of unstructured pruned networks, and can realize various distinct inference latencies on off-the-shelf edge platforms according to different sparsity levels.
\end{itemize}
This chapter studied how to adapt the network on edge devices to maximize the inference accuracy under varying resource constraints.
In the next chapter, we will study how to conduct learning on edge devices given a few data samples of new tasks.
The different sub-networks generated by DRESS\xspace serve the same inference task; thus DRESS\xspace cannot adapt its network to a new task.
\chapter[Learning on Edge Devices]{Learning on Edge Devices}
\label{ch4:learning}
In \chref{ch2:inference} and \chref{ch3:adaptation}, we studied how to compress a pretrained DNN for on-device inference under \textit{fixed} and \textit{varying} resource constraints.
However, when facing \textit{unseen} environments, users, or tasks, it is crucial to adapt\footnote{In this chapter, the adaptation is referred to as (re-)training on new data samples.} the pretrained DNN to deliver consistent performance and customized services.
Sometimes, data collected by edge devices are private and have a large diversity across users/devices.
Hence, \textit{on-device learning} is preferred over uploading the data to cloud servers for adaptation.
\fakeparagraph{Main Resource Constraints}
For on-device learning, neither abundant \textit{user data} nor \textit{computing resources} are available.
On the one hand, the amount of user data collected on a single edge device is rather small due to the limited labor resources.
On the other hand, edge devices often have a small amount of available resources from memory and computation.
\fakeparagraph{Principles}
Existing memory-efficient training approaches are not able to optimize a DNN given only a few training samples, whereas current meta learning methods require a significant amount of dynamic memory to few-shot learn unseen tasks.
Therefore, we introduce a memory-efficient on-device few-shot learning setting, and propose a novel meta learning scheme that can (\textit{i}) fast learn new unseen tasks given a few training samples, resulting in data efficiency, (\textit{ii}) avoid redundant training by distinguishing and learning adaptation-critical weights only, leading to memory efficiency.
The contents of this chapter are established mainly based on the paper ``p-Meta: Towards On-device Deep Model Adaptation'', which was published at the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD) 2022 \cite{bib:KDD22:Qu}.
\section{Introduction}
\label{ch4-sec:introduction}
The excellent accuracy of contemporary DNNs is attributed to training with high-performance computers on large-scale datasets \cite{bib:Book16:Goodfellow}.
For example, it takes $29$ hours to complete a $90$-epoch ResNet50 \cite{bib:CVPR16:He} training on ImageNet ($1.28$ million training images) \cite{bib:ILSVRC15} with $8$ Nvidia Tesla P100 GPUs \cite{bib:arXiv17:Goyal}.
However, on-device learning/adaptation of a DNN demands both \textit{data efficiency} and \textit{memory efficiency}.
A personal voice assistant, for example, may learn to adapt to a user's accent and dialect within a few sentences, while a home robot should learn to recognize new object categories from a few labelled images in order to navigate in new environments.
Furthermore, such learning is expected to be conducted on low-resource platforms such as smart portable devices, home hubs, and other IoT devices, with only several KB to MB of memory.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{./figs/ch4/FSL.pdf}
\caption[Meta learning and few-shot learning in the context of on-device learning.]{Meta learning and few-shot learning (FSL) in the context of on-device learning. The backbone $F(\bm{w})$ is meta-trained into $F(\bm{w}^\mathrm{meta})$ on the cloud and is deployed to IoT devices to learn unseen tasks as $F(\bm{w}^\mathrm{new})$ via FSL.}
\label{ch4-fig:meta}
\end{figure}
For \textit{data-efficient} DNN training, we resort to \textit{meta learning}, a paradigm that learns to fast generalize to unseen tasks \cite{bib:arXiv20:Hospedales}.
Of our particular interest is \textit{gradient-based} meta learning \cite{bib:ICLR19:Antreas, bib:ICML17:Finn, bib:ICLR20:Raghu, bib:ICLR21:Oh} for its wide applicability in classification, regression and reinforcement learning, as well as the availability of gradient-based training frameworks for low-resource devices, e.g.,\xspace TensorFlow Lite \cite{bib:tfliteOndeviceTraining}.
\figref{ch4-fig:meta} explains major terminologies in the context of on-device learning.
Given a backbone, its weights are \textit{meta-trained} on \textit{many} tasks, to output a \textit{model} that is expected to fast learn new \textit{unseen} tasks.
The process of learning is also known as \textit{few-shot learning}, where the meta-trained model is further retrained by standard stochastic gradient descent (SGD) on \textit{few new} samples only.
However, existing gradient-based meta learning schemes \cite{bib:ICLR19:Antreas, bib:ICML17:Finn, bib:ICLR20:Raghu, bib:ICLR21:Oh} fail to support \textit{memory-efficient} training.
Although \textit{meta training} is conducted in the cloud, \textit{few-shot learning} of the meta-trained model is performed on IoT devices.
Consider retraining a common backbone, ResNet12, in a 5-way (5 new classes) 5-shot (5 samples per class) scenario.
One round of SGD consumes $370.44$MB of peak dynamic memory, since the inputs of all layers must be stored to compute the gradients of these layers' weights in the backward path.
In comparison, inference only needs $3.61$MB.
The necessary dynamic memory is a key bottleneck for on-device learning due to cost and power constraints, even though the meta-trained model only needs to be retrained with a few data samples.
Prior efficient DNN training solutions mainly focus on parallel and distributed training on data centers \cite{bib:arXiv16:Chen, bib:ICLR21:Chen, bib:NIPS17:Greff, bib:NIPS16:Gruslys, bib:NIPS20:Raihan}.
On-device training has been explored for \textit{vanilla supervised training} \cite{bib:ICASSP20:Gooneratne, bib:MLSys21:Mathur, bib:SenSys19:Lee}, where training and testing are performed on the \textit{same} task.
A pioneering study \cite{bib:NIPS20:Cai} investigated on-device learning of new tasks via memory-efficient \textit{transfer learning}.
Yet transfer learning is prone to overfitting when only a few samples are available \cite{bib:ICML17:Finn}.
In this chapter, we propose \textbf{p-Meta\xspace}, a new meta learning method for data- and memory-efficient DNN training.
The key idea is to enforce \textit{structured partial parameter updates} while ensuring \textit{fast generalization to unseen tasks}.
The idea is inspired by recent advances in understanding gradient-based meta learning \cite{bib:ICLR21:Oh, bib:ICLR20:Raghu}.
Empirical evidence shows that only the \textit{head} (the last output layer) of a DNN needs to be updated to achieve reasonable few-shot classification accuracy \cite{bib:ICLR20:Raghu}, whereas the \textit{body} (the layers close to the input) needs to be updated for cross-domain few-shot classification \cite{bib:ICLR21:Oh}.
These studies imply that certain weights are more important than others when generalizing to unseen tasks. Hence, we propose to automatically identify these \textit{adaptation-critical weights} to minimize the memory demand in few-shot learning.
Particularly, the critical weights are determined along two structured dimensions:
(\textit{i}) layer-wise: we meta-train a layer-by-layer learning rate that enables a \textit{static} selection of critical layers for updating;
(\textit{ii}) channel-wise: we introduce meta attention modules in each layer to select critical channels \textit{dynamically}, i.e.,\xspace depending on samples from new tasks.
Partial updating of weights means that (structurally) sparse gradients are generated, reducing memory requirements to those for computing nonzero gradients.
In addition, the computation for calculating zero gradients is also saved.
To further reduce the memory, we utilize \textit{gradient accumulation} in few-shot learning and \textit{group normalization} in the backbone.
Although weight importance metrics and SGD with sparse gradients have been explored in vanilla training \cite{bib:NIPS20:Raihan, bib:PIEEE20:Deng, bib:ICASSP20:Gooneratne, bib:ICLR16:Han}, it is unknown \textit{(i)} how to identify adaptation-critical weights and \textit{(ii)} whether meta learning is robust to sparse gradients, where the objective is to fast learn \textit{unseen} tasks.
\section{Related Work}
\label{ch4-sec:related}
\subsection{Meta Learning for Few-Shot Learning}
\label{ch4-sec:related_meta}
Meta learning is a prevailing solution to few-shot learning \cite{bib:arXiv20:Hospedales}, where the meta-trained model can learn an unseen task from a few training samples, i.e.,\xspace data-efficient training.
The majority of meta learning methods can be divided into two categories, (\textit{i}) embedding-based methods \cite{bib:NIPS16:Vinyals, bib:NIPS17:Snell, bib:CVPR18:Sung} that learn an embedding for classification tasks to map the query samples onto the classes of labeled support samples, (\textit{ii}) gradient-based methods \cite{bib:ICLR19:Antreas, bib:ICML17:Finn, bib:ICLR20:Raghu, bib:ICLR21:Oh, bib:NIPS21:Oswald} that learn an initial model (and/or optimizer parameters) such that it can be trained with gradient information calculated on the new few samples.
Among them, we focus on gradient-based meta learning methods for their applicability in various learning tasks and the availability of gradient-based training frameworks for low-resource devices \cite{bib:tfliteOndeviceTraining}.
Particularly, we aim at meta training a DNN that allows fast learning on memory-constrained devices.
Most meta learning algorithms \cite{bib:ICLR19:Antreas, bib:ICML17:Finn, bib:NIPS21:Oswald} optimize the backbone network for better generalization yet ignore the workload if the meta-trained backbone is deployed on low-resource platforms for few-shot learning.
Manually fixing certain layers during on-device few-shot learning \cite{bib:ICLR20:Raghu, bib:ICLR21:Oh, bib:AAAI21:Shen} may also reduce memory and computation, but to a much lesser extent as shown in our evaluations.
\subsection{Efficient DNN Training}
\label{ch4-sec:related_efficient}
Existing efficient training schemes are mainly designed for high-throughput GPU training on large-scale datasets.
The works in \cite{bib:ICLR20:Cambier, bib:NIPS18:Wang} conduct 8-bit floating-point low-precision training, which requires specialized hardware for efficient execution.
A general strategy is to trade memory for computation \cite{bib:arXiv16:Chen, bib:NIPS16:Gruslys}, which is unfit for IoT devices with a limited computation capability.
An alternative is to sparsify the computational graphs in backpropagation \cite{bib:NIPS20:Raihan}.
Yet it relies on massive training iterations on large-scale datasets.
Other techniques include layer-wise local training \cite{bib:NIPS17:Greff} and reversible residual module \cite{bib:NIPS17:Gomez}, but they often incur notable accuracy drops.
There are a few studies on DNN training on low-resource platforms, such as updating the last several layers only \cite{bib:MLSys21:Mathur}, reducing batch sizes \cite{bib:SenSys19:Lee}, and gradient approximation \cite{bib:ICASSP20:Gooneratne}.
However, they are designed for vanilla supervised training, i.e.,\xspace train and test on the same task.
One recent study proposes to update the bias parameters only for memory-efficient transfer learning \cite{bib:NIPS20:Cai}, yet transfer learning is prone to overfitting when trained with limited data \cite{bib:ICML17:Finn}.
\section{Preliminaries and Challenges}
\label{ch4-sec:preliminary}
In this section, we first motivate on-device few-shot learning via example applications, then provide the basics on meta learning for few-shot learning and highlight the challenges to enable on-device learning.
\subsection{Example Application Scenarios}
\label{ch4-sec:preliminary_example}
On-device few-shot learning is essential for model adaptation in intelligent applications where the new data collected on edge devices relate to personal habits and lifestyle.
For instance, activity recognition with smartphone sensors should adapt to countless walking patterns and sensor orientation \cite{bib:SenSys19:Gong}.
Gaze tracking with smart glasses requires calibration to personal gaze conditions for cognitive context recognition \cite{bib:SenSys20:Lan}.
Human motion prediction with home robots needs fast learning of unseen poses for seamless human-robot interaction \cite{bib:ECCV18:Gui}.
We detail two representative applications below and summarize their resource utilization in \tabref{ch4-tab:examples}.
\fakeparagraph{Home Surveillance Customization}
Household camera systems are pervasively deployed to detect intruders and monitor pets, where suspicious images are uploaded to a smart gateway for further investigation such as object classification.
Due to the \textit{countless object classes} of interest across individuals, the image classification model needs post-deployment customization.
Fast model adaptation (e.g.,\xspace pre-trained on dog breeds such as Komondor, Poodle and Saluki, and re-trained to recognize Malamute) at the smart gateway delivers more targeted surveillance services without leaking images of family members or private locations.
\fakeparagraph{Robot Locomotion Control}
Robots that walk and run as humans have been a long-standing challenge in robotics \cite{bib:IJRR21:Ibarz}.
Deep reinforcement learning (DRL) advances the development of naturally behaved robots for new applications such as police robotic dogs and unmanned last-mile delivery \cite{bib:SR19:Hwangbo}.
It is important that the robots fast adapt their locomotion policies to new goals and environments, since there is often a gap between the training and deployment environments.
Naive DRL can take millions of data samples to learn meaningful locomotion gaits \cite{bib:IJRR21:Ibarz}.
Conversely, on-robot few-shot DRL enables rapid control policy acquisition from only a few new experiences.
\subsection{Meta Learning for Few-Shot Learning}
\label{ch4-sec:preliminary_meta}
Meta learning is a prevailing solution to adapt a DNN to unseen tasks with limited training samples, i.e.,\xspace few-shot learning \cite{bib:arXiv20:Hospedales}.
We ground our work on model-agnostic meta learning (MAML) \cite{bib:ICML17:Finn}, a generic meta learning framework which supports classification, regression and reinforcement learning.
\tabref{ch4-tab:notations} lists the major notations.
\begin{table}[!t]
\centering
\caption[Summary of major notations.]{Summary of major notations.}
\label{ch4-tab:notations}
\footnotesize
\begin{tabular}{ll}
\toprule
Notation & Description \\
\midrule
$l$ & Layer index, $l \in \{1,2,...,L\}$ \\
$\bm{x}_{l-1}$, $\bm{w}_l$, $\bm{y}_l$ & Input, weight, and intermediate tensors \\
$C_l$, $H_l$, $W_l$ & Output channel number, height and width \\
$\bm{x}_L = F(\bm{w};\bm{x}_0)$ & A model (backbone) with parameter $\bm{w}$, its input and output \\
$\mathsf{T}^{i}$ & Sampled task $i$ from distribution $p(\mathsf{T})$ during meta training \\
$\mathcal{D}^i=\{\mathcal{S}^i,\mathcal{Q}^i\}$ & Dataset with support set $\mathcal{S}^i$ and query set $\mathcal{Q}^i$ for task $\mathsf{T}^{i}$ \\
$\mathsf{T}^{\mathrm{new}}$ & New unseen task during on-device few-shot learning \\
\multirow{2}{*}{$\mathcal{D}^{\mathrm{new}}=\{\mathcal{S}^{\mathrm{new}},\mathcal{Q}^{\mathrm{new}}\}$} & Dataset for unseen task $\mathsf{T}^{\mathrm{new}}$ \\
& $\mathcal{S}^{\mathrm{new}}$ for few-shot learning and $\mathcal{Q}^{\mathrm{new}}$ for evaluation \\
$\ell(\bm{w};\mathcal{D})$ & Loss function over model $F(\bm{w})$ and dataset $\mathcal{D}$ \\
$\bm{w}^\mathrm{meta}$, $\bm{w}^\mathrm{new}$ & Parameters after meta training and few-shot learning \\
$\bm{w}^{i,k}$ & Model parameters $\bm{w}^{i}$ at step $k$ on task $i$ in inner loop \\
$\bm{\alpha}$, $\beta$ & Inner and outer step sizes \\
$\bm{g}(\cdot)$ & Loss gradients w.r.t. the given tensor \\
$\sigma(\cdot)$, $\sigma'(\cdot)$ & Non-linear function and its derivative\\
$m(\cdot)$ & memory consumption of the given tensor in words \\
\bottomrule
\end{tabular}
\end{table}
Given the dataset $\mathcal{D}=\{\mathcal{S}, \mathcal{Q}\}$ of an unseen few-shot task, where $\mathcal{S}$ (support set) and $\mathcal{Q}$ (query set) are for training and testing, MAML trains a model $F(\bm{w})$ with weights $\bm{w}$ such that it yields high accuracy on $\mathcal{Q}$ even when $\mathcal{S}$ only contains a few samples.
This is enabled by simulating the few-shot learning experiences over abundant few-shot tasks sampled from a task distribution $p(\mathsf{T})$.
Specifically, it meta-trains a backbone $F$ over few-shot tasks $\mathsf{T}^{i}\sim p(\mathsf{T})$, where each $\mathsf{T}^{i}$ has dataset $\mathcal{D}^i=\{\mathcal{S}^i,\mathcal{Q}^i\}$, and then generates $F(\bm{w}^\mathrm{meta})$, an initialization for the unseen few-shot task $\mathsf{T}^{\mathrm{new}}$ with dataset $\mathcal{D}^{\mathrm{new}}=\{\mathcal{S}^{\mathrm{new}},\mathcal{Q}^{\mathrm{new}}\}$.
Training from $F(\bm{w}^\mathrm{meta})$ over $\mathcal{S}^{\mathrm{new}}$ is expected to achieve a high test accuracy on $\mathcal{Q}^{\mathrm{new}}$.
MAML achieves fast learning via two-tier optimization.
In the \textit{inner loop}, a task $\mathsf{T}^i$ and its dataset $\mathcal{D}^i$ are sampled.
The weights $\bm{w}$ are updated to $\bm{w}^{i}$ on support dataset $\mathcal{S}^{i}$ via $K$ gradient descent steps, where $K$ is usually small, compared to vanilla training:
\begin{equation}\label{ch4-eq:maml_inner}
\bm{w}^{i,k} = \bm{w}^{i,k-1}-\alpha\nabla_{\bm{w}}~\ell\left(\bm{w}^{i,k-1};\mathcal{S}^i\right)\quad\mathrm{for}~k=1,...,K
\end{equation}
where $\bm{w}^{i,k}$ are the weights at step $k$ in the inner loop, and $\alpha$ is the inner step size.
Note that $\bm{w}^{i,0}=\bm{w}$ and $\bm{w}^{i}=\bm{w}^{i,K}$.
$\ell(\bm{w};\mathcal{D})$ is the loss function on dataset $\mathcal{D}$.
In the \textit{outer loop}, the weights are optimized to minimize the sum of loss at $\bm{w}^{i}$ on query dataset $\mathcal{Q}^i$ across tasks.
The gradients to update weights in the outer loop are calculated w.r.t. the starting point $\bm{w}$ of the inner loop.
\begin{equation}\label{ch4-eq:maml_outer}
\bm{w} \leftarrow \bm{w}-\beta \nabla_{\bm{w}} \sum_i \ell\left(\bm{w}^{i};\mathcal{Q}^i\right)
\end{equation}
where $\beta$ is the outer step size.
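To make the two-tier optimization of \equref{ch4-eq:maml_inner} and \equref{ch4-eq:maml_outer} concrete, the following PyTorch-style sketch implements one outer-loop step. It is a minimal first-order variant for brevity (full MAML backpropagates through the inner loop); \texttt{model\_fn}, the parameter-list interface, and all hyperparameter values are illustrative assumptions rather than an actual implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def maml_meta_step(model_fn, w, tasks, alpha=0.01, beta=0.001, K=5):
    # One outer-loop step (first-order approximation).
    # model_fn(params, x): runs the backbone with an explicit list of
    # parameter tensors; tasks: list of ((x_s, y_s), (x_q, y_q)) pairs.
    meta_grads = [torch.zeros_like(p) for p in w]
    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop: K SGD steps on the support set.
        w_i = [p.detach().clone().requires_grad_(True) for p in w]
        for _ in range(K):
            loss = F.cross_entropy(model_fn(w_i, x_s), y_s)
            grads = torch.autograd.grad(loss, w_i)
            w_i = [(p - alpha * g).detach().requires_grad_(True)
                   for p, g in zip(w_i, grads)]
        # Outer loop: accumulate query-set gradients at the adapted
        # weights; first-order MAML uses them as gradients w.r.t. w.
        q_loss = F.cross_entropy(model_fn(w_i, x_q), y_q)
        for mg, g in zip(meta_grads, torch.autograd.grad(q_loss, w_i)):
            mg += g
    # Outer update on the initialization.
    return [(p - beta * g).detach() for p, g in zip(w, meta_grads)]
\end{verbatim}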
\begin{table}[t]
\centering
\caption[Memory and total computation of inference and training in example few-shot learning.]{Memory and total computation (GFLOPs $=10^9$FLOPs) of inference and training in example few-shot learning. For image classification (``4Conv on MiniImageNet'' and ``ResNet12 on MiniImageNet''), we use batch size $=25$, i.e.,\xspace 5-way 5-shot. For robot locomotion (``MLP'' on MuJoCo), we use rollouts $=20$, horizon $=200$; each sample corresponds to a rolled-out episode, and the case for a single observation is reported in brackets. The calculation is based on \secref{ch4-sec:analysis}.}
\label{ch4-tab:examples}
\footnotesize
\begin{tabular}{lccc}
\toprule
\multirow{2}{*}{Benchmark} & 4Conv & ResNet12 & MLP \\
& MiniImageNet & MiniImageNet & MuJoCo \\ \midrule
Model Static Storage (MB) & $0.13$ & $32.0$ & $0.05$ \\
Sample Static Storage (MB) & $0.53$ & $0.53$ & $0.016 (0.00008)$ \\ \hdashline
Inference Peak Memory (MB) & $0.90$ & $3.61$ & $0.08 (0.0004)$ \\
Training Peak Memory (MB) & $48.33$ & $370.44$ & $3.72$ \\ \hdashline
Inference GFLOPs & $0.72$ & $62.08$ & $0.05$ \\
Training GFLOPs & $1.96$ & $185.42$ & $0.15$ \\
\bottomrule
\end{tabular}
\end{table}
The meta-trained weights $\bm{w}^\mathrm{meta}$ are then used as initialization for few-shot learning into $\bm{w}^\mathrm{new}$ by $K$ gradient descent steps over $\mathcal{S}^{\mathrm{new}}$.
Finally we assess the accuracy of $F(\bm{w}^\mathrm{new})$ on $\mathcal{Q}^{\mathrm{new}}$.
\subsection{Memory Bottleneck of On-Device Learning}
\label{ch4-sec:preliminary_memory}
As mentioned above, the meta-trained model $F(\bm{w}^\mathrm{meta})$ can learn unseen tasks via $K$ gradient descent steps.
Each step is the same as the inner loop of meta-training \equref{ch4-eq:maml_inner}, but on dataset $\mathcal{S}^{\mathrm{new}}$.
\begin{equation}\label{ch4-eq:fsl}
\bm{w}^{\mathrm{new},k} = \bm{w}^{\mathrm{new},k-1}-\alpha\nabla_{\bm{w}^{\mathrm{new}}}~\ell\left(\bm{w}^{\mathrm{new},k-1};\mathcal{S}^{\mathrm{new}}\right)
\end{equation}
where $\bm{w}^{\mathrm{new},0}=\bm{w}^\mathrm{meta}$.
For brevity, we omit the superscripts of model adaptation in \equref{ch4-eq:fsl} and use $\bm{g}(\cdot)$ to denote the loss gradients w.r.t. the given tensor.
Hence, without ambiguity, we simplify the notations of \equref{ch4-eq:fsl} as follows:
\begin{equation}\label{ch4-eq:fsl_s}
\bm{w} \leftarrow \bm{w}-\alpha \bm{g}(\bm{w})
\end{equation}
Let us now understand where the main memory cost for iterating \equref{ch4-eq:fsl_s} comes from.
For the sake of clarity, we focus on feed-forward DNNs that consist of $L$ convolutional (\texttt{conv}) layers or fully-connected (\texttt{fc}) layers.
A typical layer (see \figref{ch4-fig:layer}) consists of two operations: (\textit{i}) a linear operation with trainable parameters, e.g.,\xspace convolution or affine; (\textit{ii}) a parameter-free non-linear operation, where we consider max-pooling or ReLU-styled (ReLU, LeakyReLU) activation functions in this chapter.
Note that the non-linear operation unit may not exist in some layers; some layers may also have more than one non-linear unit (e.g.,\xspace both max-pooling and a ReLU activation function), and all corresponding intermediate tensors should be stored.
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{./figs/ch4/typical_layer.png}
\caption[A typical layer in DNNs.]{A typical layer $l$ in DNNs. $\bm{x}_{l-1}$ is the input tensor; $\bm{x}_l$ is the output tensor, also the input tensor of layer $l+1$; $\bm{y}_l$ is the intermediate tensor; $\bm{w}_l$ is the weight tensor. }
\label{ch4-fig:layer}
\end{figure}
Take a network consisting of \texttt{conv} layers only as an example.
The memory requirements for storing the activations $\bm{x}_l\in\mathbb{R}^{C_l\times H_l \times W_l}$ as well as the convolution weights $\bm{w}_l \in \mathbb{R}^{C_l\times C_{l-1}\times S_l \times S_l} $ of layer $l$ in words can be determined as
\[
m(\bm{x}_l)= C_l H_l W_l \; , \; \; m(\bm{w}_l) = C_l C_{l-1} S_l^2
\]
where $C_{l-1}$, $C_l$, $H_l$, and $W_l$ stand for input channel number, output channel number, height and width of layer $l$, respectively; $S_l$ stands for the kernel size.
The detailed memory and computation demand analysis provided in \secref{ch4-sec:analysis} reveals that by far the largest memory requirement is neither attributed to determining the activations $\bm{x}_l$ in the forward path nor to determining the gradients of the activations $\bm{g}(\bm{x}_l)$ in the backward path.
Instead, the memory bottleneck lies in the computation of the weight gradients $\bm{g}(\bm{w}_l)$, which requires the availability of the activations $\bm{x}_{l-1}$ from the forward path.
Following \equref{ch4-eq:memorySimple} in \secref{ch4-sec:analysis}, the necessary memory in words is
\begin{equation}
\label{ch4-eq:MemoryContrib}
\sum_{1 \leq l \leq L} m(\bm{x}_{l-1})
\end{equation}
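As a quick illustration of this term, the following sketch sums the stored input activations of a small \texttt{conv} backbone. The layer shapes are illustrative assumptions loosely modeled on a 4Conv-style network with $84\times84$ inputs; they are not the exact configurations behind \tabref{ch4-tab:examples}, and all other memory contributions are ignored.
\begin{verbatim}
def weight_grad_activation_memory(layer_inputs, batch=25, word_bytes=4):
    # sum_l m(x_{l-1}): activations kept for the weight gradients,
    # scaled by the batch size and the bytes per word.
    words = sum(C * H * W for C, H, W in layer_inputs)
    return batch * words * word_bytes

# Illustrative per-layer input shapes (C, H, W) of a 4Conv-style net.
conv4_inputs = [(3, 84, 84), (32, 42, 42), (32, 21, 21), (32, 10, 10)]
print(weight_grad_activation_memory(conv4_inputs) / 2**20, "MB")
\end{verbatim}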
\tabref{ch4-tab:examples} summarizes the peak memory and the total computation of the commonly used few-shot learning backbone models \cite{bib:ICML17:Finn, bib:NIPS18:Oreshkin}.
The requirements are based on the detailed analysis in \secref{ch4-sec:analysis}.
We can draw two intermediate conclusions.
\begin{itemize}
\item
The total computation of training is approximately $2.7\times$ to $3\times$ larger than that of inference.
Yet the peak memory of training is far larger: $47 \times$ to $103 \times$ that of inference.
\item
To enable training on memory-constrained IoT devices, we need to eliminate the major dynamic memory contribution in \equref{ch4-eq:MemoryContrib}.
\end{itemize}
\section{p-Meta}
\label{ch4-sec:method}
This section presents p-Meta\xspace, a new meta learning scheme that enables memory-efficient few-shot learning on unseen tasks.
p-Meta\xspace is a novel meta training algorithm that not only learns the weights of the initialized backbone but also learns to identify adaptation-critical weights for memory-efficient few-shot learning.
\subsection{p-Meta\xspace Overview}
\label{ch4-sec:overview}
We first provide an overview of p-Meta\xspace and introduce its main concepts, namely selecting critical gradients, using a hierarchical approach to determine adaptation-critical layers and channels, and using a mixture of static and dynamic selection mechanisms.
We impose \textit{structured sparsity} on the \textit{gradients} $\bm{g}(\bm{w}_l)$ such that the corresponding tensor dimensions of $\bm{x}_l$ do not need to be saved.
There are other options to reduce the dominant memory demand in \equref{ch4-eq:MemoryContrib}.
They are inapplicable for the reasons below.
\begin{itemize}
\item
One may trade off computation against memory by recomputing the activations $\bm{x}_{l-1}$ when they are needed for determining $\bm{g}(\bm{w}_l)$, see for example \cite{bib:arXiv16:Chen, bib:NIPS16:Gruslys}.
Due to the limited processing abilities of IoT devices, we exclude this option.
\item
It is also possible to prune activations $\bm{x}_{l-1}$.
Yet based on our experimental results in \tabref{ch4-tab:sparse}, imposing sparsity on $\bm{x}_{l-1}$ hugely degrades few-shot learning accuracy as this causes error accumulation along the propagation, see also \cite{bib:NIPS20:Raihan}.
\item
Note that unstructured sparsity, as proposed in \cite{bib:CIKM21:Gao, bib:NIPS21:Oswald}, does not in general lead to memory savings, since there is a very small probability that all weight gradients for which an element of $\bm{x}_{l-1}$ is necessary have been pruned.
Furthermore, their weight selection is fixed after meta training, whereas p-Meta\xspace allows dynamic weight selection when few-shot learning on different tasks. Such runtime weight selection is essential for few-shot model training.
\end{itemize}
\noindent
We impose sparsity on the gradients in a hierarchical manner.
\begin{itemize}
\item
\underline{Selecting Adaptation-Critical Layers:}
We first impose layer-by-layer sparsity on $\bm{g}(\bm{w}_l)$.
It is motivated by previous results showing that manual freezing of certain layers does no harm to few-shot learning accuracy \cite{bib:ICLR20:Raghu,bib:ICLR21:Oh}.
Layer-wise sparsity reduces the number of layers whose weights need to be updated.
We determine the adaptation-critical layers from the meta-trained \textit{layer-wise sparse learning rates}.
\item
\underline{Selecting Adaptation-Critical Channels:}
In addition to the layer-wise sparsity of weight gradients,
we further reduce the memory demand by imposing sparsity on $\bm{g}(\bm{w}_l)$ within each layer. Noting that calculating $\bm{g}(\bm{w}_l)$ requires both the input channels $\bm{x}_{l-1}$ and the output channels $\bm{g}(\bm{y}_{l})$, we enforce sparsity on both of them.
Input channel sparsity decreases memory and computation overhead, whereas output channel sparsity improves few-shot learning accuracy and reduces computation.
We design a novel \textit{meta attention mechanism} to \textit{dynamically} determine adaptation-critical channels.
The attention modules take $\bm{x}_{l-1}$ and $\bm{g}(\bm{y}_{l})$ as inputs and determine the adaptation-critical channels during few-shot learning, based on the given data samples from new unseen tasks.
Dynamic channel-wise learning rates as determined by meta attention yield a significantly higher accuracy than a static channel-wise learning rate (see \secref{ch4-sec:ablation}).
\end{itemize}
\fakeparagraph{Memory Reduction}
The reduced memory demand due to our hierarchical approach can be formulated at a high level as,
\begin{equation}
\sum_{1 \leq l \leq L} \hat{\alpha}_l \mu^{\mathrm{fw}}_l m(\bm{x}_{l-1})
\end{equation}
where $\hat{\alpha}_l \in \{ 0, 1\}$ is the mask from the static selection of adaptation-critical layers and $0 \leq \mu^{\mathrm{fw}}_l \leq 1$ denotes the relative amount of dynamically chosen adaptation-critical input channels.
A more detailed analysis of memory demands can be found in \secref{ch4-sec:analysis}.
Next, we explain how p-Meta\xspace selects adaptation-critical layers (\secref{ch4-sec:LR}) and channels within layers (\secref{ch4-sec:attention}) as well as the deployment optimizations (\secref{ch4-sec:others}) for memory-efficient training.
\subsection{Selecting Adaptation-Critical Layers by Learning Sparse Inner Step Sizes}
\label{ch4-sec:LR}
This subsection introduces how p-Meta\xspace meta-learns adaptation-critical layers to reduce the number of updated layers during few-shot learning.
Particularly, instead of manual configuration as in \cite{bib:ICLR21:Oh, bib:ICLR20:Raghu}, we propose to automate the layer selection process.
During meta training, we identify adaptation-critical layers by learning layer-wise sparse inner step sizes (\secref{ch4-sec:LR_ml}).
Only these critical layers with nonzero step sizes will be updated during on-device learning to new tasks (\secref{ch4-sec:LR_fsl}).
\subsubsection{Learning Sparse Inner Step Sizes in Meta Training}
\label{ch4-sec:LR_ml}
Prior work \cite{bib:ICLR19:Antreas} suggests that instead of a global fixed inner step size $\alpha$, learning the inner step sizes $\bm{\alpha}$ for each layer and each gradient descent step improves the generalization of meta learning, where $\bm{\alpha} = \alpha_{1:L}^{1:K} \succeq \bm{0}$.
We utilize such learned inner step sizes to infer layer importance for adaptation.
We learn the inner step sizes $\bm{\alpha}$ in the outer loop of meta-training while fixing them in the inner loop.
\fakeparagraph{Learning Layer-wise Inner Step Sizes}
We change the inner loop of \equref{ch4-eq:maml_inner} to incorporate the per-layer inner step sizes:
\begin{equation}\label{ch4-eq:inner_alpha}
\bm{w}_l^{i,k} = \bm{w}_l^{i,k-1}-\alpha^k_l\nabla_{\bm{w}_l}~\ell\left(\bm{w}^{i,k-1}_{1:L};\mathcal{S}^i\right)
\end{equation}
where $\bm{w}_l^{i,k}$ is the weights of layer $l$ at step $k$ optimized on task $i$ (dataset $\mathcal{S}^{i}$).
In the outer loop, weights $\bm{w}$ are still optimized as
\begin{equation}\label{ch4-eq:outer_alpha}
\bm{w} \leftarrow \bm{w}-\beta \nabla_{\bm{w}} \sum_i \ell\left(\bm{w}^{i};\mathcal{Q}^i\right)
\end{equation}
where $\bm{w}^{i}=\bm{w}^{i,K}=\bm{w}^{i,K}_{1:L}$, which is a function of $\bm{\alpha}$.
The inner step sizes $\bm{\alpha}$ are then optimized as
\begin{equation}\label{ch4-eq:alpha}
\bm{\alpha} \leftarrow \bm{\alpha}-\beta \nabla_{\bm{\alpha}} \sum_i \ell\left(\bm{w}^{i};\mathcal{Q}^i\right)
\end{equation}
\fakeparagraph{Imposing Sparsity on Inner Step Sizes}
To facilitate layer selection, we enforce sparsity in $\bm{\alpha}$, i.e.,\xspace encouraging a subset of layers to be selected for updating.
Specifically, we add a Lasso regularization term in the loss function of \equref{ch4-eq:alpha} when optimizing $\bm{\alpha}$.
Hence, the final optimization of $\bm{\alpha}$ in the outer loop is formulated as
\begin{equation}\label{ch4-eq:regularization}
\bm{\alpha} \leftarrow \bm{\alpha}-\beta \nabla_{\bm{\alpha}} (\sum_i \ell\left(\bm{w}^{i};\mathcal{Q}^i\right)+\lambda \sum_{l,k} m(\bm{x}_{l-1})\cdot|\alpha_l^k|) \
\end{equation}
where $\lambda$ is a positive scalar to control the ratio between two terms in the loss function.
We empirically set $\lambda=0.001$.
$|\alpha_l^k|$ is re-weighted by $m(\bm{x}_{l-1})$, which denotes the necessary memory in \equref{ch4-eq:MemoryContrib} if only updating the weights in layer $l$.
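A minimal sketch of this regularized outer-loop update is given below, assuming the inner step sizes are stored in a single $(K, L)$ tensor; the clamping used to keep $\bm{\alpha} \succeq \bm{0}$ is a simplification for illustration.
\begin{verbatim}
import torch

def alpha_outer_update(alpha, outer_loss, act_mem, beta=1e-3, lam=1e-3):
    # alpha: (K, L) tensor with requires_grad=True; outer_loss: summed
    # query loss whose graph depends on alpha; act_mem: (L,) tensor
    # holding m(x_{l-1}) in words for each layer l.
    loss = outer_loss + lam * (act_mem * alpha.abs()).sum()
    (grad,) = torch.autograd.grad(loss, alpha)
    with torch.no_grad():
        alpha -= beta * grad
        alpha.clamp_(min=0.0)  # simple projection to keep alpha >= 0
    return alpha
\end{verbatim}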
\subsubsection{Exploiting Sparse Inner Step Sizes for On-Device Learning}
\label{ch4-sec:LR_fsl}
We now explain how to apply the learned $\bm{\alpha}$ to save memory during on-device learning.
After deploying the meta-trained model to IoT devices for few-shot learning, at updating step $k$, for layers with $\alpha_l^k=0$, the activations (i.e.,\xspace their inputs) $\bm{x}_{l-1}$ need not be stored, see \equref{ch4-eq:memoryAll} and \equref{ch4-eq:memorySimple} in \secref{ch4-sec:analysis}.
In addition, we do not need to calculate the corresponding weight gradients $\bm{g}(\bm{w}_l)$, which saves computation, see \equref{ch4-eq:MAC} in \secref{ch4-sec:analysis}.
\subsection{Selecting Adaptation-Critical Channels within Layers via Sparse Meta Attention}
\label{ch4-sec:attention}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch4/metaatten.png}
\caption[Meta attention during meta-training.]{Meta attention of layer $l$ during meta-training. The blue blocks correspond to tensors; the orange blocks correspond to computation units with parameters, and the green ones without. Each column of a tensor corresponds to one channel. The input tensor $\bm{x}_{l-1}$ has 4 channels; the output tensor $\bm{y}_{l}$ has 6 channels. The other dimensions (e.g.,\xspace height, width and batch) are omitted here. The green block with $*$ stands for the operations involved in computing $\bm{g}(\bm{w}_l)$. In order to compute the gradients of the parameters in meta attention, i.e.,\xspace $\bm{w}_l^{\mathrm{fw}}$ and $\bm{w}_l^{\mathrm{bw}}$, the full dense gradients $\bm{g}(\bm{w}_l)$ are computed during meta-training and are then masked by $\bm{\gamma}_l$. An example meta attention module for a \texttt{conv} layer is shown in the upper part. $B$ denotes the batch size. The blocks newly added relative to the inference attention in \cite{bib:CVPR20:Chen} are marked with solid lines.}
\label{ch4-fig:metaattention}
\end{figure}
This subsection explains how p-Meta\xspace learns a novel meta attention mechanism in each layer to dynamically select adaptation-critical channels for further memory saving in few-shot learning.
Despite the widespread adoption of channel-wise attention for inference \cite{bib:CVPR18:Hu, bib:CVPR20:Chen}, we make the first attempt to use attention for memory-efficient training (few-shot learning in our case).
For each layer, its meta attention outputs a dynamic channel-wise sparse attention score based on the samples from new tasks.
The sparse attention score is used to re-weight (also sparsify) the weight gradients.
Therefore, by calculating only the nonzero gradients of critical weights within a layer, we can save both memory and computation.
We first present our meta attention mechanism during meta training (\secref{ch4-sec:attention_ml}) and then show its usage for on-device model training (\secref{ch4-sec:attention_fsl}).
\subsubsection{Learning Sparse Meta Attention in Meta Training}
\label{ch4-sec:attention_ml}
Since mainstream backbones in meta learning use small kernel sizes (1 or 3), we design the meta attention mechanism channel-wise.
\figref{ch4-fig:metaattention} illustrates the attention design during meta-training.
\fakeparagraph{Learning Meta Attention}
The attention mechanism is as follows.
\begin{itemize}
\item
We assign an attention score to the weight gradients of layer $l$ in the inner loop of meta training.
The attention scores are expected to indicate which weights/channels are important and thus should be updated in layer $l$.
\item
The attention score is obtained from two attention modules: one taking $\bm{x}_{l-1}$ as input in the forward pass, and the other taking $\bm{g}(\bm{y}_l)$ as input during the backward pass.
We use $\bm{x}_{l-1}$ and $\bm{g}(\bm{y}_l)$ to calculate the attention scores because they are used to compute the weight gradients $\bm{g}(\bm{w}_{l})$.
\end{itemize}
Concretely, we define the forward and backward attention scores for a \texttt{conv} layer as,
\begin{equation}\label{ch4-eq:atten_fw}
\bm{\gamma}^{\mathrm{fw}}_{l} = h(\bm{w}^{\mathrm{fw}}_l;\bm{x}_{l-1})\in\mathbb{R}^{C_{l-1}\times 1 \times 1}
\end{equation}
\begin{equation}\label{ch4-eq:atten_bw}
\bm{\gamma}^{\mathrm{bw}}_{l} = h(\bm{w}^{\mathrm{bw}}_l;\bm{g}(\bm{y}_l))\in\mathbb{R}^{C_{l}\times 1 \times 1}
\end{equation}
where $h(\cdot;\cdot)$ stands for the meta attention module, and $\bm{w}^{\mathrm{fw}}_l$ and $\bm{w}^{\mathrm{bw}}_l$ are the parameters of the meta attention modules.
The overall (sparse) attention scores $\bm{\gamma}_l\in\mathbb{R}^{C_{l}\times C_{l-1} \times 1 \times 1}$ are computed as,
\begin{equation}\label{ch4-eq:atten_fwbw}
\gamma_{l,ba11} = \gamma^{\mathrm{fw}}_{l,a11} \cdot \gamma^{\mathrm{bw}}_{l,b11}
\end{equation}
In the inner loop, for layer $l$, step $k$ and task $i$, $\bm{\gamma}_l$ is (broadcasting) multiplied with the dense weight gradients to get the sparse ones,
\begin{equation}\label{ch4-eq:inner_incr}
\bm{\gamma}_l^{i,k}\odot\nabla_{\bm{w}_l}~\ell\left(\bm{w}^{i,k-1}_{1:L};\mathcal{S}^i\right)
\end{equation}
The weights are then updated by,
\begin{equation}\label{ch4-eq:inner_atten}
\bm{w}_l^{i,k} = \bm{w}_l^{i,k-1}-\alpha^k_l(\bm{\gamma}_l^{i,k}\odot\nabla_{\bm{w}_l}~\ell\left(\bm{w}^{i,k-1}_{1:L};\mathcal{S}^i\right))
\end{equation}
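For a single \texttt{conv} layer, the masked update of \equref{ch4-eq:inner_atten} amounts to an outer product of the two channel-wise score vectors broadcast over the kernel, as in the following sketch (with dense gradients, as in meta-training; all names are illustrative).
\begin{verbatim}
import torch

def attended_inner_update(w, grad_w, gamma_fw, gamma_bw, alpha):
    # w, grad_w: (C_out, C_in, S, S); gamma_fw: (C_in,) forward scores;
    # gamma_bw: (C_out,) backward scores. Channels with zero score
    # receive no update (and, on-device, are never stored or computed).
    gamma = gamma_bw.view(-1, 1, 1, 1) * gamma_fw.view(1, -1, 1, 1)
    return w - alpha * gamma * grad_w  # broadcast over the S x S kernel
\end{verbatim}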
Let all attention parameters be $\bm{w}^{\mathrm{atten}} = \{\bm{w}^{\mathrm{fw}}_l,\bm{w}^{\mathrm{bw}}_l\}_{l=1}^{L}$.
The attention parameters $\bm{w}^{\mathrm{atten}}$ are optimized in the outer loop as,
\begin{equation}\label{ch4-eq:outer_atten}
\bm{w}^{\mathrm{atten}} \leftarrow \bm{w}^{\mathrm{atten}}-\beta \nabla_{\bm{w}^{\mathrm{atten}}} \sum_i \ell\left(\bm{w}^{i};\mathcal{Q}^i\right)
\end{equation}
Note that we use a dense forward path and a dense backward path in both meta-training and on-device learning, as shown in \figref{ch4-fig:metaattention}.
That is, the attention scores $\bm{\gamma}^{\mathrm{fw}}_{l}$ and $\bm{\gamma}^{\mathrm{bw}}_{l}$ are only calculated locally and will not affect $\bm{y}_l$ during forward and $\bm{g}(\bm{x}_{l-1})$ during backward.
Based on our experimental results in \tabref{ch4-tab:sparse}, using either sparse $\bm{x}_{l-1}$ during forward or sparse $\bm{g}(\bm{y}_l)$ during backward will cause a dramatic performance degradation.
\begin{algorithm}[t]
\caption{Clip and normalization}\label{ch4-alg:clip}
\KwIn{softmax output (normalized) $\bm{\pi}\in\mathbb{R}^{C}$, clip ratio $\rho$}
\KwOut{sparse $\bm{\gamma}$}
Sort $\bm{\pi}$ in ascending order and get sorted indices $d_{1:C}$\;
Find the smallest $c$ such that $\sum_{i=1}^c\pi_{d_i}\ge\rho$\;
Set $\pi_{d_1:d_c}$ as 0 \tcp*{if $\rho=0$, do nothing}
Normalize $\bm{\gamma} = \bm{\pi}/\sum\bm{\pi}$\;
Re-scale $\bm{\gamma} = \bm{\gamma}\cdot C$ \tcp*{keeping step sizes' magnitude}
\end{algorithm}
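A plain Python transcription of \algoref{ch4-alg:clip} could look as follows (assuming $0 \leq \rho < 1$); note that the clipping itself is not differentiable and is therefore wrapped with a straight-through estimator during meta-training.
\begin{verbatim}
import torch

def clip_and_normalize(pi, rho):
    # pi: softmax output over C channels (sums to 1); rho in [0, 1).
    gamma = pi.clone()
    if rho > 0:
        vals, idx = torch.sort(pi)  # ascending order
        # smallest c such that the cumulative sum reaches rho
        c = int((torch.cumsum(vals, 0) < rho).sum()) + 1
        gamma[idx[:c]] = 0.0        # zero out the c smallest scores
    gamma = gamma / gamma.sum()     # re-normalize
    return gamma * pi.numel()       # re-scale by C to keep magnitude
\end{verbatim}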
\begin{algorithm}[t]
\caption{p-Meta}\label{ch4-alg:pMeta}
\KwIn{meta-training task distribution $p(\mathsf{T})$, backbone $F$ with initial weights $\bm{w}$, meta attention parameters $\bm{w}^{\mathrm{atten}}$, inner step sizes $\bm{\alpha}$, outer step sizes $\beta$}
\KwOut{meta-trained weights $\bm{w}$, meta-trained meta attention parameters $\bm{w}^{\mathrm{atten}}$, meta-trained sparse inner step sizes $\bm{\alpha}$}
\While {not done} {
Sample a batch of $I$ tasks $\mathsf{T}^i\sim p(\mathsf{T})$\;
\For {$i \leftarrow 1$ \KwTo $I$} {
Update $\bm{w}^i$ in $K$ gradient descent steps with \eqref{ch4-eq:inner_atten}\;
}
Update $\bm{w}$ with \eqref{ch4-eq:outer_alpha}\;
Update inner step sizes $\bm{\alpha}$ with \eqref{ch4-eq:regularization}\;
Update attention parameters $\bm{w}^{\mathrm{atten}}$ with \equref{ch4-eq:outer_atten}\;
}
\end{algorithm}
\fakeparagraph{Meta Attention Module Design}
\figref{ch4-fig:metaattention} (upper part) shows an example meta attention module.
We adapt the inference attention modules used in \cite{bib:CVPR18:Hu, bib:CVPR20:Chen}, yet with the following modifications.
\begin{itemize}
\item
Unlike inference attention that applies to a single sample, training may calculate the averaged loss gradients based on a batch of samples.
Since $\bm{g}(\bm{w}_l)$ does not have a batch dimension, the input to the softmax function is first averaged over the batch, see \figref{ch4-fig:metaattention}.
\item
We enforce sparsity on the meta attention scores such that they can be utilized to save memory and computation in few-shot learning.
The original attention in \cite{bib:CVPR18:Hu, bib:CVPR20:Chen} outputs normalized scales in $[0,1]$ from softmax.
We clip the output with a clip ratio $\rho\in[0,1]$ to create zeros in $\bm{\gamma}$.
This way, our meta attention modules yield batch-averaged sparse attention scores $\bm{\gamma}^{\mathrm{fw}}_{l}$ and $\bm{\gamma}^{\mathrm{bw}}_{l}$.
\algoref{ch4-alg:clip} shows this clipping and re-normalization process.
Note that \algoref{ch4-alg:clip} is not differentiable.
Hence, we use the straight-through estimator for its backward propagation in meta training (a sketch of a complete attention module follows this list).
\end{itemize}
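The following sketch illustrates one possible forward meta attention module along these lines; the squeeze-and-excitation-style layout (spatial pooling, a small bottleneck, batch averaging, softmax, then clipping via the \texttt{clip\_and\_normalize} sketch above) is an assumption for illustration, and the exact design follows \figref{ch4-fig:metaattention}.
\begin{verbatim}
import torch
import torch.nn as nn

class MetaAttention(nn.Module):
    # Forward meta attention: maps x_{l-1} of shape (B, C, H, W) to a
    # sparse per-channel score vector of shape (C,).
    def __init__(self, channels, reduction=4, rho=0.3):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                nn.Linear(hidden, channels))
        self.rho = rho

    def forward(self, x):
        s = x.mean(dim=(2, 3))           # squeeze spatial dimensions
        logits = self.fc(s).mean(dim=0)  # average over the batch
        pi = torch.softmax(logits, dim=0)
        return clip_and_normalize(pi, self.rho)  # see sketch above
\end{verbatim}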
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch4/metaatten_fewshot.png}
\caption[Meta attention during on-device few-shot learning.]{Meta attention of layer $l$ during on-device few-shot learning. Note that the ``Forward'' and ``Backward'' parts are the same as in \figref{ch4-fig:metaattention} and are omitted for simplicity. Meta attention modules are not optimized during few-shot learning, and are thus expressed as parameter-free functions $h^{\mathrm{fw}}$ and $h^{\mathrm{bw}}$. The input $\bm{x}_{l-1}$ stored during the forward path is a sparse re-weighted tensor.}
\label{ch4-fig:metaattention_fewshot}
\end{figure}
\subsubsection{Exploiting Meta Attention for On-Device Learning}
\label{ch4-sec:attention_fsl}
We now explain how to apply the meta attention to save memory during on-device few-shot learning.
Note that the parameters in the meta attention modules are fixed during few-shot learning.
Assume that at step $k$, layer $l$ has a nonzero step size $\alpha_l^k$.
In the forward pass, we only store a sparse tensor $\bm{\gamma}_l^{\mathrm{fw}}\cdot\bm{x}_{l-1}$, i.e.,\xspace its channels are stored only if they correspond to nonzero entries in $\bm{\gamma}_l^{\mathrm{fw}}$.
This reduces memory consumption as shown in \equref{ch4-eq:memorySimple} in \secref{ch4-sec:analysis}.
Similarly, in the backward pass, we get a channel-wise sparse tensor $\bm{\gamma}_l^{\mathrm{bw}}\cdot\bm{g}(\bm{y}_{l})$.
Since both sparse tensors are used to calculate the corresponding nonzero gradients in $\bm{g}(\bm{w}_l)$, the computation cost is also reduced, see \equref{ch4-eq:MAC} in \secref{ch4-sec:analysis}.
We plot the meta attention during on-device learning in \figref{ch4-fig:metaattention_fewshot}.
\subsection{Summary of p-Meta\xspace}
\label{ch4-sec:overall_alg}
\algoref{ch4-alg:pMeta} shows the overall process of p-Meta\xspace during meta-training.
The final meta-trained weights $\bm{w}$ from \algoref{ch4-alg:pMeta} are assigned to $\bm{w}^{\mathrm{meta}}$, see \secref{ch4-sec:preliminary_meta}.
The meta-trained backbone model $F(\bm{w}^{\mathrm{meta}})$, the sparse inner step sizes $\bm{\alpha}$, and the meta attention modules are then deployed on edge devices and used to conduct memory-efficient few-shot learning.
\subsection{Deployment Optimization}
\label{ch4-sec:others}
To further reduce the memory during few-shot learning, we propose gradient accumulation during backpropagation and replace batch normalization in the backbone with group normalization.
\subsubsection{Gradient Accumulation}
\label{ch4-sec:gradacc}
In standard few-shot learning, all the new samples (e.g.,\xspace $25$ for $5$-way $5$-shot) are fed into the model as one batch.
To reduce the peak memory due to large batch sizes, we conduct few-shot learning with gradient accumulation (GA).
GA is a technique that (\textit{i}) breaks a large batch into smaller partial batches; (\textit{ii}) sequentially forward/backward propagates each partial batch through the model; (\textit{iii}) accumulates the loss gradients of the partial batches to obtain the final averaged gradients of the full batch.
Note that GA does not increase computation, which is desired for low-resource platforms with constrained memory and limited parallelism.
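A generic PyTorch sketch of one SGD step with GA is shown below; it assumes a mean-reduction loss and weights each partial-batch loss by its share of the full batch, so that the accumulated gradient equals the full-batch gradient.
\begin{verbatim}
import torch

def sgd_step_with_ga(model, loss_fn, partial_batches, lr):
    # partial_batches: the full batch split into small (x, y) chunks.
    n = sum(len(x) for x, _ in partial_batches)
    model.zero_grad()
    for x, y in partial_batches:
        # weight each mean loss by its share of the full batch
        loss = loss_fn(model(x), y) * (len(x) / n)
        loss.backward()  # gradients accumulate in p.grad
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad
\end{verbatim}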
Accordingly, our meta attention module must also be adapted to GA.
Particularly, the input to the softmax is averaged over all samples in the batch (see \figref{ch4-fig:metaattention}), i.e.,\xspace $\bm{\gamma}^{\mathrm{fw}}_{l}$ and $\bm{\gamma}^{\mathrm{bw}}_{l}$ are the batch-averaged scores.
We evaluate the impact of different sample batch sizes in GA in \secref{ch4-sec:experiment_batchsize}.
\subsubsection{Group Normalization}
\label{ch4-sec:norm}
Mainstream backbones in meta learning typically adopt batch normalization layers.
Batch normalization layers compute the statistical information in each batch, which is dependent on the sample batch size.
When using GA with different sample batch sizes, the inaccurate batch statistics can degrade the training performance (see \secref{ch4-sec:experiment_pool}).
As a remedy, we use group normalization \cite{bib:ECCV18:Wu}, which does not rely on batch statistics (i.e.,\xspace independent of the sample batch size).
We also apply meta attention on group normalization layers when updating their weights.
The only difference w.r.t. \texttt{conv} and \texttt{fc} layers is that the stored input tensor (also the one used for the meta attention) is not $\bm{x}_{l-1}$, but its normalized version.
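As a concrete example, a 4Conv-style block with group normalization could look as follows; the group count is an illustrative choice, not the exact backbone configuration.
\begin{verbatim}
import torch.nn as nn

# One conv block with group normalization in place of batch
# normalization; GroupNorm statistics are per-sample, so they do not
# depend on the (partial) batch size used in gradient accumulation.
block = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.GroupNorm(num_groups=4, num_channels=32),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
\end{verbatim}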
\section{Theoretical Analysis on Memory and Computation}
\label{ch4-sec:analysis}
In this section, we derive the memory requirement and computation workload for inference and training.
We further analyze the reduced consumption of memory and computation due to p-Meta\xspace.
Recall that we focus on a feed forward DNN that consists of $L$ convolutional (\texttt{conv}) layers or fully-connected (\texttt{fc}) layers.
Note that our analysis focuses on 2D \texttt{conv} layers but can apply to other \texttt{conv} layer types as well. We assume the ReLU activation function for all layers, denoted as $\sigma(\cdot)$. For simplicity, we omit the bias, normalization layers, pooling or strides.
We use the notation $m(\bm{x})$ to denote the memory demand in words to store tensor $\bm{x}$. The wordlength is denoted as $\mathit{T}$.
For representing indexed summations we use the Einstein notation. If index variables appear in a term on the right hand side of an equation and are not otherwise defined (free indices), it implies summation of that term over the range of the free indices. If indices of involved tensor elements are out of range, the values of these elements are assumed to be 0.
\subsection{Single Layer}
We start with a single layer and accumulate the memory and computation for networks with several layers afterwards. Assume the input tensor of a layer is $\bm{x}$, the weight tensor is $\bm{w}$, the result after the linear transformation is $\bm{y}$, and the layer output after the non-linear operator is $\bm{z}$ which is also the input to the next layer.
For convolutional layers, we have $\bm{x} \in \mathbb{R}^{C_I \times H_I \times W_I}$ and elements $x_{cij}$, where $C_I$, $H_I$, and $W_I$ denote the number of input channels, height and width, respectively. In a similar way, we have $\bm{z} \in \mathbb{R}^{C_O \times H_O \times W_O}$ with elements $z_{fij}$, where $C_O$, $H_O$, and $W_O$ denote the number of output channels, height and width, respectively. Moreover, $\bm{w} \in \mathbb{R}^{C_O \times C_I \times S \times S}$ with elements $w_{fcmn}$. Therefore,
\[
m(\bm{x})= C_I H_I W_I \; , \; \; m(\bm{y})= m(\bm{z}) = C_O H_O W_O \; , \; \; m(\bm{w}) = C_O C_I S^2
\]
For fully connected layers we have $\bm{x} \in \mathbb{R}^{C_I}$, $\bm{y}, \bm{z} \in \mathbb{R}^{C_O}$, and $\bm{w} \in \mathbb{R}^{C_O \times C_I}$ with memory demand
\[
m(\bm{x})= C_I \; , \; \; m(\bm{y})= m(\bm{z}) = C_O \; , \; \; m(\bm{w}) = C_O C_I
\]
\subsubsection{Fully Connected Layer}
For inference we derive the relations $y_{f} = w_{fc} x_c$ and $z_{f} = \sigma(y_{f})$ for all admissible indices $f \in [1, C_O]$. The necessary dynamic memory has a size of about $m(\bm{x}) + m(\bm{y})$ words and we need about $m(\bm{w})$ FLOPs.
For training, we suppose that $\frac{\partial \ell}{\partial z_i}$ is already provided from the next layer. We find $\frac{\partial \ell}{\partial y_i} = \sigma'(y_i) \cdot \frac{\partial \ell}{\partial z_i}$ with $\sigma'(y_i) = \begin{cases} 1 & \mbox{if } y_i > 0 \\ 0 & \mbox{if } y_i < 0 \end{cases}$ which leads to $\frac{\partial \ell}{\partial x_i} = w_{ji} \cdot \frac{\partial \ell}{\partial y_j}$. The necessary dynamic memory is about $m(\bm{x}) + m(\bm{y}) \cdot (1 + \frac{1}{\mathit{T}})$ words, where the last term comes from storing $\sigma'(y_i)$ single bits from the forward path. We need about $m(\bm{w})$ FLOPs.
According to the approach described in \secref{ch4-sec:method}, we are only interested in the partial derivatives $\frac{\partial \ell}{\partial w_{fc}}$ if $\alpha > 0$ for this layer, and if the scales $\gamma^{\mathrm{bw}}_{f} > 0$ and $\gamma^{\mathrm{fw}}_{c} > 0$ for indices $f$, $c$. To simplify the notation, let us define the critical ratios
\begin{gather}
\mu^{\mathrm{fw}} = \frac{\text{number of nonzero elements of } \gamma^{\mathrm{fw}}_{c}}{C_I} \\
\mu^{\mathrm{bw}} = \frac{\text{number of nonzero elements of } \gamma^{\mathrm{bw}}_{f}}{C_O}
\end{gather}
which are 1 if all channels are determined to be critical for weight adaptation, and 0 if none of them.
We find $\gamma^{\mathrm{bw}}_{f} \frac{\partial \ell}{\partial w_{fc}} \gamma^{\mathrm{fw}}_{c} = (\gamma^{\mathrm{bw}}_{f} \frac{\partial \ell}{\partial y_{f}}) \cdot (\gamma^{\mathrm{fw}}_{c} x_c)$. Therefore, we need $\mu^{\mathrm{fw}} \mu^{\mathrm{bw}} m(\bm{w}) + \mu^{\mathrm{fw}} m(\bm{x})$ words dynamic memory if $\alpha > 0$ where the latter term considers the information needed from the forward path. We require about $\mu^{\mathrm{fw}} \mu^{\mathrm{bw}} m(\bm{w})$ FLOPs if $\alpha > 0$.
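In code, this masked weight gradient of a \texttt{fc} layer is a single outer product of the two masked vectors, e.g.,\xspace as in the following sketch.
\begin{verbatim}
import torch

def fc_weight_grad_masked(g_y, x, gamma_bw, gamma_fw):
    # g_y: (C_O,) loss gradients w.r.t. y; x: (C_I,) stored input;
    # gamma_bw, gamma_fw: channel-wise scores. Only the entries of x
    # with nonzero gamma_fw need to be stored in the forward path.
    return torch.outer(gamma_bw * g_y, gamma_fw * x)  # shape (C_O, C_I)
\end{verbatim}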
\subsubsection{Convolutional Layer}
The memory analysis for a convolutional layer is very similar, just replacing matrix multiplication by convolution. For inference we find $y_{fij} = w_{fcmn} x_{c, i+m-1, j+n-1}$ and $z_{fij} = \sigma(y_{fij})$ for all admissible indices $f$, $i$, $j$. The necessary dynamic memory has a size of about $\max \{ m(\bm{x}), m(\bm{y}) \}$ words when using memory sharing between input and output tensors. We need about $H_O W_O \cdot m(\bm{w})$ FLOPs.
For training, we again suppose that $\frac{\partial \ell}{\partial z_{fij}}$ is provided from the next layer. We find $\frac{\partial \ell}{\partial y_{fij}} = \sigma'(y_{fij}) \cdot \frac{\partial \ell}{\partial z_{fij}}$ and get $\frac{\partial \ell}{\partial x_{cij}} = w_{fcmn} \cdot \frac{\partial \ell}{\partial y_{f, i + m - 1, j + n - 1}}$. The necessary memory is about $\max \{ m(\bm{x}), m(\bm{y}) \} + \frac{ m(\bm{y})}{\mathit{T}}$ words, where the last term comes from storing $\sigma'(y_{fij})$ single bits from the forward path. We need about $H_I W_I \cdot m(\bm{w})$ multiply and accumulate operations.
For determining the weight gradients we find $\frac{\partial \ell}{\partial w_{fcmn}} = \frac{\partial \ell}{\partial y_{fij}} \cdot x_{c, i + m -1, j + n - 1}$. When considering the scales for filtering, we yield $\gamma^{\mathrm{bw}}_{f} \frac{\partial \ell}{\partial w_{fcmn}} \gamma^{\mathrm{fw}}_{c} = (\gamma^{\mathrm{bw}}_{f} \frac{\partial \ell}{\partial y_{fij}}) \cdot (\gamma^{\mathrm{fw}}_{c} x_{c, i + m -1, j + n - 1})$. As a result, we need $\mu^{\mathrm{fw}} \mu^{\mathrm{bw}} m(\bm{w}) + \mu^{\mathrm{fw}} m(\bm{x})$ words of dynamic memory if $\alpha > 0$ where the latter term considers the information needed from the forward path. We require about $\mu^{\mathrm{fw}} \mu^{\mathrm{bw}} H_O W_O m(\bm{w})$ FLOPs if $\alpha > 0$.
Finally, let us determine the required memory and computation for the scales $\gamma^{\mathrm{fw}}_{c}$ and $\gamma^{\mathrm{bw}}_{f}$. According to \figref{ch4-fig:metaattention}, we find an upper bound of $B \cdot ( C_I + C_O)$ words for the memory and $( C_I H_I W_I + 2 C_I^2 + C_O H_O W_O + 2 C_O^2 )$ FLOPs for the computation.
\subsection{All Layers}
The above relations are valid for a single layer. The following relations hold for the overall network. In order to simplify the notation, we consider a network that consists of convolutional layers only; extensions to mixed layer types are straightforward.
We suppose $L$ layers with sizes $C_l$, $H_l$, $W_l$ and $S_l$ for the number of output channels, output height, output width and kernel size, respectively. We assume that the step sizes $\alpha_l$ for some iteration of the training are given. The memory requirement in words is
\[
m(\bm{x}_l)= C_l H_l W_l \; , \; \; m(\bm{w}_l) = C_l C_{l-1} S_l^2
\]
and the word-length is again denoted as $\mathit{T}$.
We define as $\hat{\alpha}_l = \begin{cases} 1 & \mbox{if } \alpha_l > 0 \\ 0 & \mbox{if } \alpha_l = 0 \end{cases}$ the mask that determines whether the weight adaptation for this layer is necessary or not.
Let us first look at the forward path. The necessary dynamic memory is about $\max_{0 \leq l \leq L} \{ m(\bm{x}_l) \}$ words. The number of FLOPs is $\sum_{1 \leq l \leq L} H_l W_l m(\bm{w}_l)$.
The backward path needs only to be evaluated until we reach the first layer where we require the computation of the gradients. We define $l_\mathit{min} = \min \{ l \, | \, \hat{\alpha}_{l} = 1 \}$. For the calculation of the partial derivatives of the activations we need dynamic memory of $\max_{l_{\mathit{min}} \leq l \leq L} \{ m(\bm{x}_l) \} + \frac{1}{\mathit{T}} \sum_{l_\mathit{min} \leq l \leq L} m(\bm{x}_l)$ words where the last term is due to storing the derivatives of the ReLU operations. We need about $\sum_{l_\mathit{min} + 1 \leq l \leq L} H_{l-1} W_{l-1} m(\bm{w}_l) $ FLOPs.
The second contribution of the backward path is for computing the weight gradients. The memory and computation demand of the scales will be neglected as they are much smaller than other contributions. We can determine the necessary dynamic memory as $\max_{1 \leq l \leq L} \{ \hat{\alpha}_l \mu^{\mathrm{fw}}_l \mu^{\mathrm{bw}}_l m(\bm{w}_l)\} + \sum_{1 \leq l \leq L} \hat{\alpha}_l \mu^{\mathrm{fw}}_l m(\bm{x}_{l-1})$, and we need $\sum_{1 \leq l \leq L} \hat{\alpha}_l \mu^{\mathrm{fw}}_l \mu^{\mathrm{bw}}_l H_l W_l m(\bm{w}_l)$ FLOPs.
Considering all necessary dynamic memory with memory reuse for a gradient-based training step, we get an estimation of memory in words
\begin{multline}
\max_{0 \leq l \leq L} \{ m(\bm{x}_l) \} + \sum_{1 \leq l \leq L} \hat{\alpha}_l m(\bm{w}_l) + \sum_{1 \leq l \leq L} \hat{\alpha}_l \mu^{\mathrm{fw}}_l m(\bm{x}_{l-1}) + \frac{1}{\mathit{T}} \sum_{l_\mathit{min} \leq l \leq L} m(\bm{x}_l)
\label{ch4-eq:memoryAll}
\end{multline}
if we accumulate the weight gradients before doing an SGD step and re-use some memory during back-propagation. More elaborate memory re-use can be used to slightly sharpen the bounds without a major improvement.
For conventional training, each parameter is in 32-bit floating point format, i.e.,\xspace one word corresponds to 32 bits. As discussed in \secref{ch4-sec:preliminary_memory}, we only consider max-pooling and ReLU-styled activation as the $\sigma$ function. The wordlength $\mathit{T}$ in \equref{ch4-eq:memoryAll} is set to 16 for max-pooling, and 32 for ReLU-styled activation.
One can see that under the typical assumptions for network parameters, the above memory requirement in words is dominated by
\begin{equation}
\sum_{1 \leq l \leq L} \hat{\alpha}_l \mu^{\mathrm{fw}}_l m(\bm{x}_{l-1})
\label{ch4-eq:memorySimple}
\end{equation}
The necessary storage between the forward and backward path is reduced proportionally to $\mu^{\mathrm{fw}}_l$ with factor $m(\bm{x}_{l-1})$.
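This dominant term can be evaluated directly from per-layer statistics, as in the small sketch below (all inputs are illustrative):
\begin{verbatim}
def masked_activation_memory(acts, a_hat, mu_fw):
    # acts[l] = m(x_l) in words for l = 0..L; a_hat[l-1] and
    # mu_fw[l-1] are the mask and forward critical ratio of layer l.
    return sum(a * mu * acts[l]              # acts[l] is m(x_{l-1})
               for l, (a, mu) in enumerate(zip(a_hat, mu_fw)))
\end{verbatim}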
Finally, the amount of FLOPs can be estimated as
\begin{equation}
\sum_{1 \leq l \leq L} H_l W_l m(\bm{w}_l) ( 1 + \hat{\alpha}_l \mu^{\mathrm{fw}}_l \mu^{\mathrm{bw}}_l) + \sum_{l_\mathit{min} \leq l \leq L} H_{l-1} W_{l-1} m(\bm{w}_l)
\label{ch4-eq:MAC}
\end{equation}
while neglecting lower-order terms. Here it is important to note that all terms are of similar order. The approach used in this chapter does not trade off computation against memory, but reduces the amount of FLOPs; this reduction is smaller than the reduction in required dynamic memory.
\section{Experiments}
\label{ch4-sec:experiment}
This section presents the evaluations of p-Meta\xspace on standard few-shot image classification and reinforcement learning benchmarks.
\subsection{General Experimental Settings}
\label{ch4-sec:experiment_settings}
\fakeparagraph{Compared Methods}
We test the meta learning algorithms below.
\begin{itemize}
\item MAML \cite{bib:ICML17:Finn}: the original model-agnostic meta learning.
\item ANIL \cite{bib:ICLR20:Raghu}: update the last layer only in few-shot learning.
\item BOIL \cite{bib:ICLR21:Oh}: update the body except the last layer.
\item MAML++ \cite{bib:ICLR19:Antreas}: learn a per-step per-layer step sizes $\bm{\alpha}$.
\item p-Meta\xspace (\ref{ch4-sec:LR}): can be regarded as a sparse version of MAML++, since it learns a sparse $\bm{\alpha}$ with our methods in \secref{ch4-sec:LR}.
\item p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}): the full version of our methods which include the meta attention modules in \secref{ch4-sec:attention}.
\end{itemize}
For a fair comparison, all algorithms are re-implemented with the deployment optimizations in \secref{ch4-sec:others}.
\fakeparagraph{Implementation}
The experiments are conducted with tools provided by TorchMeta \cite{bib:torchmeta, bib:torchmetarl}.
Particularly, the backbone is meta-trained with the full sample batch size (e.g.,\xspace 25 for 5-way 5-shot) on the meta training dataset.
After each meta training epoch, the model is tested (i.e.,\xspace few-shot learned) on the meta validation dataset.
The model with the highest validation performance is used to report the final few-shot learning results on the meta test dataset.
We follow the same process as TorchMeta \cite{bib:torchmeta, bib:torchmetarl} to build the dataset.
During few-shot learning, we adopt a sample batch size of 1 to verify the model performance under the strictest memory constraints.
In p-Meta\xspace, meta attention is applied to all \texttt{conv}, \texttt{fc}, and group normalization layers, except the last output layer, because (\textit{i}) we find modifying the last layer's gradients may decrease accuracy; (\textit{ii}) the final output is often rather small in size, resulting in little memory saving even if imposing sparsity on the last layer.
Unless otherwise noted, we set $\rho=0.3$ in forward attention and $\rho=0$ in backward attention across all layers, as the sparsity of $\bm{\gamma}_l^{\mathrm{bw}}$ has almost no effect on the memory saving.
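The gradient accumulation used during few-shot learning can be sketched as follows; this is a minimal PyTorch example assuming a generic model and loss function, which processes one sample at a time while still producing the full-batch gradient.
\begin{verbatim}
import torch

def accumulated_step(model, loss_fn, xs, ys, optimizer):
    # One inner-loop update with sample batch size 1: gradients are
    # accumulated over the support set, so the peak activation memory
    # corresponds to a single sample instead of the full batch.
    optimizer.zero_grad()
    n = len(xs)
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        (loss / n).backward()  # scale so the sum equals the batch mean
    optimizer.step()
\end{verbatim}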
\fakeparagraph{Metrics}
We compare the peak memory and FLOPs of different algorithms.
Note that the reported peak memory and FLOPs for p-Meta\xspace also include the consumption of meta attention, although it is rather small relative to that of backward propagation.
\subsection{Benchmarking Details}
\label{ch4-sec:experiment_benchmark}
\subsubsection{4Conv/ResNet12 on MiniImageNet/TieredImageNet/CUB}
MiniImageNet \cite{bib:NIPS16:Vinyals} is an image classification dataset derived from the ImageNet dataset \cite{bib:ILSVRC15}, which consists of $84\times84$ color images in 100 classes.
Following the splitting in \cite{bib:NIPS16:Vinyals}, 64 classes are used for meta-training, 16 classes for meta-validation, and the remaining 20 classes as unseen tasks for meta-testing (i.e.,\xspace few-shot learning).
We train on 1 Nvidia V100 GPU.
We experiment in both 5-way 1-shot and 5-way 5-shot settings.
The task batch size is set to 4 in general, except for ResNet12 under 5-way 5-shot settings where we use 2.
TieredImageNet \cite{bib:ICLR18:Ren} is an image classification dataset derived from the ImageNet dataset \cite{bib:ILSVRC15}, which consists of $84\times84$ color images in 34 categories (608 classes).
Following the splitting in \cite{bib:ICLR18:Ren}, 20 categories (351 classes) are used for meta-training, 6 categories (97 classes) for meta-validation, and the remaining 8 categories (160 classes) as unseen tasks for meta-testing (i.e.,\xspace few-shot learning).
CUB \cite{bib:CUB} is an image classification dataset, which consists of $84\times84$ color images of bird species in 200 classes.
Following the splitting in \cite{bib:torchmeta}, 100 classes are used for meta-training, 50 classes for meta-validation, and the remaining 50 classes as unseen tasks for meta-testing (i.e.,\xspace few-shot learning).
\fakeparagraph{4Conv}
The ``4Conv'' \cite{bib:ICML17:Finn} backbone has 4 \texttt{conv} blocks.
Each \texttt{conv} block includes a \texttt{conv} layer with 32 channels, a group normalization layer (as discussed in \secref{ch4-sec:norm}), a ReLU activation, and a max-pooling with stride 2.
\fakeparagraph{ResNet12}
The ``ResNet12'' \cite{bib:NIPS18:Oreshkin} backbone has 4 residual blocks with $\{64,128,256,512\}$ channels in each block respectively.
Each residual block consists of 3 \texttt{conv} blocks followed by max-pooling with stride 2.
Each \texttt{conv} layer is followed by a group normalization layer and a LeakyReLU activation with slope 0.1.
Refer to \cite{bib:NIPS18:Oreshkin} for more detailed structure.
\subsubsection{MLP on MuJoCo}
MuJoCo is an advanced simulator for multi-body dynamics with contact.
For all experiments, we mainly adopt the experimental setup in \cite{bib:ICML17:Finn,bib:torchmetarl}.
We run the MuJoCo environment as well as the policy model training on 8 CPUs.
\fakeparagraph{MLP}
We use an MLP with two hidden \texttt{fc} layers of size 100 and ReLU activation as the policy model.
\subsection{Experiments on Image Classification}
\label{ch4-sec:image}
\begin{table}[t!]
\centering
\caption[5-Way 1-shot few-shot image classification results on 4Conv and ResNet12.]{5-Way 1-shot few-shot image classification results on 4Conv and ResNet12. All methods are meta-trained on MiniImageNet, and are few-shot learned on the reported datasets: MiniImageNet, TieredImageNet, and CUB (denoted by Mini, Tiered, and CUB in the table). The total computation (\# GFLOPs) and the peak memory (MB) during few-shot learning are reported based on the theoretical analysis in \secref{ch4-sec:analysis}. }
\label{ch4-tab:image_1shot}
\footnotesize
\begin{tabular}{llccccc}
\toprule
\multicolumn{2}{l}{\textbf{5-Way 1-Shot}} & \multicolumn{3}{c}{Accuracy} & GFLOPs & Memory \\
\cmidrule(lr){3-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7}
\multicolumn{2}{l}{Benchmarks} & Mini & Tiered & CUB & Mini & Mini \\ \midrule
\multirow{6}{*}{4Conv} & MAML \cite{bib:ICML17:Finn} & 46.2\% & 51.4\% & 39.7\% & 0.39 & 2.06 \\
& ANIL \cite{bib:ICLR20:Raghu} & 46.4\% & 51.5\% & 39.2\% & 0.14 & 0.92 \\
& BOIL \cite{bib:ICLR21:Oh} & 44.7\% & 51.3\% & 42.3\% & 0.39 & 2.05 \\
& MAML++ \cite{bib:ICLR19:Antreas} & 48.2\% & 53.2\% & \textbf{43.2\%} & 0.39 & 2.06 \\
& p-Meta\xspace (\ref{ch4-sec:LR}) & 47.1\% & 52.3\% & 41.8\% & 0.16 & 1.00 \\
& p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}) & \textbf{48.8\%} & \textbf{53.9\%} & 42.6\% & 0.15 & 0.99 \\ \midrule
\multirow{6}{*}{ResNet12} & MAML \cite{bib:ICML17:Finn} & 51.7\% & 57.4\% & 41.3\% & 37.08 & 54.69 \\
& ANIL \cite{bib:ICLR20:Raghu} & 50.3\% & 56.7\% & 40.6\% & 12.42 & 3.62 \\
& BOIL \cite{bib:ICLR21:Oh} & 42.7\% & 47.7\% & 44.2\% & 37.08 & 54.69 \\
& MAML++ \cite{bib:ICLR19:Antreas} & 53.1\% & 58.6\% & 45.1\% & 37.08 & 54.69 \\
& p-Meta\xspace (\ref{ch4-sec:LR}) & 51.8\% & 58.3\% & 40.6\% & 25.84 & 17.66 \\
& p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}) & \textbf{53.6\%} & \textbf{59.4\%} & \textbf{45.4\%} & 24.02 & 16.01 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption[5-Way 5-shot few-shot image classification results on 4Conv and ResNet12.]{5-Way 5-shot few-shot image classification results on 4Conv and ResNet12. All methods are meta-trained on MiniImageNet, and are few-shot learned on the reported datasets: MiniImageNet, TieredImageNet, and CUB (denoted by Mini, Tiered, and CUB in the table). The total computation (\# GFLOPs) and the peak memory (MB) during few-shot learning are reported based on the theoretical analysis in \secref{ch4-sec:analysis}. }
\label{ch4-tab:image_5shot}
\footnotesize
\begin{tabular}{llccccc}
\toprule
\multicolumn{2}{l}{\textbf{5-Way 5-Shot}} & \multicolumn{3}{c}{Accuracy} & GFLOPs & Memory \\
\cmidrule(lr){3-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7}
\multicolumn{2}{l}{Benchmarks} & Mini & Tiered & CUB & Mini & Mini \\ \midrule
\multirow{6}{*}{4Conv} & MAML \cite{bib:ICML17:Finn} & 61.4\% & 66.5\% & 55.6\% & 1.96 & 2.06 \\
& ANIL \cite{bib:ICLR20:Raghu} & 60.6\% & 64.5\% & 54.2\% & 0.72 & 0.92 \\
& BOIL \cite{bib:ICLR21:Oh} & 60.5\% & 65.3\% & 58.3\% & 1.96 & 2.05 \\
& MAML++ \cite{bib:ICLR19:Antreas} & 63.7\% & \textbf{68.5\%} & 59.1\% & 1.96 & 2.06 \\
& p-Meta\xspace (\ref{ch4-sec:LR}) & 62.9\% & 68.3\% & 59.3\% & 1.34 & 1.09 \\
& p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}) & \textbf{65.0\%} & \textbf{68.5\%} & \textbf{60.2\%} & 1.11 & 1.04 \\
\midrule
\multirow{6}{*}{ResNet12} & MAML \cite{bib:ICML17:Finn} & 64.7\% & 69.6\% & 53.8\% & 185.42 & 54.69 \\
& ANIL \cite{bib:ICLR20:Raghu} & 62.3\% & 68.7\% & 54.0\% & 62.08 & 3.62 \\
& BOIL \cite{bib:ICLR21:Oh} & 53.6\% & 59.8\% & 53.7\% & 185.42 & 54.69 \\
& MAML++ \cite{bib:ICLR19:Antreas} & 68.6\% & \textbf{73.4\%} & 63.9\% & 185.42 & 54.69 \\
& p-Meta\xspace (\ref{ch4-sec:LR}) & 68.8\% & 72.6\% & 65.9\% & 124.15 & 18.95 \\
& p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}) & \textbf{69.7\%} & 73.3\% & \textbf{66.6\%} & 116.79 & 17.17 \\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Settings}
We test on standard few-shot image classification tasks (both in-domain and cross-domain).
We adopt two common backbones ``4Conv'' \cite{bib:ICML17:Finn} and ``ResNet12'' \cite{bib:NIPS18:Oreshkin}.
The batch normalization layers are replaced with group normalization layers, as discussed in \secref{ch4-sec:norm}.
We train the model on MiniImageNet \cite{bib:NIPS16:Vinyals} (both meta training and meta validation dataset) with 100 meta epochs.
In each meta epoch, 1000 random tasks are drawn from the task distribution.
The model is updated with 5 gradient steps (i.e.,\xspace $K=5$) in both the inner loop of meta-training and few-shot learning.
We use the Adam optimizer with cosine learning rate scheduling as in \cite{bib:ICLR19:Antreas} for all outer-loop updates.
The (initial) inner step size $\bm{\alpha}$ is set to 0.01.
The meta-trained model is then tested on three datasets MiniImageNet \cite{bib:NIPS16:Vinyals}, TieredImageNet \cite{bib:ICLR18:Ren}, and CUB \cite{bib:CUB} to verify both \textit{in-domain} and \textit{cross-domain} performance.
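For reference, a minimal sketch of the inner loop with per-step per-layer step sizes (as learned by MAML++ and, in sparsified form, by p-Meta\xspace) is given below. It uses \texttt{torch.func.functional\_call} from recent PyTorch versions; \texttt{alpha[k][name]} is assumed to hold the meta-trained step size of parameter \texttt{name} at step $k$, and a zero entry skips the update of that parameter.
\begin{verbatim}
import torch

def inner_loop(model, loss_fn, support_x, support_y, alpha, K=5):
    # K adaptation steps with per-step, per-layer step sizes alpha.
    # alpha[k][name] == 0 freezes parameter `name` at step k, so its
    # weight gradient (and the activations it needs) can be skipped.
    params = {n: p.clone() for n, p in model.named_parameters()}
    for k in range(K):
        out = torch.func.functional_call(model, params, (support_x,))
        loss = loss_fn(out, support_y)
        grads = torch.autograd.grad(loss, list(params.values()))
        params = {n: p - alpha[k][n] * g
                  for (n, p), g in zip(params.items(), grads)}
    return params
\end{verbatim}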
\fakeparagraph{Results}
\tabref{ch4-tab:image_1shot} and \tabref{ch4-tab:image_5shot} show the accuracy of few-shot learned models for 5-way 1-shot and 5-way 5-shot scenarios respectively.
The reported accuracy is averaged over $5000$ new unseen tasks randomly drawn from the meta test dataset.
We also report the average number of GFLOPs and the average peak memory per task according to \secref{ch4-sec:analysis}.
Clearly, p-Meta\xspace almost always yields the highest accuracy in all settings.
Note that the comparison between ``p-Meta\xspace (\ref{ch4-sec:LR})'' and ``MAML++'' can be considered as the ablation studies on learning sparse layer-wise inner step sizes proposed in \secref{ch4-sec:LR}.
Thanks to the imposed sparsity on $\bm{\alpha}$, ``p-Meta\xspace (\ref{ch4-sec:LR})'' significantly reduces the peak memory ($2.5\times$ saving on average and up to $3.1\times$) and the computation burden ($1.7\times$ saving on average and up to $2.4\times$) over ``MAML++''.
Note that the imposed sparsity also causes a moderate accuracy drop.
However, with the meta attention, ``p-Meta\xspace (\ref{ch4-sec:LR}+\ref{ch4-sec:attention})'' not only notably improves the accuracy but also further reduces the peak memory ($2.7\times$ saving on average and up to $3.4\times$) and computation ($1.9\times$ saving on average and up to $2.6\times$) over ``MAML++''.
Note that ``ANIL'' only updates the last layer, and therefore consumes less memory but also yields a substantially lower accuracy.
\subsection{Experiments on Reinforcement Learning}
\label{ch4-sec:reinforcement}
\begin{table}[t!]
\centering
\caption[Few-shot reinforcement learning results on 2D navigation and robot locomotion tasks.]{Few-shot reinforcement learning results on 2D navigation and robot locomotion tasks (larger return means better). A MLP with two hidden layers of size 100 is used as the policy model. The total computation (\# GFLOPs) and the peak memory (MB) during few-shot learning are reported based on the theoretical analysis in \secref{ch4-sec:analysis}.}
\label{ch4-tab:reinforce}
\footnotesize
\begin{tabular}{lcccccc}
\toprule
\textbf{20 Rollouts} & \multicolumn{3}{c}{Half-Cheetah Velocity} & \multicolumn{3}{c}{2D Navigation} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
Benchmarks & Return & GFLOPs & Memory & Return & GFLOPs & Memory \\ \midrule
MAML \cite{bib:ICML17:Finn} & -82.2 & 0.15 & 0.24 & -13.3 & 0.12 & 0.21 \\
ANIL \cite{bib:ICLR20:Raghu} & -78.8 & 0.06 & 0.09 & -13.8 & 0.04 & 0.08 \\
BOIL \cite{bib:ICLR21:Oh} & -76.4 & 0.15 & 0.23 & -12.4 & 0.12 & 0.21 \\
MAML++ \cite{bib:ICLR19:Antreas} & -69.6 & 0.15 & 0.24 & -17.6 & 0.12 & 0.21 \\
p-Meta (\ref{ch4-sec:LR}) & -65.5 & 0.11 & 0.12 & \textbf{-11.2} & 0.09 & 0.09 \\
p-Meta (\ref{ch4-sec:LR}+\ref{ch4-sec:attention}) & \textbf{-64.0} & 0.11 & 0.11 & -11.8 & 0.09 & 0.09 \\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Settings}
To show the versatility of p-Meta\xspace, we experiment with two few-shot reinforcement learning problems: 2D navigation and Half-Cheetah robot locomotion simulated with MuJoCo library \cite{bib:IROS12:Todorov}.
We adopt vanilla policy gradient \cite{bib:ML1992:Williams} for the inner loop and trust-region policy optimization \cite{bib:ICML15:Schulman} for the outer loop.
During the inner loop as well as few-shot learning, the agents roll out 20 episodes with a horizon size of 200 and are updated for one gradient step.
The policy model is trained for 500 meta epochs, and the model with the best average return during training is used for evaluation.
The task batch size is set to 20 for 2D navigation, and 40 for robot locomotion.
The (initial) inner step size $\bm{\alpha}$ is set to 0.1.
Each episode is considered as a data sample, and thus the gradients are accumulated 20 times for a gradient step.
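A minimal sketch of such an inner-loop policy-gradient step with episode-wise gradient accumulation is given below; it uses plain REINFORCE and a hypothetical helper \texttt{collect\_episode} that returns the log-probabilities of the taken actions and the (advantage-adjusted) returns of one episode.
\begin{verbatim}
import torch

def adapt_policy(policy, env, optimizer, n_episodes=20, horizon=200):
    # One vanilla policy-gradient step; each rollout is treated as a
    # data sample whose gradient is accumulated, so the dynamic memory
    # stays bounded by a single episode.
    optimizer.zero_grad()
    for _ in range(n_episodes):
        log_probs, returns = collect_episode(policy, env, horizon)  # hypothetical
        loss = -(log_probs * returns).sum() / n_episodes  # REINFORCE objective
        loss.backward()
    optimizer.step()
\end{verbatim}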
\fakeparagraph{Results}
\tabref{ch4-tab:reinforce} lists the average return averaged over 400 new unseen tasks randomly drawn from simulated environments.
We also report the average number of GFLOPs and the average peak memory per task according to \secref{ch4-sec:analysis}.
Note that the reported computation and peak memory do not include the estimation of the advantage \cite{bib:ICML16:Duan}, as it is relatively small and can be computed during the rollout.
p-Meta\xspace consumes a rather small amount of memory and computation, while often obtaining the highest return among all methods.
Therefore, p-Meta\xspace can fast adapt its policy to reach the new goal in the environment with less on-device resource demand.
\subsection{Ablation Studies on Meta Attention}
\label{ch4-sec:ablation}
We study the effectiveness of our meta attention via the following two ablation studies.
The experiments are conducted on ``4Conv'' in both 5-way 1-shot and 5-way 5-shot settings as in \secref{ch4-sec:image}.
\fakeparagraph{Sparsity in Meta Attention}
\tabref{ch4-tab:ablation} shows the few-shot classification accuracy with different sparsity settings in the meta attention.
We first do not impose sparsity on $\bm{\gamma}_l^{\mathrm{fw}}$ and $\bm{\gamma}_l^{\mathrm{bw}}$ (i.e.,\xspace set both $\rho$'s as 0), and adopt forward attention and backward attention separately.
In comparison to no meta attention at all, enabling either forward or backward attention improves accuracy.
With both attention enabled, the model achieves the best performance.
We then test the effects when imposing sparsity on $\bm{\gamma}_l^{\mathrm{fw}}$ or $\bm{\gamma}_l^{\mathrm{bw}}$ (i.e.,\xspace set $\rho>0$).
We use the same $\rho$ for all layers.
We observe that a sparse $\bm{\gamma}_l^{\mathrm{bw}}$ often causes a larger accuracy drop than a sparse $\bm{\gamma}_l^{\mathrm{fw}}$.
Since a sparse $\bm{\gamma}_l^{\mathrm{bw}}$ does not bring substantial memory or computation saving (see \secref{ch4-sec:analysis}), we use $\rho=0$ for backward attention and $\rho=0.3$ for forward attention.
Note that $\rho=1$ means that the resulting $\bm{\gamma}_l$ are all zeros and the layers are not updated at all, which can already be realized by imposing sparsity on the layer-wise learning rates in \secref{ch4-sec:LR}.
Attention scores $\bm{\gamma}_l$ introduce a dynamic channel-wise learning rate that depends on the new data samples.
We further compare meta attention with a static channel-wise learning rate, where the channel-wise learning rate $\bm{\alpha}^{\mathrm{Ch}}$ is meta-trained in the same way as the layer-wise inner step sizes in \secref{ch4-sec:LR}, but without imposing sparsity.
By comparing ``$\bm{\alpha}^{\mathrm{Ch}}$'' with ``0, 0'' in \tabref{ch4-tab:ablation}, we conclude that the dynamic channel-wise learning rate yields a significantly higher accuracy.
\begin{table}[t]
\centering
\caption[Ablation results of meta attention on 4Conv.]{Ablation results of meta attention on 4Conv.}
\label{ch4-tab:ablation}
\small
\begin{tabular}{cccccccc}
\toprule
\multicolumn{2}{c}{$\rho$} & \multicolumn{3}{c}{5-way 1-shot} & \multicolumn{3}{c}{5-way 5-shot} \\
\cmidrule(lr){1-2} \cmidrule(lr){3-5} \cmidrule(lr){6-8}
fw & bw & Mini & Tiered & CUB & Mini & Tiered & CUB \\
\midrule
x & x & 47.1\% & 52.3\% & 41.8\% & 62.9\% & 68.3\% & 59.3\% \\
0 & x & 48.1\% & 53.2\% & 41.7\% & 64.1\% & 68.4\% & 59.0\% \\
x & 0 & 47.8\% & 53.1\% & 40.9\% & 63.9\% & 68.5\% & 60.0\% \\
0 & 0 & \textbf{49.0\%} & \textbf{54.2\%} & \textbf{43.1\%} & 64.5\% & \textbf{69.2\%} & \textbf{60.2\%} \\
0 & 0.3 & 48.5\% & 53.4\% & 42.2\% & 64.7\% & 68.2\% & 59.3\% \\
0.3 & 0 & 48.8\% & 53.9\% & 42.6\% & \textbf{65.0\%} & 68.5\% & \textbf{60.2\%} \\
0.3 & 0.3 & 48.7\% & 53.7\% & 42.3\% & 64.5\% & 68.3\% & 59.5\% \\
0.5 & 0.5 & 48.2\% & 53.4\% & 42.7\% & 64.8\% & 68.1\% & 59.1\% \\
\multicolumn{2}{c}{$\bm{\alpha}^{\mathrm{Ch}}$} & 47.8\% & 52.8\% & 41.0\% & 63.6\% & 68.1\% & 58.1\% \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item
x: no forward/backward meta attention, i.e.,\xspace $\bm{\gamma}_l^{\mathrm{fw}}=1$ or $\bm{\gamma}_l^{\mathrm{bw}}=1$.
\item
$\bm{\alpha}^{\mathrm{Ch}}$: introduce input- and output-channel-wise inner step sizes $\bm{\alpha}^{\mathrm{Ch}}$ per layer. We use $\bm{\alpha}\cdot\bm{\alpha}^{\mathrm{Ch}}$ as inner step sizes. $\bm{\alpha}^{\mathrm{Ch}}$ is meta-trained as $\bm{\alpha}$, but without sparsity.
\end{tablenotes}
\end{table}
\fakeparagraph{Layer-wise Updating Ratios}
To study the resulting updating ratios across layers, i.e.,\xspace the layer-wise sparsity of weight gradients, we randomly select 100 new tasks and plot the layer-wise updating ratios, see \figref{ch4-fig:ratio}.
The ``4Conv'' backbone has 9 layers ($L=9$), i.e.,\xspace 8 alternates of \texttt{conv} and group normalization layers, and an \texttt{fc} output layer.
As mentioned in \secref{ch4-sec:experiment_settings}, we do not apply meta attention to the output layer, i.e.,\xspace $\bm{\gamma}_9=1$.
The used backbone is updated with 5 gradient steps ($K=5$).
We use $\rho=0.3$ for forward attention, and $\rho=0$ for backward.
Note that \algoref{ch4-alg:clip} adaptively determines the sparsity of $\bm{\gamma}_l$, which also means different samples may result in different updating ratios even with the same $\rho$ (see \figref{ch4-fig:ratio}).
The size of $\bm{x}_{l-1}$ often decreases along the layers in current DNNs.
As expected, later layers are updated more, since they need a smaller amount of memory for updating.
Interestingly, even with a small $\rho$ ($=0.3$), the ratio of updated weights is rather small, e.g.,\xspace smaller than $0.2$ in step 3 of 5-way 5-shot.
It implies that the outputs of softmax have a large discrepancy, i.e.,\xspace only a few channels are adaptation-critical for each sample, which in turn verifies the effectiveness of our meta attention mechanism.
We also randomly pair data samples and compute the cosine similarity between their attention scores $\bm{\gamma}_l$.
We plot the cosine similarity of step 1 in \figref{ch4-fig:similarity}.
The results show that there may exist a considerable variation in the adaptation-critical weights selected by different samples, which is consistent with our observation in \tabref{ch4-tab:ablation}, i.e.,\xspace the dynamic learning rate outperforms the static one.
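The pairwise similarity reported in \figref{ch4-fig:similarity} can be computed as in the following sketch, where \texttt{gammas} is assumed to hold the concatenated attention scores $\bm{\gamma}_{1:L}$ of each sample.
\begin{verbatim}
import numpy as np

def pairwise_cosine(gammas, n_pairs=100, seed=0):
    # gammas: (n_samples, d) array of concatenated attention scores.
    # Returns mean/std of cosine similarities over random sample pairs.
    rng = np.random.default_rng(seed)
    i, j = rng.integers(0, len(gammas), size=(2, n_pairs))
    a, b = gammas[i], gammas[j]
    sims = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return sims.mean(), sims.std()
\end{verbatim}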
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{./figs/ch4/sparsity_thesis.pdf}
\caption[Layer-wise updating ratios in each updating step.]{Layer-wise updating ratios (mean $\pm$ standard deviation) in each updating step. Note that the ratio of updated weights is determined by both static layer-wise inner step sizes $\alpha_{1:L}^{1:K}$ and the dynamic meta attention scores $\bm{\gamma}_{1:L}$. The layer with an updating ratio of 0 means its $\alpha=0$.}
\label{ch4-fig:ratio}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{./figs/ch4/similarity_thesis.pdf}
\caption[Cosine similarity of $\bm{\gamma}_{1:L}$ between random pair of data samples.]{Cosine similarity (mean $\pm$ standard deviation) of $\bm{\gamma}_{1:L}$ between random pair of data samples. The results are reported in step 1, because all samples are fed into the same initial model in step 1. }
\label{ch4-fig:similarity}
\end{figure}
\begin{table}[t]
\centering
\small
\caption[Ablation results of sparse $\bm{x}_{l-1}$ and sparse $\bm{g}(\bm{y}_l)$.]{Ablation results of sparse $\bm{x}_{l-1}$ and sparse $\bm{g}(\bm{y}_l)$.}
\label{ch4-tab:sparse}
\begin{tabular}{cccccccc}
\toprule
\multicolumn{2}{c}{$\rho=0.3$} & \multicolumn{3}{c}{5-way 1-shot} & \multicolumn{3}{c}{5-way 5-shot} \\
\cmidrule(lr){1-2} \cmidrule(lr){3-5} \cmidrule(lr){6-8}
fw & bw & Mini & Tiered & CUB & Mini & Tiered & CUB \\
\midrule
x & x & 47.1\% & 52.3\% & 41.8\% & 62.9\% & 68.3\% & 59.3\% \\
$\bm{g}(\bm{w}_l)$ & x & 48.2\% & 53.6\% & 41.2\% & 63.6\% & 69.0\% & 59.0\% \\
$\bm{x}_{l-1}$ & x & 37.4\% & 37.9\% & 35.4\% & 47.9\% & 49.3\% & 42.5\% \\
x & $\bm{g}(\bm{w}_l)$ & 48.0\% & 53.0\% & 42.6\% & 64.0\% & 67.8\% & 59.9\% \\
x & $\bm{g}(\bm{y}_{l})$ & 22.8\% & 21.1\% & 20.6\% & 20.7\% & 21.0\% & 20.4\% \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item
x: no forward/backward (sparse) meta attention, i.e.,\xspace $\bm{\gamma}_l^{\mathrm{fw}}=1$ or $\bm{\gamma}_l^{\mathrm{bw}}=1$.
\end{tablenotes}
\end{table}
\fakeparagraph{Sparse \textit{x} and Sparse \textit{g}(\textit{y})}
Our meta attention modules take $\bm{x}_{l-1}$ and $\bm{g}(\bm{y}_l)$ as inputs, and output attention scores which are used to create sparse $\bm{g}(\bm{w}_l)$.
However, applying the resulted sparse attention scores on $\bm{x}_{l-1}$ and $\bm{g}(\bm{y}_l)$ can also bring memory and computation benefits, as discussed in \secref{ch4-sec:overview}.
We conduct the ablations when multiplying attention scores $\bm{\gamma}_l^{\mathrm{fw}}$ and $\bm{\gamma}_l^{\mathrm{bw}}$ on $\bm{g}(\bm{w}_l)$ (also the one used in the main text), or on $\bm{x}_{l-1}$ and $\bm{g}(\bm{y}_{l})$ respectively.
The results in \tabref{ch4-tab:sparse} show that a channel-wise sparse $\bm{x}_{l-1}$ severely degrades the performance, in comparison to only imposing sparsity on $\bm{g}(\bm{w}_l)$ while using a dense $\bm{x}_{l-1}$ in the forward pass.
In addition, directly adopting a sparse $\bm{g}(\bm{y}_{l})$ in backpropagation may even cause non-convergence in few-shot learning.
We attribute this to the error that accumulates along the propagation path when imposing sparsity on $\bm{x}_{l-1}$ or $\bm{g}(\bm{y}_{l})$.
\begin{table}[t]
\centering
\small
\caption[Comparison between different pooling and normalization layers.]{Comparison between different pooling and normalization layers.}
\label{ch4-tab:pool}
\begin{tabular}{ccccc}
\toprule
\multicolumn{2}{c}{4Conv} & \multicolumn{3}{c}{5-way 1-shot} \\
\cmidrule(lr){1-2} \cmidrule(lr){3-5}
Pooling & Normalization & Mini & Tiered & CUB \\
\midrule
Average-pooling & Batch normalization & 25.3\% & 27.2\% & 26.1\% \\
Average-pooling & Group normalization & 45.8\% & 50.3\% & 40.2\% \\
Max-pooling & Batch normalization & 27.6\% & 28.9\% & 26.5\% \\
Max-pooling & Group normalization & 46.2\% & 51.4\% & 39.9\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Studies on Pooling \& Normalization Layers}
\label{ch4-sec:experiment_pool}
In this section, we test the backbone network with different types of pooling and normalization.
Unless otherwise noted, we meta-train our ``4Conv'' backbone on MiniImageNet with the full batch size, and conduct few-shot learning using gradient accumulation with a sample batch size of 1, as in \secref{ch4-sec:experiment_settings}.
Here, we report the results with the original ``MAML'' method \cite{bib:ICML17:Finn} in \tabref{ch4-tab:pool}.
Clearly, the discrepancy of batch statistics between the meta-training phase and the few-shot learning phase causes a large accuracy loss with batch normalization layers.
Batch normalization works only if few-shot learning uses full batch sizes, i.e.,\xspace without gradient accumulation, which however does not fit in our memory-constrained scenarios (see \secref{ch4-sec:gradacc}).
In addition, max-pooling performs better than average-pooling.
We thus use group normalization and max-pooling in our backbone model, see \secref{ch4-sec:experiment_settings}.
\begin{table}[t]
\centering
\small
\caption[Ablation results of sample batch sizes.]{Ablation results of sample batch sizes.}
\label{ch4-tab:batchsize}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{5-way 1-shot} & \multicolumn{3}{c}{5-way 5-shot} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7}
Batch Size & 1 & 2 & 5 & 1 & 5 & 25 \\
\midrule
Mini & 48.8\% & 48.7\% & 48.3\% & 65.0\% & 65.1\% & 64.7\% \\
Tiered & 53.9\% & 53.6\% & 54.3\% & 68.5\% & 68.9\% & 68.1\% \\
CUB & 42.6\% & 42.1\% & 42.4\% & 60.2\% & 59.5\% & 60.6\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Studies on Sample Batch Size}
\label{ch4-sec:experiment_batchsize}
In this section, we show the effects of different sample batch sizes.
As mentioned in \secref{ch4-sec:experiment_settings}, the full batch size is adopted in the meta-training phase of p-Meta\xspace.
During the few-shot learning phase, gradient accumulation is applied to fit different on-device memory constraints.
We report the accuracy when adopting different sample batch sizes in gradient accumulation.
Although group normalization eliminates the variance of batch statistics, adopting different batch sizes may still result in diverse performance due to the batch-averaged scores in meta attention.
The results in \tabref{ch4-tab:batchsize} show that different batch sizes yield a similar accuracy level, which indicates that our meta attention module is relatively robust to batch sizes.
\section{Summary}
\label{ch4-sec:summary}
In this chapter, we propose a new meta learning method p-Meta\xspace for memory-efficient few-shot learning on unseen tasks.
p-Meta\xspace enables efficient learning on edge devices.
On-device learning of a DNN requires both data efficiency and memory efficiency.
However, on the one hand, existing low memory training methods fail to learn a DNN given only a few training samples; on the other hand, current few-shot learning methods require a significant amount of dynamic memory.
p-Meta\xspace addresses these challenges by (\textit{i}) meta-training an initial backbone that can fast adapt to unseen tasks with only a few samples, (\textit{ii}) meta-training a selection mechanism that can identify structurewise adaptation-critical weights to reduce the training memory.
The main contributions of p-Meta\xspace are summarized as follows.
\begin{itemize}
\item
p-Meta\xspace enables data- and memory-efficient DNN (re-)training given new unseen tasks.
p-Meta\xspace utilizes gradient-based model adaptation, and thus is applicable to various tasks, e.g.,\xspace classification, regression, and reinforcement learning.
\item
p-Meta\xspace adopts structured partial parameter updates for low-memory training, which is realized by automatically identifying adaptation-critical weights both layer-wise and channel-wise.
This hierarchical approach combines static selection of layers and dynamic selection of channels whose weights are critical for few-shot learning on the given new task, and avoids the redundant updating of non-critical weights.
This way, the necessary memory consumption required for optimizing adaptation-critical weights decreases.
To the best of our knowledge, p-Meta\xspace is the first meta learning method designed for on-device few-shot learning.
\item
Evaluations on few-shot image classification and reinforcement learning show that p-Meta\xspace not only improves the accuracy but also reduces the peak dynamic memory by a factor of 2.5 on average over the state-of-the-art few-shot learning methods.
p-Meta\xspace can also simultaneously reduce the computation by a factor of 1.7 on average.
\end{itemize}
This chapter studied how to conduct learning on edge devices with limited dynamic memory and limited training data.
Note that the methods proposed in the previous chapters solely target the application scenarios on a single edge platform.
Edge-server-system is another common scenario of edge intelligence, where multiple resource-constrained edge nodes are remotely connected with a resource-sufficient central server.
In the next chapter, we will study how to deploy DNNs on edge-server-system to achieve an efficient inference and an efficient updating.
\chapter[Edge-Server System]{Edge-Server System}
\label{ch5:edgeserver}
In \chref{ch2:inference}, \chref{ch3:adaptation} and \chref{ch4:learning}, we studied how to conduct inference, adaptation, and learning on a single edge device, respectively.
Edge-server system is another commonly used infrastructure for edge intelligent applications.
In edge-server system, several resource-constrained edge devices are connected to a remote server with sufficient resources, and some public information is allowed to be communicated between edge devices and the server.
In this chapter, we design a new pipeline to enable efficient inference and efficient updating for edge-server system.
\fakeparagraph{Main Resource Constraints}
The main resource constraints on edge-server system comprise two aspects, (\textit{i}) the limited resources on edge devices e.g.,\xspace from memory, computing power, and energy, as discussed in \chref{ch2:inference} and \chref{ch3:adaptation}, (\textit{ii}) the limited communication resources e.g.,\xspace from bandwidth.
\fakeparagraph{Principles}
On-device inference is preferred over cloud inference, since it can achieve a fast and stable inference with less energy consumption.
Due to a possible lack of relevant training data at the initial deployment, pretrained DNNs may either fail to perform satisfactorily, or may still leave room for significant improvement after the initial deployment.
On such an edge-server system, it is preferable that the remote server retrains the DNNs with data newly collected from edge devices or from other sources and then sends the updates to each edge device, rather than performing on-device re-training (or federated learning), because of the limited memory and computing power on edge devices.
To reduce the communication cost for sending the updated models, we propose a deep partial updating paradigm, where the server only selects and sends a small subset of critical weights that have a large contribution to the loss reduction during the retraining.
The contents of this chapter are established mainly based on the paper ``Deep Partial Updating: Towards Communication Efficient Updating for On-device Inference'' that is published on European Conference on Computer Vision (ECCV), 2022 \cite{bib:ECCV22:Qu}.
\section{Introduction}
\label{ch5-sec:introduction}
Compared to traditional cloud inference, on-device inference is subject to severe limitations in terms of storage, energy, computing power and communication.
On the other hand, it has many advantages, e.g.,\xspace it enables fast and stable inference even with low communication bandwidth or interrupted communication, and can save energy by avoiding the transfer of data to the cloud, which often costs significantly more energy than sensing and computation \cite{bib:Book19:Warden,bib:arXiv18:Guo,bib:arXiv19:Lee}.
To deploy deep neural networks (DNNs) on resource-constrained edge devices, extensive research has been done to compress a well pre-trained model via pruning \cite{bib:ICLR16:Han,bib:ICLR19:Frankle,bib:ICLR20:Renda} and quantization \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari}.
During on-device inference, compressed DNNs may achieve a good balance between model performance and resource demand.
However, due to a possible lack of relevant training data at the time of initial deployment or due to an unknown sensing environment, pre-trained DNNs may either fail to perform satisfactorily, or may still leave room for significant improvement after the initial deployment.
In other words, re-training the models by using newly collected data (from \emph{edge devices} or \emph{other sources}) is typically required to achieve the desired performance during the lifetime of devices.
Because of the resource-constrained nature of edge devices in terms of memory and computing power, on-device re-training (or federated learning) is typically restricted to a tiny batch size, small inference (sub-)networks, or a limited number of optimization steps, all resulting in a performance degradation.
Instead, retraining often occurs on a remote server with sufficient resources.
One possible strategy to allow for a continuous improvement of the model performance is a two-stage iterative process: (\textit{i}) at each round, edge devices collect new data samples and send them to the server, and (\textit{ii}) the server retrains the model using all collected data, and sends updates to each edge device \cite{bib:CS06:Brown}.
The first stage may not even be necessary if new data are collected in other ways and made directly available to the server.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.99\textwidth,height=0.42\textwidth]{./figs/ch5/communication.png}
\caption[The iterative process of edge-to-server communication and server-to-edge communication.]{The iterative process for updating the deployed inference model on edge devices via a wireless communication. Edge-to-server communication: edge devices collect new data samples and send them to the server. Server-to-edge communication: the server retrains the model and then sends the updates to each edge device. The edge-to-server communication may not be necessary if new training data is collected from other sources and made directly available to the server.}
\label{ch5-fig:communication}
\end{figure}
\fakeparagraph{Example Scenarios}
Example application scenarios of relevance include vision robotic sensing in an unknown environment (e.g.,\xspace Mars) \cite{bib:RAL17:Meng}, local translators of low-resource languages on mobile phones \cite{bib:ICMLWorkshop19:Bhandare,bib:arXiv20:Wang}, sensor networks mounted in alpine areas \cite{bib:IPSN19:Meyer}, and automatic wildlife monitoring \cite{bib:MEE18:Stowell}.
We detail two specific scenarios.
\textit{Hazard alarming on mountains: }
Researchers in \cite{bib:IPSN19:Meyer} mounted tens of sensor nodes equipped with cameras, geophones, and high-precision GPS at different scarps in high alpine areas. The purpose is to achieve fast, stable, and energy-efficient hazard monitoring for early warning to protect people and infrastructure.
To this end, a DNN is deployed on each node to on-device detect rockfalls and debris flows.
The nodes regularly collect and send data to the server for labeling and retraining, and the server sends the updated model back through a low-power wireless network. Retraining during deployment is essential for a highly reliable hazard warning.
\textit{Endangered species monitoring: }
To detect endangered species, researchers often deploy some audio or image sensor nodes in virgin rainforests \cite{bib:MEE18:Stowell}.
Edge nodes are supposed to classify the potential signal from endangered species and send these relevant data to the server.
Due to the limited prior information on the environments and species, retraining the initially deployed classifier with the received data or with data from other sources (e.g.,\xspace other areas) is necessary.
\fakeparagraph{Challenges}
An essential challenge herein is that the transmissions in the server-to-edge stage are highly constrained by the limited communication resource (e.g.,\xspace bandwidth, energy \cite{bib:Sensors16:Augustin}) in comparison to the edge-to-server stage, if necessary at all.
Typically, state-of-the-art DNNs often require tens or even hundreds of mega-Bytes (MB) to store their parameters, whereas a single batch of data samples (a number of samples that leads to reasonable updates in batch training) requires a relatively small amount of data.
For example, for CIFAR10 dataset \cite{bib:CIFAR}, the weights of a popular VGGNet require $56.09$MB storage, while one batch of 128 samples only uses around $0.40$MB \cite{bib:ICLR15:Simonyan,bib:ECCV16:Rastegari}.
Alternatively, the server could send a full update of the inference model only once or rarely; but in this case, every node suffers from low performance until such an update occurs.
Besides, edge devices could decide on and send only critical samples by using active learning schemes \cite{bib:ICLR20:Ash}.
The server may also receive data from other sources, e.g.,\xspace through data augmentation based on the data collected in previous rounds or new data collection campaigns.
These considerations indicate that the updated weights that are sent to edge devices by the server become a major bottleneck.
Facing the above challenges, we ask the following question: \textit{Is it possible to update only a small subset of weights while reaching a similar performance as updating all weights?}
Such a \textit{partial updating} can significantly reduce the server-to-edge communication overhead.
Furthermore, fewer parameter updates also lead to less memory access on edge devices, which in turn results in smaller energy consumption than full updating \cite{bib:ISSCC14:Horowitz}.
\fakeparagraph{Why Partial Updating Works}
Since the model deployed on edge devices is trained with the data collected beforehand, some learned knowledge can be reused.
In other words, we only need to distinguish and update the weights which are critical to the newly collected data.
\fakeparagraph{How to Select Weights}
Our key concept for partial updating is based on the hypothesis that \textit{a weight shall be updated only if it has a large contribution to the loss reduction} during the retraining given newly collected data samples.
Specifically, we define a binary mask $\bm{m}$ to describe which weights are subject to update and which weights are fixed (also reused).
For any $\bm{m}$, we establish the analytical upper bound on the difference between the loss value under partial updating and that under full updating.
We determine an optimized mask $\bm{m}$ by combining two different view points: (\textit{i}) measuring each weight's ``global contribution'' to the upper bound through computing the Euclidean distance, and (\textit{ii}) measuring each weight's ``local contribution'' to the upper bound using gradient-related information.
The weights to be updated according to $\bm{m}$ will be further sparsely fine-tuned while the remaining weights are rewound to their initial values.
\section{Related Work}
\label{ch5-sec:related}
\subsection{Partial Updating}
Although partial updating has been adopted in some prior works, it is conducted in a fairly coarse-grained manner, e.g.,\xspace layer-wise or neuron-wise, and targets at completely different objectives.
In particular, under continual learning settings, \cite{bib:ICLR18:Yoon,bib:arXiv20:Jung} propose to freeze all weights related to the neurons that are more critical in performing prior tasks than new ones, in order to preserve existing knowledge.
Under adversarial attack settings, \cite{bib:CCS15:Shokri} updates the weights in the first several layers only, which yield a dominating impact on the extracted features, for better attack efficacy.
Under architecture generalization settings, \cite{bib:ICLR20:Chatterji} studies the generalization performance through the resulting loss degradation when rewinding the weights of each individual layer to their initial values.
Under meta learning settings, \cite{bib:ICLR20:Raghu,bib:AAAI21:Shen} reuse learned representations by only updating a subset of layers for efficiently learning new tasks.
Unfortunately, such techniques do not focus on reducing the number of updated weights, and thus cannot be applied in our problem settings.
\subsection{Federated Learning}
Communication-efficient federated learning \cite{bib:ICLR18:Lin,bib:arXiv19:Kairouz,bib:arXiv20:Li} studies how to compress multiple gradients calculated on different sets of non-\textit{i.i.d.} local data, such that the aggregation of these compressed gradients could result in a similar convergence performance as centralized training on all data.
Such compressed updates are fundamentally different from our setting, where (\textit{i}) updates are not transmitted in each optimization step; (\textit{ii}) training data are incrementally collected; (\textit{iii}) centralized training is conducted.
Our typical scenarios focus on outdoor areas, which generally do not involve data privacy issues, since these collected data are not personal data.
In comparison to federated learning, our pipeline has the following advantages: (\textit{i}) we do not conduct resource-intensive gradient backward propagation on edge devices; (\textit{ii}) the collected data are not continuously accumulated and stored on memory-constrained edge nodes; (\textit{iii}) we also avoid the difficult but necessary labeling process on each edge node in supervised learning tasks; (\textit{iv}) if few events occur on some nodes, the centralized training may avoid degraded updates in local training, e.g.,\xspace batch normalization.
\subsection{Compression}
The communication cost could also be reduced through some compression techniques, e.g.,\xspace quantizing/encoding the updated weights and the transmission signal.
But note that these techniques are orthogonal to our approach and could be applied in addition.
Following the compression pipeline in \cite{bib:ICLR16:Han}, the resulting sparse update from our methods could be further quantized and Huffman-encoded.
\subsection{Unstructured Pruning}
Deep partial updating is inspired by recent unstructured pruning methods, e.g.,\xspace \cite{bib:ICLR16:Han,bib:ICLR19:Frankle,bib:NIPS19:Zhou,bib:ICLR20:Renda,bib:ICML21:Evci,bib:NIPS21:Peste}.
Traditional pruning methods aim at reducing the number of operations and storage consumption by setting some weights to zero. Sending a pruned DNN with only non-zero weights may also reduce the communication cost, but to a much lesser extent as shown in the experimental results, see \secref{ch5-sec:experiment_samplesratio}.
Since our objective namely reducing the server-to-edge communication cost when updating the deployed DNN is fundamentally different from pruning, we can leverage some learned knowledge by retaining weights (partial updating) instead of zero-outing weights (pruning).
\subsection{Domain Adaptation}
Domain adaptation targets reducing domain shift to transfer knowledge into new learning tasks \cite{bib:arXiv19:Zhuang}.
This chapter mainly considers the scenario where the inference task is not explicitly changed along the rounds, i.e.,\xspace the overall data distribution remains the same across the data collection rounds.
Thus, selecting critical weights (features) by measuring their impact on domain distribution discrepancy is invalid herein.
Applying deep partial updating to streaming tasks where the data distribution varies along the rounds would also be worth studying, and we leave it for future work.
\section{Notations and Settings}
\label{ch5-sec:notation}
In this section, we define the notations used throughout this chapter, and provide a formalized problem setting.
We consider a set of remote edge devices that implement on-device inference.
They are connected to a host server that is able to perform DNN training and retraining.
We consider the necessary amount of information that needs to be communicated to each edge device to update its inference model.
Assume there are $R$ rounds of model updates.
The model deployed in the $r$-th round is represented with its weight vector $\bm{w}^r$.
The training data used to update the model for the $r$-th round is represented as $\mathcal{D}^r = \delta\mathcal{D}^{r}\cup\mathcal{D}^{r-1}$, where the newly collected data samples $\delta\mathcal{D}^r$ are made available to the server in round $r-1$.
To reduce the amount of information that needs to be sent to edge devices, only partial weights of $\bm{w}^{r-1}$ shall be updated when determining $\bm{w}^{r}$.
The overall optimization problem for weight-wise partial updating in round $r-1$ is thus,
\begin{eqnarray}
\min_{\delta\bm{w}^r} & & \ell\left(\bm{w}^{r-1}+\delta\bm{w}^{r};\mathcal{D}^r\right) \label{ch5-eq:objective_r} \\
\text{s.t.} & & \|\delta\bm{w}^{r}\|_0 \leq k \cdot I \label{ch5-eq:constraints_r}
\end{eqnarray}
where $\ell$ denotes the loss function, $\|\cdot\|_0$ denotes the L0-norm, $k$ denotes the updating ratio determined by the communication constraints in practical scenarios, and $\delta\bm{w}^{r}$ denotes the increment of $\bm{w}^{r-1}$.
Note that both $\bm{w}^{r-1}$ and $\delta\bm{w}^{r}$ are drawn from $\mathbb{R}^I$, where $I$ is the total number of weights.
In this case, only $k \cdot I$ weights and the corresponding index information need to be communicated to each edge device for updating the model, namely the partial updates $\delta\bm{w}^{r}$.
It is worth noting that the index information is relatively small in size compared to the partially updated weights (see \secref{ch5-sec:experiment}).
On each edge device, the weight vector is updated as $\bm{w}^{r} = \bm{w}^{r-1}+\delta\bm{w}^{r}$.
To simplify the notation, we will only consider a single update, i.e.,\xspace from weight vector $\bm{w}$ (corresponding to $\bm{w}^{r-1}$) to weight vector $\widetilde{\bm{w}}$ (corresponding to $\bm{w}^{r}$) with $\widetilde{\bm{w}} = \bm{w}+\widetilde{\delta\bm{w}}$.
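To illustrate the communication saving implied by the constraint in \equref{ch5-eq:constraints_r}, the following sketch estimates the server-to-edge payload of a partial update including a simple (non-optimal) index encoding; the weight count is a hypothetical value chosen to match the VGGNet example in \secref{ch5-sec:introduction}.
\begin{verbatim}
import math

I = 14_700_000  # hypothetical weight count (~56 MB at 32 bit/weight)
k = 0.05        # updating ratio

full_mb    = I * 32 / 8 / 1024**2               # full update, no indices
weights_mb = k * I * 32 / 8 / 1024**2
# encode each updated weight's index with ceil(log2(I)) bits
index_mb   = k * I * math.ceil(math.log2(I)) / 8 / 1024**2

print(f"full: {full_mb:.1f} MB, partial: {weights_mb + index_mb:.1f} MB")
# full: 56.1 MB, partial: 4.9 MB
\end{verbatim}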
\section{Deep Partial Updating}
\label{ch5-sec:method}
We develop a two-step approach for resolving the partial updating optimization problem in \equref{ch5-eq:objective_r}-\equref{ch5-eq:constraints_r}.
The final experimental implementation in \secref{ch5-sec:experiment} contains some minor adaptations that do not change the main principles as explained next.
The overall approach is depicted in \figref{ch5-fig:approach}.
\begin{figure}[tb!]
\centering
\includegraphics[width=0.7\textwidth]{./figs/ch5/approach.pdf}
\caption[The overall approach of DPU\xspace.]{The figure depicts the overall approach that consists of two steps. The first step is depicted with dotted arrows and starts from the deployed model $\bm{w}$. In $Q$ optimization steps, all weights are trained to the optimum $\bm{w}^\mathrm{f}$. Based on the collected information, a binary mask $\bm{m}$ is determined that characterizes the set of weights that are rewound to their values in $\bm{w}$. Therefore, the second step (solid arrows) starts from $\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m}$. According to the mask, this initial solution is sparsely fine-tuned to the final weights $\widetilde{\bm{w}}$, i.e.,\xspace $\widetilde{\delta\bm{w}}$ has non-zero values only where the mask value is 1.}
\label{ch5-fig:approach}
\end{figure}
\begin{itemize}
\item
\underline{The First Step:} \textbf{Full Updating and Rewinding.}
The first step not only determines the subset of weights that are allowed to change their values, but also computes the initial values for the second step.
In particular, we first optimize the loss function in \equref{ch5-eq:objective_r} by updating all weights from the initialization $\bm{w}$ with a standard optimizer, e.g.,\xspace SGD or its variants.
We thus obtain the minimized loss $\ell\left(\bm{w}^\mathrm{f}\right)$ with $\bm{w}^\mathrm{f} = \bm{w} + \delta\bm{w}^\mathrm{f}$, where the superscript $\mathrm{f}$ denotes ``full updating''.
To consider the constraint of \equref{ch5-eq:constraints_r}, the information gathered during this optimization is used to determine the subset of weights that will be changed, i.e.,\xspace the weights that are communicated to the edge devices.
In the explanation of the method in \secref{ch5-sec:metric}, we use the mask $\bm{m}$ with $\bm{m} \in\{0,1\}^I$ to describe which weights are subject to change and which ones are not.
The weights with $m_i=1$ are trainable, whereas the weights with $m_i=0$ will be rewound from the values in $\bm{w}^\mathrm{f}$ to their initial values in $\bm{w}$, i.e.,\xspace unchanged.
Obviously, we find $\|\bm{m}\|_0 = \sum_{i} m_i = k \cdot I$.
\item
\underline{The Second Step:} \textbf{Sparse Fine-Tuning.}
In the second step we start a sparse fine-tuning from a model with $k \cdot I$ weights from the optimized model $\bm{w}^\mathrm{f}$ and $(1-k) \cdot I$ weights from the previous, still deployed model $\bm{w}$.
In other words, the initial weights for the second step are $\bm{w} + \delta\bm{w}^{\mathrm{f}} \odot \bm{m}$, where $\odot$ denotes an element-wise multiplication.
To determine the final solution $\widetilde{\bm{w}} = \bm{w}+\widetilde{\delta\bm{w}}$, we conduct a sparse fine-tuning (still with a standard optimizer), i.e.,\xspace we keep all weights with $m_i = 0$ constant during the optimization.
Therefore, $\widetilde{\delta\bm{w}}$ is zero wherever $m_i=0$, and only weights where $m_i = 1$ are updated. A minimal code sketch of both steps is given after this list.
\end{itemize}
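The following is a minimal PyTorch-style sketch of the two steps, assuming a generic training routine \texttt{train} (hypothetical; its \texttt{grad\_mask} argument denotes zeroing the gradients of fixed weights) that operates on the flattened weight vector.
\begin{verbatim}
import torch

def partial_update(w, k, train):
    # w: flat weight vector of the deployed model; k: updating ratio.
    # Step 1: full updating, then rewinding.
    w_f   = train(w.clone())                 # standard optimizer, Q steps
    delta = w_f - w
    c     = delta * delta                    # global contribution; Sec. 5.4.1
    idx   = torch.topk(c, int(k * c.numel())).indices
    m     = torch.zeros_like(w)
    m[idx] = 1.0                             # ||m||_0 = k * I
    # Step 2: sparse fine-tuning from w + delta*m; weights with m == 0
    # stay at their deployed values throughout the optimization.
    w_tilde = train(w + delta * m, grad_mask=m)  # hypothetical kwarg
    return w_tilde, m
\end{verbatim}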
\subsection{Metrics for Rewinding}
\label{ch5-sec:metric}
We will now describe a new metric that determines the weights that should be kept constant, i.e.,\xspace with $m_i=0$.
Like most learning methods, we focus on minimizing a loss function.
The two-step approach relies on the following assumption: the better the loss $\ell(\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m})$ of the initial solution for the second step, the better the final performance.
Therefore, the first step should select a mask $\bm{m}$ such that the loss difference $\ell(\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m}) - \ell(\bm{w}^\mathrm{f})$ is as small as possible.
To determine an optimized mask $\bm{m}$, we propose to upper-bound the above loss difference from two viewpoints, and measure each weight's contribution to the resulting bounds.
The ``global contribution'' uses the norm information of incremental weights $\delta\bm{w}^\mathrm{f}=\bm{w}^\mathrm{f}-\bm{w}$.
The ``local contribution'' takes into account the gradient-based information that is gathered during the optimization in the first step, i.e.,\xspace in the path from $\bm{w}$ to $\bm{w}^\mathrm{f}$.
Both contributions will be combined to determine the mask $\bm{m}$.
Both viewpoints are based on the concept of smooth differentiable functions.
A function $f(x)$ with $f: \mathbb{R}^d \rightarrow \mathbb{R}$ is called $L$-smooth if it has a Lipschitz continuous gradient $g(x)$: $\|g(x) - g(y)\|_2 \leq L \|x - y\|_2$ for all $x, y$.
Note that Lipschitz continuity of gradients is essential to ensuring convergence of many gradient-based algorithms.
Under such a condition, one can derive the following bounds, see also \cite{bib:Book98:Nesterov}:
\begin{equation} \label{ch5-eq:lipschitz}
|f(y) - f(x) - g(x)^\mathrm{T} \cdot (y - x) | \leq L/2 \cdot \|y - x\|_2^2 \quad \forall x, y
\end{equation}
\fakeparagraph{Global Contribution}
One would argue that a large absolute value in $\delta\bm{w}^\mathrm{f} = \bm{w}^\mathrm{f} - \bm{w}$ indicates that this weight has moved far from its initial value in $\bm{w}$, and thus should not be rewound.
This motivates us to adopt the widely used unstructured magnitude pruning to determine the mask $\bm{m}$.
Magnitude pruning prunes the weights with the lowest magnitudes, which often achieves a good trade-off between the model accuracy and the number of zero weights \cite{bib:ICLR20:Renda}.
Using $a - b \leq |a - b|$, \equref{ch5-eq:lipschitz} can be reformulated as $f(y) - f(x) - g(x)^T (y-x) \leq | f(y) - f(x) - g(x)^T (y-x) | \leq L/2 \cdot \| y-x \|^2_2$.
Thus, we can bound the relevant loss difference $\ell(\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m}) - \ell(\bm{w}^\mathrm{f}) \geq 0$ as
\begin{equation} \label{ch5-eq:globalbound}
\ell(\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m}) - \ell(\bm{w}^\mathrm{f}) \leq \bm{g}(\bm{w}^\mathrm{f})^\mathrm{T} \cdot \left( \delta\bm{w}^\mathrm{f} \odot (\bm{m} - \bm{1}) \right) + L/2 \cdot \| \delta\bm{w}^\mathrm{f} \odot (\bm{m} - \bm{1})\|_2^2
\end{equation}
where $\bm{g}(\bm{w}^\mathrm{f})$ denotes the loss gradient at $\bm{w}^\mathrm{f}$, and $\bm{1}$ is a vector whose elements are all 1.
As the loss is optimized at $\bm{w}^\mathrm{f}$, i.e.,\xspace $\bm{g}(\bm{w}^\mathrm{f})\approx \bm{0}$, we can assume that the gradient term is much smaller than the norm of the weight differences in \equref{ch5-eq:globalbound}.
Therefore, we have
\begin{equation} \label{ch5-eq:globalsum}
\ell(\bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m}) - \ell(\bm{w}^\mathrm{f}) \lesssim L/2 \cdot \| \delta\bm{w}^\mathrm{f} \odot (\bm{1} - \bm{m})\|_2^2
\end{equation}
The right hand side is clearly minimized if $m_i = 1$ for the largest absolute values of $\delta\bm{w}^\mathrm{f}$.
As $\bm{1}^\mathrm{T} \cdot \left( \bm{c}^\mathrm{global} \odot (\bm{1} - \bm{m}) \right) = \| \delta\bm{w}^\mathrm{f} \odot (\bm{1} - \bm{m})\|_2^2$, this information is captured in the contribution vector
\begin{equation} \label{ch5-eq:globalc}
\bm{c}^\mathrm{global} = \delta\bm{w}^\mathrm{f} \odot \delta\bm{w}^\mathrm{f}
\end{equation}
The $k \cdot I$ weights with the largest values in $\bm{c}^\mathrm{global}$ are assigned to mask values $1$ and are further fine-tuned in the second step, whereas all others are rewound to their initial values in $\bm{w}$.
\algoref{ch5-alg:gcpu} shows this first approach.
\begin{algorithm}[tbp!]
\caption{Global Contribution Partial Updating (Prune Incremental Weights)}\label{ch5-alg:gcpu}
\KwIn{Weights $\bm{w}$, updating ratio $k$, learning rate $\{\alpha^q\}_{q=1}^{Q}$}
\KwOut{Weights $\widetilde{\bm{w}}$}
\tcc{The first step: full updating and rewinding}
Initiate $\bm{w}^0=\bm{w}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
Compute the loss gradient $\bm{g}(\bm{w}^{q-1})=\partial\ell(\bm{w}^{q-1})/\partial\bm{w}^{q-1}$\;
Compute the optimization step with learning rate $\alpha^q$ as $\Delta\bm{w}^{q}$\;
Update $\bm{w}^{q} = \bm{w}^{q-1} + \Delta\bm{w}^{q}$\;
}
Set $\bm{w}^{\mathrm{f}}=\bm{w}^{Q}$ and get $\delta\bm{w}^{\mathrm{f}} = \bm{w}^{\mathrm{f}}-\bm{w}$\;
Compute $\bm{c}^{\mathrm{global}}=\delta\bm{w}^{\mathrm{f}}\odot\delta\bm{w}^{\mathrm{f}}$ and sort in descending order\;
Create binary masks $\bm{m}$ with $1$ for Top-$(k \cdot I)$ indices, $0$ for others\;
\tcc{The second step: sparse fine-tuning}
Initiate $\widetilde{\delta\bm{w}}=\delta\bm{w}^{\mathrm{f}}\odot\bm{m}$ and $\widetilde{\bm{w}} = \bm{w}+\widetilde{\delta\bm{w}}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
Compute the optimization step with learning rate $\alpha^q$ as $\Delta\widetilde{\bm{w}}^{q}$\;
Update $\widetilde{\delta\bm{w}}=\widetilde{\delta\bm{w}}+\Delta\widetilde{\bm{w}}^{q}\odot\bm{m}$ and $\widetilde{\bm{w}}=\bm{w}+\widetilde{\delta\bm{w}}$\;
}
\end{algorithm}
\fakeparagraph{Local Contribution}
As our experiments show, one can do better by additionally leveraging gradient-based information gathered during the first step, i.e.,\xspace while optimizing the initial weights $\bm{w}$ in $Q$ traditional optimization steps, $\bm{w} = \bm{w}^0 \rightarrow \cdots \; \rightarrow \bm{w}^{q-1} \rightarrow \bm{w}^{q} \rightarrow \cdots \; \rightarrow \bm{w}^{Q} = \bm{w}^\mathrm{f}$.
Using $ -a + b \leq | a - b|$, \equref{ch5-eq:lipschitz} can be reformulated as $f(x) - f(y) + g(x)^T (y-x) \leq | f(y) - f(x) - g(x)^T (y-x) | \leq L/2 \cdot \| y-x \|^2_2$.
Thus, each optimization step is bounded as
\begin{equation} \label{ch5-eq:localbound}
\ell(\bm{w}^{q-1}) - \ell(\bm{w}^q) \leq -\bm{g}(\bm{w}^{q-1})^\mathrm{T} \cdot \Delta\bm{w}^q + L/2 \cdot \| \Delta\bm{w}^q \|_2^2
\end{equation}
where $\Delta\bm{w}^q = \bm{w}^q - \bm{w}^{q-1}$.
For a conventional gradient descent optimizer with a small learning rate, we can use the approximation $|\bm{g}(\bm{w}^{q-1})^\mathrm{T} \cdot \Delta\bm{w}^q| \gg \|\Delta\bm{w}^q\|_2^2$ and obtain $ \ell(\bm{w}^{q-1}) - \ell(\bm{w}^q) \lesssim -\bm{g}(\bm{w}^{q-1})^\mathrm{T} \cdot \Delta\bm{w}^q$.
Summing up over all optimization iterations yields approximately
\begin{equation} \label{ch5-eq:localsum}
\ell(\bm{w}^\mathrm{f} - \delta\bm{w}^\mathrm{f}) - \ell(\bm{w}^\mathrm{f}) \lesssim -\sum_{q = 1}^Q \bm{g}(\bm{w}^{q-1})^\mathrm{T} \cdot \Delta\bm{w}^q
\end{equation}
Note that we have $\bm{w} = \bm{w}^\mathrm{f} - \delta\bm{w}^\mathrm{f}$ and $\delta\bm{w}^\mathrm{f} = \sum_{q = 1}^Q \Delta\bm{w}^q$.
Therefore, with a small updating ratio $k$, i.e.,\xspace $\bm{m} \sim \bm{0}$, we can reformulate \equref{ch5-eq:localsum} as
$
\ell\left( \bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m} \right) - \ell(\bm{w}^\mathrm{f}) \lesssim \mathrm{U}(\bm{m})
$
with the upper bound
$
\mathrm{U}(\bm{m}) = -\sum_{q = 1}^Q \bm{g}(\bm{w}^{q-1})^\mathrm{T} \cdot (\Delta\bm{w}^q \odot (\bm{1} - \bm{m}))
$
where we suppose that the gradients are approximately constant for $\bm{m} \sim \bm{0}$ (i.e.,\xspace $\bm{m}$ has zero entries almost everywhere).
Therefore, an approximate incremental contribution of each weight dimension to the upper bound on the loss difference $\ell\left( \bm{w} + \delta\bm{w}^\mathrm{f} \odot \bm{m} \right) - \ell(\bm{w}^\mathrm{f})$
can be determined by the negative gradient vector at $\bm{m}=\bm{0}$, denoted as
\begin{equation}
\bm{c}^{\mathrm{local}} = - \frac{\partial \mathrm{U}(\bm{m})}{\partial \bm{m}} = - \sum_{q=1}^{Q} \bm{g}(\bm{w}^{q-1}) \odot \Delta\bm{w}^{q}
\end{equation}
which models the accumulated contribution to the overall loss reduction.
Note that the partial derivatives are computed by assuming that $\bm{m}$ is continuous in a small area around $\bm{0}$.
\fakeparagraph{Combining Global and Local Contribution}
So far, we independently calculate the global and local contributions.
To avoid the scale impact, we first normalize each contribution by its significance in its own set (either global or local contribution set).
We investigated the impacts and the different combinations of both normalized contributions, see results in \secref{ch5-sec:experiment_impact}.
Interestingly, the most straightforward combination (i.e.,\xspace the sum of both normalized metrics) often yields a satisfactory and stable performance.
Intuitively, local contribution can better identify critical weights w.r.t. the loss during training, while global contribution may be more robust for a highly non-convex loss landscape.
Both metrics may be necessary when selecting weights to rewind.
Therefore, the combined contribution is computed as
\begin{equation}
\bm{c} = \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{global}} \bm{c}^\mathrm{global} + \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{local}} \bm{c}^\mathrm{local}
\label{ch5-eq:combined}
\end{equation}
and $m_i = 1$ for the $k \cdot I$ largest values of $\bm{c}$ and $m_i = 0$ otherwise.
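As a concrete illustration, \equref{ch5-eq:combined} and the mask construction can be realized as in the following Python sketch (tensor and function names are illustrative assumptions):
\begin{verbatim}
import torch

def combined_mask(delta_w_f, c_local, k):
    """Sum of normalized global/local contributions; m_i = 1 for the
    Top-(k*I) values of c, and m_i = 0 otherwise."""
    c_global = delta_w_f * delta_w_f     # global contribution
    c = c_global / c_global.sum() + c_local / c_local.sum()
    m = torch.zeros_like(c)
    m[torch.topk(c, int(k * c.numel())).indices] = 1.0
    return m
\end{verbatim}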
The pseudocode of Deep Partial Updating (DPU\xspace), i.e.,\xspace rewinding according to the combined contribution to the loss reduction, is shown in \algoref{ch5-alg:dpu}.
We further analyze the complexity of \algoref{ch5-alg:dpu}.
Recall that the dimensionality of the weights vector is denoted as $I$.
In the $Q$ optimization iterations of the first step, \algoref{ch5-alg:dpu} introduces an extra time complexity of $O(QI)$ and an extra space complexity of $O(I)$ relative to the original optimizer.
The rest of the first step takes a time complexity of $O(I\cdot\mathrm{log}(I))$ for sorting (e.g.,\xspace using heap sort or quick sort) and a space complexity of $O(I)$.
The $Q$ optimization iterations of the second step introduce the same extra complexities, i.e.,\xspace $O(QI)$ in time and $O(I)$ in space.
Thus, the total extra time complexity is $O(2QI+I\cdot\mathrm{log}(I))$ and the total extra space complexity is $O(I)$.
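The sparse fine-tuning of the second step amounts to masking every optimization step with $\bm{m}$; a minimal Python sketch is given below (again with plain SGD standing in for the actual optimizer, and illustrative names):
\begin{verbatim}
import torch

def sparse_finetune(w, delta_w_f, m, grad_fn, lr, Q):
    """Rewind the non-selected increments, then fine-tune only the
    selected weights by masking every optimization step with m."""
    delta = delta_w_f * m              # rewinding: keep selected increments
    for _ in range(Q):
        g = grad_fn(w + delta)         # gradient at the current weights
        delta = delta + (-lr * g) * m  # only masked weights move
    return w + delta
\end{verbatim}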
\subsection{(Re-)Initialization of Weights}
\label{ch5-sec:initialization}
In this section, we discuss the initialization of our method.
$\mathcal{D}^1$ denotes the initial dataset used to train the model $\bm{w}^1$ from a randomly initialized model $\bm{w}^0$.
$\mathcal{D}^1$ corresponds to the dataset available before deployment, or to the data collected in the $0$-th round if no data are available before deployment.
$\{\delta\mathcal{D}^r\}_{r=2}^R$ denotes newly collected samples in each subsequent round.
\begin{algorithm}[tbp!]
\caption{Deep Partial Updating}\label{ch5-alg:dpu}
\KwIn{Weights $\bm{w}$, updating ratio $k$, learning rate $\{\alpha^q\}_{q=1}^{Q}$}
\KwOut{Weights $\widetilde{\bm{w}}$}
\tcc{The first step: full updating and rewinding}
Initiate $\bm{w}^0=\bm{w}$ and $\bm{c}^{\mathrm{local}}=\bm{0}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
Compute the loss gradient $\bm{g}(\bm{w}^{q-1})=\partial\ell(\bm{w}^{q-1})/\partial\bm{w}^{q-1}$\;
Compute the optimization step with learning rate $\alpha^q$ as $\Delta\bm{w}^{q}$\;
Update $\bm{w}^{q} = \bm{w}^{q-1} + \Delta\bm{w}^{q}$\;
Update $\bm{c}^{\mathrm{local}}=\bm{c}^{\mathrm{local}}-\bm{g}(\bm{w}^{q-1})\odot\Delta\bm{w}^{q}$\;
}
Set $\bm{w}^{\mathrm{f}}=\bm{w}^{Q}$ and get $\delta\bm{w}^{\mathrm{f}} = \bm{w}^{\mathrm{f}}-\bm{w}$\;
Compute $\bm{c}^{\mathrm{global}}=\delta\bm{w}^{\mathrm{f}}\odot\delta\bm{w}^{\mathrm{f}}$\;
Compute $\bm{c}$ as \equref{ch5-eq:combined} and sort in descending order\;
Create binary masks $\bm{m}$ with $1$ for Top-$(k \cdot I)$ indices, $0$ for others\;
\tcc{The second step: sparse fine-tuning}
Initiate $\widetilde{\delta\bm{w}}=\delta\bm{w}^{\mathrm{f}}\odot\bm{m}$ and $\widetilde{\bm{w}} = \bm{w}+\widetilde{\delta\bm{w}}$\;
\For {$q \leftarrow 1$ \KwTo $Q$} {
Compute the optimization step with learning rate $\alpha^q$ as $\Delta\widetilde{\bm{w}}^{q}$\;
Update $\widetilde{\delta\bm{w}}=\widetilde{\delta\bm{w}}+\Delta\widetilde{\bm{w}}^{q}\odot\bm{m}$ and $\widetilde{\bm{w}}=\bm{w}+\widetilde{\delta\bm{w}}$\;
}
\end{algorithm}
Experimental results show (see \secref{ch5-sec:experiment_fullupdating}) that training from a randomly initialized model can yield a higher accuracy \textit{after a large number of rounds}, compared to always training from the last round $\bm{w}^{r-1}$.
As a possible explanation, the optimizer could end up in a hard-to-escape region of the search space if the model is always trained from the last round over a long sequence of rounds.
Thus, we propose to re-initialize the weights after a certain number of rounds.
In such a case, \algoref{ch5-alg:dpu} does not start from the weights $\bm{w}^{r-1}$ but from randomly initialized weights.
The randomly re-initialized model (weights) can be efficiently sent to the edge devices via a single random seed.
The device can determine the weights by means of a random generator.
This process realizes a random shift in the search space and is communication-efficient in comparison to alternatives such as learning to increase the loss or using the (averaged) weights of previous rounds, since those fully changed weights would still need to be sent to each node.
Each time the model is randomly re-initialized, the new partially updated model might suffer from an accuracy drop in a few rounds.
However, we can simply avoid such an accuracy drop by not updating the model if the validation accuracy does not increase compared to the last round, see in \secref{ch5-sec:experiment_reinit}.
Note that the learned knowledge thrown away by re-initialization can be re-learned afterwards, since all collected samples are continuously stored and accumulated in the server.
This also makes our setting different from continual learning, which aims at avoiding catastrophic forgetting without accessing old data.
To determine after how many rounds the model should be re-initialized, we conduct extensive experiments on different partial updating settings, see more discussions and results in \secref{ch5-sec:experiment_reinit}.
In conclusion, the model is randomly re-initialized as soon as the number of newly collected data samples exceeds the number of samples available when the model was last re-initialized.
For example, assume that at round $r$ the model is randomly (re-)initialized and partially updated from this random model on dataset $\mathcal{D}^r$.
Then, the model will be re-initialized again at round $r+n$, if $|\mathcal{D}^{r+n}|>2\cdot|\mathcal{D}^r|$, where $|.|$ denotes the number of samples in the dataset.
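This doubling rule is easy to implement; a minimal sketch follows (the function name is an illustrative assumption):
\begin{verbatim}
def should_reinitialize(num_samples_now, num_samples_at_last_reinit):
    """Re-initialize as soon as the newly collected samples since the
    last re-initialization exceed the dataset size at that time,
    i.e., |D^{r+n}| > 2 * |D^r|."""
    return num_samples_now > 2 * num_samples_at_last_reinit
\end{verbatim}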
\section{Evaluation}
\label{ch5-sec:experiment}
In this section, we experimentally show that through updating a small subset of weights, DPU\xspace can reach a similar accuracy as full updating while requiring a significantly lower communication cost.
We implement DPU\xspace with Pytorch \cite{bib:NIPSWorkshop17:Paszke}, and evaluate on public vision datasets, including MNIST \cite{bib:MNIST}, CIFAR10 \cite{bib:CIFAR}, CIFAR100 \cite{bib:CIFAR}, ImageNet \cite{bib:ILSVRC15}, using multilayer perceptron (MLP), VGGNet \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari}, ResNet56 \cite{bib:CVPR16:He}, MobileNetV1 \cite{bib:arXiv17:Howard}, respectively.
Particularly, we partition the experiments into multi-round updating and single-round updating.
\fakeparagraph{Multi-Round Updating}
We consider the case where limited (or even zero) samples are available before the initial deployment, and data samples are continuously collected by edge devices and sent to the server over a long period (the event rate is often low in real cases \cite{bib:IPSN19:Meyer}).
The server retrains the model and sends the updates to each device in multiple rounds.
Given the highly constrained communication resources, we choose low-resolution image datasets (MNIST \cite{bib:MNIST} and CIFAR10/100 \cite{bib:CIFAR}) to evaluate multi-round updating.
We conduct one-shot rewinding in multi-round DPU\xspace, i.e.,\xspace rewinding is executed only once per round to achieve the desired updating ratio as in \algoref{ch5-alg:dpu}, which avoids frequently hand-tuning hyperparameters (e.g.,\xspace an updating ratio schedule) over a large number of rounds.
\fakeparagraph{Single-Round Updating}
The deployed model is updated once via server-to-edge communication when new data from other sources become available on the server after some time, e.g.,\xspace releasing a new version of mobile applications based on newly retrieved internet data.
Although DPU\xspace is elaborated and designed under multi-round updating settings, it can be applied directly in single-round updating.
Since transmission from edge devices may not even be necessary, we evaluate single-round DPU\xspace on the large-scale ImageNet dataset.
Iterative rewinding is adopted here due to its better performance.
Particularly, we alternately rewind 20\% of the remaining trainable weights according to \equref{ch5-eq:combined} and sparsely fine-tune, until the desired updating ratio is reached.
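For illustration, the resulting sequence of updating ratios can be generated as in the following sketch, where the factor 0.8 corresponds to rewinding 20\% of the remaining trainable weights per iteration (names are assumptions):
\begin{verbatim}
def iterative_rewinding_schedule(k_target, keep=0.8):
    """Yield the updating ratios for iterative rewinding: each iteration
    keeps 80% of the remaining trainable weights, until the target
    updating ratio is reached."""
    k = 1.0
    while k * keep > k_target:
        k *= keep
        yield k
    yield k_target

# e.g., list(iterative_rewinding_schedule(0.2)) yields (rounded)
# [0.8, 0.64, 0.51, 0.41, 0.33, 0.26, 0.21, 0.2]
\end{verbatim}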
\fakeparagraph{General Settings for All Experiments}
We randomly select 30\% of the original test dataset (original validation dataset for ImageNet) as the validation dataset, and the remainder as the test dataset.
Let $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ represent the available data samples along rounds, where $|\delta\mathcal{D}^r|$ is assumed to be constant across rounds.
Both $\mathcal{D}^1$ and $\delta\mathcal{D}^r$ are randomly sampled (without replacement) from the original training dataset to simulate the data collection.
In each round, we report the test accuracy at the checkpoint where the validation dataset achieves the highest Top-1 accuracy during retraining.
When the validation accuracy does not increase compared to the previous round, the models are not updated to reduce the communication overhead.
This strategy is also applied to other baselines to enable a fair comparison.
We use the average cross-entropy as the loss function, the Adam variant of SGD as the optimizer for MLP and VGGNet, and Nesterov SGD for ResNet56 and MobileNetV1.
More implementation details are provided in \secref{ch5-sec:experiment_benchmark}.
\fakeparagraph{Indexing}
DPU\xspace generates a sparse tensor.
In addition to the updated weights, the indices of these weights also need to be sent to each edge device.
A simple implementation is to send the mask $\bm{m}$, i.e.,\xspace a binary vector of $I$ elements.
Let $S_w$ denote the bitwidth of each single weight, and $S_x$ denote the bitwidth of each index.
Directly sending $\bm{m}$ yields an overall communication cost of $I\cdot k \cdot S_w+I \cdot S_x$ with $S_x=1$.
To save communication cost on indexing, we further encode $\bm{m}$. Suppose that $\bm{m}$ is a random binary vector where each entry is 1 with probability $k$.
The optimal encoding scheme according to Shannon yields $S_x(k)=k \cdot \mathrm{log}(1/k) + (1-k) \cdot \mathrm{log}(1/(1-k))$.
Coding schemes such as Huffman block coding can come close to this bound.
We use $S_w\cdot k\cdot I + S_x(k)\cdot I$ to report the size of data transmitted from server to each node at each round, contributed by the partially updated weights plus the encoded indices of these weights.
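These quantities are straightforward to evaluate; the following sketch computes the Shannon bound and the per-round message size (function names are assumptions, and $S_w=32$ corresponds to FP32 weights):
\begin{verbatim}
import math

def index_bitwidth(k):
    """Shannon bound: S_x(k) = k*log2(1/k) + (1-k)*log2(1/(1-k))."""
    if k <= 0.0 or k >= 1.0:
        return 0.0
    return k * math.log2(1 / k) + (1 - k) * math.log2(1 / (1 - k))

def update_message_bits(I, k, S_w=32):
    """Bits per round: partially updated weights + encoded indices."""
    return S_w * k * I + index_bitwidth(k) * I

# e.g., k = 0.01 costs about 0.4 bits per weight, versus 32 bits per
# weight for full updating, i.e., a ratio of roughly 1.3%.
\end{verbatim}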
\subsection{Benchmarking Details}
\label{ch5-sec:experiment_benchmark}
\subsubsection{MLP on MNIST}
\label{ch5-sec:experiment_mlp}
The MNIST dataset \cite{bib:MNIST} consists of $28\times28$ gray scale images in 10 digit classes.
It contains a training dataset with 60000 data samples, and a test dataset with 10000 data samples.
We use the original training dataset for training; we randomly select 3000 samples from the original test dataset for validation and use the remaining 7000 samples for testing.
We train with a mini-batch size of 128 on one GeForce RTX 3090 GPU.
We use the Adam variant of SGD as the optimizer with all default parameters provided by Pytorch.
The number of training epochs is chosen as 60 at each round.
The initial learning rate is $0.05$, and it decays by a factor of 0.1 every $20$ epochs.
The used MLP contains two hidden layers, and each hidden layer contains 512 hidden units.
The input is a 784-dim tensor consisting of all pixel values in each image.
All weights of the MLP require around $2.67$MB.
Each data sample needs $0.784$KB.
The size of the MLP thus equals that of around 3400 data samples.
The MLP architecture is 2$\times$512FC - 10SVM.
\subsubsection{VGGNet on CIFAR10}
\label{ch5-sec:experiment_vgg}
The CIFAR10 dataset \cite{bib:CIFAR} consists of $32\times32$ color images in 10 object classes.
It contains a training dataset with 50000 data samples, and a test dataset with 10000 data samples.
We use the original training dataset for training; we randomly select 3000 samples from the original test dataset for validation and use the remaining 7000 samples for testing.
We train with a mini-batch size of 128 on one GeForce RTX 3090 GPU.
We use the Adam variant of SGD as the optimizer with all default parameters provided by Pytorch.
The number of training epochs is chosen as 60 at each round.
The initial learning rate is $0.05$, and it decays by a factor of 0.2 every $20$ epochs.
The VGGNet used here is a modified version of the original VGG \cite{bib:ICLR15:Simonyan} and is widely adopted in previous compression works \cite{bib:NIPS15:Courbariaux,bib:ECCV16:Rastegari}.
All weights of VGGNet require around $56.09$MB.
Each data sample needs $3.072$KB.
The size of VGGNet thus equals that of around 18200 data samples.
The VGGNet architecture is 2$\times$128C3 - MP2 - 2$\times$256C3 - MP2 - 2$\times$512C3 - MP2 - 2$\times$1024FC - 10SVM.
\subsubsection{ResNet56 on CIFAR100}
\label{ch5-sec:experiment_resnet56}
Similar to CIFAR10, the CIFAR100 dataset \cite{bib:CIFAR} consists of $32\times32$ color images in 100 object classes.
It contains a training dataset with 50000 data samples, and a test dataset with 10000 data samples.
We use the original training dataset for training; we randomly select 3000 samples from the original test dataset for validation and use the remaining 7000 samples for testing.
We train with a mini-batch size of 128 on one GeForce RTX 3090 GPU.
We use Nesterov SGD with weight decay 0.0001 as the optimizer, with all other default parameters provided by Pytorch.
The number of training epochs is chosen as 100 at each round.
The initial learning rate is $0.1$, and it decays with the cosine annealing schedule.
The ResNet56 used in our experiments is proposed in \cite{bib:CVPR16:He}.
All weights of ResNet56 require around $3.44$MB.
Each data sample needs $3.072$KB.
The size of ResNet56 thus equals that of around 1100 data samples.
\subsubsection{MobileNetV1 on ImageNet}
\label{ch5-sec:experiment_mobilenetv1}
The ImageNet dataset \cite{bib:ILSVRC15} consists of high-resolution color images in 1000 object classes.
It contains a training dataset with $1.28$ million data samples, and a validation dataset with 50000 data samples.
Following the commonly used pre-processing \cite{bib:torchResNet}, each sample (single image) is randomly resized and cropped into a $224\times224$ color image.
We use the original training dataset for training; we randomly select 15000 samples from the original validation dataset for validation and use the remaining 35000 samples for testing.
We train with a mini-batch size of 1024 on four GeForce RTX 3090 GPUs.
We use Nesterov SGD with weight decay 0.0001 as the optimizer, with all other default parameters provided by Pytorch.
The number of training epochs is chosen as 150 at each round.
The initial learning rate is $0.5$, and it decays with the cosine annealing schedule.
The MobileNetV1 used in our experiments is proposed in \cite{bib:arXiv17:Howard}.
All weights of MobileNetV1 require around $16.93$MB.
Each data sample needs $150.528$KB.
The size of MobileNetV1 thus equals that of around 340 data samples.
\subsection{Ablation Studies on Full Updating}
\label{ch5-sec:experiment_fullupdating}
\fakeparagraph{Settings}
In this section, we compare full updating with different initialization at each round to confirm the best-performed full updating baseline.
The compared full updating methods include: (\textit{i}) the model is trained from a different random initialization at each round; (\textit{ii}) the model is trained from the same random initialization at each round, i.e.,\xspace with the same random seed; (\textit{iii}) the model is trained at each round from the weights $\bm{w}^{r-1}$ of the last round.
The experiments are conducted on VGGNet using the CIFAR10 dataset with different amounts of training samples $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$.
Each experiment is run three times using random data samples.
\fakeparagraph{Results}
We report the mean and the standard deviation of test accuracy (over three runs) under different initialization in \figref{ch5-fig:fullupdating}.
The results show that training from the same random initialization yields a similar accuracy level to, and sometimes a lower variance than, training from a different random initialization at each round.
In comparison to training from scratch (i.e.,\xspace random initialization), training from $\bm{w}^{r-1}$ may yield a higher accuracy in the first few rounds; yet training from scratch consistently outperforms it after a large number of rounds.
Thus, in this chapter, we adopt training from the same random initialization at each round, i.e.,\xspace (\textit{ii}), as the full updating baseline.
\begin{figure}[tbp!]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{ccc}
\textbf{~~~~\{1000,~5000\}} & \textbf{~~~~\{5000,~1000\}} & \textbf{~~~~\{1000,~1000\}} \\
\tabimgb{./figs/ch5/1000-5000-f} & \tabimgb{./figs/ch5/5000-1000-f} & \tabimgb{./figs/ch5/1000-1000-f} \\
\tabimgb{./figs/ch5/1000-5000-fv} & \tabimgb{./figs/ch5/5000-1000-fv} & \tabimgb{./figs/ch5/1000-1000-fv} \\
\end{tabular}
\caption[Comparing full updating methods with different initialization at each round.]{Comparing full updating methods with different initialization at each round.}
\label{ch5-fig:fullupdating}
\end{figure}
\subsection{Number of Rounds for Re-Initialization}
\label{ch5-sec:experiment_reinit}
\fakeparagraph{Settings}
In these experiments, we re-initialize the model every $n$ rounds under different partial updating settings to determine a heuristic rule to set the number of rounds for re-initialization.
We conduct experiments on VGGNet using the CIFAR10 dataset, with different amounts of training samples $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ and different updating ratios $k$.
Every $n$ rounds, the model is (re-)initialized from the same random model (as mentioned in \secref{ch5-sec:experiment_fullupdating}), then partially updated over the next $n$ rounds with \algoref{ch5-alg:dpu}.
We choose $n=1,5,10,20$.
Specifically, $n=1$ means that the model is partially updated from the same random model every round, i.e.,\xspace without reusing the learned knowledge at all.
Each experiment runs three times using random data samples.
\fakeparagraph{Results}
We plot the mean test accuracy along rounds in \figref{ch5-fig:partial_reinit}.
By comparing $n=1$ with the other settings, we can conclude that, within a certain number of rounds, the currently deployed model $\bm{w}^{r-1}$ (i.e.,\xspace the model from the last round) is a better starting point for \algoref{ch5-alg:dpu} than a randomly initialized model.
In other words, partially updating from the last round may yield a higher accuracy than partially updating from a random model with the same training effort.
This is straightforward, since such a model is already pretrained on a subset of the currently available data samples, and the previously learned knowledge can help the new training process.
Since all newly collected samples are continuously stored in the server, complete information about the past data samples is available.
This also makes our setting different from the continual learning setting, which aims at avoiding catastrophic forgetting without accessing (at least not all) old data.
Each time the model is re-initialized, the new partially updated model might suffer from an accuracy drop in a few rounds.
Although this accuracy drop might be alleviated by carefully tuning the partial updating training scheme every time, this is not feasible given a large number of updating rounds.
However, we can simply avoid such an accuracy drop by not updating the model if the validation accuracy does not increase compared to the last round (as discussed in \secref{ch5-sec:experiment}).
Note that in this situation, the partially updated weights (as well as the random seed for re-initialization) still need to be sent to the edge devices, since this is an on-going training process.
After implementing the above strategy, we plot the mean accuracy in \figref{ch5-fig:partial_reinit_savecomm}.
In addition, we also add the corresponding results of full updating in \figref{ch5-fig:partial_reinit_savecomm}, where the model is fully updated and re-initialized every $n$ rounds from the same random model.
Note that full updating with re-initialization every round ($n=1$) is exactly the same as ``same rand init.'' in \figref{ch5-fig:fullupdating} in \secref{ch5-sec:experiment_fullupdating}.
From \figref{ch5-fig:partial_reinit_savecomm}, we can conclude that the model needs to be re-initialized more frequently in the first several rounds than in the following rounds to achieve a higher accuracy level.
The model also needs to be re-initialized more frequently with a large partial updating ratio $k$.
Particularly, the ratio between the number of current data samples and the number of following collected data samples has a larger impact than the updating ratio.
Thus, we propose to re-initialize the model as soon as the number of newly collected data samples exceeds the number of samples available when the model was last re-initialized.
For example, assume that at round $r$ the model is randomly (re-)initialized and partially updated from the random model on dataset $\mathcal{D}^r$.
Then, the model will be re-initialized at round $r+n$, if $|\mathcal{D}^{r+n}|>2\cdot|\mathcal{D}^r|$.
\begin{figure}[tbp!]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~\{1000,~5000\}} & \textbf{~~~~\{5000,~1000\}} & \textbf{~~~~\{1000,~1000\}} \\
\rotatebox{90}{\textbf{0.01}} & \tabimgc{./figs/ch5/1000-5000-1-r} & \tabimgc{./figs/ch5/5000-1000-1-r} & \tabimgc{./figs/ch5/1000-1000-1-r} \\
\rotatebox{90}{\textbf{0.05}} & \tabimgc{./figs/ch5/1000-5000-2-r} & \tabimgc{./figs/ch5/5000-1000-2-r} & \tabimgc{./figs/ch5/1000-1000-2-r} \\
\rotatebox{90}{\textbf{0.1}} & \tabimgc{./figs/ch5/1000-5000-3-r} & \tabimgc{./figs/ch5/5000-1000-3-r} & \tabimgc{./figs/ch5/1000-1000-3-r} \\
\end{tabular}
\caption[Comparison w.r.t. the mean accuracy when DPU\xspace is re-initialized every $n$ rounds.]{Comparison w.r.t. the mean accuracy when DPU\xspace is re-initialized every $n$ rounds ($n=1,5,10,20$) under different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ and updating ratio ($k=0.01,0.05,0.1$) settings.}
\label{ch5-fig:partial_reinit}
\end{figure}
\begin{figure}[tbp!]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~\{1000,~5000\}} & \textbf{~~~~\{5000,~1000\}} & \textbf{~~~~\{1000,~1000\}} \\
\rotatebox{90}{\textbf{0.01}} & \tabimgc{./figs/ch5/1000-5000-1-rp} & \tabimgc{./figs/ch5/5000-1000-1-rp} & \tabimgc{./figs/ch5/1000-1000-1-rp} \\
\rotatebox{90}{\textbf{0.05}} & \tabimgc{./figs/ch5/1000-5000-2-rp} & \tabimgc{./figs/ch5/5000-1000-2-rp} & \tabimgc{./figs/ch5/1000-1000-2-rp} \\
\rotatebox{90}{\textbf{0.1}} & \tabimgc{./figs/ch5/1000-5000-3-rp} & \tabimgc{./figs/ch5/5000-1000-3-rp} & \tabimgc{./figs/ch5/1000-1000-3-rp} \\
\rotatebox{90}{\textbf{1}} & \tabimgc{./figs/ch5/1000-5000-fr} & \tabimgc{./figs/ch5/5000-1000-fr} & \tabimgc{./figs/ch5/1000-1000-fr} \\
\end{tabular}
\caption[Comparison w.r.t. the mean accuracy when DPU\xspace is re-initialized every $n$ rounds.]{Comparison w.r.t. the mean accuracy when DPU\xspace is re-initialized every $n$ rounds ($n=1,5,10,20$) under different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ and updating ratio ($k=0.01,0.05,0.1$ and full updating $k=1$) settings.}
\label{ch5-fig:partial_reinit_savecomm}
\end{figure}
\subsection{Impacts from Global/Local Contributions}
\label{ch5-sec:experiment_impact}
\subsubsection{Ablation Studies of Rewinding Metrics}
\label{ch5-sec:experiment_rewind}
\fakeparagraph{Settings}
We conduct a set of ablation experiments regarding different rewinding metrics discussed in \secref{ch5-sec:metric}.
We compare the influence of the local and global contributions as well as their combination, in terms of the training loss after the rewinding and the final test accuracy.
We conduct single-round updating on VGGNet.
The initial model is fully trained on a randomly selected dataset of $10^3$ samples.
After adding $10^3$ new randomly selected samples, we conduct the first step of our approach (see \algoref{ch5-alg:dpu}) with all three rewinding metrics, i.e.,\xspace the global contribution, the local contribution, and the combined contribution.
Accordingly, the second step (sparse fine-tuning) is executed.
The experiment is executed over five runs with different random seeds.
\begin{table}[tbp!]
\centering
\caption[Comparing training loss after rewinding and the final test accuracy under different metrics.]{Comparing training loss after rewinding and the final test accuracy under different metrics.}
\label{ch5-tab:lossincr}
\footnotesize
\begin{tabular}{cccc}
\toprule
\multirow{2}{*}{$k$} & \multicolumn{3}{c}{Training loss at $\bm{w}+\delta\bm{w}^\mathrm{f}\odot\bm{m}$ ~~~~(Test accuracy at $\widetilde{\bm{w}}$)} \\ \cline{2-4}
& \multicolumn{1}{c}{Global} & \multicolumn{1}{c}{Local} & \multicolumn{1}{c}{Combined} \\ \hline
0.01 & $3.04\pm0.07$ $(55.0\pm0.1\%)$ & $\bm{2.59}\pm0.08$ $(55.6\pm0.1\%)$ & $2.66\pm0.09$ $(\bm{56.5}\pm0.0\%)$ \\
0.05 & $2.51\pm0.06$ $(57.3\pm0.2\%)$ & $1.80\pm0.10$ $(57.8\pm0.1\%)$ & $\bm{1.67}\pm0.06$ $(\bm{58.2}\pm0.1\%)$ \\
0.1 & $2.03\pm0.05$ $(58.3\pm0.0\%)$ & $1.34\pm0.08$ $(59.0\pm0.1\%)$ & $\bm{0.99}\pm0.03$ $(\bm{59.0}\pm0.1\%)$ \\
0.2 & $1.20\pm0.05$ $(59.0\pm0.1\%)$ & $0.74\pm0.03$ $(59.6\pm0.2\%)$ & $\bm{0.42}\pm0.01$ $(\bm{60.1}\pm0.2\%)$ \\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Results}
The training loss after rewinding (i.e.,\xspace $\ell(\bm{w}+\delta\bm{w}^\mathrm{f}\odot\bm{m})$) and the final test accuracy after sparse fine-tuning (i.e.,\xspace at $\widetilde{\bm{w}}$) are reported in \tabref{ch5-tab:lossincr}.
Results are reported as mean $\pm$ standard deviation.
As seen in the table, the combined contribution always yields a lower or similar training loss after rewinding compared to the other two metrics.
The smaller deviation also indicates that adopting the combined contribution yields more robust results.
This demonstrates the effectiveness of our proposed metric, i.e.,\xspace the combined contribution to the analytical upper bound on loss reduction.
Rewinding with the combined contribution also achieves a higher final accuracy, which in turn verifies the hypothesis we made for partial updating: a weight shall be updated only if it has a large contribution to the loss reduction.
\subsubsection{Balancing between Global and Local Contributions}
\label{ch5-sec:experiment_balancing}
\fakeparagraph{Settings}
In \equref{ch5-eq:combined}, the combined contribution is calculated by adding both normalized contributions together.
However, both normalized contributions may have different importance when determining the critical weights.
In order to investigate which one plays a more essential role in the combined contribution, we introduce another hyper-parameter $\lambda$ to tune the proportion of both normalized contributions as
\begin{equation}
\bm{c}_\lambda = \lambda \cdot \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{global}} \bm{c}^\mathrm{global} + (1-\lambda) \cdot \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{local}} \bm{c}^\mathrm{local}
\end{equation}
Note that the combined contribution $\bm{c}$ used in the previous experiments is equivalent to $\bm{c}_\lambda$ with $\lambda=0.5$, since only the ranking matters when determining the critical weights.
We implement partial updating methods with the rewinding metric $\bm{c}_\lambda$ under different values of $\lambda$.
We compare these methods under updating ratios $k=0.01,0.05,0.1$ and different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ settings on VGGNet using the CIFAR10 dataset, with the re-initialization scheme described in \secref{ch5-sec:initialization}.
Each experiment runs three times using random data samples.
\begin{figure}[t!]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~~~0.01} & \textbf{~~~~~~0.05} & \textbf{~~~~~~0.1} \\
\rotatebox{90}{\textbf{\{1000,1000\}}} & \tabimgc{./figs/ch5/1000-1000-1-ratiod} & \tabimgc{./figs/ch5/1000-1000-2-ratiod} & \tabimgc{./figs/ch5/1000-1000-3-ratiod} \\
\rotatebox{90}{\textbf{\{1000,5000\}}} & \tabimgc{./figs/ch5/1000-5000-1-ratiod} & \tabimgc{./figs/ch5/1000-5000-2-ratiod} & \tabimgc{./figs/ch5/1000-5000-3-ratiod} \\
\rotatebox{90}{\textbf{\{5000,1000\}}} & \tabimgc{./figs/ch5/5000-1000-1-ratiod} & \tabimgc{./figs/ch5/5000-1000-2-ratiod} & \tabimgc{./figs/ch5/5000-1000-3-ratiod} \\
\end{tabular}
\caption[Comparison w.r.t. the mean accuracy difference (full updating as the reference) under different $\lambda$.]{Comparison w.r.t. the mean accuracy difference (full updating as the reference) under $\lambda=0.5,0.1,0.3,0.7,0.9$. The chosen settings are updating ratios $k=0.01,0.05,0.1$, $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}=\{1000,1000\}, \{1000,5000\}, \{5000,1000\}$.}
\label{ch5-fig:combinedratio}
\end{figure}
\fakeparagraph{Results}
To clearly illustrate the impact of $\lambda$, we compare the difference between the accuracy of partial updating with various $\lambda$ and that of full updating. The mean accuracy differences (over three runs) are plotted in \figref{ch5-fig:combinedratio}.
As seen in \figref{ch5-fig:combinedratio}, $\lambda=0.5$ generally obtains the best performance, especially when the updating ratio is small.
Thus, in the following experiments, we fix the hyper-parameter $\lambda$ to 0.5.
In other words, the combined contribution is chosen as
\begin{equation}
\bm{c}_\lambda(\lambda=0.5) = 0.5 \cdot \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{global}} \bm{c}^\mathrm{global} + 0.5 \cdot \frac{1}{\bm{1}^\mathrm{T} \cdot \bm{c}^\mathrm{local}} \bm{c}^\mathrm{local}
\end{equation}
which has exactly the same functionality as \equref{ch5-eq:combined}.
Note that it may be possible to manually find another value of $\lambda$ that achieves better performance in certain cases.
However, setting $\lambda$ as 0.5 already yields a satisfactory performance, and can avoid meticulous and computationally expensive hyper-parameter tuning in a large number of updating rounds.
\subsubsection{Number of Updated Weights across Layers under Different Rewinding Metrics}
\label{ch5-sec:experiment_numweights}
\fakeparagraph{Settings}
To further study the impact of adopting different rewinding metrics, we show the distribution of updated weights across layers in this section.
We implement partial updating methods with three rewinding metrics (i.e.,\xspace the global contribution, the local contribution, and the combined contribution, see \secref{ch5-sec:metric}) on VGGNet using the CIFAR10 dataset.
We compare these methods with different updating ratios $k$ under $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}=\{1000,1000\}$.
To study the distribution of updated weights across all rounds, we let the model be partially updated in every round, even if the validation accuracy does not increase compared to the previous round.
All methods start from the same randomly initialized model, and are re-initialized with this random model according to the proposed scheme in \secref{ch5-sec:initialization}.
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.95\textwidth]{./figs/ch5/1000-1000-1-updated-num.pdf}
\caption[Number of updated weights across all layers when adopting different rewinding metrics with $k=0.01$.]{Number of updated weights across all layers (VGGNet) when adopting different rewinding metrics (updating ratio $k=0.01$).}
\label{ch5-fig:updatednum1}
\end{figure}
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.95\textwidth]{./figs/ch5/1000-1000-2-updated-num.pdf}
\caption[Number of updated weights across all layers when adopting different rewinding metrics with $k=0.05$.]{Number of updated weights across all layers (VGGNet) when adopting different rewinding metrics (updating ratio $k=0.05$).}
\label{ch5-fig:updatednum2}
\end{figure}
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.95\textwidth]{./figs/ch5/1000-1000-3-updated-num.pdf}
\caption[Number of updated weights across all layers when adopting different rewinding metrics with $k=0.1$.]{Number of updated weights across all layers (VGGNet) when adopting different rewinding metrics (updating ratio $k=0.1$).}
\label{ch5-fig:updatednum3}
\end{figure}
\fakeparagraph{Results}
We plot the number of updated weights across all layers along rounds, under updating ratio $k=0.01,0.05,0.1$ in \figref{ch5-fig:updatednum1}, \figref{ch5-fig:updatednum2}, and \figref{ch5-fig:updatednum3}, respectively.
We also plot the corresponding test accuracy along rounds in \figref{ch5-fig:updatedacc}.
Generally, the local contribution metric updates more weights in the first several layers and the last layer, though with a large variance across rounds.
On the contrary, the global contribution selects more weights to update in the last several layers (up to the penultimate layer).
The combined contribution (the sum of the normalized local/global contributions) achieves a more robust and balanced distribution of updated weights across layers than either contribution alone.
It also results in the highest accuracy level, especially under a small updating ratio.
This is consistent with the intuition discussed earlier: the local contribution better identifies critical weights w.r.t. the loss during training, while the global contribution is more robust for a highly non-convex loss landscape, so both metrics are useful when selecting weights to rewind.
Note that the proposed combined contribution is not a simple averaging of the local and global contributions.
For example, in ``layer 6'' of \figref{ch5-fig:updatednum3}, the number of weights updated under the combined contribution exceeds that of both other metrics.
\begin{figure}[tbp]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{ccc}
\textbf{~~~~~~0.01} & \textbf{~~~~~~0.05} & \textbf{~~~~~~0.1} \\
\tabimgb{./figs/ch5/1000-1000-1-updated-acc} & \tabimgb{./figs/ch5/1000-1000-2-updated-acc} & \tabimgb{./figs/ch5/1000-1000-3-updated-acc} \\
\end{tabular}
\caption[The test accuracy of partial updating methods with different rewinding metrics.]{The test accuracy of partial updating methods with different rewinding metrics (updating ratio $k=0.01,0.05,0.1$).}
\label{ch5-fig:updatedacc}
\end{figure}
\subsection{Benchmarking Multi-Round Updating}
\label{ch5-sec:experiment_multiround}
\fakeparagraph{Settings}
To the best of our knowledge, this is the first work studying weight-wise partial updating of a model using newly collected data in iterative rounds.
Therefore, we developed the following baselines for comparison: (\textit{i}) full updating (FU), where at each round the model is fully updated from a random initialization (i.e.,\xspace training from scratch, which yields better performance, see \secref{ch5-sec:initialization} and \secref{ch5-sec:experiment_fullupdating}); (\textit{ii}) random partial updating (RPU), where the model is trained from $\bm{w}^{r-1}$, while we randomly fix each layer's weights with a ratio of $(1-k)$ and sparsely fine-tune the rest; (\textit{iii}) global contribution partial updating (GCPU), where the model is trained with \algoref{ch5-alg:gcpu} without the re-initialization described in \secref{ch5-sec:initialization}; and (\textit{iv}) a state-of-the-art unstructured pruning method \cite{bib:ICLR20:Renda}, where at each round the model is first trained from a random initialization, then pruned with one-shot magnitude pruning, and finally sparsely fine-tuned with learning rate rewinding.
The ratio of nonzero weights in pruning is set equal to the updating ratio $k$ to ensure the same communication cost.
The experiments are conducted on different benchmarks as mentioned earlier.
\begin{figure}[t]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{ccc}
\tabimgd{./figs/ch5/mlp-pruning} & \tabimgd{./figs/ch5/vgg-pruning} & \tabimgd{./figs/ch5/resnet56-pruning} \\
\end{tabular}
\caption[DPU\xspace is compared with other baselines on different benchmarks in terms of the test accuracy during multi-round updating.]{DPU\xspace is compared with other baselines on different benchmarks in terms of the test accuracy during multi-round updating.}
\label{ch5-fig:multiround}
\end{figure}
\begin{table}[t]
\centering
\caption[The average accuracy difference (full updating as the reference) over all rounds and the ratio of communication cost over all rounds related to full updating.]{The average accuracy difference over all rounds and the ratio of communication cost over all rounds related to full updating.}
\label{ch5-tab:multiround}
\footnotesize
\begin{tabular}{ccccccc}
\toprule
\multirow{2}{* }{Method} & \multicolumn{3}{c}{Average accuracy difference} & \multicolumn{3}{c}{Ratio of communication cost} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7}
& MLP & VGGNet & ResNet56 & MLP & VGGNet & ResNet56 \\ \hline
DPU\xspace & $\bm{-0.17\%}$ & $\bm{+0.33\%}$ & $\bm{-0.42\%}$ & $0.0071$ & $0.0183$ & $0.1147$ \\
GCPU & $-0.72\%$ & $-1.51\%$ & $-3.87\%$ & $0.0058$ & $0.0198$ & $0.1274$ \\
RPU & $-4.04\%$ & $-11.35\%$ & $-7.78\%$ & $0.0096$ & $0.0167$ & $0.1274$ \\
Pruning \cite{bib:ICLR20:Renda} & $-1.45\%$ & $-4.35\%$ & $-2.35\%$ & $0.0106$ & $0.0141$ & $0.1274$ \\
\bottomrule
\end{tabular}
\end{table}
\fakeparagraph{Results}
We report the test accuracy of the model $\bm{w}^r$ along rounds in \figref{ch5-fig:multiround}.
All methods start from the same $\bm{w}^0$, an entirely randomly initialized model.
As seen in this figure, DPU\xspace clearly yields the highest accuracy in comparison to the other partial updating schemes.
For example, DPU\xspace can yield a final Top-1 accuracy of $92.85\%$ on VGGNet, which even exceeds the accuracy ($92.73\%$) of full updating, while GCPU, RPU, and Pruning only achieve $91.11\%$, $82.21\%$, and $87.62\%$, respectively.
In addition, we compare the three partial updating schemes in terms of the accuracy difference relative to full updating averaged over all rounds, and the ratio of the communication cost relative to full updating over all rounds in \tabref{ch5-tab:multiround}.
As seen in the table, DPU\xspace reaches a similar accuracy as full updating, while incurring significantly fewer transmitted data sent from the server to each edge node.
Specifically, DPU\xspace saves around $99.3\%$, $98.2\%$, and $88.5\%$ of transmitted data on MLP, VGGNet, and ResNet56, respectively ($95.3\%$ on average).
The communication cost ratios shown in \tabref{ch5-tab:multiround} differ slightly even for the same updating ratio.
This is because the model is not updated whenever the validation accuracy does not increase, to reduce the communication cost, as discussed earlier.
The horizontal straight line segments in \figref{ch5-fig:multiround} represent those non-updated rounds under each method.
\subsubsection{Experiments on Total Communication Cost Reduction}
\label{ch5-sec:experiment_totalcost}
\begin{figure}[tbp!]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{ccc}
\tabimgb{./figs/ch5/mlp-N} & \tabimgb{./figs/ch5/vgg-N} & \tabimgb{./figs/ch5/resnet56-N} \\
\end{tabular}
\caption[The ratio, between the total communication cost under DPU\xspace and that under full updating, varies with the number of nodes $N$.]{The ratio, between the total communication cost (over all rounds) under DPU\xspace and that under full updating, varies with the number of nodes $N$.}
\label{ch5-fig:communicationratio}
\end{figure}
\fakeparagraph{Settings}
In this section, we show the benefit of DPU\xspace in terms of \textit{the total communication cost reduction}; note that DPU\xspace has no impact on the edge-to-server communication, which may involve sending newly collected data samples from the nodes.
The total communication cost includes both edge-to-server communication and server-to-edge communication.
This experimental setup assumes that all data samples in $\delta\mathcal{D}^r$ are collected by $N$ edge nodes during all rounds and sent to the server on a per-round basis.
In other words, the first stage (see \secref{ch5-sec:introduction}) is needed in any case for sending new training data to the server.
For clarity, let $S_d$ denote the data size of each training sample.
During round $r$, we define the per-node total communication cost under DPU\xspace as $S_d\cdot|\delta\mathcal{D}^r|/N + (S_w \cdot k \cdot I + S_x(k) \cdot I)$.
Similarly, the per-node total communication cost under full updating is defined as $S_d\cdot|\delta\mathcal{D}^r|/N + S_w \cdot I$.
To simplify the demonstration, we consider the scenario where $N$ nodes send a certain amount of data samples to the server over $R-1$ rounds, namely $\sum_{r=2}^R |\delta \mathcal{D}^r|$ samples in total (see \secref{ch5-sec:initialization}).
Thus, the average data size transmitted from each node to the server in all rounds is $\sum_{r=2}^R S_d\cdot|\delta \mathcal{D}^r|/N$.
A larger $N$ implies a smaller amount of transmitted data from each node to the server.
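Under these definitions, the per-node total cost ratio can be evaluated as below, reusing the index_bitwidth helper from the indexing sketch in \secref{ch5-sec:experiment} (all names are illustrative assumptions):
\begin{verbatim}
def total_cost_ratio(I, k, upload_bits_per_node, S_w=32):
    """Ratio of DPU's per-node total communication (data upload plus
    partial update download) to that of full updating."""
    dpu = upload_bits_per_node + (S_w * k + index_bitwidth(k)) * I
    full = upload_bits_per_node + S_w * I
    return dpu / full

# As N grows, upload_bits_per_node shrinks, and the ratio approaches
# the pure server-to-edge saving; a single node is the worst case.
\end{verbatim}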
\fakeparagraph{Results}
We report the ratio of the total communication cost over all rounds required by DPU\xspace relative to full updating, when DPU\xspace achieves a similar accuracy level as full updating (corresponding to the three evaluations in \figref{ch5-fig:multiround}).
The ratio clearly depends on $\sum_{r=2}^R S_d\cdot|\delta \mathcal{D}^r|/N$, i.e.,\xspace the number of nodes $N$.
The relation between the ratio and $N$ is plotted in \figref{ch5-fig:communicationratio}.
We observe that DPU\xspace can still achieve a significant reduction in the total communication cost, e.g.,\xspace up to $88.2\%$ even for a single node.
A single node corresponds to the largest per-node data size during edge-to-server transmission, i.e.,\xspace the worst case.
Moreover, DPU\xspace tends to be more beneficial when the size of data transmitted by each node to the server becomes smaller.
This is intuitive because in this case the server-to-edge communication (and thus the reduction due to DPU\xspace) dominates the entire communication.
\subsection{Different Number of Data Samples and Updating Ratios}
\label{ch5-sec:experiment_samplesratio}
\fakeparagraph{Settings}
In this section, we show that DPU\xspace outperforms the other baselines under a varying number of training samples and updating ratios in multi-round updating.
We also conduct ablations concerning the re-initialization of weights discussed in \secref{ch5-sec:initialization}.
We implement DPU\xspace with and without re-initialization, GCPU with and without re-initialization, RPU, and Pruning \cite{bib:ICLR20:Renda} (see more details in \secref{ch5-sec:experiment_multiround}) on VGGNet using the CIFAR10 dataset.
We compare these methods with different amounts of samples $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ and different updating ratios $k$.
Unless otherwise noted, each experiment is run three times using random data samples.
\begin{figure}[t]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~~~0.01} & \textbf{~~~~~~0.05} & \textbf{~~~~~~0.1} \\
\rotatebox{90}{\textbf{\{1000,1000\}}} & \tabimgc{./figs/ch5/1000-1000-1-pruningd} & \tabimgc{./figs/ch5/1000-1000-2-pruningd} & \tabimgc{./figs/ch5/1000-1000-3-pruningd} \\
\rotatebox{90}{\textbf{\{1000,5000\}}} & \tabimgc{./figs/ch5/1000-5000-1-pruningd} & \tabimgc{./figs/ch5/1000-5000-2-pruningd} & \tabimgc{./figs/ch5/1000-5000-3-pruningd} \\
\rotatebox{90}{\textbf{\{5000,1000\}}} & \tabimgc{./figs/ch5/5000-1000-1-pruningd} & \tabimgc{./figs/ch5/5000-1000-2-pruningd} & \tabimgc{./figs/ch5/5000-1000-3-pruningd} \\
\rotatebox{90}{\textbf{\{1000,500\}}} & \tabimgc{./figs/ch5/1000-500-1-pruningd} & \tabimgc{./figs/ch5/1000-500-2-pruningd} & \tabimgc{./figs/ch5/1000-500-3-pruningd} \\
\rotatebox{90}{\textbf{\{500,1000\}}} & \tabimgc{./figs/ch5/500-1000-1-pruningd} & \tabimgc{./figs/ch5/500-1000-2-pruningd} & \tabimgc{./figs/ch5/500-1000-3-pruningd} \\
\end{tabular}
\caption[Comparison w.r.t. the mean accuracy difference (full updating as the reference) under different settings.]{Comparison w.r.t. the mean accuracy difference (full updating as the reference) under different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ (representing the available data samples along rounds) and updating ratio ($k=0.01,0.05,0.1$) settings.}
\label{ch5-fig:number_ratio_d}
\end{figure}
\begin{figure}[t]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~~~0.01} & \textbf{~~~~~~0.05} & \textbf{~~~~~~0.1} \\
\rotatebox{90}{\textbf{\{1000,1000\}}} & \tabimgc{./figs/ch5/1000-1000-1-pruning} & \tabimgc{./figs/ch5/1000-1000-2-pruning} & \tabimgc{./figs/ch5/1000-1000-3-pruning} \\
\rotatebox{90}{\textbf{\{1000,5000\}}} & \tabimgc{./figs/ch5/1000-5000-1-pruning} & \tabimgc{./figs/ch5/1000-5000-2-pruning} & \tabimgc{./figs/ch5/1000-5000-3-pruning} \\
\rotatebox{90}{\textbf{\{5000,1000\}}} & \tabimgc{./figs/ch5/5000-1000-1-pruning} & \tabimgc{./figs/ch5/5000-1000-2-pruning} & \tabimgc{./figs/ch5/5000-1000-3-pruning} \\
\rotatebox{90}{\textbf{\{1000,500\}}} & \tabimgc{./figs/ch5/1000-500-1-pruning} & \tabimgc{./figs/ch5/1000-500-2-pruning} & \tabimgc{./figs/ch5/1000-500-3-pruning} \\
\rotatebox{90}{\textbf{\{500,1000\}}} & \tabimgc{./figs/ch5/500-1000-1-pruning} & \tabimgc{./figs/ch5/500-1000-2-pruning} & \tabimgc{./figs/ch5/500-1000-3-pruning} \\
\end{tabular}
\caption[Comparison w.r.t. the mean accuracy under different settings.]{Comparison w.r.t. the mean accuracy under different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ (representing the available data samples along rounds) and updating ratio ($k=0.01,0.05,0.1$) settings.}
\label{ch5-fig:number_ratio}
\end{figure}
\begin{figure}[t]
\setlength\tabcolsep{\imgtabcolsep}
\centering
\begin{tabular}{m{0.3cm}ccc}
& \textbf{~~~~~~0.01} & \textbf{~~~~~~0.05} & \textbf{~~~~~~0.1} \\
\rotatebox{90}{\textbf{\{1000,1000\}}} & \tabimgc{./figs/ch5/1000-1000-1-pruningv} & \tabimgc{./figs/ch5/1000-1000-2-pruningv} & \tabimgc{./figs/ch5/1000-1000-3-pruningv} \\
\rotatebox{90}{\textbf{\{1000,5000\}}} & \tabimgc{./figs/ch5/1000-5000-1-pruningv} & \tabimgc{./figs/ch5/1000-5000-2-pruningv} & \tabimgc{./figs/ch5/1000-5000-3-pruningv} \\
\rotatebox{90}{\textbf{\{5000,1000\}}} & \tabimgc{./figs/ch5/5000-1000-1-pruningv} & \tabimgc{./figs/ch5/5000-1000-2-pruningv} & \tabimgc{./figs/ch5/5000-1000-3-pruningv} \\
\rotatebox{90}{\textbf{\{1000,500\}}} & \tabimgc{./figs/ch5/1000-500-1-pruningv} & \tabimgc{./figs/ch5/1000-500-2-pruningv} & \tabimgc{./figs/ch5/1000-500-3-pruningv} \\
\rotatebox{90}{\textbf{\{500,1000\}}} & \tabimgc{./figs/ch5/500-1000-1-pruningv} & \tabimgc{./figs/ch5/500-1000-2-pruningv} & \tabimgc{./figs/ch5/500-1000-3-pruningv} \\
\end{tabular}
\caption[Comparison w.r.t. the standard deviation of accuracy under different settings.]{Comparison w.r.t. the standard deviation of accuracy under different $\{|\mathcal{D}^1|,|\delta\mathcal{D}^r|\}$ (representing the available data samples along rounds) and updating ratio ($k=0.01,0.05,0.1$) settings.}
\label{ch5-fig:number_ratio_v}
\end{figure}
\fakeparagraph{Results}
We compare the difference between the accuracy under each method and that under full updating.
The mean accuracy difference over three runs is plotted in \figref{ch5-fig:number_ratio_d}.
In addition, we also plot the mean and standard deviation of the absolute accuracy of these methods in \figref{ch5-fig:number_ratio} and \figref{ch5-fig:number_ratio_v}, respectively.
As seen in \figref{ch5-fig:number_ratio_d}, DPU\xspace (with re-initialization) always achieves the highest accuracy.
DPU\xspace also significantly outperforms the pruning method, especially under a small updating ratio.
Note that we prefer a smaller updating ratio in our context because it explores the limits of the approach and allows the deployed model to be improved more frequently under the same accumulated server-to-edge communication cost.
The dashed curves and the solid curves with the same color can be viewed as the ablation study of our re-initialization scheme.
Particularly given a large number of rounds, it is critical to re-initialize the starting point $\bm{w}^{r-1}$ after several rounds (as discussed in \secref{ch5-sec:initialization}).
In the first few rounds, partial updating methods almost always yield a higher test accuracy than full updating, i.e.,\xspace the curves are above zero.
This is due to the fact that the amount of available samples is rather small, and partial updating may avoid some co-adaptation in full updating.
The partial updating methods perform almost randomly relative to each other in the first round, because the limited data are not sufficient to distinguish critical weights starting from the random initialization $\bm{w}^0$.
This also motivates us to (partially) update the deployed model when new samples are available.
\fakeparagraph{Pruning Weights vs. Pruning Incremental Weights}
One of our chosen baselines, global contribution partial updating (GCPU, \algoref{ch5-alg:gcpu}), can be viewed as a counterpart of the pruning method \cite{bib:ICLR20:Renda}, i.e.,\xspace pruning the incremental weights with the least magnitudes.
Specifically, the elements with the smallest absolute values in $\delta\bm{w}^{\mathrm{f}}$ are set to zero (also rewinding), while the remaining weights are further sparsely fine-tuned with the same learning rate schedule as that used for training $\bm{w}^{\mathrm{f}}$.
In comparison to traditional pruning of weights \cite{bib:ICLR20:Renda}, pruning incremental weights has a different starting point.
Traditional pruning on weights first trains randomly initialized weights (a zero-initialized model cannot be trained due to the symmetry), and then prunes the weights with the smallest magnitudes.
However, the increment of weights $\delta\bm{w}^\mathrm{f}$ is initialized with zero in \algoref{ch5-alg:gcpu}, since the first step starts from $\bm{w}$.
By comparing GCPU (with or without re-initialization) with ``Pruning'', we conclude that retaining previous weights yields better performance than zeroing them out.
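The contrast between the two schemes boils down to which magnitudes are thresholded and what the non-selected weights revert to; a minimal sketch follows (illustrative names):
\begin{verbatim}
import torch

def topk_mask(x, k):
    """Binary mask keeping the k*I largest-magnitude entries of x."""
    m = torch.zeros_like(x)
    m[torch.topk(x.abs(), int(k * x.numel())).indices] = 1.0
    return m

def prune_weights(w_f, k):
    """Traditional pruning: non-selected weights are zeroed out."""
    return w_f * topk_mask(w_f, k)

def prune_increments(w, w_f, k):
    """GCPU-style rewinding: non-selected weights revert to w."""
    delta = w_f - w
    return w + delta * topk_mask(delta, k)
\end{verbatim}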
\subsection{Benchmarking Single-Round Updating}
\label{ch5-sec:experiment_singleround}
\fakeparagraph{Settings}
To show the versatility of our methods, we test single-round updating for MobileNetV1 \cite{bib:arXiv17:Howard} on ImageNet \cite{bib:ILSVRC15} with iterative rewinding.
Single-round DPU\xspace is conducted on different initially deployed models, including a floating-point (FP32) dense model and two compressed models, i.e.,\xspace a 50\%-sparse model and an INT8 quantized model.
The sparse model is trained with a state-of-the-art dynamic pruning method \cite{bib:NIPS21:Peste}; the quantized model is trained with a straight-through estimator using output-channel-wise floating-point scaling factors, similar to \cite{bib:ECCV16:Rastegari}.
To maintain the same on-device inference cost, partial updating is only applied to the nonzero values of the sparse model; for the quantized model, the updated weights are kept in INT8 format.
Note that we do not impose sparsity or quantization on batch normalization and bias.
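One plausible way to restrict the selection to the nonzero support of a deployed sparse model is sketched below; this is an illustrative realization, not necessarily our exact implementation:
\begin{verbatim}
import torch

def sparse_support_mask(w_deployed, c, k):
    """Select the Top-(k*I) weights by contribution c, restricted to
    the nonzero support of the deployed model, so that the on-device
    sparsity pattern (and inference cost) is preserved."""
    c_masked = c.clone()
    c_masked[w_deployed == 0] = float('-inf')  # skip pruned weights
    m = torch.zeros_like(c)
    m[torch.topk(c_masked, int(k * c.numel())).indices] = 1.0
    return m
\end{verbatim}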
\fakeparagraph{Results}
We compare DPU\xspace with vanilla updates, i.e.,\xspace models trained from a random initialization with the corresponding methods on all available samples.
The test accuracy and the ratio of (server-to-edge) communication cost related to full updating on FP32 dense model are reported in \tabref{ch5-tab:singleround}.
Results show that DPU\xspace often yields a higher accuracy than vanilla updating while requiring a substantially lower communication cost.
\begin{table}[tb]
\centering
\caption[The test accuracy of single-round updating on different initially deployed models.]{The test accuracy of single-round updating on different initial models (MobileNetV1 on ImageNet). The updating ratio $k=0.2$. The ratio of communication cost relative to full updating is reported in brackets.}
\label{ch5-tab:singleround}
\small
\begin{tabular}{cccc}
\toprule
\#Samples & \multicolumn{3}{c}{$\{8\times10^5,4.8\times10^5\}$} \\
\cmidrule(lr){2-4}
& Initial & Vanilla-update & DPU\xspace \\ \hline
FP32 Dense & 68.5\% & 70.7\% (1) & 71.1\% (0.22) \\
50\%-Sparse & 68.1\% & 70.5\% (0.53) & 70.8\% (0.22) \\
INT8 & 68.4\% & 70.6\% (0.25) & 70.6\% (0.07) \\
\bottomrule
\end{tabular}
\end{table}
\section{Summary}
\label{ch5-sec:summary}
In this chapter, we propose a novel pipeline, DPU\xspace, for edge-server systems.
DPU\xspace enables deep learning on edge-server systems with limited on-device resources and a limited communication bus.
Particularly, when newly collected data samples from edge devices or from other sources become available at the server, the server smartly selects only a subset of critical weights to update in the server-to-edge communication round.
This partial updating scheme reduces redundant updating by reusing the pretrained weights, i.e.,\xspace the knowledge learned on prior data, and achieves a similar performance as full updating at a significantly lower communication cost.
The main contributions of DPU\xspace are summarized as follows,
\begin{itemize}
\item We formalize the deep partial updating paradigm, i.e.,\xspace how to iteratively perform weight-wise partial updating of the inference models on remote edge devices, if newly collected training samples are available at the server. This substantially reduces the computation and communication demand on edge devices.
\item We propose a novel approach that determines the optimized subset of weights that shall be selected for partial updating, through measuring each weight's contribution to the analytical upper bound on the loss reduction. This simple yet effective metric can be applied to any models that are trained with gradient-based optimizers.
\item Experimental results on public vision datasets show that, at a similar accuracy level along the rounds, our approach can reduce the size of the transmitted data by $95.3\%$ on average (up to $99.3\%$); in other words, the model can be updated on average $21$ times more frequently than with full updating.
\end{itemize}
\chapter{Conclusion}
\label{ch6:conclusion}
State-of-the-art DNNs achieve excellent prediction accuracy in many perception tasks, e.g.,\xspace computer vision, natural language processing, reinforcement learning, etc.
However, a large amount of resources is essential in both the inference phase and the training phase to ensure the high performance of DNNs.
Due to the intensive resource demands, DNNs are often deployed on a cloud server with plenty of high-performance computers and shared storage infrastructures.
On the other hand, there is a growing interest to deploy DNNs on edge devices to enable new edge intelligent applications, e.g.,\xspace AR/VR, mobile assistants, IoT, autonomous driving, etc.
In comparison to a cloud server, edge devices have a rather small amount of memory, computation, and energy resources, and often also limited scalability.
Conventional DNNs need to be compressed in order to fit the resource constraints on edge devices.
As DNNs are prone to be over-parameterized, this thesis focuses on reducing the redundancy of DNNs to achieve a better trade-off between resource consumption and model accuracy.
In this thesis, we studied how to enable deep learning on edge devices in four different scenarios.
Specifically, we studied (\textit{i}) efficient inference on edge devices given fixed resource constraints in \chref{ch2:inference}, (\textit{ii}) efficient adaptation on edge devices under varying resource constraints in \chref{ch3:adaptation}, (\textit{iii}) efficient learning on edge devices with a few training samples of unseen tasks in \chref{ch4:learning}, and (\textit{iv}) efficient inference and updating on edge-server systems with a constrained communication bus in \chref{ch5:edgeserver}.
Note that different scenarios may have different main resource constraints that hinder us from deploying DNNs on edge devices.
According to the main resource constraints in these scenarios, we developed different methodologies to remove redundant components, such that the compressed DNNs have a lower resource demand while reaching a similar accuracy level as the original ones.
In the following sections of this chapter, we will first summarize our main contributions in each scenario, then discuss the potential directions for future work.
\section{Contributions}
\label{ch6-sec:contribution}
This section summarizes the main contributions of our work in each scenario.
\subsection{Inference on Edge Devices (\chref{ch2:inference})}
\label{ch6-sec:inference}
In \chref{ch2:inference}, we enabled an efficient inference of DNNs on edge devices.
In comparison to cloud inference, inference on edge devices does not need to upload the input data to the cloud server, which enables more stable, faster, and more energy-efficient inference, especially with a constrained communication bus.
To address the main resource constraints, namely storing a large number of weights and the computation workload during inference, we proposed ALQ\xspace, an adaptive loss-aware trained quantizer for multi-bit networks.
ALQ\xspace reduces the redundancy on the quantization bitwidth.
Unlike prior multi-bit quantization that often assigns an empirical uniform bitwidth, ALQ\xspace learns an adaptive bitwidth assignment across different groups of weights according to their loss criticality.
ALQ\xspace also optimizes the multi-bit quantized weights by directly minimizing the loss function rather than the reconstruction error with respect to the full-precision weights.
The multi-bit quantized network replaces expensive FLOPs with cheap \texttt{xnor} and \texttt{popcount} operations, achieving computation efficiency;
the learned adaptive bitwidth yields a smaller average bitwidth by only allocating a high bitwidth to the loss-critical weights, achieving storage efficiency;
the direct optimization objective (i.e.,\xspace the loss) allows us to acquire a quantized network with higher prediction accuracy.
In addition, ALQ\xspace also enables extremely low-bit networks with an average bitwidth below 1-bit by entirely pruned groups (i.e.,\xspace 0-bit weights in some groups).
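To make the multi-bit representation concrete, the following minimal Python sketch quantizes a weight group into a sum of binary bases, $w \approx \sum_b \alpha_b s_b$ with $s_b \in \{-1,+1\}^n$. For brevity it greedily minimizes the reconstruction error, whereas ALQ\xspace minimizes the loss directly and learns the per-group bitwidth; the sketch only illustrates the underlying representation.
\begin{verbatim}
import numpy as np

def multibit_quantize(w, bits):
    # Greedy reconstruction-based multi-bit quantization (illustration
    # only; ALQ optimizes the loss directly and adapts the bitwidth).
    residual = w.astype(np.float64).copy()
    alphas, signs = [], []
    for _ in range(bits):
        s = np.sign(residual)
        s[s == 0] = 1.0                  # avoid zero signs
        a = np.abs(residual).mean()      # least-squares optimal scale
        alphas.append(a)
        signs.append(s)
        residual -= a * s
    return alphas, signs

w = np.random.randn(64)
alphas, signs = multibit_quantize(w, bits=2)
w_hat = sum(a * s for a, s in zip(alphas, signs))
\end{verbatim}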
\subsection{Adaptation on Edge Devices (\chref{ch3:adaptation})}
\label{ch6-sec:adaptation}
The methods proposed in \chref{ch2:inference} are able to compress DNNs for efficient inference if the amount of available resources on edge devices is fixed and known beforehand.
However, the resource constraints on the target edge devices may dynamically change during runtime, e.g.,\xspace the allowed execution time, the allocatable RAM, and the battery energy.
To maximize the model accuracy during on-device inference, in \chref{ch3:adaptation}, we enabled a DNN with dynamic capacity, such that the DNN can be adapted and executed under varying resource constraints.
Particularly, we developed a new synthesis approach DRESS\xspace that can sample and execute sub-networks with different resource demands from a backbone network for on-device inference.
DRESS\xspace reduces the redundancy among multiple sub-networks by weight sharing and architecture sharing.
DRESS\xspace samples sub-networks in a row-based unstructured manner (a.k.a. fine-grained structured sparsity) from the backbone network, and introduces a novel compressed sparse row (CSR) format to utilize the sparse tensor computation provided by recent compilation libraries.
In DRESS\xspace, the nonzero weights of the higher sparsity sub-networks are reused by the lower sparsity sub-networks, achieving memory efficiency;
all sparse sub-networks leverage the same architecture as the backbone network, achieving re-configuration efficiency.
The sub-networks have different sparsity, and thus can be fetched and executed under various resource constraints.
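As an illustration of the weight-sharing idea, the following Python sketch samples nested row-wise masks from a backbone weight matrix by per-row magnitude ranking, so that the nonzero weights of a higher-sparsity sub-network are automatically reused by all lower-sparsity ones. The magnitude criterion and the sparsity levels are illustrative assumptions, as DRESS\xspace optimizes the sampling during training.
\begin{verbatim}
import numpy as np

def nested_row_masks(weight, sparsities):
    # per-row top-k masks; top-k sets are nested across sparsity levels
    _, cols = weight.shape
    order = np.argsort(-np.abs(weight), axis=1)
    masks = []
    for s in sorted(sparsities):          # e.g. [0.5, 0.8, 0.95]
        k = max(1, int(round(cols * (1.0 - s))))
        mask = np.zeros_like(weight, dtype=bool)
        np.put_along_axis(mask, order[:, :k], True, axis=1)
        masks.append(mask)
    return masks

w = np.random.randn(4, 16)
m50, m80, m95 = nested_row_masks(w, [0.5, 0.8, 0.95])
assert np.all(m95 <= m80) and np.all(m80 <= m50)   # nesting property
\end{verbatim}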
\subsection{Learning on Edge Devices (\chref{ch4:learning})}
\label{ch6-sec:learning}
In \chref{ch2:inference} and \chref{ch3:adaptation}, we compressed DNNs to realize an efficient on-device inference under \textit{fixed} and \textit{varying} resource constraints, respectively.
However, when facing unseen environments or users on edge devices, it is crucial to retrain the DNN with newly collected data samples to deliver consistent performance and customized services.
On the one hand, data samples collected by edge devices are often private and limited; on the other hand, training a DNN often consumes several orders of magnitude more peak memory than inference.
Hence, in \chref{ch4:learning}, we proposed a new meta learning method p-Meta\xspace to enable memory-efficient few-shot learning on unseen tasks.
p-Meta\xspace reduces updating redundancy by fixing some weights during few-shot learning, which saves the memory otherwise needed for updating those weights.
p-Meta\xspace enables both data- and memory-efficient on-device learning given unseen tasks, which is realized by automatically identifying adaptation-critical weights during few-shot learning via a meta-trained selection mechanism.
p-Meta\xspace adopts a hierarchical approach that combines a static selection on adaptation-critical layers and a dynamic selection on adaptation-critical channels.
To the best of our knowledge, p-Meta\xspace is the first meta learning method designed for on-device few-shot learning.
Evaluations on few-shot image classification and reinforcement learning show that p-Meta\xspace not only improves the accuracy but also reduces the peak dynamic memory by a factor of 2.5 on average over the state-of-the-art few-shot learning methods.
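A minimal sketch of the memory argument is given below: if only adaptation-critical layers are updated, activations need to be stored only for those layers. The greedy budget-based selection shown here is a hypothetical stand-in, since p-Meta\xspace meta-trains its selection mechanism.
\begin{verbatim}
def select_critical_layers(scores, act_mem, budget):
    # scores[i]: (assumed given) criticality of layer i;
    # act_mem[i]: memory to store layer i's activations for its
    # weight gradients; frozen layers store no activations
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen, used = [], 0.0
    for i in ranked:
        if used + act_mem[i] <= budget:
            chosen.append(i)
            used += act_mem[i]
    return sorted(chosen)

print(select_critical_layers([0.9, 0.1, 0.6, 0.3],
                             [4.0, 2.0, 3.0, 1.0], budget=8.0))
# -> [0, 2, 3]
\end{verbatim}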
\subsection{Edge-Server-System (\chref{ch5:edgeserver})}
\label{ch6-sec:edgeserver}
In \chref{ch2:inference}, \chref{ch3:adaptation} and \chref{ch4:learning}, we enabled deep learning on a single edge platform in three different scenarios.
In \chref{ch5:edgeserver}, we designed a new pipeline DPU\xspace to enable efficient inference and efficient updating for edge-server systems.
In an edge-server system, a set of resource-constrained edge devices is connected to a remote server with sufficient resources, and some information is allowed to be communicated between the edge devices and the server.
Because only limited relevant training data are available beforehand, pretrained DNNs may be significantly improved after the initial deployment.
On such an edge-server system, on-device inference is preferred over cloud inference, since it can achieve a fast and stable inference with less energy consumption.
Yet retraining on the cloud server is preferred over on-device retraining (or federated learning) due to the limited memory and computing power on edge devices.
Therefore, we proposed a two-stage iterative process to update the deployed inference models: (\textit{i}) at each round, edge devices collect new data samples and send them to the server, and (\textit{ii}) the server retrains the network using the collected data, and then sends the updates to each edge device.
In comparison to the edge-to-server stage, the transmissions in the server-to-edge stage are highly constrained by the limited communication resource (e.g.,\xspace bandwidth, energy).
Our DPU\xspace reduces the server-to-edge communication cost by distinguishing the redundant updating given newly collected samples.
Particularly, DPU\xspace studied how to iteratively perform weight-wise partial updating of inference models on remote edge devices, if newly collected training samples are available at the server.
In each round, DPU\xspace smartly selects and updates a small subset of critical weights that have a large contribution to the loss reduction during the retraining.
Experimental results show that DPU\xspace can reach a similar accuracy level as full updating yet with a significantly lower communication cost.
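The following Python sketch illustrates one server-to-edge round. The score $|\Delta w| \cdot |\nabla_w \ell|$ is a simple stand-in for DPU\xspace's analytical upper bound on the loss reduction; only the top-$k$ weight updates are transmitted.
\begin{verbatim}
import numpy as np

def partial_update(w_old, w_new, grad_new, k):
    # score each weight's contribution to the loss reduction
    # (|delta| * |gradient| as a stand-in for DPU's metric)
    delta = w_new - w_old
    score = np.abs(delta) * np.abs(grad_new)
    idx = np.argsort(-score)[:k]            # top-k critical weights
    return {int(i): float(delta[i]) for i in idx}

def apply_update(w_edge, sparse_update):
    # applied on the edge device; the message is only k index-value pairs
    for i, d in sparse_update.items():
        w_edge[i] += d
    return w_edge
\end{verbatim}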
\section{Potential Future Directions}
\label{ch6-sec:future}
In this section, we discuss some potential directions for the future work.
These potential future directions are either extensions or complements of the works presented in the main chapters, or other edge intelligence scenarios that have not been studied yet due to time limitations.
\subsection{Hardware Accelerators of ALQ\xspace}
\label{ch6-sec:future_alq}
ALQ\xspace exhibits a high compression ratio on the benchmark evaluations in \chref{ch2:inference} without introducing sparse tensor computation.
To deploy the multi-bit networks generated by ALQ\xspace, the target hardware must support bitwise \texttt{xnor} and \texttt{popcount} operations for efficient execution.
However, the current Arm Cortex CPUs \cite{bib:arm} do not include computation units for \texttt{popcount}.
Although some software libraries may provide functions for \texttt{popcount}, they are less efficient in pipelined computation.
Designing some hardware accelerators e.g.,\xspace with FPGA that can support bitwise \texttt{xnor}, \texttt{popcount} and accumulation operations is a promising direction to enable efficient inference with multi-bit networks.
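For illustration, the following Python sketch shows how a dot product of two $\{-1,+1\}$ vectors, packed bitwise into integers, reduces to one \texttt{xnor} and one \texttt{popcount}: agreements minus disagreements equals $2\,\mathrm{popcount}(\mathrm{xnor}(x,w)) - n$.
\begin{verbatim}
def pack(v):
    # pack a +/-1 vector into an integer (bit 1 encodes +1)
    bits = 0
    for i, s in enumerate(v):
        if s > 0:
            bits |= 1 << i
    return bits

def binary_dot(x_bits, w_bits, n):
    # dot(x, w) = #agreements - #disagreements
    #           = 2 * popcount(xnor(x, w)) - n
    mask = (1 << n) - 1
    xnor = ~(x_bits ^ w_bits) & mask
    return 2 * bin(xnor).count("1") - n

x, w = [1, -1, 1, 1], [1, 1, -1, 1]
assert binary_dot(pack(x), pack(w), 4) == 0
\end{verbatim}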
\subsection{Quantized DRESS\xspace}
\label{ch6-sec:future_dress}
Current DRESS\xspace samples sub-networks from a floating-point backbone network.
Applying DRESS\xspace on a quantized backbone network (e.g.,\xspace 8-bit integer network) is also worth studying.
Especially, the sampled quantized sub-networks can be further accelerated by the fast kernels of sparse quantized computation.
For example, CMSIS-NN \cite{bib:CMSIS-NN} can achieve a $4\times$ acceleration on 8-bit integer quantized networks compared to 32-bit floating-point networks on 32-bit Arm Cortex-M CPUs.
In addition, it would be also interesting to explore the possibility of applying DRESS\xspace on multi-bit quantized networks, i.e.,\xspace the combination of ALQ\xspace and DRESS\xspace.
\subsection{Latency-Aware DRESS\xspace}
\label{ch6-sec:future_latency}
Note also that current DRESS\xspace requires predefined sparsity levels.
However, a higher sparsity level, i.e.,\xspace a smaller number of nonzero weights, does not always result in a shorter inference latency \cite{bib:ICLR20:Renda}.
In the future, we encourage researchers to build a direct relation between sparsity and inference latency (or energy consumption).
This can be realized by (\textit{i}) measuring the inference latency with hardware simulators, or (\textit{ii}) leveraging real-time models to bound the computation time theoretically.
The latency-aware DRESS\xspace that does not rely on proxies may fill the gap between the realistic speedup and the theoretical reduction of FLOPs mentioned in \secref{ch3-sec:deployment}.
\subsection{Low-Precision Few-Shot Learning}
\label{ch6-sec:future_fsl}
In \chref{ch4:learning}, we introduced p-Meta\xspace, a hierarchical structured partial updating on meta-trained models when only a few training samples of new unseen tasks are given.
Although p-Meta\xspace can dramatically reduce the peak dynamic memory as well as the computation burden during few-shot learning, it still needs full-precision calculation during the backward propagation.
As noted in prior works \cite{bib:NIPS20:Raihan,bib:ICLR20:Cambier,bib:NIPS18:Wang}, adopting low-precision backward propagation can deliver performance similar to its full-precision counterpart in vanilla training.
A straightforward future direction is to apply low-precision training on few-shot learning scenarios, where weights, activations, and gradients are all presented in low-precision formats, e.g.,\xspace 8-bit integer.
The step size of 8-bit integer training could be expressed as a number of bit shifts, which may also be meta-trained in a per-layer, per-step manner.
Conducting 8-bit integer few-shot learning on edge devices can not only further reduce the peak memory consumption, but also speedup the training process in comparison to 32-bit floating-point training.
\begin{table}[t]
\centering
\caption[Static memory of the model and the training samples in example self-supervised learning.]{Static memory of the model and the training samples in example self-supervised learning.}
\label{ch6-tab:examples}
\footnotesize
\begin{tabular}{lcc}
\toprule
\multirow{2}{*}{Benchmark} & WRN-28-2 \cite{bib:BMVC16:Zagoruyko} & WRN-28-2 \cite{bib:BMVC16:Zagoruyko} \\
& CIFAR10 & SVHN \\ \midrule
Static Storage of Model (MB) & $6.02$ & $6.02$ \\
Static Storage of Samples (MB) & $184.32$ & $305.02$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Streaming Self-Supervised Learning}
\label{ch6-sec:future_ssl}
In \chref{ch4:learning}, we studied efficient few-shot learning on edge devices, where only a few training samples are given.
In some other cases of on-device learning, although the labeled samples are limited due to the scarce labeling resources, it might be easy to collect a large number of unlabeled samples.
Learning a DNN with a small number of labeled samples and a large number of unlabeled samples is known as self-supervised learning (or semi-supervised learning).
Current self-supervised learning methods \cite{bib:NIPS19:Berthelot,bib:NIPS20:Sohn} often need to maintain all unlabeled samples.
Even on small-scale datasets, the static memory for storing samples is much larger than that for storing the self-supervised model.
We summarize the static memory consumption for training samples and the self-supervised DNNs in two sample applications in \tabref{ch6-tab:examples}.
We consider that the unlabeled samples are collected in a round-based streaming manner, and during the collection we can query the user for labeling.
In this scenario, the main resource constraints are (\textit{i}) the limited number of querying labels, (\textit{ii}) the memory consumption for storing data samples, particularly unlabeled samples.
We focus on reducing the redundancy of data samples.
We will only select a coreset of unconfident samples to label and a coreset of representative samples to store \cite{bib:NIPS21:Killamsetty}.
\noindent
The problem in round $r$ is defined as follows.
\fakeparagraph{Inputs}
We have the current optimized model, and the stored datasets from the last round, which contain labeled set $\mathcal{D}_\mathrm{S}^{r-1}$ and unlabeled set $\mathcal{D}_\mathrm{U}^{r-1}$.
We also receive some new unlabeled samples in $\delta \mathcal{D}^r$.
\fakeparagraph{Outputs}
We are expected to output the updated model according to newly collected data.
We also need to update the datasets.
Because of the limited memory and the limited query budget, we update the datasets based on two selected coresets $\mathcal{C}_\mathrm{S}$ and $\mathcal{C}_\mathrm{U}$.
Both coresets are selected from all available unlabeled samples, i.e.,\xspace $\mathcal{C}_\mathrm{S},\mathcal{C}_\mathrm{U} \subset \mathcal{D}_\mathrm{U}^{r-1}\cup\delta \mathcal{D}^r$.
\fakeparagraph{Methods}
In order to select the two coresets, we use a confidence score $\bm{\alpha}$ to weight each unlabeled sample, where $\bm{\alpha}\in\mathbb{R}_{+}^{|\mathcal{D}_\mathrm{U}^{r-1}|+|\delta \mathcal{D}^r|}$.
A larger $\alpha$ means the sample better matches the learned likelihood, whereas a smaller $\alpha$ means the model has less confidence in that sample.
Similar to \cite{bib:NIPS20:Sohn,bib:NIPS21:Killamsetty}, we also conduct a two-level minmax optimization.
Particularly, in the inner loop, the binarized $\bm{\alpha}$ is used to weight the unsupervised loss, and the model is then trained with the semi-supervised loss.
In the outer loop, the confidence score $\bm{\alpha}$ is optimized on the current labeled dataset $\mathcal{D}_\mathrm{S}^{r-1}$ with the optimized model.
Both loops are conducted alternately for several iterations.
Then, the current unlabeled samples in $\mathcal{D}_\mathrm{U}^{r-1}\cup\delta \mathcal{D}^r$ are selected to build two coresets $\mathcal{C}_\mathrm{S}$ and $\mathcal{C}_\mathrm{U}$ according to the optimized score $\bm{\alpha}$.
Note that both coresets $\mathcal{C}_\mathrm{S}$ and $\mathcal{C}_\mathrm{U}$ have a constrained cardinality due to the limited querying number and the limited memory, respectively.
The samples in $\mathcal{C}_\mathrm{S}$ will be further queried for labeling.
The labeled dataset is then updated as $\mathcal{D}_\mathrm{S}^{r}=\mathcal{D}_\mathrm{S}^{r-1}\cup\mathcal{C}_\mathrm{S}$, and the unlabeled dataset is updated as $\mathcal{D}_\mathrm{U}^{r}=\mathcal{C}_\mathrm{U}$.
\clearpage
\noindent
\large{This concludes my thesis.}
\chapter{List of Publications}
\label{ch8:publication}
\small
The following list includes publications that form the basis of this thesis. The corresponding chapters are indicated in parentheses.
\\
\aut{Zhongnan Qu, Zimu Zhou, Yun Cheng, Lothar Thiele.}
\tit{Adaptive Loss-aware Quantization for Multi-bit Networks.}
\con{In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),}
\loc{IEEE, 2020, Acceptance ratio: 22.1\%.}
\rem{(\chref{ch2:inference}) \cite{bib:CVPR20:Qu}}
\\
\aut{Zhongnan Qu, Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Huseyin Sumbul, Barbara De Salvo.}
\tit{DRESS: Dynamic REal-time Sparse Subnets.}
\con{In Efficient Deep Learning for Computer Vision (ECV),}
\loc{CVPRWorkshop, 2022, Acceptance ratio: 29.9\%.}
\rem{(\chref{ch3:adaptation}) \cite{bib:CVPRWorkshop22:Qu}}
\\
\aut{Zhongnan Qu, Zimu Zhou, Yongxin Tong, Lothar Thiele.}
\tit{p-Meta: Towards On-device Deep Model Adaptation.}
\con{In Proceedings of ACM Conference on Knowledge Discovery and Data Mining (SIGKDD),}
\loc{ACM, 2022, Acceptance ratio: 15.0\%.}
\rem{(\chref{ch4:learning}) \cite{bib:KDD22:Qu}}
\\
\aut{Zhongnan Qu, Cong Liu, Lothar Thiele.}
\tit{Deep Partial Updating: Towards Communication Efficient Updating for On-device Inference.}
\con{In Proceedings of European Conference on Computer Vision (ECCV),}
\loc{Springer, 2022, Acceptance ratio: 28.4\%.}
\rem{(\chref{ch5:edgeserver}) \cite{bib:ECCV22:Qu}}
\\
\newpage
\noindent
The following list includes publications that were written during the PhD studies, yet are not part of this thesis.
\\
\aut{Fan Lu, Guang Chen, Yinlong Liu, Zhongnan Qu, Alois Knoll.}
\tit{RSKDD-Net: Random Sample-based Keypoint Detector and Descriptor.}
\con{In Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS),}
\loc{2020, Acceptance ratio: 20.1\%.}
\rem{\cite{bib:NIPS20:Lu}}
\\
\aut{Xin Dong, Barbara De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H.T. Kung, Ziyun Li.}
\tit{SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems.}
\con{In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),}
\loc{IEEE, 2022, Acceptance ratio: 25.3\%.}
\rem{\cite{bib:CVPR22:Dong}}
\\
\chapter{Curriculum Vit\ae}
\renewcommand{\arraystretch}{1.5}
\newcolumntype{P}[1]{>{\raggedright\arraybackslash}p{#1}}
\begin{footnotesize}
\subsection*{Personal Data}
\begin{tabular}{@{}P{2.5cm}P{9.5cm}}
Name & Zhongnan Qu \\
Date of Birth & May 5, 1992 \\
Citizenship & China \\
\end{tabular}
\vspace{-0.5em}
\subsection*{Education}
\begin{tabular}{@{}P{2.5cm}P{9.5cm}}
2018--2022 &
ETH Zurich, \textit{Zurich Switzerland}\linebreak
Ph.D. in Computer Engineering\linebreak
Advised by Prof. Lothar Thiele \\
2014--2018 &
TU Munich, \textit{Munich Germany}\linebreak
M.Sc. in Electrical and Computer Engineering\linebreak
Advised by Prof. Daniel Cremers and Prof. Dongheui Lee\\
2014--2017 &
TU Munich, \textit{Munich Germany}\linebreak
M.Sc. in Mechanical Engineering\linebreak
Advised by Prof. Alois Knoll\\
2013--2014 &
Munich University of Applied Sciences, \textit{Munich Germany}\linebreak
B.Eng. in Mechatronics Engineering\\
2010--2014 &
Tongji University, \textit{Shanghai China}\linebreak
B.Eng. in Mechatronics Engineering
\end{tabular}
\vspace{-0.5em}
\subsection*{Professional Experience}
\begin{tabular}{@{}P{2.5cm}P{9.5cm}}
2018--2022 &
ETH Zurich, \textit{Zurich Switzerland}\linebreak
Research and teaching assistant\\
2021 &
Meta (Facebook) Reality Labs, \textit{Seattle US (Remotely)}\linebreak
Research Intern\\
2017 &
BMW Group, \textit{Munich Germany}\linebreak
Intern\\
2014 &
Canon Group Company $\bullet$ Oc\'e Printing Systems GmbH, \textit{Munich Germany}\linebreak
Intern and Thesis Student\\
2013 &
State Grid, \textit{Henan China}\linebreak
Intern
\end{tabular}
\vspace{-0.5em}
\subsection*{Research Interests}
I focus on efficient deep learning in computer vision, natural language processing, and robotics. The vision is to deploy deep learning on edge devices for new emerging intelligent applications that face challenges of resource constraints, privacy issues, and data scarcity.
\end{footnotesize}
\chapter{Datasheets}
The discovery of the $\chi_{c1}(3872)$ in 2003 \cite{Cho03} may be
considered the beginning of a new era in heavy-quark meson spectroscopy. This
resonance and a plethora of new states ($\psi(4260)$, $\psi(4360)$, $X(3915)$, and many others, see \cite{PDG20})
discovered since then have masses and
decay properties that do not correspond to the
conventional heavy quark ($Q$) -- heavy antiquark ($\overline{Q}$)
meson description, such as the one provided by nonrelativistic or
semirelativistic quark models that have been so successful in the past \cite{Eic80,Eic04,GI85}. A common feature of
these unconventional states is that their masses lie close below or above the
lowest open flavor meson-meson threshold with the same quantum numbers. This
suggests a possible relevant role of open flavor meson-meson thresholds in the
explanation of the structure of the new states. As a matter of fact, the
nonrelativistic Cornell quark model \cite{Eic80,Eic04} incorporates some of
these effects through meson loops where the interaction
connecting $Q\overline{Q}$ and open flavor meson-meson is derived from the $Q\overline{Q}$
binding potential. Similar kind of loop contributions, with quark pair creation models like
the $^{3}\!P_{0}$ one providing the valence-continuum coupling, have been extensively
studied in the literature (see for instance \cite{Bar08} and \cite{Fer19}). However,
these perturbative loop contributions seem to be insufficient for a detailed description of
the new structures. This has led to the building of phenomenological models
involving implicit or explicit meson-meson components, for example in the forms of tetraquarks, meson
molecules, and hadroquarkonium (see \cite{Ch16,Leb17,Guo17,Esp17} and references therein).
\textit{Ab initio} calculations from QCD have been also carried out. From
lattice QCD, a Born-Oppenheimer ({\BO}) approximation for heavy-quark
mesons has been developed \cite{Jug99} (for a connection with effective field
theories see \cite{Bra20} and references therein). In this approximation,
based on the large ratio of the heavy quark
mass to the QCD energy scale associated with the gluon field, the
heavy-quark meson masses correspond to energy levels of a {\schr}
equation for $Q\overline{Q}$ in an effective potential. This potential is defined by the energy
of a stationary state of light-quark and gluon fields in the presence
of static $Q$ and $\overline{Q}$ sources, which is calculated in lattice QCD.
Thus, conventional quarkonium masses are the energy levels in the ground state
potential calculated in quenched (without light quarks) lattice QCD whose form is Cornell-like \cite{Bal01}, whereas quarkonium hybrid
($Q\overline{Q}g$ bound state where $g$ stands for a gluon) masses are energy
levels in the quenched excited state potentials. Although no tetraquark potentials have
been calculated yet from lattice QCD, some information on them has been also
extracted \cite{Braa14}. The immediate question arising is whether these
hybrid and tetraquark {\BO} potentials may correctly describe or not the new
states. The answer to this question can be derived from \cite{Braa14}, where an
assignment of the masses of some of the new states to energy levels in these
potentials has been pursued. In essence, quoting this reference, although the
{\BO} approximation provides a starting point for a coherent description of the
new states based firmly on QCD, a detailed description of them
requires going beyond quenched lattice calculations and beyond the {\BO} approximation.
An intermediate step in this direction was taken in \cite{Gon14,*Gon15,*Gon19} by
identifying the unquenched lattice energy for
static $Q$ and $\overline{Q}$ sources, when the $Q\overline{Q}$ configuration
mixes with one or two open flavor meson-meson ones \cite{Bal05,Bul19}, with a $Q\overline{Q}$ potential. This
unquenched approximation allows for some physical understanding of threshold
effects beyond hadron loops. However, the description in terms of effective
$Q\overline{Q}$ channels does not give a detailed account of the configuration mixing.
In this article we take a step further to go beyond the {\BO}
approximation. For this purpose we use the diabatic approach developed
in molecular physics for tackling the configuration mixing problem (see for
instance \cite{Bae06}). This allows us to establish a general framework
for a unified description of conventional and unconventional heavy-quark meson
states. This framework is applied to the calculation of $J^{++}$ and the
low-lying $1^{--}$ meson states with $Q=c$ (charm quark) where there are
sufficient data available to test its validity.
In this manner a complete treatment of heavy-quark
meson states involving heavy quark-antiquark and meson-meson degrees of freedom,
that incorporates the results from \textit{ab initio} calculations in quenched and
unquenched lattice QCD, comes out.
The contents of the paper are organized as follows.
In Sec.~\ref{sec2} the mathematical formalism and the physical picture leading to the
{\BO} approximation for heavy-quark mesons is revisited. In Sec.~\ref{sec3} we
detail the diabatic approach and in Sec.~\ref{sec4} we adapt it to the
description of heavy-quark meson states. The application to meson states containing
$c\overline{c}$ is detailed in Sec.~\ref{sec5}. For the sake of simplicity we
consider states involving non-overlapping thresholds with small
widths. The comparison of our results to existing data serves as a stringent test
of our treatment. Finally, in Sec.~\ref{sec6} our main conclusions are summarized.
\section{\label{sec2}Born-Oppenheimer approximation in QCD}
The Born-Oppenheimer ({\BO}) approximation was developed in 1927 for the
description of molecules \cite{Bo27}, and since then it has been a fundamental
approximation in chemistry. More recently it has been employed for the
description of heavy-quark meson bound states from QCD \cite{Jug99,Braa14}.
Next, we briefly recall the main steps in its construction for the description of
a heavy-quark meson system containing a heavy quark-antiquark pair ($Q\overline{Q}$)
interacting with light fields (gluons and light
quarks), with Hamiltonian
\begin{equation}
H=K_{Q\overline{Q}}+H^\text{lf}_{Q\overline{Q}}
\end{equation}
where $K_{Q\overline{Q}}$ is the $Q\overline{Q}$ kinetic energy operator
\begin{equation}
K_{Q\overline{Q}}=\frac{\vb*{p}_{Q}^{2}}{2m_{Q}}+\frac
{\vb*{p}_{\overline{Q}}^{2}}{2m_{\overline{Q}}}=\frac
{\vb*{p}^{2}}{2\mu_{Q\overline{Q}}}+\frac{\vb*{P}^{2}}{2(m_{Q}+m_{\overline{Q}})}
\end{equation}
with $\mu_{Q\overline{Q}}$ being the reduced $Q\overline{Q}$ mass,
$\vb*{p}$ ($\vb*{P}$) the $Q\overline{Q}$
relative (total) three-momentum,
and $H^\text{lf}_{Q\overline{Q}}$ the part of the Hamiltonian containing the light field energy operator and
the $Q\overline{Q}$ -- light-field interaction. Notice that $H^\text{lf}_{Q\overline{Q}}$ depends on the $Q$ and
$\overline{Q}$ positions but does not contain any derivative with respect to the $Q$ and $\overline{Q}$ coordinates.
A heavy-quark meson bound state $\ket{\psi}$ is a solution of the characteristic equation
\begin{equation}
H\ket{\psi} = E \ket{\psi}
\end{equation}
where $E$ is the energy of the state. Note that $\ket{\psi}$ contains information
on both the $Q\overline{Q}$ and light fields.
\subsection{Static limit}
The first step in building the {\BO} approximation consists in solving the dynamics of the light
fields by neglecting the $Q\overline{Q}$ motion, i.e.\ setting the kinetic energy term $K_{Q\overline{Q}}$ equal to zero.
This corresponds to the limit where $Q$ and $\overline{Q}$ are infinitely massive,
which is justified because the $Q$ and
$\overline{Q}$ masses, $m_{Q}$ and $m_{\overline{Q}}$, are much larger than the QCD scale $\Lambda_\text{QCD}$, which is the energy
scale associated with the light fields.
As we are interested in the internal structure of the system and this does not depend on the center of
mass motion (which coincides with the $Q\overline{Q}$ center of mass motion in
the infinite mass limit) it is convenient to use the $Q\overline{Q}$ relative
position $\vb*{r}=\vb*{r}_{Q}-\vb*{r}_{\overline{Q}}$, and work in the $Q\overline{Q}$ center of mass
frame where $\vb*{P}=0$.
In this \emph{static limit} $\vb*{r}$ is fixed, ceasing to be a dynamical variable. That is, the components of
$\vb*{r}$ can be considered as parameters, rather than operators,
in the expression of $H^\text{lf}_{Q\overline{Q}}$ that will depend operationally on the light fields
only.
We shall indicate this parametric dependence by renaming $H^\text{lf}_{Q\overline{Q}}$ as $H_\text{static}^\text{lf}(\vb*
{r})$.
It is then possible to solve the dynamics of the light fields for any value of $\vb*{r}$:
\begin{equation}
(H_\text{static}^\text{lf}(\vb*{r}) -V_{i}(\vb*{r}))\ket{\zeta_{i}(\vb*{r})} =0
\label{STATIC}
\end{equation}
where $\ket{\zeta_{i}(\vb*{r})} $ are the light field eigenstates, $V_{i}(\vb*{r})$ the corresponding eigenvalues,
and $i$ stands for the set of quantum numbers labelling the eigenstates. Note that both the eigenvalues and the
eigenstates depend parametrically on $\vb*{r}$, and that for every value of $\vb*{r}$ the eigenstates
$\{\ket{\zeta_{i}(\vb*{r})}\}$ form a complete orthonormal set for the light fields:
\begin{equation}
\braket{\zeta_{j}(\vb*{r})}{\zeta_{i}(\vb*{r})} = \delta_{ji} .
\end{equation}
As for the eigenvalues $V_{i}(\vb*{r})$, they correspond to the energies of stationary states of the light fields in the
presence of static $Q$ and $\overline{Q}$ sources placed at a relative position $\vb*{r}$,
and can be calculated \textit{ab initio} in lattice QCD.
More precisely, in quenched (with gluon but not light-quark fields) lattice QCD \cite{Bal01} the ground state of the light fields
is associated with a $Q\overline{Q}$ configuration, and, up to spin-dependent terms that we
shall not consider, the static energy of this ground state mimics the form of
the phenomenological Cornell potential
\begin{equation}
V_\text{C}(r)=\sigma r - \frac{\chi}{r} + m_Q + m_{\overline{Q}} - \beta
\label{CPOT}
\end{equation}
with $\sigma$, $\chi$ and $\beta$ standing for the string tension,
the color Coulomb strength, and a constant fixing the origin of the potential, respectively.
On the other hand, unquenched (with gluon and light-quark fields) lattice QCD calculations \cite{Bal05,Bul19} have shown that due
to string breaking the association of the light field
ground state with a $Q\overline{Q}$ configuration holds only for small values of the
relative $Q\overline{Q}$ distance $r \equiv \abs{\vb*{r}}$.
When increasing $r$ the $Q\overline{Q}$ configuration mixes significantly with meson-meson configurations.
More in detail: below (above) an
open-flavor meson-meson threshold the energy of a stationary
state of the light fields changes with $r$, from the one corresponding to the
$Q\overline{Q}$ (meson-meson) configuration to the one of meson-meson
($Q\overline{Q}$) configuration, avoiding in this manner the crossing of the static light field
energies corresponding to pure $Q\overline{Q}$ and meson-meson configurations that would take place at
the threshold mass in absence of string breaking. In Fig.~\ref{cross} we have represented graphically this situation for $Q\overline{Q}$ and one
meson-meson threshold (the representation for two meson-meson thresholds can
be seen in \cite{Bal05,Bul19}).
\begin{figure}
\includegraphics{Adiabatic_potentials}
\caption{Pictorial representation of lattice static energies. Dashed line: ground state static light field energy in quenched lattice QCD.
Dotted line: meson-meson threshold.
Dash-dotted lines: ground and excited state static light field energies in unquenched lattice QCD, showing an avoided crossing.}
\label{cross}
\end{figure}
\subsection{Adiabatic expansion}
Having solved the static problem for the light fields, the next step in the construction of the
{\BO} approximation consists in reintroducing the $Q\overline{Q}$ motion. This is done by solving the bound state equation
\begin{equation}
\pqty{\frac{\vb*{p}^2}{2\mu_{Q\overline{Q}}} + H_\text{static}^\text{lf}(\vb*{r})-E} \ket{\psi} =0,
\label{BSECM}
\end{equation}
where $E$ denotes the mass of the bound state, making use of the so-called adiabatic expansion for $\ket{\psi}$:
\begin{equation}
\ket{\psi} =\sum_{i}\int \dd\vb*{r}^{\prime}\psi_{i}(\vb*{r}^{\prime})\ket{\vb*{r}^{\prime}}\ket{\zeta_{i}(\vb*{r}^{\prime})}
\label{BOSTATE}
\end{equation}
where $\ket{\vb*{r}^\prime}$ is a state indicating the $Q\overline{Q}$ relative position and
we have temporarily omitted spin degrees of freedom for simplicity. The qualifier ``adiabatic'' refers to the fact that each term
in the expansion depends only on a single value of $\vb*{r}^\prime$, which can be related to the physical situation where
the light fields respond almost instantaneously to the motion of the quark and antiquark. However, as will be shown in what
follows, this physical expansion is not mathematically convenient when configuration mixing takes place. Note that as the
states $\ket{\zeta_{i}(\vb*{r}^\prime)}$ depend on $\vb*{r}^\prime$, so do the coefficients $\psi_{i}$, one for each light field state.
Using \eqref{BOSTATE} and multiplying on the left by $\bra{\vb*{r}}$ the bound state equation can be rewritten as
\begin{equation}
\sum_{i}\pqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian+V_{i}(\vb*{r})-E}\psi_{i}(
\vb*{r})\ket{\zeta_{i}(\vb*{r})} =0,
\end{equation}
then multiplying on the left by $\bra{\zeta_{j}(\vb*{r})}$ yields
\begin{widetext}
\begin{equation}
\sum_{i}\bqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\mel{\zeta_{j}(\vb*{r})}{\laplacian\psi_{i}(\vb*{r})}{\zeta_{i}(
\vb*{r})} + \pqty{V_{j}(\vb*
{r})-E} \delta_{j i} \psi_{i}(\vb*{r})} =0.
\label{EQS}
\end{equation}
The first term on the left hand side of \eqref{EQS} can be developed as
\begin{equation}
\mel{\zeta_{j}(\vb*{r})}{\laplacian\psi_{i}(\vb*{r})}{\zeta_{i}(\vb*{r})} = \delta_{ji}\laplacian\psi_{i}(\vb*{r}) +
2\vb*{\tau}_{ji}(\vb*{r}) \vdot \grad\psi_{i}(\vb*{r}) + \tau_{ji}^{(2)}(\vb*{r})\psi_{i}(\vb*{r})
\end{equation}
with
\begin{equation}
\vb*{\tau}_{ji}(\vb*{r}) \equiv \braket{\zeta_{j}(\vb*{r})}{\grad \zeta_{i}(\vb*{r})} \qand
\tau_{ji}^{(2)}(\vb*{r}) \equiv \braket{\zeta_{j}(\vb*{r})}{\laplacian\zeta_{i}(\vb*{r})}
\end{equation}
being the so-called \emph{Non-Adiabatic Coupling Terms} (NACTs) of the first and second order respectively.
Furthermore, using $\grad\braket{\zeta_{j}(\vb*{r})}{\zeta_{i}(\vb*{r})} =\grad\delta_{ji}=0$ we have
\begin{equation}
\vb*{\tau}_{ji}(\vb*{r})\equiv \braket{\zeta_{j}(\vb*{r})}{\grad \zeta_{i}(\vb*{r})} =
- \braket{\grad\zeta_{j}(\vb*{r})}{\zeta_{i}(\vb*{r})} \equiv-\vb*{\tau}_{ij}^*(\vb*{r}),
\end{equation}
from which it follows
\begin{equation}
\braket{\grad\zeta_{j}(\vb*{r})}{\grad\zeta_{i}(\vb*{r})} =
\sum_{k}\braket{\grad\zeta_{j}(\vb*{r})}{\zeta_{k}(\vb*{r})} \vdot
\braket{\zeta_{k}(\vb*{r})}{\grad\zeta_{i}(\vb*{r})}
= \sum_{k}\vb*{\tau}_{kj}^*(\vb*{r})\vdot\vb*{\tau}_{ki}(\vb*{r}) = - \sum_{k}\vb*{\tau}_{jk}(\vb*{r})
\vdot\vb*{\tau}_{ki}(\vb*{r}) \equiv-(\vb*{\tau}(\vb*{r})^{2})_{ji},
\end{equation}
so that
\begin{equation}
(\grad\vb*{\tau}(\vb*{r}))_{ji} = \braket{\zeta_{j}(\vb*{r})}{\laplacian \zeta_{i}(\vb*{r})} +
\braket{\grad\zeta_{j}(\vb*{r})}{\grad\zeta_{i}(\vb*{r})}
=\tau_{ji}^{(2)}(\vb*{r})-
(\vb*{\tau}(\vb*{r})^{2})_{ji}
\end{equation}
and finally
\begin{equation}
\mel{\zeta_{j}(\vb*{r})}{\laplacian\psi_{i}(\vb*{r})}{\zeta_{i}(\vb*{r})} =
\delta_{ji}\laplacian\psi_{i}(\vb*{r})+2\vb*{\tau}_{ji}(\vb*{r})\vdot\grad\psi_{i}(\vb*{r})
+ ((\div\vb*{\tau}(\vb*{r}))_{ji}+(\vb*{\tau}(\vb*{r})^{2})_{ji})\psi_{i}(\vb*{r})
\equiv ((\grad+\vb*{\tau}(\vb*{r}))^2)_{ji}\psi_{i}(\vb*{r}).
\end{equation}
The bound state equation \eqref{EQS} then reads
\begin{equation}
\sum_{i}\bqty{ -\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}((\grad+\vb*{\tau}(\vb*{r}))^2)_{ji}+(V_{j}(\vb*{r})-E)\delta_{ji}}\psi_{i}(\vb*{r})=0.
\label{BSE}
\end{equation}
\end{widetext}
This is a multichannel equation where $\psi_{i}(\vb*{r})$ stands for the $i$-th component of the heavy-quark meson wave function, that is
in general a mixing of $Q\overline{Q}$ and meson-meson components. Notice though that this is not the usual {\schr} equation because of
the presence of the NACTs $\vb*{\tau}$ inside the kinetic energy operator. These terms introduce a coupling
between the wave function components and reflect the non-trivial interaction between the $Q\overline{Q}$ motion and
the light field states.
\subsection{Single channel approximation}
The last step in the construction of the {\BO} approximation consists in neglecting the NACTs inside the kinetic energy operator:
\begin{equation}
\vb*{\tau}_{ji}(\vb*{r}) = \braket{\zeta_{j}(\vb*{r})}{\grad\zeta_{i}(\vb*{r})} \approx 0.
\label{BOCOND}
\end{equation}
This is called the \emph{single channel approximation} because the bound state
equation \eqref{BSE} then factorizes into a set of decoupled
single channel {\schr} equations
\begin{equation}
\bqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian+(V_{j}(\vb*{r})-E)}\psi_{j}(\vb*{r})=0
\label{BOSE}
\end{equation}
where $V_{j}(\vb*{r})$, corresponding to the energy of the stationary $j$-th state
of the light fields in the presence of static $Q$ and $\overline{Q}$ sources, plays the role of an
effective potential.
Eqs.~\eqref{STATIC}, \eqref{BOSTATE}, \eqref{BOCOND} and \eqref{BOSE} define the {\BO} approximation.
Notice that the single channel approximation can be deemed reasonable only up to
$Q\overline{Q}$ distances for which the NACTs can be neglected, i.e.\ for distances where the $Q\overline{Q}$
and meson-meson configuration mixing associated with the light field eigenstates is negligible
(for a specific calculation see Sec.~\ref{angsec}).
This means that the {\BO} approximation is justified only for bound state energies far below the lowest open flavor
meson-meson threshold. In particular, conventional heavy-quark meson masses, far below the lowest open flavor
meson-meson threshold, can be described as the energy levels in the potential
corresponding to the quenched ground state of the light fields, i.e.\ the Cornell potential.
\section{\label{sec3}Diabatic approach}
For energies close below or above an open flavor meson-meson threshold the mixing between the
$Q\overline{Q}$ and meson-meson configurations gives
rise to nonvanishing NACTs, so that the single channel approximation \eqref{BOCOND} cannot be maintained. Instead, one
has to deal with the set of coupled equations~\eqref{BSE}, which is not practicable for two reasons:
\begin{enumerate}[i)]
\item There is as yet no direct lattice QCD calculation of the NACTs $\vb*{\tau}$.
\item When $\vb*{\tau} \ne 0$, the wave function components in the expansion \eqref{BOSTATE} do not correspond
to pure $Q\overline{Q}$ or meson-meson but rather to a mixing of both, the amount of mixing depending on $\vb*{r}$.
\end{enumerate}
These drawbacks can be overcome through the use of the \emph{diabatic approach}, where
one expands the bound state $\ket{\psi}$ on a basis of light field
eigenstates calculated at some fixed point $\vb*{r}_0$. As the $\Bqty{\ket{\zeta_i(\vb*{r})}}$
form a complete set for the light fields whatever the value of $\vb*{r}$, switching from a
$\Bqty{\ket{\zeta_i(\vb*{r})}}$ to $\Bqty{\ket{\zeta_i(\vb*{r}_0)}}$
is equivalent to a $\vb*{r}$-dependent change of basis in the light degrees of freedom.
The diabatic expansion of the bound state reads
\begin{equation}
\ket{\psi} =\sum_{i}\int \dd{\vb*{r}^{\prime}}\widetilde{\psi}_{i}(\vb*{r}^{\prime},\vb*{r}_{0})\ket{\vb*{r}^{\prime}} \ket{\zeta_{i}(\vb*{r}_{0})}
\label{DSTATE}
\end{equation}
where the coefficients $\widetilde{\psi}_{i}$, one coefficient for each light field state, are functions of $\vb*{r}^{\prime}$ that depend
parametrically on $\vb*{r}_{0}$.
A nice physical feature of this expansion is that the light field state $\ket{\zeta_i (\vb*{r}_0)}$ corresponding to each
component $\widetilde{\psi}_i$ does not depend on the $Q\overline{Q}$ relative position $\vb*{r}^\prime$. This means that if
one chooses the fixed point $\vb*{r}_0$ far from the avoided crossing, then the wave function components correspond to
either pure $Q\overline{Q}$ or meson-meson for any value of $\vb*{r}^\prime$. In other words, in the diabatic approach one expands the
bound states in terms of the more intuitive Fock components (pure $Q\overline{Q}$ and pure meson-meson)
instead of components which are a mixing of $Q\overline{Q}$ and meson-meson.
Substituting \eqref{DSTATE} in the bound
state equation \eqref{BSECM} and projecting on $\bra{\vb*{r}}$ yields
\begin{equation}
\sum_{i}\pqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian+H_\text{static}^\text{lf}(\vb*{r})-E}
\widetilde{\psi}_{i}(\vb*{r},\vb*{r}_{0})
\ket{\zeta_{i}(\vb*{r}_{0})}
=0
\end{equation}
where all the derivatives are taken with respect to $\vb*{r}$.
If we now multiply on the left by $\bra{\zeta_{j}(\vb*{r}_{0})}$, as $\grad\ket{
\zeta_{i}(\vb*{r}_{0})}=0$ the equation reads
\begin{equation}
\sum_{i}\pqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\delta_{ji} \laplacian+V_{ji}(\vb*{r},\vb*{r}_{0}) -
E\delta_{ji}}\widetilde{\psi}_{i}(\vb*{r},\vb*{r}_{0})=0
\label{DEQ}
\end{equation}
where
\begin{equation}
V_{ji}(\vb*{r},\vb*{r}_{0})\equiv \mel{\zeta_{j}(\vb*{r}_{0})}{H_\text{static}^\text{lf}(\vb*{r})}{\zeta_{i}(\vb*{r}_{0})}
\label{DPOT}
\end{equation}
is the so-called \emph{diabatic potential matrix}.
The multichannel {\schr} equation \eqref{DEQ} together with
\eqref{DPOT} and \eqref{DSTATE} define the diabatic approach which is widely
employed in molecular physics \cite{Bae06}.
The complete equivalence between Eqs.~\eqref{BSE} and \eqref{DEQ} has been shown elsewhere \cite{Bae06}
and is reproduced, for the sake of completeness, in Appendix~\ref{apdxADT}. In short, the troublesome NACTs
in \eqref{BSE} that break the single channel approximation when configuration mixing is present
(thus invalidating the {\BO} framework) are taken into account in \eqref{DEQ}
through the diabatic potential matrix. This is utterly convenient since,
as we shall see in Sec.~\ref{mixpotsec}, the elements of this matrix are directly related to the
static light field energy levels calculated in quenched and unquenched lattice QCD.
It is also easy to show that when the single channel approximation
\eqref{BOCOND} holds the diabatic potential matrix \eqref{DPOT} becomes a diagonal
matrix containing the static light field energy levels calculated in quenched lattice QCD,
and consequently Eq.~\eqref{DEQ} reproduces the set of single channel {\schr} equations \eqref{BOSE}.
Therefore, the diabatic approach is a completely general framework applicable to conventional heavy-quark mesons
lying far below the lowest open flavor meson-meson threshold as well as to unconventional ones lying close below
or above that threshold.
\section{\label{sec4}Heavy-quark mesons in the diabatic framework}
In order to apply the diabatic framework to the description of
heavy-quark meson bound states we examine first the case of a single
meson-meson threshold. Then we proceed to the generalization to an arbitrary
number of thresholds.
\subsection{Spectroscopic equations}
Let us consider one meson-meson threshold. Let us fix a value for $\vb*{r}_0$
such that the ground state of the light fields
is associated with the $Q\overline{Q}$ configuration and the first excited
state with the meson-meson one. To make this more clear we relabel
the diabatic light field states as
\begin{equation}
\ket{\zeta_0 (\vb*{r}_0)} \rightarrow \ket*{\zeta_{Q\overline{Q}}}, \qquad
\ket{\zeta_1 (\vb*{r}_0)} \rightarrow \ket*{\zeta_{M_1\overline{M}_2}},
\end{equation}
and the diabatic wave function components as
\begin{equation}
\widetilde{\psi}_0(\vb*{r},\vb*{r}_0) \rightarrow \psi_{Q\overline{Q}}(\vb*{r}), \qquad
\widetilde{\psi}_1(\vb*{r},\vb*{r}_0) \rightarrow \psi_{M_1\overline{M}_2}(\vb*{r}) .
\end{equation}
Accordingly, we rename the diabatic potential matrix components \eqref{DPOT} as
\begin{subequations} \label{mmat}
\begin{align}
V_{00}(\vb*{r},\vb*{r}_{0}) \rightarrow V_{Q\overline{Q}}(\vb*{r}) &=
\mel*{\zeta_{Q\overline{Q}}}{H_\text{static}^\text{lf}(\vb*{r})}{\zeta_{Q\overline{Q}}}
\label{qqmat}\\
V_{11}(\vb*{r},\vb*{r}_{0}) \rightarrow V_{M_1\overline{M}_2}(\vb*{r}) &=
\mel*{\zeta_{M_1\overline{M}_2}}{H_\text{static}^\text{lf}(\vb*{r})}{\zeta_{M_1\overline{M}_2}}
\label{mmmat}\\
V_{01}(\vb*{r},\vb*{r}_{0}) \rightarrow V_\text{mix}(\vb*{r}) &=
\mel*{\zeta_{Q\overline{Q}}}{H_\text{static}^\text{lf}(\vb*{r})}{\zeta_{M_1\overline{M}_2}}.
\label{mixmat}
\end{align}
\end{subequations}
Let us realize that having associated each component of the wave function with pure $Q\overline{Q}$ or pure meson-meson,
we can easily incorporate to the kinetic energy operator the fact that the reduced mass of the meson-meson component,
$\mu_{M_1\overline{M}_2}$, is different from $\mu_{Q\overline{Q}}$. Hence, we shall use $- \frac{\hbar^2}{2 \mu_{Q\overline{Q}}}\laplacian$
and $ -\frac{\hbar^2}{2 \mu_{M_1\overline{M}_2}}\laplacian$ for the kinetic energy operators of the $Q\overline{Q}$
and meson-meson components respectively. (Note that this improvement is possible only in the diabatic framework.)
Then, the bound state equations read
\begin{widetext}
\begin{subequations} \label{CD}
\begin{align}
\pqty{-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian+V_{Q\overline{Q}}(\vb*{r})-E}\psi_{Q\overline{Q}}(\vb*{r})+
V_\text{mix}(\vb*{r})\psi_{M_{1}\overline{M}_{2}}(\vb*{r}) &= 0 \label{C1} \\
\pqty{-\frac{\hbar^{2}}{2\mu_{M_{1}\overline{M}_2}}\laplacian+ V_{M_1\overline{M}_2}(\vb*{r})-E}
\psi_{M_{1}\overline{M}_2}(\vb*{r})+{V_\text{mix}(\vb*{r})}\psi_{Q\overline{Q}}(\vb*{r}) &= 0 ,\label{C2}
\end{align}
\end{subequations}
\end{widetext}
or in matrix notation
\begin{equation}
\pqty{\mathrm{K} + \mathrm{V}(\vb*{r})} \Psi(\vb*{r}) = E \Psi(\vb*{r})
\label{CM}
\end{equation}
where $\mathrm{K}$ is the kinetic energy matrix
\begin{equation}
\mathrm{K} \equiv \pmqty{ -\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian & 0 \\
0 & -\frac{\hbar^{2}}{2\mu_{M_{1}\overline{M}_2}}\laplacian} ,
\end{equation}
$\mathrm{V}(\vb*{r})$ is the diabatic potential matrix
\begin{equation}
\mathrm{V}(\vb*{r}) \equiv
\pmqty{V_{Q\overline{Q}}(\vb*{r}) & V_\text{mix}(\vb*{r}) \\ {V_\text{mix}(\vb*{r})} & V_{M_1\overline{M}_2}(\vb*{r})} ,
\label{VM}
\end{equation}
and $\Psi(\vb*{r})$ is a column vector notation for the wave function:
\begin{equation}
\Psi(\vb*{r}) \equiv \pmqty{\psi_{Q\overline{Q}}(\vb*{r}) \\ \psi_{M_{1}\overline{M}_2}(\vb*{r})} .
\end{equation}
In this notation the normalization of the wave function reads
\begin{equation}
\int \dd{\vb*{r}} \Psi^\dagger(\vb*{r})\Psi(\vb*{r}) = \mathcal{P}(Q\overline{Q}) + \mathcal{P}(M_1\overline{M}_2) = 1
\end{equation}
where we have defined the $Q\overline{Q}$ probability
\begin{equation}
\mathcal{P}(Q\overline{Q}) \equiv \int \dd{\vb*{r}}\abs*{\psi_{Q\overline{Q}}(\vb*{r})}^2
\end{equation}
and the meson-meson probability
\begin{equation}
\mathcal{P}(M_1\overline{M}_2) \equiv \int \dd{\vb*{r}}\abs*{\psi_{M_{1}\overline{M}_2}(\vb*{r})}^2.
\end{equation}
The multichannel {\schr} equation \eqref{CD}, or equivalently \eqref{CM}, defines formally the diabatic approach for
the description of the heavy-quark meson system.
\subsection{\label{mixpotsec}Mixing potential}
To solve \eqref{CD} we need to know the diabatic potential matrix Eq.~\eqref{VM}. Regarding the diagonal element
$V_{Q\overline{Q}}(\vb*{r})$, we see from
\eqref{qqmat} that it corresponds to the expectation value of the
static energy operator in the light field state
associated with a pure $Q\overline{Q}$ configuration.
This can be identified with the ground state static energy
calculated in quenched lattice QCD, see Fig.~\ref{cross},
given by the Cornell potential
\begin{equation}
V_{Q\overline{Q}}(\vb*{r}) = V_\text{C}(r).
\label{VQQDEF}
\end{equation}
In the same way, from \eqref{mmmat} we identify the other diagonal term $V_{M_1\overline{M}_2}(\vb*{r})$ with the static energy
associated with a pure meson-meson configuration, given by
the threshold mass $T_{M_1\overline{M}_2}$ (the sum of the meson masses)
\begin{equation}
V_{M_1\overline{M}_2}(\vb*{r}) = T_{M_1\overline{M}_2} \equiv m_{M_1} + m_{\overline{M}_2} ,
\label{VMMDEF}
\end{equation}
up to one-pion-exchange effects that we do not consider here.
As for the off-diagonal term, the mixing potential $V_\text{mix}(\vb*{r})$, we can use the eigenvalues of the diabatic
potential matrix to derive its form. As shown in Appendix~\ref{apdxADT}, these eigenvalues correspond to the static
energy levels that are calculated in unquenched lattice QCD which have been pictorially represented
in Fig.~\ref{cross}. More precisely, the eigenvalues of the diabatic potential matrix are the
two solutions $V_\pm(\vb*{r})$ of the secular equation
\begin{equation}
\det{\mathrm{V}(\vb*{r}) - V_\pm(\vb*{r}) \mathbb{I}} = 0
\end{equation}
where $\mathbb{I}$ is the identity matrix. These solutions read
\begin{equation}
\begin{split}
V_\pm(\vb*{r}) = &\frac{V_\text{C}(r) + T_{M_1\overline{M}_2}}{2} \\
&\pm \sqrt{\pqty{\frac{V_\text{C}(r) -
T_{M_1\overline{M}_2}}{2}}^2 + V_\text{mix}(\vb*{r})^2},
\end{split}
\end{equation}
from which we obtain
\begin{equation}
\abs{V_\text{mix}(r)} = \frac{\sqrt{\pqty{V_+(r) - V_-(r)}^2 - \pqty{V_\text{C}(r) - T_{M_1\overline{M}_2}}^2}}{2},
\label{VMIX}
\end{equation}
where we have dropped the vector notation for $\vb*{r}$ as the energy levels calculated in
lattice QCD depend only on the modulus $r=\abs{\vb*{r}}$.
Eq.~\eqref{VMIX} tells us that a detailed calculation of the mixing potential $\abs{V_\text{mix}(r)}$ from \textit{ab initio} lattice
data on $V_\pm(r)$ is possible. As a matter of fact, an effective parametrization of $V_\text{mix}(r)$ from lattice data has been proposed
\cite{Bul19, Bic20}. While we encourage more work along this direction, we resort to general arguments
to get the shape of $\abs{V_\text{mix}(r)}$. In this regard, the general form of the curves $V_{+}(r)$ and $V_{-}(r)$
near any threshold, reflecting the physical picture of the
$Q\overline{Q}$ -- meson-meson mixing, is expected to be similar, as happens to be the case when two thresholds
are incorporated into the lattice
calculation \cite{Bal05,Bul19}. Furthermore, the same form is expected for $Q=b$ and $Q=c$ since the underlying
mixing mechanism (string breaking) is the same.
Therefore, we shall proceed to a parametrization of $\abs{V_\text{mix}(r)}$ according to this general form, and we shall
rely on phenomenology to fix the values of the parameters.
Let us begin by observing that unquenched lattice QCD results show that
\begin{equation}
\abs{V_+(r) - V_-(r)} \ge \abs{V_C (r) - T_{M_{1} \overline{M}_2}}
\end{equation}
for every value of $r$, and that at the crossing radius $r_\text{c}^{M_{1} \overline{M}_2}$, defined by
\begin{equation}
V_\text{C}\bigl(r_\text{c}^{M_{1} \overline{M}_2}\bigr)=T_{M_{1} \overline{M}_2},
\end{equation}
$\abs{V_\text{mix}(r)}$ attains approximately its maximum value
\begin{equation}
\max_r \abs{V_\text{mix}(r)} \approx \abs{V_\text{mix}\bigl(r_\text{c}^{M_{1} \overline{M}_2}\bigr)} = \frac{\Delta}{2},
\end{equation}
with $\Delta$ being the distance of the static energy levels at the crossing
radius
\begin{equation}
\Delta \equiv \abs{V_+\bigl(r_\text{c}^{M_{1} \overline{M}_2}\bigr) - V_-\bigl(r_\text{c}^{M_{1} \overline{M}_2}\bigr)}.
\end{equation}
On the other hand we have
\begin{equation}
V_{-}(r)\approx V_\text{C}(r) \qand V_{+}(r)\approx T_{M_{1} \overline{M}_2}
\end{equation}
for $r\ll r_\text{c}^{M_{1} \overline{M}_2}$, and
\begin{equation}
V_{-}(r)\approx T_{M_{1} \overline{M}_2} \qand V_{+}(r)\approx V_\text{C}(r)
\end{equation}
for $r\gg r_\text{c}^{M_{1} \overline{M}_2}$, so that
\begin{equation}
(V_{+}(r) - V_{-}(r))^{2} \approx (V_\text{C}(r) - T_{M_{1} \overline{M}_2})^{2}
\end{equation}
far from the crossing radius $r_\text{c}^{M_{1} \overline{M}_2}$.
Consequently, from \eqref{VMIX} we obtain that $V_\text{mix}(r)$ vanishes in both asymptotic limits:
\begin{equation}
\lim_{r \to 0}V_\text{mix}(r) = \lim_{r \to \infty}V_\text{mix}(r) = 0 .
\end{equation}
To summarize, lattice QCD indicates that the mixing potential $\abs{V_\text{mix}(r)}$ approaches a maximum value of $\Delta/2$ at
$r\approx r_\text{c}^{M_{1} \overline{M}_2}$ and vanishes asymptotically as the distance from the crossing
radius increases. The simplest
parametrization that takes into account these behaviors, thus providing a
good fit to lattice QCD calculations of $V_{\pm}(r)$, is a Gaussian shape:
\begin{equation}
\abs{V_\text{mix}(r)} =\frac{\Delta}{2}\exp{-\frac{\pqty{V_\text{C}(r)-T_{M_{1} \overline{M}_2}}^{2}}{2\Lambda^{2}}}
\label{VMIXDEF}
\end{equation}
where $\Lambda$ is a parameter with dimensions of energy. To better understand the physical meaning of
$\Lambda$ we write it in terms of the string tension $\sigma$ as
\begin{equation}
\Lambda\equiv\sigma\rho
\end{equation}
where $\rho$ has now dimensions of length. Then at distances for which
$V_\text{C}(r)\approx\sigma r + m_Q + m_{\overline{Q}} - \beta$ the mixing potential can be also written as
\begin{equation}
\abs{V_\text{mix}(r)} \approx\frac{\Delta}{2}\exp{-\frac{\bigl(r-r_\text{c}^{M_{1} \overline{M}_2}\bigr)^{2}}{2\rho^{2}}}
\tag{\ref*{VMIXDEF}$^\prime$}
\end{equation}
from which it is clear that $\rho$, the width of the Gaussian curve, fixes a radial scale for the mixing.
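A numerical sketch of this parametrization (reusing the \texttt{cornell} helper from the earlier sketch, with $\Delta$ and $\rho$ as in Fig.~\ref{many} and $T_{M_1\overline{M}_2} \approx 2 m_D \approx 3730$~MeV as illustrative inputs) also yields the avoided-crossing eigenvalues $V_\pm(r)$:
\begin{verbatim}
import numpy as np

T = 3730.0                       # illustrative D Dbar threshold, MeV

def v_mix(r, delta=130.0, rho=0.3, sigma=925.6, t=T):
    lam = sigma * rho            # Lambda = sigma * rho
    return 0.5 * delta * np.exp(-(cornell(r) - t) ** 2
                                / (2.0 * lam ** 2))

def static_levels(r, t=T):
    # eigenvalues V_-(r), V_+(r) of the 2x2 diabatic potential matrix
    vc, vm = cornell(r), v_mix(r)
    avg, gap = 0.5 * (vc + t), np.hypot(0.5 * (vc - t), vm)
    return avg - gap, avg + gap
\end{verbatim}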
\subsection{\label{angsec}Configuration mixing}
The knowledge of the diabatic potential matrix is equivalent to the knowledge of the $r$-dependent change of basis matrix
from $\Bqty{\ket{\zeta_0(r)}, \ket{\zeta_1(r)}}$ to $\Bqty{\ket{\zeta_0(r_0)},\ket{\zeta_1(r_0)}}$.
Let us name, according to our previous notation,
$\ket*{\zeta_-(r)}\equiv\ket{\zeta_0(r)}$ and $\ket*{\zeta_+(r)}\equiv\ket{\zeta_1(r)}$ the ground and excited
states of the light fields, with static energies $V_-(r)$ and $V_+(r)$ respectively.
These are related to the $Q\overline{Q}$ and meson-meson states $\ket*{\zeta_{Q\overline{Q}}}\equiv \ket{\zeta_0(r_0)}$
and $\ket*{\zeta_{M_1\overline{M}_2}}\equiv \ket{\zeta_1(r_0)}$ via
\begin{subequations} \label{mixangdef}
\begin{align}
\ket{\zeta_-(r)} &= \cos(\theta(r)) \ket*{\zeta_{Q\overline{Q}}} + \sin(\theta(r)) \ket*{\zeta_{M_1\overline{M}_2}} \\
\ket{\zeta_+(r)} &= \cos(\theta(r)) \ket*{\zeta_{M_1\overline{M}_2}} - \sin(\theta(r)) \ket*{\zeta_{Q\overline{Q}}}
\end{align}
\end{subequations}
where $\theta(r)$ is the \emph{mixing angle} between the $Q\overline{Q}$ and meson-meson configurations.
As explained in Appendix~\ref{apdxADT}, the change of basis matrix connecting the two
sets of states,
\begin{equation}
\pmqty{\ket{\zeta_-(r)} \\ \ket{\zeta_+(r)}} = \mathrm{A}^\dagger(r) \pmqty{\ket*{\zeta_{Q\overline{Q}}} \\ \ket*{\zeta_{M_1\overline{M}_2}}}
\end{equation}
with
\begin{equation}
\mathrm{A}(r) \equiv \pmqty{\cos(\theta(r)) & -\sin(\theta(r)) \\ \sin(\theta(r)) & \cos(\theta(r))},
\end{equation}
is also the matrix that diagonalizes the diabatic potential matrix. Therefore it is possible to extract the mixing angle $\theta$ from the matrix equation
\begin{equation}
\mathrm{A} (r) \mathrm{V}(r) \mathrm{A}^\dagger (r) = \text{diag}(V_-(r), V_+(r))
\label{mateqq}
\end{equation}
where $\text{diag}(V_-(r), V_+(r))$ is a diagonal 2$\times$2 matrix containing the unquenched static light field energies.
It is sufficient to take any off-diagonal element of Eq.~\eqref{mateqq} to obtain
\begin{equation}
V_\text{mix}(r) \cos (2\theta(r)) = \frac{T_{M_1 \overline{M}_2} - V_\text{C}(r)}{2} \sin(2\theta(r))
\end{equation}
from which we get the mixing angle as
\begin{equation}
\theta (r) = \frac{1}{2} \arctan\biggl(\frac{2 V_\text{mix}(r)}{T_{M_1 \overline{M}_2} - V_\text{C}(r)}\biggr).
\label{mixangeq}
\end{equation}
Furthermore, from this expression of the mixing angle and from Eqs.~\eqref{mixangdef} we can also calculate the NACTs:
\begin{subequations}
\begin{align}
\vb*{\tau}_{00}(r) &= \vb*{\tau}_{11}(r) = 0 \\
\vb*{\tau}_{01}(r) &= - \vb*{\tau}_{10}(r)
\end{align}
\end{subequations}
with
\begin{equation}
\vb*{\tau}_{0 1}(r) \equiv \braket{\zeta_-(r)}{\grad\zeta_+(r)} = (\mathrm{A} (r) \grad \mathrm{A}^\dagger (r))_{0 1} = \vu*{r} \dv{\theta}{r} .
\end{equation}
Therefore the NACTs only vanish for values of $r$ where $\theta$
is constant. This happens for small (large) values of $r$ where $\theta$ is $0$ $(\pi/2)$,
corresponding to no mixing between the $Q\overline{Q}$ and meson-meson configurations
in the light field eigenstates.
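Numerically, the mixing angle follows directly from the helpers of the previous sketches; \texttt{arctan2} keeps $\theta$ continuous, running from $0$ far below the crossing to $\pi/2$ far above it, with $\theta = \pi/4$ at the crossing radius (a sketch under the same illustrative parameters):
\begin{verbatim}
def mixing_angle(r):
    # theta(r) = (1/2) arctan(2 V_mix / (T - V_C)),
    # continuous across the crossing thanks to arctan2
    return 0.5 * np.arctan2(2.0 * v_mix(r), T - cornell(r))
\end{verbatim}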
\subsection{General case}
The multichannel {\schr} equation \eqref{CD} defines the heavy-quark meson system when only one threshold is considered, but in general
it may be necessary to incorporate several meson-meson thresholds.
In such a case one has to extend the formalism, which is most easily done in the matrix notation \eqref{CM}.
The generalization of the kinetic energy matrix is straightforward:
\begin{equation}
\mathrm{K} =
\begin{pmatrix}
-\frac{\hbar^{2}}{2\mu_{Q\overline{Q}}}\laplacian & & &
\\
& -\frac{\hbar^{2}}{2\mu^{(1)}_{M\overline{M}}}\laplacian & &
\\
& & \ddots &
\\
& & &
-\frac{\hbar^{2}}{2\mu^{(N)}_{M\overline{M}}}\laplacian \\
\end{pmatrix}
\end{equation}
where $\mu_{M\overline{M}}^{(i)}$ with $i=1,\dots,N$ is the reduced mass of the $i$-th meson-meson component,
$N$ is the number of meson-meson thresholds, and matrix elements equal to zero are not displayed.
As for the extension of the diabatic potential matrix \eqref{VM}, the presence of interaction
terms between different meson-meson components would make our procedure
for extracting the mixing potentials impracticable.
Following what is usually done in molecular physics \cite{Bae06}, we neglect some interactions between components.
Namely, in line with lattice QCD studies of string breaking \cite{Bul19}, we assume that different
meson-meson components do not interact with each other.
It seems reasonable to think that this is a good approximation when dealing with relatively
narrow, well-separated thresholds. If so, we may consider the uncertainty of this approximation
to be proportional to the ratio between the average of the threshold widths and the threshold
mass difference. More precisely, for values of this ratio smaller than one we expect the
threshold-threshold interaction to be negligible. According to this, we restrict our study
to non-overlapping, narrow thresholds.
Then, the diabatic potential matrix with $N$ thresholds reads
\begin{equation}
\mathrm{V}(r) =
\begin{pmatrix}
V_\text{C}(r) & V_\text{mix}^{(1)}(r) & \hdots & V_\text{mix}^{(N)}(r) \\
{ V_\text{mix}^{(1)}(r)} & T_{M\overline{M}}^{(1)} & & \\
\vdots & & \ddots & \\
{V_\text{mix}^{(N)}(r)} & & & T_{M\overline{M}}^{(N)}\\
\end{pmatrix}
\label{dpotmany}
\end{equation}
where $V_\text{C}(r)$ stands for the Cornell potential, $T_{M\overline{M}}^{(i)}$ for the mass
of the $i$-th threshold and $V_\text{mix}^{(i)}(r)$
for the mixing potential between the $Q\overline{Q}$ and the $i$-th meson-meson components.
In Fig.~\ref{many} we draw the eigenvalues of this matrix for $c\overline{c}$ and the first three open flavor meson-meson thresholds.
\begin{figure}
\includegraphics{Adiabatic_potentials_many}
\caption{Static energies. Dashed line: $c\overline{c}$ (Cornell) potential~\eqref{CPOT} with $\sigma=925.6$~MeV/fm,
$\chi=102.6$~MeV~fm, $\beta = 855$~MeV and $m_c=1840$~MeV.
Dotted lines: meson-meson thresholds ($D \overline{D}$, $D \overline{D}^*$, $D_s \overline{D}_s$).
Dash-dotted lines: $r$-dependent eigenvalues of the diabatic potential matrix. For the sake of simplicity we have assumed the same mixing potential
parameters for all the meson-meson components: $\Delta_{c\overline{c}} = 130$~MeV and $\rho_{c\overline{c}} = 0.3$~fm.}
\label{many}
\end{figure}
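For reference, the static energies displayed in Fig.~\ref{many} are obtained by a pointwise diagonalization of the matrix \eqref{dpotmany}. A minimal Python sketch, with the Cornell potential and the mixing potentials passed as callables, could read:
\begin{verbatim}
import numpy as np

def static_energies(r, V_C, thresholds, V_mix_funcs):
    # build the (N+1)x(N+1) diabatic matrix of Eq. (dpotmany) at radius r
    # and return its eigenvalues, i.e., the unquenched static energies
    N = len(thresholds)
    V = np.zeros((N + 1, N + 1))
    V[0, 0] = V_C(r)
    for i in range(1, N + 1):
        V[i, i] = thresholds[i - 1]
        V[0, i] = V[i, 0] = V_mix_funcs[i - 1](r)
    return np.linalg.eigvalsh(V)
\end{verbatim}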
The diabatic potential matrix \eqref{dpotmany} can be regarded as a generalization of the two threshold model
of string breaking introduced in \cite{Bul19}, the two main differences being that in our study each dynamical quark flavor
can introduce more than one threshold and that we have parametrized the coupling between quark-antiquark and meson-meson
components with a Gaussian instead of a constant.
Let us add that even though there is presumably an infinite number of possible meson-meson components,
in practice one needs to consider only a limited subset of them when searching for bound
states. As a matter of fact, a meson-meson component hardly plays any role in the
composition of a bound state whose mass lies far below the corresponding threshold.
\subsection{Quantum numbers}
Heavy-quark meson states are characterized
by quantum numbers $I^{G}\pqty{J^{PC}}$ where $I$, $G$, $J$, $P$, $C$ stand for the isospin, G-parity, total
angular momentum, parity, and charge conjugation quantum numbers respectively.
Let us focus on isoscalar ($I=0$) heavy-quark mesons, for which $G=C$. Since the diabatic potential matrix is spherically symmetric
and spin-independent, the $Q\overline{Q}$ component of the wave function can be characterized by the relative orbital angular momentum
quantum number $l_{Q\overline{Q}}$, the total spin $s_{Q\overline{Q}}$, the total angular momentum $J$ and its projection $m_J$ so that
\begin{subequations}\label{quanums}
\begin{align}
\vb*{L}^2_{Q\overline{Q}} Y_{l_{Q\overline{Q}}}^{m_l}(\vu*{r}) &= \hbar^2 l_{Q\overline{Q}} (l_{Q\overline{Q}} + 1)
Y_{l_{Q\overline{Q}}}^{m_l}(\vu*{r}) \\
\vb*{S}^2_{Q\overline{Q}} \xi_{s_{Q\overline{Q}}}^{m_s} &= \hbar^2 s_{Q\overline{Q}} (s_{Q\overline{Q}} + 1) \xi_{s_{Q\overline{Q}}}^{m_s} \\
\vb*{J}^2 \bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J} &=
\hbar^2 J (J+1) \bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J} \\
J_z \bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J} &=
\hbar \, m_J \bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J}
\end{align}
\end{subequations}
where $Y_{l}^{m_l}(\vu*{r})$ is the spherical harmonic of degree $l$, $\xi_{s}^{m_s}$ is the eigenstate of the total $Q\overline{Q}$ spin
and $\bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J}$ is a shorthand notation for the sum
\begin{equation}
\bqty{Y_{l_{Q\overline{Q}}} (\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J} \equiv
\sum_{m_l, m_s} C_{l_{Q\overline{Q}}, s_{Q\overline{Q}}, J}^{m_l, m_s, m_J} Y_{l_{Q\overline{Q}}}^{m_l} (\vu*{r}) \xi_{s_{Q\overline{Q}}}^{m_s}
\end{equation}
where $C_{l, s, J}^{m_l, m_s, m_J}$ is the Clebsch-Gordan coefficient. Given this set of quantum numbers,
the $Q\overline{Q}$ component of the wave function can be factorized as
\begin{equation}
\psi_{Q\overline{Q}}(\vb*{r}) = u^{(Q\overline{Q})}_{E, l_{Q\overline{Q}}}(r) \bqty{Y_{l_{Q\overline{Q}}}(\vu*{r}) \xi_{s_{Q\overline{Q}}}}_J^{m_J}
\end{equation}
where $u^{(Q\overline{Q})}_{E, l_{Q\overline{Q}}}(r)$ is the $Q\overline{Q}$ radial wave function.
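As a side remark, the Clebsch-Gordan coefficients entering the angular-spin coupling above can be evaluated symbolically, e.g.\ with SymPy (shown here for the illustrative case $l_{Q\overline{Q}}=s_{Q\overline{Q}}=1$ coupled to $J=1$, $m_J=0$):
\begin{verbatim}
from sympy.physics.quantum.cg import CG

# coefficients C^{m_l, m_s, m_J}_{l, s, J} for l = s = 1, J = 1, m_J = 0
for m_l in (-1, 0, 1):
    m_s = -m_l
    print(m_l, m_s, CG(1, m_l, 1, m_s, 1, 0).doit())
\end{verbatim}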
The same can be done for the meson-meson components of the wave function, considering the meson-meson relative orbital angular momentum
$l_{M_1\overline{M}_2}$ and the sum of their spins $s_{M_1\overline{M}_2}$. Therefore, with a straightforward extension of the above notation we write
\begin{equation}
\psi_{M_1\overline{M}_2}(\vb*{r}) = u^{(M_1\overline{M}_2)}_{E, l_{M_1\overline{M}_2}}(r)
\bqty{Y_{l_{M_1\overline{M}_2}}(\vu*{r}) \xi_{s_{M_1\overline{M}_2}}}_J^{m_J}.
\end{equation}
Note that for the spectroscopic state to have a definite value of $J$, the $Q\overline{Q}$ and all the meson-meson
components must have the same total angular momentum, hence the unified notation for $J$.
A bound state made of $Q\overline{Q}$ and meson-meson has definite parity and $C$\nobreakdash-parity only if all the wave function components
have the same parity under these transformations. This requirement translates into different conditions depending on whether
the wave function component is associated with $Q\overline{Q}$ or meson-meson. For the
$Q\overline{Q}$ component, $P$ and $C$ quantum numbers are given by
\begin{equation}
P = (-1)^{l_{Q\overline{Q}} + 1} \qand C = (-1)^{l_{Q\overline{Q}} + s_{Q\overline{Q}}}.
\end{equation}
On the other hand, for each meson-meson component one has
\begin{equation}
P = P_{M_1} P_{\overline{M}_2} (-1)^{l_{M_1\overline{M}_2}}
\end{equation}
where $P_{M}$ is the parity of the meson. As for $C$\nobreakdash-parity, one has to consider two distinct cases: if $M_1 = M_2$
the $C$\nobreakdash-parity of the meson-meson component is given by
\begin{equation}
C = (-1)^{l_{M_1 \overline{M}_2} + s_{M_1 \overline{M}_2}},
\label{Cmesmes}
\end{equation}
if otherwise $M_1 \ne M_2$ one can build both positive and negative $C$\nobreakdash-parity states
\begin{equation}
C \ket*{M_1 \overline{M}_2}_\pm = \pm \ket*{M_1 \overline{M}_2}_\pm
\end{equation}
taking the linear combinations
\begin{equation}
\ket*{M_1 \overline{M}_2}_\pm \equiv \frac{1}{\sqrt{2}}\pqty{\ket*{M_1 \overline{M_2}}_0 \pm
\mathcal{C}_{M_1 \overline{M}_2} \ket*{M_2 \overline{M}_1}_0}
\label{Cpm}
\end{equation}
with $\ket*{M_1 \overline{M}_2}_0$ being the isospin singlet state obtained from the combination of the $M_1$ and $\overline{M}_2$ isomultiplets and
\begin{equation}
\mathcal{C}_{M_1 \overline{M}_2}\equiv (-1)^{l_{M_1 \overline{M}_2} + s_{M_1 \overline{M}_2}
+ l_{M_1} + l_{\overline{M}_2} + s_{M_1} + s_{\overline{M}_2} + j_{M_1} + j_{\overline{M}_2}}
\label{Cm1m2}
\end{equation}
where $l_M$ is the internal orbital angular momentum of the meson, $s_M$ its internal spin and $j_M$ its total spin.
The derivation of Eqs.~\eqref{Cpm} and \eqref{Cm1m2} is detailed in Appendix~\ref{apdxcpar}.
\subsection{Bound state solutions}
Given a spherically-symmetric and spin-independent diabatic potential matrix, each $Q\overline{Q}$ configuration with a distinct value of
$(l_{Q\overline{Q}},s_{Q\overline{Q}})$ can be treated as a channel \textit{per se}, and the same can be said for each meson-meson configuration
with a distinct value of $(l_{M_1\overline{M}_2},s_{M_1\overline{M}_2})$. Then finding the spectrum of a given $J^{PC}$ family
boils down to solving a multichannel, spherical {\schr} equation involving only those channels with the corresponding $J^{PC}$ quantum numbers.
One should realize though that a complete numerical nonperturbative solution of the spectroscopic equations \eqref{CM}
is only possible for energies below the lowest $J^{PC}$ threshold. Above it the asymptotic behavior of its meson-meson
component as a free wave, against the confined $Q\overline{Q}$ wave, prevents obtaining a physical solution. Nonetheless,
an approximate physical solution for energies above threshold is still possible, under the assumption that the effect of an open
threshold on the above-lying bound states can be treated perturbatively. In more detail, we proceed in the following way (a schematic sketch is given after the list):
\begin{enumerate}[i)]
\item We build the effective $J^{PC}$ diabatic potential matrix out of the Cornell $Q\overline{Q}$ potential, the threshold masses, and the
$Q\overline{Q}$ -- meson-meson mixing potentials.
\item \label{it1} We solve the spectroscopic equations for energies up to the lowest $J^{PC}$ threshold mass, and we analyze the
$(n \,{^{2S+1}\!L}_{J})$ $Q\overline{Q}$ and meson-meson content of the bound states.
\item \label{it2}We build a new $J^{PC}$ diabatic potential matrix neglecting the $Q\overline{Q}$ coupling to the lowest (first) threshold. We solve it for
energies in between the lowest and the second thresholds and discard as spurious any solution containing a $(n \,{^{2S+1}\!L}_{J})$
$Q\overline{Q}$ state entering in the bound states calculated in \ref{it1}). The rationale underlying this step is that a given spectral state in between the
lowest and the second thresholds containing such a $(n \,{^{2S+1}\!L}_{J})$ $Q\overline{Q}$ component would become, when the lowest
threshold were incorporated, the bound state below threshold containing it found in \ref{it1}).
\item We build a new $J^{PC}$ diabatic potential matrix by neglecting the coupling to the lowest threshold and to the second one. We solve it for
energies in between the second and the third thresholds and discard as spurious any solution containing a $(n \,{^{2S+1}\!L}_{J})$
$Q\overline{Q}$ state entering in the bound states calculated in \ref{it1}) and \ref{it2}), and so on.
\item We assume that corrections to the physical states thus obtained due to the coupling with open thresholds can be implemented perturbatively.
\end{enumerate}
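The bookkeeping implied by steps i)--v) can be summarized by the following schematic Python sketch, where \texttt{solve\_channels(k, E\_lo, E\_hi)} stands for an assumed multichannel solver that neglects the couplings to the first \texttt{k} thresholds and returns the bound states in the given energy window, each as a pair of mass and dominant $(n\,{^{2S+1}\!L}_{J})$ $Q\overline{Q}$ label:
\begin{verbatim}
def diabatic_spectrum(solve_channels, thresholds, E_max):
    # thresholds: ordered threshold masses for the given J^PC
    bands = sorted(thresholds) + [E_max]
    seen_qq = set()   # QQbar labels already assigned to lower bound states
    spectrum = []
    E_lo = 0.0
    for k, E_hi in enumerate(bands):
        for mass, qq_label in solve_channels(k, E_lo, E_hi):
            if qq_label in seen_qq:
                continue          # spurious copy of a lower bound state
            spectrum.append((mass, qq_label))
            seen_qq.add(qq_label)
        E_lo = E_hi
    return spectrum
\end{verbatim}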
The formulation of an appropriate perturbative scheme for the calculation of these corrections,
giving rise to mass shifts as well as to decay
widths to open flavor meson-meson states, will be the subject of a forthcoming paper.
On the other hand there are certainly more corrections to the spectrum that are not
included in our treatment, in particular those due to spin interactions.
Regarding the $Q\overline{Q}$ component, these effects can be
incorporated by adding spin-dependent operators (e.g.\ spin-spin, spin-orbit, tensor) to the Cornell
potential, which has proven to be very effective for a detailed description
of the low-lying spectral states \cite{GI85}. As for meson-meson components, the part of these
corrections involving quark and antiquark within the same heavy-light meson are included through the meson
masses, whereas the remaining ones can be implemented through the one pion exchange interaction
between mesons.
Assuming that these additional energy contributions (fine and hyperfine splittings,
one pion exchange corrections, mass shifts from coupling to open thresholds) can be
taken into account using perturbation theory, we shall concentrate henceforth on the calculation
of the ``unperturbed'' heavy-quark meson spectrum. The technical procedure followed to solve
the spectroscopic equations is detailed in
Appendices~\ref{apdxvar} and \ref{apdxlag}.
\section{\label{sec5}Charmonium-like mesons}
The formalism we have developed in the previous sections can be tested in
charmonium-like mesons (heavy mesons containing $c\overline{c}$) where, unlike in the bottomonium-like case, there are
several well-established experimental candidates
for unconventional isoscalar states, presumably containing significant
meson-meson components. In particular, we center on
isoscalar states with masses up to about $4.1$~GeV, for which the relevant
thresholds have very small widths and do not overlap. A list of these
thresholds is shown in Table~\ref{tablist}.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cc}
$M_{1}\overline{M}_{2}$ & $T_{M_{1}\overline{M}_{2}}$ (MeV) \\
\hline
$D\overline{D}$ & $3730$ \\
$D\overline{D}^*(2007)$ & $3872$ \\
$D_{s}^{+}D_{s}^{-}$ & $3937$ \\
$D^*(2007)\overline{D}^*(2007)$ & $4014$ \\
$D_{s}^{+}D_{s}^{*-}$ & $4080$ \\
\end{tabular}
\end{ruledtabular}
\caption{\label{tablist}Low-lying open charm meson-meson thresholds $M_{1}\overline{M}_{2}$.
Threshold masses $T_{M_{1}\overline{M}_{2}}$ from the charmed and charmed
strange meson masses quoted in \cite{PDG20}.}
\end{table}
The possible values of the meson-meson relative orbital angular momentum contributing
to any given set of quantum numbers $J^{PC}$ are shown in Table~\ref{quanum}. Note that
we use the common notation $D_{(s)}$ to refer to charmed as well as to charmed strange
mesons and the shorthand notation $D_{(s)}\overline{D}^*_{(s)}$ for
the meson-meson $C$\nobreakdash-parity eigenstate defined by Eq.~\eqref{Cpm}.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cccc}
$J^{PC}$ & $l_{D_{(s)} \overline{D}_{(s)}}$ & $l_{D_{(s)} \overline{D}_{(s)}^*}$ & $l_{D_{(s)}^{*} \overline{D}_{(s)}^*}$ \\
\hline
$0^{++}$ & $0$ & & 0,\,2 \\
$1^{++}$ & & $0,\,2$ & $2$ \\
$2^{++}$ & $2$ & $2$ & $0,\,2$ \\
$1^{--}$ & $1$ & $1$ & $1,\,3$ \\
\end{tabular}
\end{ruledtabular}
\caption{\label{quanum}Values of $l_{M_1\overline{M}_2}$ corresponding to meson-meson configurations
with definite values of $J^{PC}$. A missing entry means that the particular meson-meson configuration cannot
form a state with the corresponding quantum numbers.}
\end{table}
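The entries of Table~\ref{quanum} follow from combining the parity and $C$\nobreakdash-parity rules of the previous section with angular momentum coupling. They can be cross-checked with a few lines of Python (restricted here to $l\leq3$, which covers all the entries of the table; for $M_1\neq M_2$ both $C$\nobreakdash-parities are available through Eq.~\eqref{Cpm}, so only parity and the coupling condition constrain $l$):
\begin{verbatim}
def allowed_l(J, P, C, j1, j2, P1, P2, same_meson, l_max=3):
    out = set()
    for s in range(abs(j1 - j2), j1 + j2 + 1):  # total meson-meson spin
        for l in range(l_max + 1):              # relative orbital momentum
            if not abs(l - s) <= J <= l + s:
                continue                        # angular momentum coupling
            if P != P1*P2*(-1)**l:
                continue                        # parity rule
            if same_meson and C != (-1)**(l + s):
                continue                        # C-parity, Eq. (Cmesmes)
            out.add(l)
    return sorted(out)

# D: j=0, P=-1;  D*: j=1, P=-1
print(allowed_l(1, +1, +1, 0, 1, -1, -1, same_meson=False))  # D Dbar*, 1++: [0, 2]
print(allowed_l(1, -1, -1, 1, 1, -1, -1, same_meson=True))   # D* Dbar*, 1--: [1, 3]
\end{verbatim}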
In order to calculate the heavy-quark meson bound states we have to fix the
values of the parameters. For the Cornell potential \eqref{CPOT} we use the standard values \cite{Eic94}
\begin{subequations} \label{params}
\begin{align}
\sigma &= 925.6 \text{~MeV/fm}, \\
\chi &= 102.6 \text{~MeV~fm}, \\
m_{c} &= 1840\text{~MeV}
\end{align}
and we choose
\begin{equation}
\beta = 855\text{~MeV}
\end{equation}
\end{subequations}
in order to fit the $2s$ center of gravity. Let us note that
one could alternatively choose to fit the $1s$ or $1p$ centers of
gravity, or to get a reasonable fit to the three of them. Our choice is based on the
assumption that relativistic mass effects in the higher states, which are at
least in part incorporated in $\beta$, are expected to deviate
less from those in the $2s$ states.
We should also mention that the value of the charm quark mass we use
is completely consistent with the one needed to
correctly describe $c\overline{c}$ electromagnetic decays within the Cornell
potential model framework \cite{Bru20}.
The low-lying spectrum from this Cornell potential for $J^{++}$ and
$1^{--}$ isoscalar states is shown in Table~\ref{cortable}.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cccc}
$J^{PC}$ & $nl$ & $M_{c\overline{c}}$ (MeV) & $M_\text{cog}^\text{Expt}$ (MeV)\\
\hline
\multirow{4}*{$1^{--}$} & $1s$ & $3082.5$ & $3068.65\pm0.13$\\
& $2s$ & $3673.2$ & $3674.0\pm0.3$\\
& $1d$ & $3795.8$ & \\
& $3s$ & $4097.0$ & \\
\\
\multirow{2}*{$(0,1,2)^{++}$} & $1p$ & $3510.9$ & $3525.30\pm0.11$\\
& $2p$ & $3953.7$ & \\
\end{tabular}
\end{ruledtabular}
\caption{\label{cortable}Calculated $J^{++}$ and $1^{--}$ charmonium masses, $M_{c\overline
{c}}$, for spectroscopic $nl$ states from the Cornell potential \eqref{CPOT}
with parameters \eqref{params}. Experimental mass
centroids from \cite{PDG20}, $M_\text{cog}^\text{Expt}$, are listed for
comparison.}
\end{table}
For the lowest $J^{++}$ states it is worth remarking, apart from the good average mass description, the
excellent fit to the mass of the lowest $1^{++}$ state, $\chi_{c_{1}}(1p)$
($3510.9$~MeV versus the experimental mass $3510.7$~MeV).
However, an accurate fit of the lowest $(0,2)^{++}$ masses, in particular for
$\chi_{c_{0}}(1p)$, would require the incorporation of
correction terms (e.g.\ spin-spin, spin-orbit, tensor) to the Cornell radial potential. As
for the first excited $J^{++}$ states one could expect a similar situation (the $2s$ states
lie in between the $1p$ and $2p$ ones) in the absence of threshold effects
that we analyse in what follows.
As for the parameters of the mixing potential \eqref{VMIXDEF},
we have to rely on phenomenology since the only lattice information available is
for $b\overline{b}$. We fix them by requiring that our diabatic treatment
fits the mass of some unconventional experimental state lying close below
threshold. In particular, we can use the mass of $\chi_{c1}(3872)$, a
well-established experimental resonance lying just below the $D\overline
{D}^*$ threshold, to infer the possible values of $\Delta_{c\overline
{c}}$ and $\rho_{c\overline{c}}$.
As the crossing of the Cornell potential with the $D\overline{D}^*$
threshold takes place around $r_\text{c}^{D\overline
{D}^*}=1.76$~fm, we conservatively vary
$\rho_{c\overline{c}}$ from $0.1$~fm to $0.8$~fm, this last value
corresponding to almost half of $r_\text{c}^{D\overline{D}^*}$. Then, for
every value of $\rho_{c\overline{c}}$ we get the minimal value of
$\Delta_{c\overline{c}}$ to accurately fit the mass of $\chi_{c1}(3872)$. The calculated values are
listed in Table~\ref{rodel}.
\begin{table}
\begin{ruledtabular}
\begin{tabular}{cc}
$\rho_{c\overline{c}}$ (fm) & $\Delta_{c\overline{c}}$ (MeV) \\
\hline
0.1 & 290 \\
0.2 & 165 \\
0.3 & 130 \\
0.4 & 115 \\
0.5 & 108 \\
0.6 & 104 \\
0.7 & 102 \\
0.8 & 101 \\
\end{tabular}
\end{ruledtabular}
\caption{\label{rodel}Correlated values of the mixing potential parameters giving rise
to a $0^+(1^{++})$ bound state with a mass close below the $D \overline{D}^*$ threshold.}
\end{table}
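For completeness, the crossing radius $r_\text{c}^{D\overline{D}^*}$ quoted above is the solution of $V_\text{C}(r)=T_{D\overline{D}^*}$, which can be obtained with any standard one-dimensional root finder; a minimal sketch, assuming a callable \texttt{V\_C} as in the sketch of Sec.~\ref{angsec}, is:
\begin{verbatim}
from scipy.optimize import brentq

def crossing_radius(V_C, T, r_lo=0.1, r_hi=5.0):
    # radius (fm) at which the quenched QQbar potential meets the
    # threshold mass T (MeV); assumes a single sign change in [r_lo, r_hi]
    return brentq(lambda r: V_C(r) - T, r_lo, r_hi)
\end{verbatim}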
It should be pointed out that large values of $\Delta_{c\overline{c}}$ would
deform the shape of the avoided energy crossings as compared to the one
calculated in lattice QCD for $b\overline{b}$, against our $b\overline{b}$ -- $c\overline{c}$
universality arguments for the shape of the mixing potential. On the
other hand, large values of $\rho_{c\overline{c}}$ would make the
mixing angle between the $c\overline{c}$ and a single $M_1 \overline{M}_2$ threshold,
calculated from Eq.~\eqref{mixangeq},
have an asymptotic behavior in conflict with
the one observed in the lattice under the
natural assumption that this behavior is similar for $b\overline{b}$ and
$c\overline{c}$.
More precisely, unquenched lattice QCD
calculations of the mixing angle \cite{Bal05} show that $\theta$ approaches
$\pi/2$ quite rapidly for $r>r_\text{c}^{M_1\overline{M}_2}$, thus ruling out
a large radial scale for the mixing. Henceforth we use
\begin{subequations}\label{mixparam}
\begin{equation}
\rho_{c\overline{c}}=0.3\text{ fm}
\end{equation}
as this value gives the most accurate asymptotic behavior of the mixing
angle, see Fig.~\ref{mixang}, and consequently
\begin{equation}
\Delta_{c\overline{c}}=130\text{~MeV}.
\end{equation}
\end{subequations}
\begin{figure}
\includegraphics{Mixing_angle}
\caption{\label{mixang}Mixing angle between $c\overline{c}$ and $D \overline{D}^*$ with $\Delta_{c\overline{c}}$=130~MeV,
$\rho_{c\overline{c}}$ = 0.3 fm and Cornell potential parameters \eqref{params}.}
\end{figure}
The resulting mixing potential is drawn in Fig.~\ref{mixpot} for $M_1\overline{M}_2=D\overline{D}^*$. For any
other threshold the only difference comes from the substitution of
the threshold mass.
\begin{figure}
\includegraphics{Mixing_potential}
\caption{\label{mixpot}Mixing potential for $c\overline{c}$ and $D\overline{D}^*$ with $\Delta_{c\overline{c}}$=130~MeV and
$\rho_{c\overline{c}}$ = 0.3 fm.}
\end{figure}
Notice that we have drawn $\abs{V_\text{mix}(r)}$ with no sign prescription for $V_\text{mix}(r)$.
This sign can be reabsorbed as a relative phase between the charmonium and meson-meson components.
For the calculations in this paper a positive sign has been taken. We have checked that
for the observables considered in this article the same results are obtained with a negative sign.
It should be realized though that this might not be the case for other observables.
The calculated spectrum of $J^{++}$ states, containing one $c\overline{c}$
state with $l_{c\overline{c}}=1$ ($1p$ or $2p$), is shown in Table~\ref{charm_jpp_table}.
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{ccdddddd}
$J^{PC}$ & Mass (MeV) & c\overline{c} & D \overline{D} & D \overline{D}^*
& D_s \overline{D}_s & D^* \overline{D}^* & D_s \overline{D}_s^* \\
\hline
\multirow{2}*{$1^{++}$} & 3510.0 & 100 \% & & & & & \\
& 3871.7 & 1 \% & & 99 \% & & & \\
\\
\multirow{2}*{$0^{++}$} & 3509.1 & 100 \% & & & & & \\
& 3920.4 & 59 \% & & & 37 \% & 4 \% & \\
\\
\multirow{2}*{$2^{++}$} & 3509.6 & 100 \% & & & & & \\
& 3933.5 & 86 \% & & & 7 \% & 7 \% & \\
\end{tabular}
\end{ruledtabular}
\caption{\label{charm_jpp_table}Calculated masses, $c\overline{c}$ and meson-meson probabilities for $J^{++}$ charmonium-like states. A missing entry
means that the corresponding component gives negligible (i.e.\ inferior to $1\%$) or no contribution to the state.}
\end{table*}
It is illustrative to compare these results with the $c\overline{c}$ masses in
Table~\ref{cortable} obtained with the Cornell potential. A glance at these tables
makes clear that the presence of the thresholds gives rise to attraction in
the sense that the resulting masses are reduced with respect to the corresponding
Cornell $c\overline{c}$ masses. For the lowest-lying $0^+(J^{++})$ states
($J=0,1,2$) there is a very small mass difference indicating an almost
negligible attraction for these states. This is understood since the thresholds
are far above in energy ($\geq200$~MeV) so that no significant mixing occurs
(less than $1\%$ meson-meson probability).
The situation is completely altered for the first excited $0^+(J^{++})$ states. Thus, the fitting of
the first excited $1^{++}$ resonance, $\chi_{c_{1}}(3872)$,
with a measured mass of $3871.69\pm0.17$~MeV, requiring a mass
reduction of $81$~MeV with respect to the Cornell $c\overline{c}$ mass,
implies a very strong mixing, $99\%$ of $D\overline{D}^*$ component,
whereas for the $0^{++}$ and $2^{++}$ states the predicted mixing is about
$40\%$ (mainly from $D_{s}\overline{D}_{s})$ and $15\%$ (shared by
$D_{s}\overline{D}_{s}$ and $D^*\overline{D}^*)$ respectively, with
corresponding mass reductions of $33$~MeV and $20$~MeV.
It is remarkable that these $(0,2)^{++}$ mass predictions are in
complete agreement with data regarding their positions with respect to
the $D_{s}\overline{D}_{s}$ threshold, both below it. Moreover, their calculated
numerical values are quite close to the measured ones. So, the calculated
$2^{++}$ mass, $3933.5$~MeV, is very close to that of the experimental
resonance $\chi_{c_{2}}(3930)$: $3927.2\pm2.6$~MeV. And the $0^{++}$
calculated mass, $3920.4$~MeV, is consistent with the ones of the experimental
candidates: $\chi_{c_{0}}(3860)$, with a measured mass of
$3862^{+26+40}_{-32-13}$~MeV, and $X(3915)$ with a measured mass of $3918.4\pm1.9$~MeV,
although in this last case the assignment to a $2^{++}$ state cannot be
completely ruled out, see \cite{PDG20} and references therein. This suggests
that further mass corrections for these states, such as the ones due to
spin-dependent terms in the $c\overline{c}$ potential,
or to one pion exchange in the meson-meson potential,
or those taking into account the effect of the lower threshold $D\overline
{D}$, or the deviations from the assumption of the same
values of the mixing potential parameters for all the thresholds,
are either small and might be implemented perturbatively, or have been
partially taken into account through the effectiveness of the
parameters of the mixing potential.
It should also be emphasized that our nonperturbative formalism provides us
with the meson wave functions in terms of their $c\overline{c}$ and
meson-meson components.
For $\chi_{c_{1}}(3872)$ the radial $c\overline{c}$
and $D\overline{D}^*$ ($l_{D\overline{D}^*}=0,2$) wave function
components are plotted in Fig.~\ref{xc13872}.
\begin{figure}
\includegraphics{X_c1_3872}
\caption{Radial wave function of the calculated $0^+(1^{++})$ state with a mass of $3871.7$~MeV.
$c\overline{c}(2\,{^{3}\!p}_{1})$, $D \overline{D}^*(l_{D \overline{D}^*}=0)$
and $D \overline{D}^*(l_{D \overline{D}^*}=2)$ components are drawn with a solid, dashed and dotted line respectively.}
\label{xc13872}
\end{figure}
A look at this figure makes clear the prevalence of the $D\overline{D}^*$
channel with $l_{D\overline{D}^*}=0$ for
distances beyond $2$~fm. As the estimated Cornell rms radius for $D$ is about
$0.54$~fm, we may conclude that $\chi_{c_{1}}(3872)$, with a
calculated rms radius of $26.17$~fm, is at large distances a loose hadromolecular
state. At short distances, though, the $(2\,{^{3}\!p}_{1})$
$c\overline{c}$ component, with an rms radius of $1.01$~fm, plays a role at
least as prominent as the $D\overline{D}^*$ one, see Fig.~\ref{xc13872}. These
features are quite in line with the indications from phenomenology requiring a
$c\overline{c}$ component to give proper account of short distance properties.
For the calculated $0^{++}$ state the radial wave function is drawn in
Fig.~\ref{xc03915}.
\begin{figure}
\includegraphics{X_c0_3915}
\caption{\label{xc03915}Radial wave function of the calculated $0^+(0^{++})$ state with a mass
of $3920.4$~MeV. $c\overline{c}(2\,{^{3}\!p}_{0})$, $D_s \overline{D}_s(l_{D_s \overline{D}_s}=0)$,
$D^* \overline{D}^*(l_{D^* \overline{D}^*}=0)$ and $D^* \overline{D}^*(l_{D^* \overline{D}^*}=2)$
components are drawn with a solid, dashed, dotted and dash-dotted line respectively.}
\end{figure}
As can be checked, the wave function, with an rms radius of $1.26$~fm, is made mainly of $c\overline{c}$ and
$D_{s}\overline{D}_{s}$ with a $59\%$ and $37\%$ probability respectively.
This indicates a dominant
$D\overline{D}$ strong decay mode from $c\overline{c}$ as it is experimentally
the case for $\chi_{c_{0}}(3860)$. On the other hand, a $J/\psi\,\omega$ decay
mode may get a significant contribution from $D_{s}\overline{D}_{s}$ since it
is OZI allowed through the small $s\overline{s}$ content of $\omega$. This
could cause this mode to also be a dominant one, as is experimentally the
case for $X(3915)$. Hence, it could be that $\chi_{c_{0}}(3860)$ and $X(3915)$
are just the same resonance observed through two different decay modes.
As for the calculated $2^{++}$ state, the wave function, with an rms radius of
$1.06$~fm, is plotted in Fig.~\ref{xc23930}. It is mostly that of the $c\overline{c}$
component. This is in accord with a very dominant $D\overline{D}$ strong decay
mode, as is experimentally the case for $\chi_{c_{2}}(3930)$.
\begin{figure}
\includegraphics{X_c2_3930}
\caption{\label{xc23930}Radial wave function of the calculated $0^+(2^{++})$ state with a
mass of $3933.5$~MeV. $c\overline{c}(2\,{^{3}\!p}_{2})$, $D_s \overline{D}_s(l_{D_s \overline{D}_s}=2)$,
$D^* \overline{D}^*(l_{D^* \overline{D}^*}=0)$ and $D^* \overline{D}^*(l_{D^* \overline{D}^*}=2)$
components are drawn with a solid, dashed, dotted and dash-dotted line respectively.}
\end{figure}
Certainly these qualitative arguments on the dominant strong decay modes
should be supported by reliable and predictive quantitative calculations.
As mentioned before, the
development of a consistent formalism for the calculation of the decay widths
to open flavor meson-meson states, which is out of the scope of this article,
is in progress. One should keep in mind though that the dearth of current
detailed quantitative decay data for comparison will be a serious drawback for
testing it. We strongly encourage experimental efforts along this line.
Regarding electromagnetic radiative transitions, although important
progress for the accurate calculation of decays from the $c\overline{c}$
component has been reported \cite{Bru20}, a reliable and consistent
calculation incorporating the meson-meson contribution as well is lacking. We
encourage a theoretical effort along this line.
One can do better, as we show next, for leptonic decays from the low-lying $1^{--}$ states since the decay widths
depend on the wave function at the origin and the contribution from meson-meson components is suppressed as
they are not in $s$-wave, see Table~\ref{quanum}.
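We recall that in a potential model the leptonic width of a $1^{--}$ state is given by the Van Royen--Weisskopf formula (QCD radiative corrections, which largely cancel in the ratios considered below, are omitted),
\begin{equation}
\Gamma\pqty{\psi\rightarrow e^{+}e^{-}} = \frac{4\alpha^{2}e_{c}^{2}\abs{R_{\psi}(0)}^{2}}{M_{\psi}^{2}},
\end{equation}
where $e_{c}=2/3$ is the charm quark charge in units of the electron charge, so that only the ratios of squared radial wave functions at the origin and of squared masses enter the comparisons below.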
The calculated $1^{--}$ spectrum of states is listed in Table~\ref{charm_omm_table}.
\begin{table*}
\begin{ruledtabular}
\begin{tabular}{ccdddddd}
$J^{PC}$ & Mass (MeV) & c\overline{c} & D \overline{D} & D \overline{D}^*
& D_s \overline{D}_s & D^* \overline{D}^* & D_s \overline{D}_s^* \\
\hline
\multirow{4}*{$1^{--}$} & 3082.4 & 100 \% & & & & & \\
& 3664.2 & 95 \% & 4 \% & 1 \% & & & \\
& 3790.2 & 97 \% & & 2 \% & 1 \% & & \\
& 4071.0 & 64 \% & & & & & 36 \% \\
\end{tabular}
\end{ruledtabular}
\caption{\label{charm_omm_table}Calculated masses, $c\overline{c}$ and meson-meson probabilities for
$1^{--}$ charmonium-like states. A missing entry means that the corresponding component gives negligible
(i.e.\ inferior to $1\%$) or no contribution to the state.}
\end{table*}
Again, a comparison with the $c\overline{c}$ masses in Table~\ref{cortable} makes clear
that the presence of the thresholds gives rise to attraction. As was the
case for $J^{++}$, the lowest state, lying far below the lowest threshold, has
no mixing at all, being the $1s$ $c\overline{c}$ state. A rather small mixing
is present for the next two higher states that can be mostly assigned to the
$2s$ $(95\%)$ and $1d$ $(97\%)$ $c\overline{c}$
states respectively. It is worth mentioning that for the $1d$ state with a
Cornell $c\overline{c}$ mass of $3795.8$~MeV, the $D\overline{D}$ threshold
lying $66$~MeV below does not produce enough attraction to bring the state below threshold.
The first state with a significant mixing, $36\%$ of $D_{s}\overline{D}_{s}^*$,
is predicted at $4071$~MeV and contains $60\%$ of $3s$
$c\overline{c}$ and $4\%$ of $2d$ $c\overline{c}$ as well. Its wave function
is drawn in Fig.~\ref{psi4040}.
\begin{figure}
\includegraphics{psi_4040}
\caption{\label{psi4040}Radial wave function of the calculated $0^{-}(1^{--})$ state with
a mass of $4071$~MeV. $c\overline{c}(3\,{^{3}\!s}_{1})$, $c\overline{c}(2\,{^{3}\!d}_{1})$
and $D_s \overline{D}_s^*(l_{D_s \overline{D}_s^*}=1)$ components are drawn with a solid,
dashed and dotted line respectively.}
\end{figure}
In this case the vicinity of the $D_{s}\overline{D}_{s}^*$ threshold at
$4080$~MeV to the $3s$ $c\overline{c}$ Cornell mass at $4097$~MeV produces
sufficient attraction to bring the state below threshold, in agreement with
data under its assignment to the $\psi(4040)$ resonance with a
measured mass of $4039\pm1$~MeV. Furthermore, the expected dominant decay
modes, ($D\overline{D},D\overline{D}^*,D_{s}\overline{D}_{s},D^*\overline{D}^*$) from $c\overline{c}$, and
$D_{s}\overline{D}_{s}\,\gamma$ from $D_{s}\overline{D}_{s}^*$, are in
perfect accord with the ones observed from $e^{+}e^{-}\rightarrow \text{hadrons}$.
As for the well-measured leptonic width
\begin{equation}
\left(\Gamma\left(\psi(4040)\rightarrow e^{+}e^{-}\right)\right)
_\text{Expt}=0.86\pm0.07\text{~keV},
\end{equation}
we can confidently predict the ratios
\begin{subequations}\label{thratios}
\begin{equation}
\frac{\Gamma_{\psi(4040)\rightarrow e^{+}e^{-}}^\text{Theor}}{\Gamma_{\psi(1s)\rightarrow e^{+}e^{-}}^\text{Theor}}
=\frac{\abs{R_{\psi(4040)}(0)}^{2}}{\abs{R_{\psi(1s)}(0)}^{2}}\frac{M_{\psi(1s)}^{2}}{M_{\psi(4040)}^{2}}\\
\approx 0.18
\end{equation}
and
\begin{equation}
\frac{\Gamma_{\psi(4040)\rightarrow e^{+}e^{-}}^\text{Theor}}{\Gamma_{\psi(2s)\rightarrow e^{+}e^{-}}^\text{Theor}}
=\frac{\abs{R_{\psi(4040)}(0)}^{2}}{\abs{R_{\psi(2s)}(0)}^{2}}\frac{M_{\psi(2s)}^{2}}{M_{\psi(4040)}^{2}}\\
\approx 0.43
\end{equation}
\end{subequations}
to be compared to
\begin{subequations}\label{expratios}
\begin{equation}
\frac{\Gamma_{\psi(4040)\rightarrow e^{+}e^{-}}^\text{Expt}}{\Gamma_{\psi(1s)\rightarrow e^{+}e^{-}}^\text{Expt}}
=0.15\pm0.03
\end{equation}
and
\begin{equation}
\frac{\Gamma_{\psi(4040)\rightarrow e^{+}e^{-}}^\text{Expt}}{\Gamma_{\psi(2s)\rightarrow e^{+}e^{-}}^\text{Expt}}
=0.37\pm0.07 .
\end{equation}
\end{subequations}
Hence, our results agree with data within the experimental intervals. The
reason for this agreement has to do with the reduced probability of the $3s$
$c\overline{c}$ component, $60\%$, induced by the mixing with the
$D_{s}\overline{D}_{s}^*$ threshold. This mixing is also responsible for
the $4\%$ of $2d$ $c\overline{c}$ component. This small (large) $2d$ ($3s$)
probability could be increased (decreased) if a tensor interaction were
incorporated as a correction term to the Cornell potential. Maybe the bias we
observe in our results, both agreeing with the maximum allowed experimental
values, is an indication in this sense. In any case a modest additional
probability reduction of the $3s$ $c\overline{c}$ component should be expected.
It is worth mentioning that the explanation of the leptonic width for
$\psi(4040)$ has been linked in the literature to that of
$\psi(4160)$ through a very significant $s$-$d$ mixing
\cite{Bad09}. Our results do not support this idea. Instead, the $D_{s}\overline{D}_{s}^*$ -- $c\overline{c}(3s)$
mixing appears to be the main physical mechanism underlying the $\psi(4040)$ decay to
$e^{+}e^{-}$.
Unfortunately, at the current stage of our diabatic development
we cannot properly evaluate $\psi(4160)$, the main reason being
that the dominant Cornell $2d$ $c\overline{c}$ state lies only $100$~MeV below
the first $s$-wave $1^{--}$ threshold, $D\overline{D}_{1}$, which is composed
of two overlapping thresholds, $D\overline{D}_{1}(2420)$ and
$D\overline{D}_{1}(2430)$, the latter with a large width.
Quite presumably this double threshold gives a significant contribution by
itself to the leptonic width of $\psi(4160)$.
This current limitation applies as well to the description of unconventional
states with masses above $4.1$~GeV such as $\psi(4260)$ lying
close below the $D\overline{D}_{1}$ double threshold, or $\psi(4360)$ and $\psi(4415)$ lying close below a multiple
threshold at $4429$~MeV. The same limitation applies for $J^{++}$ states. Work along this line is in progress.
\section{\label{sec6}Summary and conclusions}
A general formalism for a unified description of conventional and
unconventional heavy-quark meson states has been developed and successfully
applied to isoscalar $J^{++}$ and $1^{--}$ charmonium-like states with masses
below $4.1$~GeV.
The formalism adapts the diabatic approach, widely used in molecular physics
to tackle the configuration mixing problem, to the study of heavy-quark meson
states involving quark-antiquark as well as meson-meson components. A great
advantage of using this approach, as compared to the Born-Oppenheimer ({\BO})
approximation commonly used for heavy-quark mesons, is that the bound states
are expanded in terms of $Q\overline{Q}$ and meson-meson configurations
instead of the mixed configurations that correspond to the ground and
excited states of the light fields. Then instead of being forced to use a
single channel approximation to solve the bound state problem as in {\BO},
which in practice is equivalent to neglecting the configuration mixing, one
can write a treatable multichannel {\schr} equation where the
interaction between configurations is incorporated through a diabatic
potential matrix. Moreover, the diagonal and off-diagonal elements of this
potential matrix can be directly related to the static energies obtained from
\textit{ab initio} quenched (only $Q\overline{Q}$ or meson-meson
configuration) and unquenched ($Q\overline{Q}$ and meson-meson configurations)
lattice calculations. This connection defines the diabatic
approach in QCD.
It is worth emphasizing that this approach also goes beyond the incorporation
of hadron loop corrections to the {\BO} scheme that have sometimes been used in
the literature to deal with unconventional charmonium-like mesons. Indeed, the
diabatic bound state wave functions, given in terms of quark-antiquark and
meson-meson components, allow for a complete nonperturbative evaluation of
observable properties.
This theoretical framework has been tested in the charmonium-like meson sector
where there is compelling evidence of the existence of mixed-configuration
states, in particular the very well-established $0^{+}(1^{++})$
resonance $\chi_{c_{1}}(3872)$ that we use to fix our
parametrization of the mixing potential.
Although a complete (at all energies) spectral description would require
additional theoretical refinements, such as the incorporation of
threshold widths, the results obtained for states with mass below $4.1$~GeV,
for which the significant thresholds are very narrow, are encouraging. All the
mass values are well reproduced and their locations with respect to the
thresholds correctly predicted, making clear the $c\overline{c}$ -- threshold
attraction. This points to the diabatic
approach as an appropriate framework for a unified and complete
nonperturbative description of heavy-quark meson states.
\begin{acknowledgments}
This work has been supported by MINECO of Spain and EU FEDER Grant No.~FPA2016-77177-C2-1-P, by SEV-2014-0398,
by EU Horizon 2020 Grant No.~824093 (STRONG-2020) and by PID2019-105439GB-C21. R.~B. acknowledges a FPI
fellowship from MICIU of Spain under Grant No.~BES-2017-079860.
\end{acknowledgments}
The Yule distribution \cite{harding, yule} is a fundamental probability model of tree topologies, also called ``histories'', used in evolutionary analyses. Histories are full binary rooted trees, with a ranking of internal nodes that divides the tree in different layers (Fig. \ref{layers}A). The probabilistic features of Yule distributed histories have been the subject of numerous investigations, with particular interest in combinatorial properties that affect the frequency spectrum of mutations in population genetic tree models. A particular focus is on the length distribution of tree branches. Branch length can be seen as a discrete parameter---when only the number of tree layers spanned by a branch is considered---or as a time-related quantity---when each tree layer is in turn considered with a length given by a continuous random variable. In the latter case, histories are called ``coalescent'' trees. While the branch length of coalescent trees has been widely studied (see, e.g., \cite{blum,caliebe,dahmer,diehl,freund,fu,janson}), the discrete length of the edges of a random history has received less attention.
In this paper, extending previous results \cite{DisantoAndWiehe}, we investigate the distribution of the different lengths of the external branches---i.e., those branches ending with a leaf---of random histories of given size selected under the Yule model. External branch length is an important parameter to study as it relates to singleton mutations in the site frequency spectrum of population genetic trees. Denoting by $\ell_k$ the $k$th largest length of an external branch in a Yule distributed random history of $n$ leaves, our main finding is that, for every $k \geq 1$, the rescaled variable $\frac{n-\ell_k}{\sqrt{n/2}}$ follows asymptotically a $\chi$-distribution with $2k$ degrees of freedom, with convergence of all moments~(Theorem~\ref{teo}).
The paper is organized as follows. We introduce terminology and some useful properties of histories in Section~\ref{sec:2}, showing in particular that external branch lengths in random histories can also be analyzed in terms of peaks of random permutations. In Section \ref{long1}, we refine results of \cite{DisantoAndWiehe} finding a closed formula for the probability of the length, $\ell_1$, of the longest external branch in a random history of given size $n$ and a recurrence for calculating the probability of the $k$th largest length, $\ell_k$, of an external branch. For increasing $n$, the asymptotic distribution of the variables $\ell_1, \ell_2, ..., \ell_k, ...$ is finally examined in Section \ref{sec:4}.
\section{Yule histories, external branches and non-peaks of permutations}\label{sec:2}
For a given positive integer $n$, a {\it history} \cite {rosenberg} of size $n$ is a full binary rooted tree with $n$ leaves and $n-1$ ranked internal nodes (Fig. \ref{layers}A). The rank of each internal node is defined by an integer label in $[1,n-1]$ bijectively associated with the node. The labeling decreases along any path from the root toward a leaf of the tree, determining a temporal ordering of the coalescent events---the merging of two edges---that characterize the branching structure of the tree. In a history of size $n$, there are $2n-1$ edges, or {\it branches}. A branch connecting an internal node and a leaf is said to be an {\it external} branch. The {\it length} of a branch is the difference between the rank of the nodes it connects. If the branch is external, then its length is simply the rank of its parent node.
In Population Genetics, histories are tree structures that represent the evolution of individual genes from a common ancestor. Conditioning on a given history, an infinite sites model \cite{NielsenAndSlatkin} produces a set of mutations across the genes associated with the leaves of the tree.
Roughly speaking, mutations occur as random events along the branches of the history (Fig. \ref{layers}B), with each branch containing a number of mutations that depends on its length, and with each mutation affecting only the set of gene copies descended from the branch it belongs to.
In particular, a history with one or more ``long'' external branches will be associated with a biological scenario in which one or more gene copies will possess a ``large'' number of singleton mutations---i.e., mutations affecting only one individual.
A random history of size $n$ selected under a proper null model distribution describes the evolutionary relationships of $n$ individual genes randomly sampled from a population under neutral evolution, and the length of the longest external branches in the random history relates to the largest number of singleton mutations that characterize single individuals in the sample.
In this paper, we focus on distributive properties of external branch length for random histories considered under a well-known model of neutral evolution. More precisely, we will study external branch lengths ordered by size over random histories of size $n$ selected under the {\it Yule} probability model \cite{harding, yule}, or, equivalently, over ordered histories of size $n$ selected uniformly at random.
\begin{figure}[tpb]
\begin{center}
\begin{tabular}{c c c}
\includegraphics*[scale=0.68,trim=0 0 0 0]{f1.pdf}
\end{tabular}
\end{center}
\vspace{-.7cm}
\caption{{\small Histories and gene sequences. {\bf (A)} A history of size $n=8$. The ranking of internal nodes decreases along any path going from the root to the leaves of the tree. The length of an external branch is the rank of its parent node. The different lengths of the external branches ordered by size are $\ell_1 = 7 > \ell_2 = 4 > \ell_3 = 3 > \ell_4 = 2 > \ell_5 = 1$.
{\bf (B)} The history depicted in A with leaves associated with genes represented as binary sequences with ancestral alleles of type $0$ and derived alleles of type $1$. A mutation (white circle) affects only the gene sequences associated with the leaves descending from the branch where it occurs. In this example, there is a mutation for each layer of the tree: the $i$th mutation (looking from top to bottom) changes the allele at the $i$th locus (position) of the gene.
}} \label{layers}
\end{figure}
An {\it ordered} history of size $n$ is a plane embedding of a history of size $n$ in which subtrees carry a left-right orientation. The number of ordered histories of size $n$ is thus $(n-1)!$, and the Yule distribution over the set of histories of size $n$ is induced by the uniform distribution over the set of ordered histories of size $n$ by summing the probabilities $1/(n-1)!$ of the ordered histories sharing the same underlying (un-ordered) history \cite{anconfig}. In particular, if $\text{c}(t)$ is the number of cherries (i.e., subtrees of size $2$) in a history $t$ of $n$ leaves, then $2^{n-1-\text{c}(t)}$ is the number of different plane embeddings of $t$, and therefore $2^{n-1-\text{c}(t)}/(n-1)!$ is the Yule probability of the history $t$ \cite{rosenberg}.
A series of combinatorial results on the lengths of external branches of uniformly distributed ordered histories (or Yule distributed histories) has been obtained in
\cite{DisantoAndWiehe} in relation to a study \cite{peaks} of the number of permutations of fixed size with a given set of peak entries, where the entry $\pi(i)$ is a {\it peak} in the permutation $( \pi(1), ..., \pi(i), ..., \pi(n) )$ when $i \neq 1$, $i \neq n$ and $\pi(i-1) < \pi(i) > \pi(i+1)$. Indeed, there exists a well-known bijection \cite{goulden} that associates an ordered history $t$ of size $n$ with a permutation $\pi_t$ of the first $n-1$ positive integers. The mapping $t \rightarrow \pi_t$ can be described recursively by setting $\pi_t = (\pi_{t_L}, r(t), \pi_{t_R})$, where $r(t)$ is the (label of the) root of $t$, and $t_L, t_R$ are respectively the left and right root subtrees of $t$ (if any). In particular, ordering by size the different lengths $\ell_1 > \ell_2 > ... > \ell_k > ...$ of the external branches of $t$, the $k$th length, $\ell_k$, is easily seen to correspond to the $k$th largest non-peak entry in the permutation $\pi_t$.
For example, if $t$ is the ordered history of size $n=8$ depicted in Fig. \ref{layers}, then $\pi_t = (2,6,4,5,3,1,7)$ has the following non-peak entries: $2, 4, 3, 1, 7$, which correspond to the different lengths $\ell_1 = 7 > \ell_2 = 4 > \ell_3 = 3 > \ell_4 = 2 > \ell_5 = 1$ of the external branches of $t$.
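This correspondence is easy to check by machine. A minimal Python sketch extracting the ordered non-peak entries of a permutation, applied to the example above, is:
\begin{verbatim}
def nonpeaks_desc(p):
    # non-peak entries of p, in decreasing order
    peaks = {p[i] for i in range(1, len(p) - 1)
             if p[i-1] < p[i] > p[i+1]}
    return sorted((v for v in p if v not in peaks), reverse=True)

print(nonpeaks_desc((2, 6, 4, 5, 3, 1, 7)))  # [7, 4, 3, 2, 1]
\end{verbatim}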
By using the correspondence with non-peak entries of permutations, in the next section we calculate the probability of the variable $\ell_k$ in an ordered history of size $n$ selected uniformly at random.
\section{The probability of the $k$th external branch length}\label{long1}
Given an ordered history $t$ of size $n$, consider the different external branch lengths of $t$ ordered by size as $\ell_1 > \ell_2 > ... > \ell_k > ...$, where
$\ell_k \leq n-k$.
As observed above, the value of $\ell_k$ corresponds to the $k$th largest non-peak entry in the associated permutation $\pi_t$. In this section, we study the number $h_{n}(\ell_1=s_1, \ell_2=s_2, ..., \ell_k= s_k)$ of ordered histories of size $n$ in which $\ell_j = s_j$ for $j=1, ..., k$, which determines the probability $p_n(\ell_1=s_1, \ell_2=s_2, ..., \ell_k= s_k) = h_{n}(\ell_1=s_1, \ell_2=s_2, ..., \ell_k= s_k)/(n-1)!$.
We start our calculations by using a result of \cite{peaks} for the number $\Pi_n(Q)$ of permutations of size $n \geq 3$ with peak entries matching the elements of a given set $Q \subseteq [3,n]$.
Fix $s_1, s_2, ..., s_{k-1}, s_k$ such that $n \geq s_1 > s_2 > ... > s_{k-1} > s_k$, and let $Z$ be a subset of the integers in the interval $[3,s_k-1]$. Then, by replacing $S=Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1]$ and $K = n - s_1$ in Lemma 3.3 of \cite{peaks} (whose lowercase $k$ corresponds to the capital $K$ here), we find
\begin{eqnarray}\nonumber
&&\Pi_n(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1+1,n]) \\\nonumber
&=& \Pi_n(S \cup [n-K+1,n])
= 2(K+1) \Pi_{n-1}(S \cup [n-K,n-1]) + K (K+1) \Pi_{n-2}(S \cup [n-K,n-2]) \\\nonumber
&=& 2(n-s_1+1) \Pi_{n-1}(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1,n-1]) \\\nonumber
&& + (n-s_1)(n-s_1+1) \Pi_{n-2}(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1,n-2]).
\end{eqnarray}
If we sum both sides of the latter equation over the possible subsets $Z$ of $[3,s_k-1]$, then we obtain
\begin{eqnarray}\label{sam}
&&\sum_Z \Pi_n(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1+1,n]) \\\nonumber
&=& 2(n-s_1+1) \, \sum_Z \Pi_{n-1}(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1,n-1]) \\\nonumber
&+& (n-s_1)(n-s_1+1) \, \sum_Z \Pi_{n-2}(Z \cup [s_k+1,s_{k-1}-1] \cup [s_{k-1}+1,s_{k-2}-1] \cup ... \cup [s_2+1,s_1-1] \cup [s_1,n-2]),
\end{eqnarray}
where the first sum
counts the permutations of size $n$ in which the largest non-peak entry is $\ell_1=s_1$, the second largest non-peak entry is $\ell_2=s_2$, ..., and the $k$th largest non-peak entry is $\ell_k=s_k$.
Similarly, the second and third sums
count respectively the permutations of size $n-1$ and $n-2$ in which $\ell_1=s_2, \ell_2=s_3$, ..., and $\ell_{k-1}=s_{k}$.
Note that when we set $k=1$ and $s_1=s$, we have $S=Z\subseteq [3,s-1]$ and the calculation above yields
\begin{equation} \label{sami}
\sum_Z \Pi_n(Z \cup [s+1,n]) = 2(n-s+1) \, \sum_Z \Pi_{n-1}(Z \cup [s,n-1]) + (n-s)(n-s+1) \, \sum_Z \Pi_{n-2}(Z \cup [s,n-2]),
\end{equation}
where the first sum
counts the permutations of size $n$ in which the largest non-peak entry is $\ell_1=s$, while the second and third sums
count respectively the permutations of size $n-1$ and $n-2$ in which the largest non-peak entry is strictly smaller than $s$, that is, $\ell_1 < s$. By rewriting (\ref{sam}) and (\ref{sami}) in terms of ordered histories, we find
\begin{eqnarray}\nonumber
h_{n+1}(\ell_1=s_1, \ell_2=s_2, ..., \ell_k= s_k) &=& 2(n-s_1+1) \, h_{n}(\ell_1=s_2, \ell_2=s_3, ..., \ell_{k-1}=s_{k}) \\\label{joint1}
&& + (n-s_1)(n-s_1+1) \, h_{n-1}(\ell_1=s_2, \ell_2=s_3, ..., \ell_{k-1}=s_{k})
\end{eqnarray}
and
\begin{equation}\label{sd}
h_{n+1}(\ell_1=s) = 2(n-s+1) \, h_n(\ell_1 < s) + (n-s)(n-s+1) \, h_{n-1}(\ell_1<s),
\end{equation}
where $h_i(\ell_1 < s) \equiv \sum_{j < s} h_i(\ell_1 = j)$.
Because $h_{n+1}(\ell_1=s) = h_{n+1}(\ell_1 < s+1) - h_{n+1}(\ell_1 < s)$, Eq. (\ref{sd}) yields the recurrence
$h_{n+1}(\ell_1 < s+1) = h_{n+1}(\ell_1<s) + 2(n-s+1) \, h_n(\ell_1 < s) + (n-s)(n-s+1) \, h_{n-1}(\ell_1<s),$
which, by replacing $n+1$ by $n$ and $s+1$ by $s$, reads as
\begin{equation}\label{ress}
h_{n}(\ell_1 < s) = h_{n}(\ell_1<s-1) + 2(n-s+1) \, h_{n-1}(\ell_1 < s-1) + (n-s)(n-s+1) \, h_{n-2}(\ell_1<s-1),
\end{equation}
where $h_n(\ell_1<s) = 0$ if $s = \lceil n/2 \rceil$ ($\ell_1$ is at least $\lceil n/2 \rceil$), and $h_n(\ell_1<s) = (n-1)!$ if $s = n$ ($\ell_1$ is at most $n-1$).
In particular, when $\lceil n/2 \rceil \leq s \leq n \geq 3$, we have
\begin{equation}\label{p4}
h_n(\ell_1 < s)
= \frac{(s-1)! \, (s-2)! \, (2s-n) \, (2s-n-1)}{(2s-n)!}
\end{equation}
as the right-hand side---say $r(n,s)$---of the latter equation satisfies the same recurrence (\ref{ress}) given for $h_{n}(\ell_1 < s)$. Indeed, $r(n,\lceil n/2 \rceil) = 0$ and $r(n,n) = (n-1)!$. Furthermore, assuming $\lceil n/2 \rceil < s < n$, a simple calculation shows that $r(n,s) = r(n,s-1) + 2(n-s+1) \, r(n-1,s-1) + (n-s)(n-s+1) \, r(n-2,s-1)$, where we note that all the factorials in $r(n,s-1), r(n-1,s-1)$, and $r(n-2,s-1)$ are well defined being of the form $m!$ with $m \geq 0$.
The next proposition summarizes our enumerative results from a probability point of view.
\begin{pro}\label{firstprop}
Let $n\geq 3$. If $p_n(\ell_1=s)$ denotes the probability of $\ell_1=s$ in an ordered history of size $n$ selected uniformly at random, then
\begin{equation}\label{prop1a}
p_n(\ell_1=s)
= \frac{(s-1)! (s-2)! (4 n s + s - n^2 - n -3 s^2)}{(2s-n)! \, (n-1)!},
\end{equation}
where $\lceil n/2 \rceil \leq s \leq n-1$. Furthermore, the joint probability $p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k)$ of $\ell_1=s_1, \ell_2=s_2$, ..., and $\ell_k = s_k$ in an ordered history of size $n$ selected uniformly at random satisfies the recurrence
\begin{eqnarray} \label{joint2}
p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k) &=& \frac{2(n - s_1)}{n-1} p_{n-1}(\ell_1=s_2, \ell_2=s_3,..., \ell_{k-1}=s_k) \\\nonumber
&& + \frac{(n - s_1)(n - s_1 - 1)}{(n-1)(n-2)} p_{n-2}(\ell_1=s_2, \ell_2=s_3,..., \ell_{k-1}=s_k),
\end{eqnarray}
with initial condition given by (\ref{prop1a}).
\end{pro}
\noindent \emph{Proof.}\ Equation (\ref{prop1a}) follows from (\ref{p4}) as $p_n(\ell_1=s) = [h_n(\ell_1<s+1)-h_n(\ell_1<s)]/(n-1)!$. The recurrence in (\ref{joint2}) is obtained by replacing $n+1$ by $n$ in (\ref{joint1}) and dividing both sides of the resulting equation by $(n-1)!$. {\quad\rule{1mm}{3mm}\,}
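Formula (\ref{prop1a}) can be verified by brute force for small $n$, comparing with the empirical distribution of the largest non-peak entry over all $(n-1)!$ permutations; a Python sketch of such a check is:
\begin{verbatim}
from itertools import permutations
from math import factorial

def p1_formula(n, s):   # Eq. (prop1a)
    return (factorial(s-1)*factorial(s-2)*(4*n*s + s - n*n - n - 3*s*s)
            / (factorial(2*s - n)*factorial(n - 1)))

def largest_nonpeak(p):
    peaks = {p[i] for i in range(1, len(p) - 1)
             if p[i-1] < p[i] > p[i+1]}
    return max(v for v in p if v not in peaks)

n = 8
counts = {}
for p in permutations(range(1, n)):   # one per ordered history of size n
    s = largest_nonpeak(p)
    counts[s] = counts.get(s, 0) + 1
for s in range(-(-n//2), n):          # ceil(n/2) <= s <= n-1
    assert counts.get(s, 0) == round(p1_formula(n, s)*factorial(n - 1))
\end{verbatim}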
Summing the joint probability $p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k)$ over the possible values of $\ell_1,...,\ell_{k-1}$ yields, for $k \geq 2$, the probability of $\ell_k = s_k$ in a random ordered history of $n$ leaves:
\begin{equation}\label{summa}
p_n(\ell_k=s_k) = \sum_{s_1=s_k+k-1}^{n-1} \sum_{s_2=s_k+k-2}^{s_1-1} ... \sum_{s_i=s_k+k-i}^{s_{i-1}-1} ... \sum_{s_{k-1}=s_k+1}^{s_{k-2}-1} p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k).
\end{equation}
For instance, if $k=2$, then we obtain
\begin{eqnarray}\label{prop1b}
p_n(\ell_2=s_2) &=& \sum_{s_1=s_2+1}^{n-1} p_n(\ell_1=s_1, \ell_2=s_2) \\\nonumber
&=&\sum_{s_1=s_2+1}^{n-1} \frac{2(n - s_1)}{n-1} p_{n-1}(\ell_1=s_2) + \frac{(n - s_1)(n - s_1 - 1)}{(n-1)(n-2)} p_{n-2}(\ell_1=s_2) \\\nonumber
&=& \frac{2 p_{n-1}(\ell_1=s_2)}{n-1} \sum_{s_1=s_2+1}^{n-1} (n - s_1) + \frac{p_{n-2}(\ell_1=s_2)}{(n-1)(n-2)} \sum_{s_1=s_2+1}^{n-1} (n - s_1)(n - s_1 - 1),
\end{eqnarray}
which can be used together with (\ref{prop1a}), when $n\geq5$ and $s_2$ is in the range $\lceil n/2 \rceil-1\leq s_2 \leq n-2$. Similarly, if $k=3$, then we have
\begin{eqnarray}\label{prop1c}
p_n(\ell_3=s_3) &=& \sum_{s_1=s_3+2}^{n-1} \sum_{s_2=s_3+1}^{s_1-1} p_n(\ell_1=s_1, \ell_2=s_2, \ell_3=s_3) \\\nonumber
&=& \sum_{s_1=s_3+2}^{n-1} \sum_{s_2=s_3+1}^{s_1-1} \frac{2(n - s_1)}{n-1} p_{n-1}(\ell_1=s_2, \ell_2=s_3) + \frac{(n - s_1)(n - s_1 - 1)}{(n-1)(n-2)} p_{n-2}(\ell_1=s_2, \ell_2=s_3) \\\nonumber
&=&
\frac{4p_{n-2}(\ell_1=s_3)}{(n-1)(n-2)} \sum_{s_1=s_3+2}^{n-1} \sum_{s_2=s_3+1}^{s_1-1} (n-s_1)(n-1-s_2)
\\\nonumber
&&+ \frac{2 p_{n-3}(\ell_1=s_3)}{(n-1)(n-2)(n-3)} \sum_{s_1=s_3+2}^{n-1} \sum_{s_2=s_3+1}^{s_1-1}
(n-s_1)(n-s_2-2)(2n-2-s_2-s_1) \\\nonumber
&&+ \frac{p_{n-4}(\ell_1=s_3)}{(n-1)(n-2)(n-3)(n-4)} \sum_{s_1=s_3+2}^{n-1} \sum_{s_2=s_3+1}^{s_1-1} (n-s_1)(n-s_1-1)(n-2-s_2)(n-s_2-3),
\end{eqnarray}
which can be coupled with (\ref{prop1a}), when $n\geq7$ and $\lceil n/2 \rceil-2\leq s_3 \leq n-3$.
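These expressions also lend themselves to direct numerical verification. A self-contained Python sketch checking (\ref{prop1b}) against the empirical distribution of the second largest non-peak entry reads:
\begin{verbatim}
from itertools import permutations
from math import factorial

def p1(m, s):   # Eq. (prop1a), extended by zero outside its range
    if not -(-m//2) <= s <= m - 1:
        return 0.0
    return (factorial(s-1)*factorial(s-2)*(4*m*s + s - m*m - m - 3*s*s)
            / (factorial(2*s - m)*factorial(m - 1)))

def p2(n, s2):  # Eq. (prop1b)
    return sum(2*(n - s1)/(n - 1)*p1(n - 1, s2)
               + (n - s1)*(n - s1 - 1)/((n - 1)*(n - 2))*p1(n - 2, s2)
               for s1 in range(s2 + 1, n))

n = 8
counts = {}
for p in permutations(range(1, n)):
    peaks = {p[i] for i in range(1, len(p) - 1) if p[i-1] < p[i] > p[i+1]}
    s2 = sorted((v for v in p if v not in peaks), reverse=True)[1]
    counts[s2] = counts.get(s2, 0) + 1
for s2 in range(-(-n//2) - 1, n - 1):   # ceil(n/2)-1 <= s2 <= n-2
    assert abs(counts.get(s2, 0)/factorial(n - 1) - p2(n, s2)) < 1e-12
\end{verbatim}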
\section{Asymptotic distribution of the $k$th external branch length} \label{sec:4}
In this section, we derive distributive properties of the random variable $\ell_k$---the $k$th largest external branch length---considered over ordered histories of size $n$ selected under the uniform distribution. We start by considering the case $k=1$, and then generalize to arbitrary values of $k$.
By dividing Eq. (\ref{p4}) by the number $(n-1)!$ of ordered histories of size $n$, we obtain the probability
\[
p_n(\ell_1<s)=\frac{(s-1)!(s-2)!}{(2s-n-2)!(n-1)!},\, \lceil n/2\rceil<s\leq n,
\]
or alternatively, with $u=s-1$,
\begin{equation}\label{dis-func}
p_n(\ell_1\leq u)=\frac{u!(u-1)!}{(2u-n)!(n-1)!},\, \lceil n/2\rceil\leq u\leq n-1.
\end{equation}
Our first result is the following local limit theorem.
\begin{lmm} \label{xzc}
When $n\rightarrow \infty$,
\begin{itemize}
\item[(a)] the probability $p_n(\ell_1=\lfloor n-x\sqrt{n/2}\rfloor)$ admits an asymptotic expansion of the form
\[
p_n(\ell_1=\lfloor n-x\sqrt{n/2}\rfloor)=\frac{x}{\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right)
\]
uniformly for $0\leq x\leq x^{*}\equiv n^{1/7}$.
\item[(b)] Furthermore,
\[
p_n(\ell_1\leq n-x^{*}\sqrt{n/2})={\mathcal O}\left(e^{-n^{2/7}/2}\right),
\]
with $x^{*}$ as defined in part (a).
\end{itemize}
\end{lmm}
\noindent \emph{Proof.}\ For part (a), first assume that $x\leq x^{*}$ is such that $u\equiv n-x\sqrt{n/2}$ is a non-negative integer smaller than $n$. Then,
Eq. (\ref{dis-func}) yields
\[
p_n(\ell_1=u)=p_n(\ell_1 \leq u)-p_n(\ell_1 \leq u-1)=\frac{u!(u-1)!}{(2u-n)!(n-1)!}-\frac{(u-1)!(u-2)!}{(2u-2-n)!(n-1)!}.
\]
Plugging in Stirling's formula $z! \sim z^z e^{-z} \sqrt{2 \pi z} \left(1 + \frac{1}{12 z} + \frac{1}{288 z^2} - \frac{139}{51840 z^3} - ... \right)$ gives the (complete) asymptotic expansion
\[
p_n(\ell_1=u)\sim \frac{x}{\sqrt{n/2}}e^{-x^2/2}\left(1+\sum_{d=1}^{\infty}\frac{q_d(x)}{n^{d/2}}\right),
\]
where $q_{d}(x)$ is a polynomial of degree $3d$. Thus, for the given range of $x$, $q_{d}(x)={\mathcal O}(n^{3d/7})$ and consequently
\[
\frac{q_{d}(x)}{n^{d/2}}={\mathcal O}(n^{3d/7-d/2})=o(1).
\]
This shows that $\sum_{d=1}^{k}\frac{q_d(x)}{n^{d/2}}=o(1)$ for every choice of $k$ and the claimed expansion (without the last term) holds for this case. Note that the case $u=n$, i.e., $x=0$, is trivially covered as $p_n(\ell_1=n)=0$.
Next, if $u$ is not an integer, then $\lfloor u\rfloor=u+{\mathcal O}(1)=n-x\sqrt{n/2}+{\mathcal O}(1)=n-(x+{\mathcal O}(1/\sqrt{n}))\sqrt{n/2}$, and thus we are in the first case with $x$ replaced by $\tilde{x}=x+{\mathcal O}(1/\sqrt{n})$. Hence,
\begin{eqnarray}\nonumber
p_n(\ell_1=\lfloor u\rfloor)
&=& \frac{\tilde{x}}{\sqrt{n/2}}e^{-\tilde{x}^2/2}(1+o(1))
= \frac{x+{\mathcal O}(1/\sqrt{n})}{\sqrt{n/2}}e^{-x^2/2+o(1)}(1+o(1)) \\\nonumber
&=& \frac{x}{\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right),
\end{eqnarray}
which establishes the claim also in this case.
For part (b), we are interested in $p_n(\ell_1\leq \lfloor n-x^{*}\sqrt{n/2} \rfloor)$. Starting from (\ref{dis-func}), we use Stirling's approximation $\log(z!) = z \log(z)-z+(1/2) \log(2 \pi z) + o(1)$ to expand $\log(p_n(\ell_1 \leq u)) = \log(u!)+\log((u-1)!)-\log((2u-n)!)-\log((n-1)!)$ as
\begin{small}
$$\frac{1}{2} (2 (n-2 u) \log (2 u-n)-\log (2 u - n)-2 n \log (n-1)+\log (n-1)+(2 u-1) \log (u-1)+2 u \log (u)+\log (u)) + o(1).$$
\end{small}
Then, we plug in $u = \lfloor n-x^{*}\sqrt{n/2} \rfloor = n-n^{1/7}\sqrt{n/2}-c_n$, where $c_n$ is the fractional part of $ n-n^{1/7}\sqrt{n/2}$, and replace the resulting terms of the form $\log(n+f(n))$ by $\log(n)+f(n)/n-f(n)^2/(2n^2)$ (where $f(n)/n \rightarrow 0$). Simple algebraic manipulations finally give
$$
\log(p_n(\ell_1 \leq \lfloor n-x^{*}\sqrt{n/2} \rfloor)) = -\frac{n^{2/7}}{2} + o(1),$$
which shows the claim. {\quad\rule{1mm}{3mm}\,}
From the previous lemma, we obtain the following proposition that describes the asymptotic distribution of the random variable $\ell_1$ considered over ordered histories of size $n$ selected uniformly at random.
\begin{pro}\label{limit-law}
As $n\rightarrow\infty$,
\[
\frac{n-\ell_1}{\sqrt{n/2}}\stackrel{d}{\longrightarrow}{\rm Rayleigh}(1)
\]
with convergence of all moments. In particular, the mean and the variance of $\ell_1$ satisfy respectively
\begin{equation}\label{mean-var}
{\mathbb E}(\ell_1)\sim n \qquad \text{and} \qquad{\mathbb V}(\ell_1) \sim \left(1-\frac{\pi}{4} \right) n.
\end{equation}
\end{pro}
\noindent \emph{Proof.}\ Fix an $x\geq 0$. In order to prove the limit law, we have to show that, when $n\rightarrow \infty$, the probability of $(n-\ell_1)/\sqrt{n/2}\leq x$ converges to $1-e^{-x^2/2}$, which is the cumulative distribution function of the Rayleigh distribution with parameter $1$. We first write
\begin{align}
p_n\left(\frac{n-\ell_1}{\sqrt{n/2}}\leq x\right)&=p_n(n-x\sqrt{n/2}\leq \ell_1)=p_n(\lceil n-x\sqrt{n/2}\rceil\leq \ell_1)=
\sum_{s=\lceil n-x\sqrt{n/2}\rceil}^{n}p_n(\ell_1=s) \\\label{calc2}
&=\sum_{t=0}^{\tilde{x}}p_n(\ell_1= n-t\sqrt{n/2}),
\end{align}
where the latter sum is in steps of size $\sqrt{2/n}$ and $\tilde{x}=x+{\mathcal O}(1/\sqrt{n})$ is such that $n-\tilde{x}\sqrt{n/2}=\lceil n-x\sqrt{n/2}\rceil$.
For $n$ sufficiently large, we can assume $\tilde{x} \leq x \leq n^{1/7}$ and thus use part (a) of the lemma writing (\ref{calc2}) as
\begin{equation}\label{calc}
\sum_{t=0}^{\tilde{x}} \frac{t}{\sqrt{n/2}}e^{-t^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-t^2/2}}{n}\right) = \sum_{t=0}^{\tilde{x}} \frac{t}{\sqrt{n/2}}e^{-t^2/2}(1+o(1)) + \sum_{t=0}^{\tilde{x}} {\mathcal O}\left(\frac{e^{-t^2/2}}{n}\right).
\end{equation}
Because the $1+o(1)$ factor in the second sum of (\ref{calc}) holds uniformly, it can be put in front of the sum, obtaining
$$\sum_{t=0}^{\tilde{x}} \frac{t}{\sqrt{n/2}}e^{-t^2/2}(1+o(1)) = (1+o(1)) \sum_{t=0}^{\tilde{x}} \frac{t}{\sqrt{n/2}}e^{-t^2/2} = (1+o(1)) \sum_{t=0}^{x} \frac{t}{\sqrt{n/2}}e^{-t^2/2} + o(1),$$
where the upper limit in the last sum is now $x$. Moreover, the third sum in (\ref{calc}) can be bounded as
$$\sum_{t=0}^{\tilde{x}} {\mathcal O}\left(\frac{e^{-t^2/2}}{n}\right) = {\mathcal O}\left(\sum_{t=0}^{\infty} \frac{e^{-t^2/2}}{n}\right) = o(1).$$
Hence, for $n\rightarrow \infty$, the probability $p_n\left(\frac{n-\ell_1}{\sqrt{n/2}}\leq x\right)$ is asymptotically equal to the Riemann sum $\sum_{t=0}^{x} \frac{t}{\sqrt{n/2}}e^{-t^2/2}$ with step size ${\rm d}t=\sqrt{2/n}$, which converges to the integral $\int_{0}^{x}t e^{-t^2/2}{\rm d}t=1-e^{-x^2/2},$
as claimed.
By a similar approach, one can also show that all moments converge. Starting from
\[
{\mathbb E}\left(\frac{n-\ell_1}{\sqrt{n/2}}\right)^m=\sum_{s=0}^{n}\left(\frac{n-s}{\sqrt{n/2}}\right)^m p_n(\ell_1=s),
\]
we replace $s$ by $s=n-x\sqrt{n/2}$ and break the sum into two parts obtaining
$$\sum_{x=0}^{\sqrt{2n}} x^m p_n(\ell_1=n-x\sqrt{n/2} )=\sum_{0 \leq x< n^{1/7}} x^m p_n(\ell_1=n-x\sqrt{n/2} ) +\sum_{n^{1/7} \leq x \leq \sqrt{2n}} x^m p_n(\ell_1=n-x\sqrt{n/2} ) \equiv \Sigma_1+\Sigma_2,$$
where all the sums proceed in steps of size $\sqrt{2/n}$.
For $\Sigma_2$, by part (b) of the lemma, we have
\[
\Sigma_2={\mathcal O}\left(n^{m/2}e^{-n^{2/7}/2}\right)=o(1).
\]
For $\Sigma_1$, by part (a) of the lemma, we have
\[
\Sigma_1=(1+o(1))\sum_{0\leq x<n^{1/7}}\frac{x^{m+1}}{\sqrt{n/2}}e^{-x^2/2}+{\mathcal O}\left(n^{-1}\sum_{0\leq x<n^{1/7}}e^{-x^2/2}\right).
\]
Here, the Riemann sum in $\Sigma_1$ can be approximated by the integral $\int_{0}^{n^{1/7}}x^{m+1}e^{-x^2/2}{\rm d}x$, which converges to $\int_{0}^{\infty}x^{m+1}e^{-x^2/2}{\rm d}x$. Overall,
\[
{\mathbb E}\left(\frac{n-\ell_1}{\sqrt{n/2}}\right)^m\stackrel{n \to \infty}{\longrightarrow} \int_{0}^{\infty}x^{m+1}e^{-x^2/2}{\rm d}x,
\]
which proves the claimed convergence of moments. Finally, (\ref{mean-var}) follows from this convergence by straightforward computation.{\quad\rule{1mm}{3mm}\,}
Note that when the limit distribution is uniquely determined by its moment sequence (which is the case for the Rayleigh distribution), convergence of all moments implies weak convergence. Hence, the second part of the proof of the latter proposition already implies the first claim; we nevertheless provided the direct calculation of the convergence in distribution with the aim of improving the readability of the remaining part of the proof.
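For illustration, the asymptotics in (\ref{mean-var}) can be compared with the exact values obtained from (\ref{dis-func}) at moderate $n$. A minimal Python sketch (our own code; the refined mean approximation $n-\sqrt{\pi n}/2$ is the $m=1$ instance of the moment convergence just proved, since ${\mathbb E}[(n-\ell_1)/\sqrt{n/2}]\rightarrow\int_0^\infty x^2e^{-x^2/2}{\rm d}x=\sqrt{\pi/2}$):
\begin{verbatim}
from math import factorial, ceil, pi, sqrt

def p1(n, s):
    # p_n(l1 = s), from the distribution function (dis-func)
    def F(u):
        if u < ceil(n / 2): return 0.0
        if u >= n - 1: return 1.0
        return factorial(u) * factorial(u - 1) / (factorial(2*u - n) * factorial(n - 1))
    return F(s) - F(s - 1)

n = 400
mean = sum(s * p1(n, s) for s in range(1, n))
var = sum(s * s * p1(n, s) for s in range(1, n)) - mean**2
print(mean, n - sqrt(pi * n) / 2)   # exact mean vs refined asymptotic
print(var, (1 - pi / 4) * n)        # exact variance vs (1 - pi/4) n
\end{verbatim}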
In the following, our goal is to show that, for an arbitrary fixed value of $k \geq 1$, the random variable $\ell_k$ follows asymptotically a $\chi$ distribution with $2k$ degrees of freedom. Indeed, note that the Rayleigh distribution found for the case $k=1$ is a $\chi$ distribution with $2$ degrees of freedom.
The next lemma describes the solution to the recurrence (\ref{joint2}) for the joint probability $p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k)$ and a formula for the probability $p_n(\ell_k=s_k)$ given in (\ref{summa}) in terms of the probability of $\ell_1 = s_k$ in trees of size smaller than or equal to $n$.
\begin{lmm}\label{muevu}
By setting
$\mu_n(x)\equiv\frac{2x}{n-1}$ and $\nu_n(x)\equiv\frac{x(x-1)}{(n-1)(n-2)}$, we have
\begin{equation}\label{piox}
p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k) =
\sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}\left(n-n_{\omega,\ell}-\ell-s_{\ell+1}\right)\right)p_{n-n_{\omega,k-1}-k+1}(\ell_1 = s_k),
\end{equation}
where the sum runs over all words $\omega = \omega^{[0]}\cdots\omega^{[k-2]}$ of length $k-1$ with letters from the alphabet $\{\mu,\nu\}$, and $n_{\omega,\ell}$ is the number of $\nu$ in the first $\ell$ letters of $\omega$ (with $n_{\omega,0}=0$). With the same notation, we also have
\begin{equation} \label{multi-sum}
p_n(\ell_k=s_k) = \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right)
p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k),
\end{equation}
where $s_k^{*}\equiv n-k+1-s_k$.
\end{lmm}
\begin{figure}[tpb]
\begin{center}
\includegraphics*[scale=0.63,trim=0 0 0 0]{alb.pdf}
\end{center}
\vspace{-.7cm}
\caption{{\small Schematic diagram of the first three iterative steps of the procedure (\ref{proce}) for calculating $p'_0(1)=p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k)$.
}} \label{alb}
\end{figure}
\noindent \emph{Proof.}\ For a fixed $n$ and $k$, set $p'_i(j) \equiv p_{n-i}(\ell_1=s_j,...,\ell_{k-j+1}=s_k)$, $\mu'_i(j) \equiv \frac{2(n-i-s_{j})}{n-i-1}$, and $\nu'_i(j) \equiv \frac{(n-i-s_{j})(n-i-1-s_j)}{(n-i-1)(n-i-2)}$. The recurrence (\ref{joint2}) finds $p'_0(1)=p_n(\ell_1=s_1, \ell_2=s_2,..., \ell_k=s_k)$ by iteratively computing
\begin{equation}\label{proce}
p'_i(j)=\mu'_i(j) \, p'_{i+1}(j+1) + \nu'_i(j) \, p'_{i+2}(j+1).
\end{equation}
The procedure ends after $k-1$ steps, that is, when we obtain terms of the form $p_{n-x}(\ell_1=s_k)=p'_x(k)$, for a certain value of $x$. For $k=4$, the diagram in Fig. \ref{alb} depicts the three iterations needed for evaluating $p'_0(1)$. The latter quantity is calculated as the sum of the probabilities at the bottom of the diagram, each multiplied by the sum of the words of length $k-1$ over the alphabet $\{\mu',\nu'\}$ that encode the different paths connecting the corresponding leaf node to the root of the diagram.
More precisely, for arbitrary values of $n$ and $k$, we have
$$p'_0(1) = \sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n_{\omega,\ell}+\ell}\left(\ell+1\right)\right)p'_{n_{\omega,k-1}+k-1}(k),$$
where the sum runs over all words $\omega = \omega^{[0]}\cdots\omega^{[k-2]}$ of length $k-1$ with letters from the alphabet $\{\mu',\nu'\}$, and $n_{\omega,\ell}$ is the number of $\nu'$ in the first $\ell$ letters of $\omega$ (with $n_{\omega,0}=0$). By replacing indices, the latter formula is equivalent to that claimed in (\ref{piox}).
Finally, plugging (\ref{piox}) into (\ref{summa}) yields
\begin{align*}
p_n(\ell_k=s_k)=&\sum_{s_1=s_k+k-1}^{n-1}\sum_{s_2=s_{k}+k-2}^{s_1-1}\cdots\sum_{s_{k-1}=s_k+1}^{s_{k-2}-1}\sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}
\left(n-n_{\omega,\ell}-\ell-s_{\ell+1}\right)\right)\nonumber\\
&\hspace*{7cm}\times p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k).
\end{align*}
By setting $s_{\ell}^{*}=n-\ell+1-s_{\ell}$ for $\ell=1,...,k$,
the right-hand side can be written as
$$\sum_{s_1^{*}=1}^{s_k^{*}}\sum_{s_2^{*}=s_1^{*}}^{s_k^*}\cdots\sum_{s_{k-1}^{*}=s_{k-2}^{*}}^{s_k^{*}}\sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}^{*}-n_{\omega,\ell})\right)
p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k),
$$
which gives (\ref{multi-sum}).
{\quad\rule{1mm}{3mm}\,}
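As a sanity check of Lemma \ref{muevu}, the word-sum formula (\ref{piox}) can be compared numerically with a direct iteration of the recurrence (\ref{joint2}). The following Python sketch (our own code; all names are ours) confirms that the two computations agree on all decreasing tuples for, e.g., $n=12$ and $k=3$:
\begin{verbatim}
from itertools import product, combinations
from math import factorial, ceil

def p1(n, s):
    # initial condition: p_n(l1 = s)
    def F(u):
        if u < ceil(n / 2): return 0.0
        if u >= n - 1: return 1.0
        return factorial(u) * factorial(u - 1) / (factorial(2*u - n) * factorial(n - 1))
    return F(s) - F(s - 1)

def joint_rec(n, s):
    # recurrence (joint2), applied until only l1 is left
    if len(s) == 1: return p1(n, s[0])
    mu = 2 * (n - s[0]) / (n - 1)
    nu = (n - s[0]) * (n - s[0] - 1) / ((n - 1) * (n - 2))
    return mu * joint_rec(n - 1, s[1:]) + nu * joint_rec(n - 2, s[1:])

def joint_words(n, s):
    # word-sum formula (piox); 'm' stands for mu, 'v' for nu
    k, total = len(s), 0.0
    for w in product('mv', repeat=k - 1):
        factor, nv = 1.0, 0                     # nv tracks n_{w,l}
        for l, letter in enumerate(w):
            m, x = n - nv - l, n - nv - l - s[l]
            factor *= 2*x/(m - 1) if letter == 'm' else x*(x - 1)/((m - 1)*(m - 2))
            nv += letter == 'v'
        total += factor * p1(n - nv - k + 1, s[-1])
    return total

n, k = 12, 3
for s in combinations(range(n - 1, 0, -1), k):  # decreasing tuples s1 > s2 > s3
    assert abs(joint_rec(n, s) - joint_words(n, s)) < 1e-12
\end{verbatim}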
With the same notation used above, we now provide two more useful lemmas.
\begin{lmm}\label{ll-longest}
For $s_k=\lfloor n-x\sqrt{n/2}\rfloor$, we have
\[
\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1})=\frac{x^{2k-2}}{2^{k-1}(k-1)!}+{\mathcal O}\left(\frac{1+ x^{2k-3}}{\sqrt{n}}\right)
\]
uniformly for $0 \leq x \leq \sqrt{2n}$.
\end{lmm}
\noindent \emph{Proof.}\ Note that
$$\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1})=\frac{2^{k-1} \sum_{s_1=1}^{s_k^{*}} s_1 \sum_{s_2=s_1}^{s_k^*} s_2 \cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}} s_{k-1}}{(n-1) \cdots (n-k+1)}= \frac{2^{k-1}r(s_k^{*})}{n^{k-1}}+{\mathcal O}\left(\frac{r(s_k^{*})}{n^{k}}\right),$$
where $r(z)$ is the polynomial $r(z)\equiv \sum_{s_1=1}^{z}s_1 \sum_{s_2=s_1}^{z} s_2 \cdots\sum_{s_{k-1}=s_{k-2}}^{z} s_{k-1}$.
In order to determine the asymptotic behavior of $r(z)$, we rely on Faulhaber's formula:
\begin{equation}\label{Faul}
\sum_{m=1}^{N}m^t=\frac{1}{t+1}\sum_{j=0}^{t}\binom{t+1}{j}B_j(N+1)^{t+1-j}\stackrel{N \rightarrow \infty}{\sim} \frac{N^{t+1}}{t+1}\stackrel{N \rightarrow \infty}{\sim}\int_{1}^{N}x^t{\rm d}x,
\end{equation}
where $B_j$ denotes the $j$th Bernoulli number. In particular, we use the fact that, for a given polynomial $p(u)=\alpha_k u^k + ... + \alpha_1 u + \alpha_0$, the polynomial $\sum_{u=a}^b p(u) = \sum_{u=1}^b p(u) - \sum_{u=1}^{a-1} p(u)$ has its term $\frac{\alpha_k b^{k+1}}{k+1}$ with the highest power in $b$ and its term $-\frac{\alpha_k a^{k+1}}{k+1}$ with the highest power in $a$ matching those that appear in the integral $\int_{a}^b p(z) {\rm d}z$. As a consequence, if we substitute each sum in $r(z)$ by an integral sign, we then find a polynomial $\int_{1}^{z} s_1 {\rm d}s_{1} \int_{s_1}^{z} s_2 {\rm d}s_{2} \cdots\int_{s_{k-2}}^{z} s_{k-1} \, {\rm d}s_{k-1}$ with the same leading term as $r(z)$.
Furthermore, by a simple induction on $k$ one can show that
$\int_{z_{k+1}}^z z_{k} {\rm d} z_{k} \cdots \int_{z_3}^z z_2 {\rm d} z_2 \int_{z_2}^z z_1 {\rm d} z_1 = \frac{1}{2^{k}} \sum_{i=0}^{k} \frac{(-1)^{i} z^{2 k - 2 i} z_{k+1}^{2 i}}{i! (k-i)!},$
and therefore the leading term of $r(z)$ is that of $\frac{1}{2^{k-1}}\sum_{i=0}^{k-1} \frac{(-1)^{i} z^{2 k - 2 - 2 i}}{i! (k-1-i)!},$ that is, $\frac{z^{2k-2}}{2^{k-1} (k-1)!}$. Hence,
$$\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1})=\frac{(s_k^{*})^{2k-2}}{n^{k-1}(k-1)!}+{\mathcal O}\left(\frac{(s_k^{*})^{2k-3}}{n^{k-1}}+\frac{r(s_k^{*})}{n^{k}}\right).$$
By plugging $s_k^{*}=x\sqrt{n/2}+{\mathcal O}(1)$ into the latter asymptotic formula and performing a straightforward expansion, we obtain the claimed result. {\quad\rule{1mm}{3mm}\,}
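The nested-integral identity invoked in the proof above can also be confirmed symbolically for small $k$; a minimal SymPy sketch (our own code):
\begin{verbatim}
import sympy as sp

k = 3
z = sp.Symbol('z')
zv = sp.symbols(f'z1:{k + 2}')   # z1, ..., z{k+1}

# innermost integral over z1 on (z2, z), outermost over zk on (z{k+1}, z)
expr = sp.integrate(zv[0], (zv[0], zv[1], z))
for i in range(1, k):
    expr = sp.integrate(zv[i] * expr, (zv[i], zv[i + 1], z))

rhs = sp.Rational(1, 2**k) * sum(
    (-1)**i * z**(2*k - 2*i) * zv[k]**(2*i)
    / (sp.factorial(i) * sp.factorial(k - i)) for i in range(k + 1))
assert sp.simplify(expr - rhs) == 0
\end{verbatim}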
The next result shows that Lemma \ref{ll-longest} gives the main term of the multiple sum in (\ref{multi-sum}).
\begin{lmm}\label{le4}
For $s_k=\lfloor n-x\sqrt{n/2}\rfloor$, we have
\[
\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})={\mathcal O}\left(\frac{1+ x^{2k-1}}{\sqrt{n}}\right)
\]
uniformly for $0 \leq x \leq \sqrt{2n}$ and for all words $\omega = \omega^{[0]}\cdots\omega^{[k-2]}$ of length $k-1$ with letters from the alphabet $\{\mu,\nu\}$ different from the word whose letters are all equal to $\mu$.
\end{lmm}
\noindent \emph{Proof.}\ Assume that $\omega$ has $m\geq 1$ letters equal to $\nu$. Then, since $\nu_n(x)$ is a quadratic polynomial, by again using Faulhaber's formula (\ref{Faul}), we obtain that
\[
\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})
=\frac{r(s_{k}^{*})}{q(n)},
\]
where $r(z)$ is a polynomial of degree $m+2k-2$ and $q(z)$ is a polynomial of degree $m+k-1$. Thus, by setting $s_k^{*}=x\sqrt{n/2}+{\mathcal O}(1)$, we obtain that
\[
\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})
=\frac{r(s_{k}^{*})}{q(n)}={\mathcal O}\left(\frac{1+ x^{m+2k-2}}{n^{m/2}}\right).
\]
From this the result follows by observing that $x \leq \sqrt{2n}$.{\quad\rule{1mm}{3mm}\,}
\bigskip
From the last three lemmas, we can now deduce the following generalization of Lemma \ref{xzc}.
\begin{cor} \label{coro1}
When $n \rightarrow \infty$,
\begin{itemize}
\item[(a)] the probability $p_n(\ell_k = \lfloor n-x\sqrt{n/2}\rfloor)$ admits an asymptotic expansion of the form
\[
p_n(\ell_{k}=\lfloor n-x\sqrt{n/2}\rfloor)=\frac{x^{2k-1}}{2^{k-1}(k-1)!\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right)
\]
uniformly for $0\leq x\leq x^{*}\equiv n^{1/7}$.
\item[(b)] Furthermore,
\[
p_n(\ell_{k}\leq n-x^{*}\sqrt{n/2})={\mathcal O}\left(n^{k-1}e^{-n^{2/7}/2}\right)
\]
with $x^{*}$ as defined in part (a).
\end{itemize}
\end{cor}
\noindent \emph{Proof.}\ First, note that for any given word $\omega$ of length $k-1$ over the alphabet $\{\mu,\nu\}$ (in the sense of Lemma \ref{muevu}), we have
\[
p_{n-n_{\omega}-k+1}(\ell_1=\lfloor n-x\sqrt{n/2}\rfloor)=p_{n-n_{\omega}-k+1}(\ell_1=\lfloor n-n_{\omega}-k+1-\tilde{x}\sqrt{(n-n_{\omega}-k+1)/2}\rfloor),
\]
where $\tilde{x}=x+{\mathcal O}(1/\sqrt{n})$. As a consequence, by applying part (a) of Lemma~\ref{xzc} with $x$ replaced by $\tilde{x}$ and $n$ replaced by $n-n_{\omega}-k+1$, it follows that part (a) of Lemma~\ref{xzc} also holds when $p_n$ is replaced by $p_{n-n_{\omega}-k+1}$.
Moreover, also part (b) of Lemma~\ref{xzc} holds true when $p_n$ is replaced by $p_{n-n_{\omega}-k+1}$. Indeed, from (\ref{dis-func}), we find
\begin{align*}
p_{n-n_{\omega}-k+1}(\ell_1\leq n^{*})&= \frac{n^{*}! (n^{*}-1)!}{(2 n^{*}- n+n_{\omega}+k-1 )! (n-n_{\omega}-k)!}\\ &=\frac{(n-1)\cdots(n-n_{\omega}-k+1)}{(2n^{*}-n+n_{\omega}+k-1)\cdots(2n^{*}-n+1)} \cdot \frac{n^{*}!(n^{*}-1)!}{(2n^{*}-n)!(n-1)!} = {\mathcal O}(p_n(\ell_1 \leq n^{*})),
\end{align*}
where $n_{\omega}\equiv n_{\omega,k-1}$ and $n^{*} \equiv \lfloor n-x^{*}\sqrt{n/2}\rfloor$.
In order to prove part (a) of the corollary, assume $0\leq x\leq x^{*}$ and set $s_k=\lfloor n-x\sqrt{n/2}\rfloor$. From (\ref{multi-sum}), we find
\begin{align*}
&p_n(\ell_k=s_k) = \\
&\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left[\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1}) p_{n-k+1}(\ell_1=s_k)+\sum_{\omega\neq \mu \mu \cdots \mu}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right) p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k)\right].
\end{align*}
Then, the expansion of Lemma~\ref{xzc} for the factors $p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k)$, coupled with Lemmas \ref{ll-longest} and \ref{le4}, yields
\begin{align*}
&p_n(\ell_k=s_k) = \\
&\left[\frac{x}{\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right) \right]\sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left[\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1}) +\sum_{\omega\neq \mu \mu \cdots \mu}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right) \right] \\
&=\left[\frac{x}{\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right) \right] \left[ \frac{x^{2k-2}}{2^{k-1}(k-1)!}+{\mathcal O}\left(\frac{1+ x^{2k-3}}{\sqrt{n}}\right) + {\mathcal O}\left(\frac{1+ x^{2k-1}}{\sqrt{n}}\right) \right] \\
&=\frac{x^{2k-1}}{2^{k-1}(k-1)!\sqrt{n/2}}e^{-x^2/2}(1+o(1))+{\mathcal O}\left(\frac{e^{-x^2/2}}{n}\right),
\end{align*}
as claimed in (a).
For part (b) we can write $p_n(\ell_{k}\leq n-x^{*}\sqrt{n/2}) = \sum_{x} p_n(\ell_{k} = \lfloor n-x\sqrt{n/2} \rfloor) = \sum_{x} p_n(\ell_{k} = s_k)$, where the sum proceeds in steps of $\sqrt{2/n}$ over the range $x^{*} \leq x \leq \sqrt{2n}$ and we set $s_k=\lfloor n-x\sqrt{n/2}\rfloor$.
Hence, by using (\ref{multi-sum}) together with Lemmas \ref{ll-longest} and \ref{le4}, we obtain
\begin{align*}
& p_n(\ell_{k}\leq n-x^{*}\sqrt{n/2}) =
\sum_x \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\sum_{\omega}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right)p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \\
&= \sum_{\omega} \sum_{x} \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right) p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \\
&= \sum_x \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left(\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1})\right) p_{n-k+1}(\ell_1=s_k)\\
&+ \sum_{\omega \neq \mu \mu \cdots \mu} \sum_x \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right) p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \\
&= \sum_x p_{n-k+1}(\ell_1=s_k) \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left(\prod_{\ell=0}^{k-2}\mu_{n-\ell}(s_{\ell+1})\right) \\
&+ \sum_{\omega \neq \mu \mu \cdots \mu} \sum_x p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \sum_{s_1=1}^{s_k^{*}}\sum_{s_2=s_1}^{s_k^*}\cdots\sum_{s_{k-1}=s_{k-2}}^{s_k^{*}}\left(\prod_{\ell=0}^{k-2}\omega^{[\ell]}_{n-n_{\omega,\ell}-\ell}(s_{\ell+1}-n_{\omega,\ell})\right) \\
&= \sum_x p_{n-k+1}(\ell_1=s_k) \left[ \frac{x^{2k-2}}{2^{k-1}(k-1)!}+{\mathcal O}\left(\frac{1+ x^{2k-3}}{\sqrt{n}}\right) \right]
+ \sum_{\omega \neq \mu \mu \cdots \mu} \sum_x p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \left[ {\mathcal O}\left(\frac{1+ x^{2k-1}}{\sqrt{n}}\right) \right].
\end{align*}
Finally, since $x \leq \sqrt{2n}$, we have
\begin{align*}
p_n(\ell_{k}\leq n-x^{*}\sqrt{n/2}) &= {\mathcal O}\left(n^{k-1}\right) \sum_x p_{n-k+1}(\ell_1=s_k)
+ {\mathcal O}\left(n^{k-1}\right) \sum_{\omega \neq \mu \mu \cdots \mu} \sum_x p_{n-n_{\omega,k-1}-k+1}(\ell_1=s_k) \\
&= {\mathcal O}\left(n^{k-1}\right) p_{n-k+1}(\ell_1 \leq n^{*})
+ {\mathcal O}\left(n^{k-1}\right) \sum_{\omega \neq \mu \mu \cdots \mu} p_{n-n_{\omega,k-1}-k+1}(\ell_1 \leq n^{*}) \\
&= {\mathcal O}\left(n^{k-1}\right) {\mathcal O}\left(e^{-n^{2/7}/2}\right) = {\mathcal O}\left(n^{k-1} e^{-n^{2/7}/2}\right). \,\,\, {\quad\rule{1mm}{3mm}\,}
\end{align*}
The next theorem, which extends Proposition~\ref{limit-law}, is our main result.
\begin{thm}\label{teo}
For a fixed $k \geq 1$, let $\ell_k$ be the $k$th largest external branch length in a random ordered history of size $n$ selected uniformly at random and denote by $\chi(2k)$ the $\chi$-distribution with $2k$ degrees of freedom. Then, as $n\rightarrow\infty$,
\[
\frac{n-\ell_k}{\sqrt{n/2}}\stackrel{d}{\longrightarrow}\chi(2k),
\]
with convergence of all moments. In particular, the mean and the variance of $\ell_k$ satisfy respectively
\begin{equation} \label{mean-var-gen}
{\mathbb E}(\ell_k)\sim n\qquad\text{and}\qquad{\mathbb V}(\ell_k)\sim\left(k-\frac{\pi k^2}{16^k}\binom{2k}{k}^2\right)n.
\end{equation}
\end{thm}
\noindent \emph{Proof.}\ Following the proof of Proposition \ref{limit-law}, we show that all moments converge, which implies convergence in distribution. Starting from
\[
{\mathbb E}\left(\frac{n-\ell_k}{\sqrt{n/2}}\right)^m=\sum_{s=0}^{n}\left(\frac{n-s}{\sqrt{n/2}}\right)^m p_n(\ell_k=s),
\]
we replace $s$ by $s=n-x\sqrt{n/2}$ and break the sum into two parts obtaining
$$\sum_{x=0}^{\sqrt{2n}} x^m p_n(\ell_k=n-x\sqrt{n/2} )=\sum_{0 \leq x< n^{1/7}} x^m p_n(\ell_k=n-x\sqrt{n/2} ) +\sum_{n^{1/7} \leq x \leq \sqrt{2n}} x^m p_n(\ell_k=n-x\sqrt{n/2} ) \equiv \Sigma_1+\Sigma_2,$$
where all the sums proceed in steps of size $\sqrt{2/n}$.
For $\Sigma_2$, by part (b) of Corollary~\ref{coro1}, we have
\[
\Sigma_2={\mathcal O}\left(n^{m/2+k-1}e^{-n^{2/7}/2}\right)=o(1).
\]
For $\Sigma_1$, by part (a) of Corollary \ref{coro1}, we have
\[
\Sigma_1=\frac{1+o(1)}{2^{k-1} (k-1)!} \cdot \sum_{0\leq x<n^{1/7}}\frac{x^{m+2k-1}}{\sqrt{n/2}}e^{-x^2/2}+{\mathcal O}\left(n^{-1}\sum_{0\leq x<n^{1/7}}e^{-x^2/2}\right).
\]
Hence, the Riemann sum in $\Sigma_1$ can be approximated by the integral $\int_{0}^{n^{1/7}}x^{m+2k-1}e^{-x^2/2}{\rm d}x$, which converges to $\int_{0}^{\infty}x^{m+2k-1}e^{-x^2/2}{\rm d}x$. Overall,
\begin{align*}
{\mathbb E}\left(\frac{n-\ell_k}{\sqrt{n/2}}\right)^m & \stackrel{n \to \infty}{\longrightarrow} \frac{1}{2^{k-1} (k-1)!} \int_{0}^{\infty}x^{m+2k-1}e^{-x^2/2}{\rm d}x = \frac{1}{2^{k-1} (k-1)!} \cdot 2^{m/2+k-1} \, \Gamma\left( \frac{m+2k}{2} \right) \\
& = 2^{m/2} \, \frac{\Gamma\left( \frac{m}{2} + k\right)}{\Gamma(k)},
\end{align*}
which proves the claimed convergence of moments. Finally, (\ref{mean-var-gen}) follows from this convergence by straightforward computation. For instance, setting $m=1$ we obtain
\begin{equation}\label{mean}
\frac{n-{\mathbb E}(\ell_k)}{\sqrt{n/2}} \stackrel{n \to \infty}{\longrightarrow} \frac{\sqrt{2 \pi} k {{2k}\choose{k}} }{4^k},
\end{equation}
and similarly for the variance.
{\quad\rule{1mm}{3mm}\,}
\section{Conclusions}
\begin{figure}[tpb]
\begin{center}
\includegraphics*[scale=0.57,trim=0 0 0 0]{figcumul.pdf}
\end{center}
\vspace{-.7cm}
\caption{{\small Probability that for $n=1000$ the rescaled variable $\mathcal{L}_k \equiv \frac{n-\ell_k}{\sqrt{n/2}}$ is less than or equal to $x \in [0,5]$ (in steps of $0.2$), when $k=1$ (dots), $k=2$ (squares), and $k=3$ (triangles). Values are calculated from Eqs. (\ref{prop1a}), (\ref{prop1b}), and (\ref{prop1c}). Solid lines give the cumulative function for the $\chi$-distribution with $2k$ degrees of freedom, with $k=1, 2, 3$ from left to right.
}} \label{figcumul}
\end{figure}
For random histories of fixed size $n$ selected under the Yule probability model, or, equivalently, for ordered histories of size $n$ selected uniformly at random, we have studied the variable $\ell_k$ defined as the $k$th largest length of an external branch. Measuring the length of an external branch as the rank of its parent node, Theorem \ref{teo} shows that the rescaled variable $\mathcal{L}_k \equiv \frac{n-\ell_k}{\sqrt{n/2}}$ follows asymptotically a $\chi$-distribution with $2k$ degrees of freedom (Fig. \ref{figcumul}), with convergence of all moments. The mean of $\ell_k$ is shown to be asymptotically equivalent to $n$, independently of $k$. More precisely, by plugging the approximation ${{2k}\choose{k}} \approx \frac{4^k}{\sqrt{\pi k}}$ into (\ref{mean}), we find that ${\mathbb E}(\ell_k)$ behaves like $n - \sqrt{k \, n}$ for increasing $n$. The variance of $\ell_k$ is asymptotically equivalent to $\left(k-\frac{\pi k^2}{16^k}\binom{2k}{k}^2\right)n$.
Our approach has used a well known correspondence between trees and permutations, in which the $k$th largest length of an external branch of an ordered history of size $n$ is the value of the $k$th largest non-peak entry in the associated permutation of size $n-1$ (Section \ref{sec:2}). Thus, Proposition \ref{firstprop} and Theorem \ref{teo} also contribute to the study of the probabilistic properties of the value-peaks of permutations investigated in \cite{peaks}.
In this paper we focused only on the {\it discrete} length of the external branches of random trees. Nevertheless, our results can also find applications in the analysis of the {\it time} length of the external branches of ``coalescent'' trees \cite{hudson, kingman:1982, tajima}. A coalescent tree of size $n$ is a pair consisting of a random Yule history $t$ of $n$ leaves and a sequence $( \tau_2, \dots, \tau_{n} )$ of independent exponentially distributed random variables assigning a time length to the different layers of $t$ (Fig. \ref{layers}). The variable $\tau_i$ gives the time length of the layer in which exactly $i$ branches of $t$ coexist, and its mean is $\mathbb{E}(\tau_i) = 1/\lambda_i$, with $\lambda_i = {{i}\choose{2}}$. Hence, the expected value of the time length of an external branch of $t$ of discrete length $s$ can be calculated as
$\sum_{i=n+1-s}^{n} \mathbb{E}(\tau_i) = \frac{2}{n-s} - \frac{2}{n}.$ By using our finding that ${\mathbb E}(\ell_k) \approx n - \sqrt{k \, n}$, we thus see that, in a random coalescent tree of large size $n$, the mean time length of the $k$th longest external branch will behave roughly like $\frac{2}{\sqrt{k \, n}}.$
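For completeness, the closed form for the expected time length used above follows by telescoping, since $\mathbb{E}(\tau_i)=1/\binom{i}{2}=2/[i(i-1)]$:
\[
\sum_{i=n+1-s}^{n} \frac{2}{i(i-1)} = 2\sum_{i=n+1-s}^{n}\left(\frac{1}{i-1}-\frac{1}{i}\right) = \frac{2}{n-s}-\frac{2}{n}.
\]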
Yule and coalescent trees enable the simulation of the spread of mutations in a population under neutral evolution.
Singleton mutations---i.e. mutations affecting single individuals---can be modeled as random events occurring along the external branches of the tree. Doubleton mutations---which affect pairs of individuals---take place along those branches of the tree from which exactly two leaves descend.
It would be of interest to extend the calculations of this article to investigate the length of this additional type of branches.
\medskip
\noindent
{\footnotesize
{\bf Acknowledgments} Support to MF was provided by the MOST (Ministry of Science and Technology, Taiwan) grant MOST-111-2115-M-004-002-MY2.}
\section{Introduction}
\label{sec:intro}
Communication networks are ubiquitous in contemporary society, from the widely used Internet and 4G/5G cellular networks to the fast-growing Internet of Things (IoT) networks. The growth of communication networks has gone beyond the imagination of their designers. For example, based on the Cisco Annual Internet Report (2018–2023) White Paper, nearly two-thirds of the global population will have Internet access by 2023~\footnote{\url{https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html}}. Operating and managing such giant networks is very challenging, and new network types keep bringing new problems; for example, manual configuration becomes infeasible or inefficient in modern networks. While research on communication networks has a long history, it is still an active area with a steady stream of new ideas, e.g., Software-Defined Networking (SDN) and the Space-Air-Ground Integrated Network. The challenges include not only traditional ones, e.g., routing and load balancing, power control and resource allocation, but also emerging ones, e.g., virtual network embedding.
To solve these challenges, various solutions have been introduced to the networking domain, including deep learning~\cite{goodfellow2016deep}. Represented by deep neural networks, deep learning has achieved great success in many problems, especially in image recognition, natural language processing, and time series problems~\cite{he2016deep, young2018recent, jiang2018geospatial, jiang2020applications}. Deep learning models have also been applied in various communication networks and proven extremely useful for a series of problems, e.g., network design, traffic prediction, resource allocation, etc~\cite{zappone2019wireless, zhang2019deep, wang2020thirty, abbasi2021deep}. However, in these studies, the network topology structure is not fully utilized, because most deep neural networks are designed for Euclidean structure data, e.g., images and videos. To amend this shortcoming, graph-based deep learning, represented by Graph Neural Networks (GNNs), has been proposed for non-Euclidean structure data in recent years~\cite{wu2020comprehensive, zhou2020graph, zhang2020deeplearning}. GNNs are suitable for problems in communication networks because of their strong ability to capture the spatial information hidden in the network topology and their generalization ability to unseen topologies when the networks are dynamic. As discussed in this survey, GNN-based solutions are proven effective for a wide range of problems in different network scenarios and are worth exploring further in the future.
To the best of the authors' knowledge, this paper presents the first literature survey of graph-based deep learning studies for problems in communication networks, covering a total of 81 papers ranging from 2016 to 2021. The scope of communication networks in this survey is broad; thus, the surveyed papers are selected from a wide range of journals and conferences. Because applying graph-based deep learning methods is still a rapidly developing research topic, we also include preprints that have not yet gone through the traditional peer review process (e.g., arXiv papers) to present the latest progress.
The surveyed papers are classified into three major scenarios, as organized in Figure~\ref{fig:fig1}. Some of the common problems are discussed in two or three scenarios, e.g., network modeling, routing, and traffic prediction. The other problems are only mentioned in one of these scenarios. This kind of organization is not exclusive, because the idea of software-defined networking can be applied to both wireless and wired networks. Graph-based deep learning is frequently used under the assumption of future softwarized networks, without a strict constraint on which type of substrate network is being used. By taking the software-defined networking scenario as a separate section, the relevant discussion would be inspiring for future work in both the wireless and wired scenarios.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{The organization of this survey.}
\label{fig:fig1}
\end{figure}
In this survey, the problem to solve, the graph-based solution and the specific GNN used in each study are identified and summarized. We also try to give the future directions of applying GNNs in communication networks. Our aim is to provide an up-to-date summary of related work and a useful starting point for new researchers interested in related topics. In addition to this paper, we have also created an open GitHub repository~\footnote{\url{https://github.com/jwwthu/GNN-Communication-Networks}} to update new papers continuously.
Our contributions are summarized as follows:
1) \textit{Comprehensive Review}: We present the up-to-date comprehensive review of graph-based deep learning solutions for problems in various types of communication networks, in the past six years (2016-2021).
2) \textit{Well-organized Summary}: We summarize the problem to solve, the graph-based solution and the GNNs used in each study in a unified format, which would be useful as a reference manual.
3) \textit{Future Directions}: We propose several potential future directions for researchers interested in relevant topics.
For reference, the list of the acronyms frequently used in this survey is summarized in Table~\ref{tab:acronyms}.
\begin{center}
\begin{longtable}{lp{10cm}}
\caption{The list of the acronyms used in this survey.} \label{tab:acronyms} \\
\hline Acronym & Full Name \\ \hline
\endfirsthead
\multicolumn{2}{c}%
{{\bfseries \tablename\ \thetable{} -- continued from previous page}} \\
\hline Acronym & Full Name \\ \hline
\endhead
\hline \multicolumn{2}{r}{{Continued on next page}} \\ \hline
\endfoot
\hline
\endlastfoot
BGP & Border Gateway Protocol \\
DC-STGCN & Dual-Channel based Graph Convolutional Network \\
DCRNN & Diffusion Convolutional Recurrent Neural Network \\
DL & Deep Learning \\
DQN & Deep Q Network \\
DRL & Deep Reinforcement Learning \\
FDS-MARL & Fully Decentralized Soft Multi-Agent Reinforcement Learning \\
GASTN & Graph Attention Spatial-Temporal Network \\
GAT & Graph Attention Network \\
GCLR & GNN based Cross Layer optimization by Routing \\
GCN & Graph Convolutional Network \\
GE & Graph Embedding \\
GGS-NN & Gated Graph Sequence Neural Network \\
GIN & Graph Isomorphism Network \\
GN & Graph Network \\
GNN & Graph Neural Network \\
HIGNN & Heterogeneous Interference Graph Neural Network \\
HetGAT & Heterogeneous Graph Attention Network \\
IGCNet & Interference Graph Convolutional Neural Network \\
ML & Machine Learning \\
MPGNNs & Message Passing Graph Neural Networks \\
MPLS & Multiprotocol Label Switching \\
MPNN & Message Passing Neural Network \\
MSTNN & Multi-scale Spatial-Temporal Graph Neural Network \\
NFV & Network Function Virtualization \\
REGNNs & Random Edge Graph Neural Networks \\
S-RNN & Structural-RNN \\
SDN & Software Defined Networking \\
SFC & Service Function Chaining \\
SGCRN & Spatiotemporal Graph Convolutional Recurrent Network \\
TCN & Temporal Convolutional Network \\
TGCN & Temporal Graph Convolutional Network \\
UWMMSE & Unfolded iterative Weighted Minimum Mean Squared Error \\
VNE & Virtual Network Embedding \\
VNF & Virtual Network Function \\
\hline
\end{longtable}
\end{center}
The remainder of this paper is organized as follows. In Section~\ref{sec:method}, we introduce the progress of conducting literature search and selection. In Section~\ref{sec:gnns}, we introduce the GNNs used in the reviewed studies. In Section~\ref{sec:wireless}, we summarize the studies in the wireless networks. In Section~\ref{sec:wired}, we summarize the studies in the wired networks. In Section~\ref{sec:sdn}, we summarize the studies in the software-defined networks. In Section~\ref{sec:direction}, we point out future directions. In Section~\ref{sec:conclusion}, we draw the conclusion.
\section{Survey Methodology}
\label{sec:method}
To collect relevant studies, the literature is searched with various combinations of two groups of keywords. The first group is about the graph-based deep learning techniques, e.g., ``Graph", ``Graph Embedding", ``Graph Neural Network", ``Graph Convolutional Network", ``Graph Attention Networks", ``GraphSAGE", ``Message Passing Neural Network", ``Graph Isomorphism Network", etc. The second group is about the communication networks as well as specific problems, e.g., ``Wireless Network", ``Cellular Network", ``Computer Network", ``Software Defined Networking", ``Traffic Prediction", ``Routing", ``Service Function Chaining", ``Virtual Network Function", etc. The databases from major publishers are carefully covered one by one, e.g., ACM, Elsevier, IEEE, Springer, Wiley, etc. To track the citation relationship among these papers and avoid missing records from smaller publishers, Google Scholar is also leveraged in the literature search process.
A total of 81 papers are finally selected and covered in this survey, with the earliest one published in 2016, as shown in Figure~\ref{fig:paper_count}. Most of the surveyed papers were published in the recent three years, i.e., 2019, 2020, and the first five months of 2021. Compared with 14 papers in 2019, there is a 207\% growth of papers in 2020, with a total of 43 papers. While there are only 20 papers in the first five months of 2021, it is expected that more relevant studies will be published or publicized in the remaining months, with the growing impact of graph-based deep learning methods being applied in the networking domain. We also show the paper statistics for different network types in Figure~\ref{fig:type_count}. The wireless network scenario draws more attention than the other two, and this trend may continue in 2021.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{paper_count.pdf}
\caption{The paper count of different types annually.}
\label{fig:paper_count}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{type_count.pdf}
\caption{The paper count of different network types annually.}
\label{fig:type_count}
\end{figure}
For a full coverage of relevant studies, workshop, conference, and journal papers as well as preprint papers are covered in this survey, to track the latest achievements as well as the ongoing progress. The journal list (alphabetically) is shown in Table~\ref{tab:journal}. The conference list (alphabetically) is shown in Table~\ref{tab:conference}. The workshop list (alphabetically) is shown in Table~\ref{tab:workshop}. All the preprint papers are from the arXiv platform~\footnote{\url{https://arxiv.org/}}. Since we cover a wide area with various communication networks, the papers are selected from various publications and conference proceedings, some of which focus on telecommunications or related subjects while others are multidisciplinary. As an emerging topic that has not yet been widely adopted, graph-based deep learning appears only in recent years for solving networking-related problems, with only one paper selected from most journals or conferences.
\begin{table}[!htb]
\centering
\caption{List of source journals and the corresponding studies we cover in this study.}
\label{tab:journal}
\begin{tabular}{ll}
\hline
Journal Name & Studies \\
\hline
Computer Networks & \cite{sun2020efficient, li2020traffic} \\
Electronics & \cite{pan2021dc} \\
IEEE Access & \cite{nakashima2020deep, zhu2020gclr} \\
IEEE Communications Letters & \cite{cheng2021discovering, sun2020combining, simsek2020iab, rusek2018message} \\
IEEE Internet of Things Journal & \cite{liu2020dynamic} \\
IEEE Journal on Selected Areas in Communications & \cite{fang2019idle, rusek2020routenet, yan2020automatic, shen2020graph} \\
IEEE Systems Journal & \cite{zhuang2019toward} \\
IEEE Transactions on Industrial Informatics & \cite{wang2020graph} \\
IEEE Transactions on Information Forensics and Security & \cite{shen2021accurate} \\
IEEE Transactions on Mobile Computing & \cite{sun2021mobile, he2020graph} \\
IEEE Transactions on Network Science and Engineering & \cite{geyer2020graph} \\
IEEE Transactions on Network and Service Management & \cite{mijumbi2017topology} \\
IEEE Transactions on Signal Processing & \cite{eisen2020optimal} \\
IEEE Transactions on Vehicular Technology & \cite{yan2020cooperative} \\
IEEE Transactions on Wireless Communications & \cite{chowdhury2021unfolding, lee2020graph} \\
International Journal of Network Management & \cite{kim2021graph} \\
Performance Evaluation & \cite{geyer2019deepcomnet} \\
Sensors & \cite{zhao2020graph} \\
Transactions on Emerging Telecommunications Technologies & \cite{zhao2020spatiotemporal} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!htb]
\centering
\caption{List of source conferences and the corresponding studies we cover in this study.}
\label{tab:conference}
\begin{tabular}{p{12cm}l}
\hline
Conference Name & Studies \\
\hline
ACM SIGCOMM conference & \cite{suarez2019challenging} \\
ACM Symposium on SDN Research (SOSR) & \cite{rusek2019unveiling} \\
Asia-Pacific Network Operations and Management Symposium (APNOMS) & \cite{kim2020graph, heo2020graph} \\
IEEE Annual Consumer Communications \& Networking Conference (CCNC) & \cite{rkhami2021learn} \\
IEEE Conference on Computer Communications (INFOCOM) & \cite{geyer2019deeptma} \\
IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN) & \cite{jalodia2019deep} \\
IEEE Global Communications Conference (GLOBECOM) & \cite{he2020resource, he2019graph} \\
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) & \cite{chowdhury2021efficient, eisen2020transferable} \\
IEEE International Conference on Communications (ICC) & \cite{yang2020mstnn,sun2020deepmigration, lee2020wireless, tekbiyik2021channel, wang2021drl} \\
IEEE Symposium on Computers and Communications (ISCC) & \cite{geyer2020robustness} \\
IEEE Vehicular Technology Conference (VTC) & \cite{fu2020wireless} \\
IEEE Wireless Communications and Networking Conference (WCNC) & \cite{guo2021learning, shao2021graph, hou2021user} \\
IFIP Networking Conference (IFIP Networking) & \cite{geyer2019deepmpls} \\
International Conference on Information Networking (ICOIN) & \cite{sawada2020network, suzuki2020estimating} \\
International Conference on Information and Communication Technology Convergence (ICTC) & \cite{rafiq2020service} \\
International Conference on Network and Service Management (CNSM) & \cite{kim2020graph1, habibi2020accelerating, mijumbi2016connectionist} \\
International Conference on Real-Time Networks and Systems (RTNS) & \cite{mai2021improvements} \\
International Conference on Wireless Communications and Signal Processing (WCSP) & \cite{yang2020noval} \\
International Conference on emerging Networking EXperiments and Technologies (CoNEXT) & \cite{badia2019towards} \\
International Symposium on Networks, Computers and Communications (ISNCC) & \cite{rkhami2020use} \\
Opto-Electronics and Communications Conference (OECC) & \cite{gui2020optical} \\
\hline
\end{tabular}
\end{table}
\begin{table}[!htb]
\centering
\caption{List of source workshops and the corresponding studies we cover in this study.}
\label{tab:workshop}
\begin{tabular}{p{12cm}l}
\hline
Workshop Name & Studies \\
\hline
AutoML for Networking and Systems Workshop of MLSys Conference & \cite{zhou2020auto} \\
IEEE Globecom Workshops (GC Wkshps) & \cite{shen2019graph} \\
IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC) & \cite{eisen2019large, naderializadeh2020wireless} \\
Workshop on Big Data Analytics and Machine Learning for Data Communication Networks & \cite{geyer2018learning} \\
Workshop on Network Meets AI \& ML & \cite{bahnasy2020deepbgp, xiao2020neural} \\
\hline
\end{tabular}
\end{table}
\section{Graph-based Deep Learning Introduction}
\label{sec:gnns}
In this section, we give a short introduction of the graph-based deep learning techniques used in the surveyed papers. The relevant graph-based deep learning models are listed chronologically in Figure~\ref{fig:fig2}. The list of the acronyms used in Figure~\ref{fig:fig2} is shown separately in Table~\ref{tab:acronyms_figure}. Please note that the listed conferences may lag behind the preprint versions, which could be released one or two years earlier. Since the research on graph-based deep learning is still proceeding at a fast pace, with new models appearing continuously, we do not intend to conduct a thorough literature review of graph-based models here; instead, we give a short introduction to the GNNs used in the surveyed studies. For those who are interested in the whole picture of graph neural networks and a deeper discussion of the technical details, recent surveys~\cite{wu2020comprehensive, zhou2020graph, zhang2020deeplearning} are recommended.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{fig2.pdf}
\caption{The relevant graph-based deep learning models of this survey.}
\label{fig:fig2}
\end{figure}
\begin{table}[!htb]
\centering
\caption{The list of the acronyms used in Figure~\ref{fig:fig2}}
\label{tab:acronyms_figure}
\begin{tabular}{p{2.6cm}p{10cm}}
\hline
Acronym & Full Name \\
\hline
CVPR & IEEE Conference on Computer Vision and Pattern Recognition \\
ICLR & International Conference on Learning Representations \\
ICML & International Conference on Machine Learning \\
IEEE Trans Neural Networks & IEEE Transactions on Neural Networks \\
IEEE Trans Cybern & IEEE Transactions on Cybernetics \\
NIPS & Advances in Neural Information Processing Systems \\
WWW & The World Wide Web Conference \\
\hline
\end{tabular}
\end{table}
As a pioneering study, Graph Neural Network (GNN) is introduced in~\cite{scarselli2008graph}, which extends the application of neural networks from Euclidean structure data to non-Euclidean structure data. GNN is based on the message passing mechanism, in which each node updates its state by exchanging information with its neighbors until a stable state is reached. Afterwards, various GNN variants are proposed, e.g., Graph Convolutional Network (GCN) and Graph Attention Network (GAT).
We first introduce the Graph Embedding (GE) models. In mathematics, embedding is a mapping function $f: X \rightarrow Y$, in which a point in one space $X$ is mapped to another space $Y$. Embedding is usually performed from a high-dimensional abstract space to a low-dimensional space. Generally speaking, the representation mapped to the low-dimensional space is easier for neural networks to handle. In the case of graphs, graph embedding is used to transform nodes, edges, and their features into vector space, while preserving properties such as the graph structure and information as much as possible. For the studies covered in this survey, several graph embedding models are involved, including structure2vec~\cite{dai2016discriminative}, GraphSAGE~\cite{hamilton2017inductive}, and Graph Embedding (GE)~\cite{pan2019learning}. As a transductive learning approach, structure2vec~\cite{dai2016discriminative} is based on the idea that two nodes are similar if the two sequences composed of all their neighbors are similar. GraphSAGE~\cite{hamilton2017inductive} is a representative of inductive learning. It does not directly learn the representation of each node, but learns the aggregation function instead. For a new node, its embedding representation is generated directly, without the need to learn again. Furthermore, a novel adversarial regularized framework is proposed for graph embedding in~\cite{pan2019learning}.
Then we introduce the Graph Convolutional Network (GCN) models. GCN extends the convolution operation from traditional data (such as images) to graph data, inspired by the convolutional neural networks that are extremely successful for image-based tasks. The core idea is to learn a function mapping, through which a node can aggregate its own features and the features of its neighbors to generate its new representation. Generally speaking, there are two types of GCN models, namely, spectral-based and spatial-based. Based on graph signal processing, spectral-based GCNs define the convolution operation in the spectral domain, e.g., the Fourier domain. To conduct the convolution operation, a graph signal is transformed to the spectral domain by the graph Fourier transform. Then the result after the convolution is transformed back by the inverse graph Fourier transform. Several spectral-based GCNs are used in the surveyed studies, e.g., Graph Neural Network (GNN)~\cite{henaff2015deep}, ChebNet~\cite{defferrard2016convolutional}, and Graph Convolutional Network (GCN)~\cite{kipf2017semi}, which improve the convolution operation with different techniques. By introducing a parameterization with smooth coefficients, GNN~\cite{henaff2015deep} attempts to make the spectral filters spatially localized. ChebNet~\cite{defferrard2016convolutional} approximates the spectral filter by a truncated expansion in terms of Chebyshev polynomials up to the $K$th order. To avoid overfitting, $K=1$ is used in GCN~\cite{kipf2017semi}.
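As a minimal illustration (our own sketch, not code from any surveyed paper), the propagation rule of GCN~\cite{kipf2017semi}, namely $H' = \sigma(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}HW)$ with $\hat{A}=A+I$, can be written in a few lines of NumPy:
\begin{verbatim}
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.maximum(0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# toy 4-node graph with 3-dimensional node features and 2 output channels
A = np.array([[0, 1, 0, 0], [1, 0, 1, 1],
              [0, 1, 0, 1], [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H, W = rng.normal(size=(4, 3)), rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)           # (4, 2)
\end{verbatim}
In practice, the weight matrix $W$ is learned by gradient descent and several such layers are stacked; the forward pass, however, is essentially this normalized neighborhood averaging.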
Several spatial-based GCNs are also used in the surveyed studies, which define the convolution operation directly on the graph based on the graph topology. Attention-based GNN models can be categorized into the spatial-based type. Graph Attention Network (GAT)~\cite{velivckovic2018graph} incorporates the attention mechanism into the propagation step and further utilizes the multi-head attention mechanism to stabilize the learning process. To unify different spatial-based variants, Message Passing Neural Network (MPNN)~\cite{gilmer2017neural} proposes the usage of message passing functions, which contain a message passing phase and a readout phase. Graph Network (GN)~\cite{battaglia2018relational} also unifies many GNN variants, by learning node-level, edge-level and graph-level representations. Graph Isomorphism Network (GIN)~\cite{xu2019powerful} takes a step further by pointing out that previous MPNN-based methods are incapable of distinguishing different graph structures based on the graph embeddings they produce, and by adjusting the weight of the central node with a learnable parameter to amend this drawback.
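To make the two phases of MPNN~\cite{gilmer2017neural} concrete, the following simplified sketch (our own; the message, update, and readout functions are placeholders that would be learnable networks in practice) shows one forward pass over a toy graph:
\begin{verbatim}
import numpy as np

def mpnn_forward(edges, h, T=2):
    """T rounds of message passing, then a sum-pooling readout.

    edges: directed (u, v) pairs; h: dict node -> feature vector.
    """
    message = lambda h_u, h_v: h_u                 # placeholder message function
    update = lambda h_v, m: 0.5 * h_v + 0.5 * m    # placeholder update function
    for _ in range(T):
        msgs = {v: np.zeros_like(h[v]) for v in h}
        for u, v in edges:                         # message passing phase
            msgs[v] += message(h[u], h[v])
        h = {v: update(h[v], msgs[v]) for v in h}
    return sum(h.values())                         # readout phase

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
h = {i: float(i) * np.ones(4) for i in range(3)}
print(mpnn_forward(edges, h))
\end{verbatim}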
Other than the convolution operation, the recurrent operation can also be applied in the propagation module of GNNs. The key difference is that the convolution operations use different weights in each layer, while the recurrent operations share the same weights across iterations. For example, Gated Graph Sequence Neural Network (GGS-NN)~\cite{li2016gated} uses Gated Recurrent Units (GRUs) in the propagation step.
In realistic networks, the network topology may change occasionally, e.g., with the addition or deletion of routers, which corresponds to the case of dynamic graphs, instead of static graphs. Several GNN variants are proposed for dealing with dynamic graphs. Diffusion Convolutional Recurrent Neural Network (DCRNN)~\cite{li2018dcrnn_traffic} leverages GNNs to collect the spatial information, which is further used in sequence-to-sequence models. By extending the static graph structure with temporal connections, Structural-RNN (S-RNN)~\cite{jain2016structural} can learn the spatial and temporal information simultaneously.
The last case to discuss is heterogeneous graphs, where the nodes and edges are multi-typed or multi-modal. For this case, the meta-path is introduced as a path scheme which determines the type of node in each position of the path; one heterogeneous graph can then be reduced to several homogeneous graphs, on which graph learning algorithms can be performed. In Heterogeneous Graph Attention Network (HetGAT)~\cite{wang2019heterogeneous}, graph attention is performed on the meta-path-based neighbors, and a semantic attention over the output embeddings of nodes under all meta-path schemes is used to generate the final representation of nodes.
\section{Wireless Networks}
\label{sec:wireless}
In this section, we focus on the relevant studies in wireless network scenarios. By wireless networks, we refer to those transmitting information through wireless data connections without using a cable, including wireless local area networks, cellular networks, wireless ad hoc networks, cognitive radio networks, device-to-device (D2D) networks, satellite networks, vehicular networks, etc. Some problems are ubiquitous in different forms of wireless networks, e.g., power control. We first discuss these problems in the general wireless network scenario, and then the papers focusing on a specific wireless network scenario.
\subsection{General Wireless Network}
Compared with other deep learning models, GNNs have the advantage of handling topology information, which may not be leveraged in previous studies with Euclidean deep learning models. In densely deployed wireless local area networks (WLANs), the channel resource is limited. To increase the system throughput, the channels must be allocated efficiently. In~\cite{nakashima2020deep}, the features of the channel vectors with topology information are extracted with a graph convolutional model. Then a deep reinforcement learning method is developed for channel allocation, which utilizes the features extracted by GCN. Topology information is also used in~\cite{zhang2019topology} for wireless network optimization. Combining a graph embedding unit and a deep feed-forward network, a two-stage topology-aware framework is proposed and validated for the network flow optimization problem, which achieves a trade-off between computation time and inference performance.
Compared with wired communications, wireless transmission may be imperfect with more errors. While GNN models may be applied in wireless networks, the transmission uncertainty would deteriorate the robustness of GNN models. This situation is considered in~\cite{lee2021decentralized}, in which decentralized GNN binary classifiers are used for multiple problems, e.g., the power control problem or the wireless link scheduling problem. To handle this situation, re-transmission mechanisms are proposed to enhance the robustness of GNN classifiers, for both uncoded and coded wireless communication systems.
Power allocation or control is an important topic in the wireless network scenario, in which the devices connected to the network may be powered by batteries with limited energy storage. Transmissions in free space may also interfere with each other if the power is not properly controlled. To handle this problem, multiple GNN-based solutions are proposed~\cite{eisen2019large, eisen2020transferable, eisen2020optimal, chowdhury2021efficient, chowdhury2021unfolding, nikoloska2021fast, shen2019graph, shen2020graph, naderializadeh2020wireless}. In a series of studies~\cite{eisen2019large, eisen2020transferable, eisen2020optimal, nikoloska2021fast}, Random Edge Graph Neural Networks (REGNNs) are selected as the optimal solution for the power allocation and control optimization problem, with various system constraints. REGNNs outperform baselines with an essential permutation invariance property, which is desirable in networks of growing size. For the optimal power allocation in a single-hop ad hoc wireless network, an iterative weighted minimum mean squared error method named UWMMSE is proposed, in which GNNs are used to learn the model parameters~\cite{chowdhury2021efficient, chowdhury2021unfolding}. Compared with the classic power control algorithm, UWMMSE effectively reduces the computational complexity without harming the allocation performance. For solving the similar problem in an unsupervised approach, Interference Graph Convolutional Neural Network (IGCNet) is proposed and validated in~\cite{shen2019graph}, which is robust to imperfect Channel State Information (CSI). Beamforming is further considered in \cite{shen2020graph}, in which Message Passing Graph Neural Networks (MPGNNs) are proposed to solve both the power control and beamforming problems. Similarly, in an unsupervised approach to learning optimal power allocation decisions, a primal-dual counterfactual optimization approach is proposed in~\cite{naderializadeh2020wireless}, in which GNNs are used to handle the network topology.
To sum up, the papers in the general wireless network scenario are listed in Table~\ref{tab:wireless}, together with the target problem, the proposed solution, and the relevant GNN component(s). A similar tabular format is used for the paper summaries in the following sections.
\begin{table}[!htb]
\centering
\caption{List of the papers in the wireless network scenario.}
\label{tab:wireless}
\begin{tabular}{|p{3.5cm}|p{2cm}|p{3.3cm}|p{3cm}|}
\hline
Problem & Paper & Solution & GNN \\
\hline
Binary Classification & \cite{lee2021decentralized} & Decentralized GNN & GCN~\cite{kipf2017semi}, GIN~\cite{xu2019powerful} \\
\hline
Channel Allocation & \cite{nakashima2020deep} & DRL with GCN & ChebNet~\cite{defferrard2016convolutional} \\
\hline
Network Flow Optimization & \cite{zhang2019topology} & Two-stage Topology-aware ML Framework & MPNN~\cite{gilmer2017neural} \\
\hline
Power Allocation & \cite{eisen2019large, eisen2020transferable, eisen2020optimal} & REGNN & GNN~\cite{henaff2015deep} \\
\hline
Power Allocation & \cite{chowdhury2021efficient, chowdhury2021unfolding} & UWMMSE Method & GCN~\cite{kipf2017semi} \\
\hline
Power Control & \cite{nikoloska2021fast} & REGNN & GNN~\cite{henaff2015deep} \\
\hline
Power Control & \cite{shen2019graph} & IGCNet & GIN~\cite{xu2019powerful} \\
\hline
Power Control and Beamforming & \cite{shen2020graph} & MPGNNs & GIN~\cite{xu2019powerful}, GCN~\cite{kipf2017semi} \\
\hline
Power Control & \cite{naderializadeh2020wireless} & Unsupervised Primal-dual Counterfactual Optimization & GNN~\cite{henaff2015deep} \\
\hline
\end{tabular}
\end{table}
\subsection{Cellular Network}
Cellular networks are discussed separately in this part, not only because more than ten papers focus on this specific scenario, but also because of the wide application of cellular networks. For example, there were 5.95 billion LTE subscriptions worldwide by the end of Q4 2020~\footnote{\url{https://gsacom.com/paper/lte-and-5g-subscribers-march-2021-q4/}}. While the growing trend may be affected by COVID-19, cellular networks remain one of the major approaches for accessing the Internet.
Driven by this huge demand, research in the cellular network scenario keeps increasing, including work leveraging graph-based deep learning models for traditional communication problems, e.g., resource allocation, power control, and traffic prediction. Driven by ideas from SDN, some new problems also appear in the cellular network scenario, e.g., network slicing and virtual network embedding. Both types of problems have been investigated in the surveyed papers.
To fully utilize the network resources, multipath TCP is considered for 5G networks, which transfers packets over multiple paths concurrently. However, network heterogeneity in 5G makes the multipath routing problem too complex for existing routing algorithms to handle. A GNN-based multipath routing model is proposed as the solution in~\cite{zhu2020gclr}. Experiments under the SDN framework demonstrate that the GNN-based model achieves a significant throughput improvement.
Traffic prediction is also considered in cellular networks, with GNN-based solutions proposed in recent years~\cite{he2019graph, he2020graph, pan2021dc, sun2021mobile}. As a prediction problem, the temporal dependencies may be modeled by a recurrent neural network, e.g., Long Short Term Memory (LSTM) or Gated Recurrent Unit (GRU), and different attention mechanisms may also be incorporated. As an improvement over baselines, GNNs are capable of modeling the spatial correlation between different nodes, e.g., cell towers or access points. Different structures have been explored in existing studies, e.g., GAT in~\cite{he2019graph, he2020graph}, GCN in~\cite{pan2021dc}, and GraphSAGE in~\cite{sun2021mobile}.
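To make the pattern concrete, the following is a minimal sketch (in PyTorch) of the GNN-plus-RNN architecture described above: a single graph convolution step captures the spatial correlation between cells, and a GRU captures the temporal dependency. All module names, shapes, and hyperparameters are illustrative assumptions of ours, not taken from any of the surveyed papers.
\begin{verbatim}
# Minimal sketch of the GNN+RNN pattern for cellular traffic prediction.
# One GCN propagation step models spatial correlation between cells;
# a GRU models temporal dependency. Shapes and names are illustrative.
import torch
import torch.nn as nn

class GCNGRUPredictor(nn.Module):
    def __init__(self, hidden_dim=32):
        super().__init__()
        self.lift = nn.Linear(1, hidden_dim)      # per-node feature lift
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, 1)   # next-step traffic

    def forward(self, x, adj_norm):
        # x: (batch, time, nodes); adj_norm: normalized adjacency
        b, t, n = x.shape
        h = self.lift(x.unsqueeze(-1))                   # (b, t, n, h)
        h = torch.einsum("ij,btjh->btih", adj_norm, h)   # GCN step
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)  # per-node sequences
        out, _ = self.gru(h)
        return self.readout(out[:, -1]).view(b, n)       # (batch, nodes)
\end{verbatim}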
Energy consumption is another concern for 5G networks, which are designed to enable denser networks with microcells, femtocells, and picocells. To better control the transmission power, GNN-based power control solutions are proposed in~\cite{guo2021learning, hou2021user}. Heterogeneous GNNs (HetGNNs) with a novel parameter sharing scheme are proposed for power control in multi-user multi-cell networks~\cite{guo2021learning}. Taking a step further, the joint optimization problem of user association and power control of the downlink is considered in~\cite{hou2021user}, in which an unsupervised GNN is used for power allocation and the spectral clustering algorithm is used for user association.
Green network management is proposed to improve energy efficiency. A specific problem, Idle Time Windows (ITWs) prediction, is considered in~\cite{fang2019idle}. To capture the spatio-temporal features, a novel Temporal Graph Convolutional Network (TGCN) is proposed for learning the network representation, which improves the prediction performance. Also targeting denser cell sites, the Integrated Access and Backhaul (IAB) architecture defined by the 3rd Generation Partnership Project (3GPP) is used in~\cite{simsek2020iab}. The IAB topology design is formulated as a graph optimization problem, and a combination of deep reinforcement learning and graph embedding is proposed for solving it efficiently.
The integration of satellite-terrestrial networks is proposed for the future 6G network. In this direction, a High Altitude Platform Station (HAPS) is a network node that operates in the stratosphere at an altitude of around 20 km and is instrumental for providing communication services~\cite{kurt2021vision}. For HAPS, GAT is first utilized for channel estimation in~\cite{tekbiyik2021graph, tekbiyik2021channel}; the proposed GAT estimator outperforms the traditional least squares method in full-duplex channel estimation and is also robust to hardware imperfections and changes in small-scale fading characteristics.
As a softwarized concept, network slicing has been proposed for 5G networks, using network virtualization to divide a single network connection into multiple distinct virtual connections that provide services with different Quality-of-Service (QoS) requirements. However, the increasing network complexity is becoming a huge challenge for deploying network slicing. A scalable Digital Twin (DT) technology with GNN is developed in~\cite{wang2020graph} for mirroring the network behavior and predicting the end-to-end latency, which can also be applied in unseen network situations. Taking a step further, GAT is incorporated into a deep Q-network (DQN) for designing an intelligent resource management strategy in~\cite{shao2021graph}, which is proven effective through simulations.
Virtual Network Embedding (VNE) is also a softwarized concept, which can be used for modeling the resource allocation of 5G network slices. Since the VNE problem is NP-hard, both heuristic methods and deep learning models have been proposed for it. Deep Reinforcement Learning (DRL) and GCN are combined for solving this problem in~\cite{rkhami2020use, rkhami2021learn}, in which the episodic Markov decision process is solved by different GCN models.
To sum up, the papers in the cellular network scenario are listed in Table~\ref{tab:cellular}.
\begin{table}[!htb]
\centering
\caption{List of the papers in the cellular network scenario.}
\label{tab:cellular}
\begin{tabular}{|p{4cm}|p{1.5cm}|p{3.5cm}|p{2.5cm}|}
\hline
Problem & Paper & Solution & GNN \\
\hline
Channel Estimation & \cite{tekbiyik2021graph, tekbiyik2021channel} & GAT-based Estimator & GAT~\cite{velivckovic2018graph} \\
\hline
Idle Time Windows Prediction & \cite{fang2019idle} & TGCN & GCN~\cite{kipf2017semi} \\
\hline
Integrated Access and Backhaul Topology Design & \cite{simsek2020iab} & DRL with Graph Embedding & structure2vec~\cite{dai2016discriminative} \\
\hline
Network Modeling, Network Slicing & \cite{wang2020graph} & GNN-based Digital Twin & GraphSAGE~\cite{hamilton2017inductive} \\
\hline
Network Slicing & \cite{shao2021graph} & DQN with GAT & GAT~\cite{velivckovic2018graph} \\
\hline
Power Control & \cite{guo2021learning} & Heterogeneous GNNs & HetGAT~\cite{wang2019heterogeneous} \\
\hline
Routing & \cite{zhu2020gclr} & GCLR & MPNN~\cite{gilmer2017neural} \\
\hline
Traffic Prediction & \cite{sun2021mobile} & Graph-based TCN & GraphSAGE~\cite{hamilton2017inductive} \\
\hline
Traffic Prediction & \cite{he2019graph, he2020graph} & GASTN & S-RNN~\cite{jain2016structural} \\
\hline
Traffic Prediction & \cite{pan2021dc} & DC-STGCN & GCN~\cite{kipf2017semi} \\
\hline
User Association, Power Control & \cite{hou2021user} & Unsupervised Graph Model & GraphSAGE~\cite{hamilton2017inductive} \\
\hline
VNE & \cite{rkhami2020use, rkhami2021learn} & DRL with GCN & GCN~\cite{kipf2017semi} \\
\hline
\end{tabular}
\end{table}
\subsection{Other Wireless Networks}
In this part, we discuss other forms of wireless networks, each with its own challenges and solutions.
The first case is the cognitive radio network, which aims to increase spectrum utilization by letting secondary users opportunistically use the free spectrum that is not occupied by primary users. In this scenario, the challenge is to improve resource utilization without degrading the quality of service (QoS) of primary users. To solve this challenge, a joint channel selection and power adaptation scheme is proposed in~\cite{zhao2020graph}, in which GCN is leveraged to extract the crucial interference features. Based on the estimated CSI, a DRL-based framework is further used to allocate spectrum resources efficiently.
The second case is the Device-to-Device (D2D) network, which uses direct communication between two users or devices, without traversing the base station or router. Without deploying additional infrastructure, D2D networks are promising for providing communication services with ultra-low latency. However, many challenges remain before this objective can be achieved. To minimize the content fetching delay in D2D networks, the joint optimization of cooperative caching and fetching is considered in~\cite{yan2020cooperative}, and a DRL-based algorithm is proposed in which GAT is used for cooperative inter-agent coordination. For power control and beamforming in D2D networks, an unsupervised learning-based framework is proposed in~\cite{zhang2021scalable}, in which heterogeneous graphs and GNNs are used to capture the diversified link features and interference relations. Wireless link scheduling is also considered in a series of studies~\cite{lee2020wireless, lee2020graph, fu2020wireless}. A graph embedding based method is proposed in~\cite{lee2020wireless, lee2020graph}, in which the graph embedding process is based on the distances of both communication and interference links, without requiring accurate CSI. The proposed method manages to significantly reduce the computational complexity of the link scheduling problem.
The third case is the Internet of Things (IoT) network, which is designed for connecting smart devices, e.g., smart meters, smart light bulbs, and connected valves and pumps. The applications of IoT networks cover a wide range, e.g., smart factories, smart agriculture, and smart cities. This wide application also raises a great number of challenges, e.g., resource utilization efficiency, battery limitations for computation and communication, and security concerns. Some of these challenges can be solved with graph-based methods. One example is the channel estimation problem considered in~\cite{tekbiyik2021graph}, in which Direct-to-Satellite (DtS) communication is used for globally connected IoT networks and the high path loss must be accounted for. GAT is proposed as the solution and further used for the reconfigurable intelligent surfaces in the considered scenario. Another example is network intrusion detection, which has drawn growing attention in recent years. GraphSAGE is used in~\cite{lo2021graphsage} to exploit the edge features and classify the network flows into benign and attack types; the new solution is shown to be more effective than state-of-the-art methods on six benchmark datasets. SDN concepts are also applied in IoT networks and can be combined with graph-based solutions. The NFV-enabled Service Function Chain (SFC) is considered in~\cite{liu2020dynamic}, in which the challenge is that SFCs should be dynamically and adaptively reconfigured in order to achieve lower resource consumption and higher revenue. This problem is formulated as a discrete-time Markov decision process, and a deep Dyna-Q (DDQ) approach is proposed as the solution, in which GNNs are used for predicting available virtual network functions (VNFs).
The fourth case is the satellite network, in which communication between satellites is considered. With the growing number of Low Earth Orbit (LEO) satellites launched by commercial companies, e.g., Starlink and OneWeb, satellite networks are drawing more attention, with potential applications in both IoT and future 6G networks. The traffic prediction problem in the satellite network is considered in~\cite{yang2020noval}, in which the spatial dependency of the network topology is captured by GCN and the temporal dependency is captured by GRU. Simulations using satellite network traffic show that the combination with GCN improves the performance over a single GRU model.
The last case is the vehicular network, which aims to connect vehicle nodes and has been proposed as an important infrastructure for autonomous driving in future smart cities. One challenge is to improve the spectrum allocation efficiency. The vehicle-to-everything (V2X) network is considered in~\cite{he2020resource}, in which GNN is used to learn the low-dimensional features and DRL is used to make spectrum allocation decisions. This kind of GNN-DRL combination has already been used for similar problems in other network types. Another challenge is to reduce the communication latency within vehicular networks, especially in large-scale and fast-moving scenarios. To model the communication latency between vehicle and infrastructure, a graph-based framework named SMART is proposed in~\cite{liu2021spatio}, in which GCN is combined with a deep Q-network algorithm to capture the spatial and temporal patterns within a limited observation zone; the latency performance is then reconstructed for the whole geographical area.
To sum up, the papers in other wireless network scenarios are listed in Table~\ref{tab:wireless_other}.
\begin{table}[!htb]
\centering
\caption{List of the papers specified in other wireless network scenarios.}
\label{tab:wireless_other}
\begin{tabular}{|p{3cm}|p{3cm}|p{1.5cm}|p{3cm}|p{2.5cm}|}
\hline
Scenario & Problem & Paper & Solution & GNN \\
\hline
Cognitive Radio Network & Resource Allocation & \cite{zhao2020graph} & DRL with GCN & GCN~\cite{kipf2017semi} \\
\hline
D2D Network & Cooperative Caching and Fetching & \cite{yan2020cooperative} & FDS-MARL & GAT~\cite{velivckovic2018graph} \\
\hline
D2D Network & Power Control and Beamforming & \cite{zhang2021scalable} & HIGNN & GN~\cite{battaglia2018relational} \\
\hline
D2D Network & Wireless Link Scheduling & \cite{lee2020graph, lee2020wireless} & Graph Embedding based Method & structure2vec~\cite{dai2016discriminative} \\
\hline
D2D Network & Wireless Link Scheduling & \cite{fu2020wireless} & Graph Embedding based Method & structure2vec~\cite{dai2016discriminative} \\
\hline
IoT Network & Intrusion Detection & \cite{lo2021graphsage} & E-GraphSAGE & GraphSAGE~\cite{hamilton2017inductive} \\
\hline
IoT Network & Service Function Chain Dynamic Reconfiguration & \cite{liu2020dynamic} & Deep Dyna-Q Approach & GNN~\cite{scarselli2008graph} \\
\hline
Satellite Network & Traffic Prediction & \cite{yang2020noval} & GCN-GRU & GCN~\cite{kipf2017semi} \\
\hline
Vehicular Network & Communication Latency Modeling & \cite{liu2021spatio} & SMART Framework & GCN~\cite{kipf2017semi} \\
\hline
Vehicular Network & Spectrum Allocation & \cite{he2020resource} & DQN with GNN & GNN~\cite{scarselli2008graph} \\
\hline
\end{tabular}
\end{table}
\section{Wired Networks}
\label{sec:wired}
For wired networks, we mainly refer to computer networks that are connected with cables, e.g., networks of laptop or desktop computers; a typical example is the Ethernet network. In this section, we first discuss the graph-based studies in the wired network scenario from five aspects, namely, network modeling, network configuration, network prediction, network management, and network security. Then three special cases are further discussed, i.e., the blockchain platform, the data center network, and the optical network.
GNNs are suitable for network modeling, as computer networks are often modeled as graphs. With the growth of the contemporary Internet, it becomes more and more challenging to understand the overall network topology, the architecture and different elements of the networks, and their configurations. To solve this challenge, GNNs are proposed for network modeling. They are used not only to reconstruct existing networks, but also to model non-existing networks, in order to provide an estimation of unseen cases so that network operators can make better network deployment decisions in the future. By modeling networks, the surveyed studies estimate different end-to-end metrics, given the network topology, routing scheme, and traffic matrix as input, in a supervised~\cite{suarez2019challenging, badia2019towards, ferriol2020applying, geyer2019deepcomnet} or semi-supervised~\cite{suzuki2020estimating} way. Delay and jitter are considered in~\cite{suarez2019challenging, badia2019towards, ferriol2020applying, suzuki2020estimating}, while the throughput of TCP flows and the end-to-end latency of UDP flows are considered in~\cite{geyer2019deepcomnet}. Different GNNs are used for the network modeling purpose, including GGS-NN in~\cite{geyer2019deepcomnet}, MPNN in~\cite{suarez2019challenging, badia2019towards}, GN and GNN in~\cite{ferriol2020applying}, and GCN in~\cite{suzuki2020estimating}. GNNs are also used for network calculus analysis in~\cite{geyer2019deeptma, geyer2020graph, geyer2020robustness}.
Based on the modeling ability of GNNs, they are further proposed for network configuration feasibility analysis and decision making. Based on the predictions of an ensemble GNN model, different network configurations are evaluated in~\cite{mai2021improvements}, subject to deadline constraints. Border Gateway Protocol (BGP) configuration synthesis is considered in~\cite{bahnasy2020deepbgp}; BGP is the standard inter-domain routing protocol for exchanging reachability information among Wide Area Networks (WANs). GNN is adopted to represent the network topology with partial network configurations in a system named DeepBGP, which is validated for both Huawei and Cisco devices while fulfilling operator requirements. Another relevant study uses GNNs for Multiprotocol Label Switching (MPLS) configuration analysis. A GNN-based solution named DeepMPLS is proposed in~\cite{geyer2019deepmpls} to speed up the analysis of network properties as well as to suggest configuration changes in case a network property is not satisfied. The GNN-based solution achieves low execution times and high accuracies on real-world network topologies.
GNNs can also be used for network prediction, e.g., delay prediction~\cite{rusek2018message} and traffic prediction~\cite{zhao2020spatiotemporal, yang2020mstnn, mallick2020dynamic}; better prediction is the basis of proactive management. A case study of delay prediction in queuing networks is conducted in~\cite{rusek2018message}, which uses MPNN for topology representation and network operation. Several studies concern data-driven traffic prediction, based on real-world network traffic data and GNN-based solutions. A framework named Spatio-temporal Graph Convolutional Recurrent Network (SGCRN) is proposed in~\cite{zhao2020spatiotemporal}, which combines GCN and GRU and is validated on the network traffic data from four real IP backbone networks. Another framework named Multi-scale Spatial-temporal Graph Neural Network (MSTNN) is proposed for Origin-Destination Traffic Prediction (ODTP), and two real-world datasets are used for evaluation~\cite{yang2020mstnn}. Inspired by the prediction model DCRNN~\cite{li2018dcrnn_traffic} developed for road traffic, a nonautoregressive graph-based neural network is used in~\cite{mallick2020dynamic} for network traffic prediction and evaluated on the U.S. Department of Energy's dedicated science network.
Network prediction results can further be used for network operation optimization and management~\cite{otoshi2015traffic}, e.g., traffic engineering, load balancing, and routing. At the time of preparing this survey, routing has been considered with graph-based deep learning models in~\cite{geyer2018learning, xiao2020neural}. Instead of using reinforcement learning, a novel semi-supervised architecture named Graph-Query Neural Network is proposed in~\cite{geyer2018learning} for shortest path and max-min routing. Another graph-based framework named NGR is proposed in~\cite{xiao2020neural} for shortest-path routing and load balancing. These graph-based routing solutions are validated with use cases and show high accuracy and resilience to packet loss.
Last but not least, graph-based deep learning solutions are used for network security problems in computer networks~\cite{zhou2020auto, cheng2021discovering}. Automatic detection of botnets, which are the source of DDoS attacks and spam, is considered in~\cite{zhou2020auto}. GNN is used to detect the patterns hidden in the botnet connections and is shown to be more effective than non-learning methods; the dataset is also made available for future studies. In another study, intrusion detection is considered~\cite{cheng2021discovering}. A GCN-based framework named Alert-GCN is proposed to solve the intrusion alert correlation problem as a node classification task. The alert graph is built with the alert information from farther neighbors, which is used as the input for the GCN module. The experiments demonstrate that Alert-GCN outperforms traditional classification models in correlating alerts.
To sum up, the papers in the wired network scenario are listed in Table~\ref{tab:wired}.
\begin{table}[!htb]
\centering
\caption{List of the papers in the wired network scenario.}
\label{tab:wired}
\begin{tabular}{|p{3.5cm}|p{2cm}|p{3.3cm}|p{3cm}|}
\hline
Problem & Paper & Solution & GNN \\
\hline
BGP Configuration Synthesis & \cite{bahnasy2020deepbgp} & DeepBGP & GraphSAGE~\cite{hamilton2017inductive}, GNN~\cite{scarselli2008graph} \\
\hline
Botnet Detection & \cite{zhou2020auto} & GNN Approach & GCN~\cite{kipf2017semi} \\
\hline
Communication Delay Estimation & \cite{suzuki2020estimating} & GNNs with Semi-supervised Learning & GCN~\cite{kipf2017semi} \\
\hline
Delay Prediction & \cite{rusek2018message} & Message-passing Neural Networks & MPNN~\cite{gilmer2017neural} \\
\hline
Intrusion Detection & \cite{cheng2021discovering} & Alert-GCN & GCN~\cite{kipf2017semi} \\
\hline
MPLS Configuration Analysis & \cite{geyer2019deepmpls} & DeepMPLS & GNN~\cite{scarselli2008graph} \\
\hline
Network Calculus Analysis & \cite{geyer2019deeptma, geyer2020graph, geyer2020robustness} & DL-assisted Tandem Matching Analysis & GNN~\cite{scarselli2008graph} \\
\hline
Network Configuration Feasibility & \cite{mai2021improvements} & Ensemble GNN Model & GN~\cite{battaglia2018relational} \\
\hline
Network Modeling & \cite{suarez2019challenging} & RouteNet & MPNN~\cite{gilmer2017neural} \\
\hline
Network Modeling & \cite{badia2019towards} & Extended RouteNet & MPNN~\cite{gilmer2017neural} \\
\hline
Network Modeling & \cite{ferriol2020applying} & Graph-based DL & GN~\cite{battaglia2018relational}, GNN~\cite{scarselli2008graph} \\
\hline
Network Modeling & \cite{geyer2019deepcomnet} & DeepComNet & GGS-NN~\cite{li2016gated} \\
\hline
Routing & \cite{geyer2018learning} & Graph-Query Neural Network & GNN~\cite{scarselli2008graph} \\
\hline
Routing and Load Balancing & \cite{xiao2020neural} & DL-based Distributed Routing & GNN~\cite{scarselli2008graph} \\
\hline
Traffic Prediction & \cite{zhao2020spatiotemporal} & SGCRN & GCN~\cite{kipf2017semi}\\
\hline
Traffic Prediction & \cite{yang2020mstnn} & MSTNN & GAT~\cite{velivckovic2018graph} \\
\hline
Traffic Prediction & \cite{mallick2020dynamic} & Nonautoregressive Graph-based Neural Network & DCRNN~\cite{li2018dcrnn_traffic} \\
\hline
\end{tabular}
\end{table}
Beyond the general computer network case, three specific network types are discussed with graph-based methods.
The first case is the blockchain platform, which is well known to the public thanks to Bitcoin, the most famous cryptocurrency. Generally speaking, the blockchain is a chain of blocks that store information with digital signatures in a decentralized and distributed network; it has a wide range of applications beyond digital cryptocurrencies, e.g., financial and social services, risk management, and healthcare facilities~\cite{monrat2019survey}. A specific task of encrypted traffic classification is considered in~\cite{shen2021accurate} for Decentralized Applications (DApps). A GNN-based DApp fingerprinting method named GraphDApp is proposed for this task, and a novel graph structure named Traffic Interaction Graph (TIG) is constructed as the representation of encrypted DApp flows as well as the input for GNNs. Real-world traffic datasets from 1,300 DApps with more than 169,000 flows are used for experiments, and the results show that GraphDApp is superior to the other state-of-the-art methods in terms of classification accuracy.
The second case is the data center network, which connects data centers to share data and computation capacity. Nowadays, data centers are heavily used for cloud services. In such circumstances, traffic engineering is becoming more and more important for the data center network in order to avoid traffic congestion and improve routing efficiency. However, this task is still challenging, especially when the network topology changes. In a recent study~\cite{li2020traffic}, the generalization ability of GNNs is used for predicting the Flow Completion Time (FCT), and a GNN-based optimizer is further designed for flow routing, flow scheduling, and topology management. The experiments demonstrate both the high inference accuracy and the FCT reduction ability of GNNs.
The last case is the optical network, which uses light signals, instead of electronic ones, to send information between two or more points. Many unique problems arise when light signals are used for communication, e.g., wavelength assignment. The optimal resource allocation in a special network type, i.e., free space optical (FSO) fronthaul networks, is considered in~\cite{gao2020resource}, and GNNs are used for evaluating and choosing the resource allocation policy. The routing optimization for an Optical Transport Network (OTN) scenario is considered in~\cite{almasan2019deep}, where the learning and generalization capabilities of GNNs are combined with DRL for routing in unseen network topologies. Similar to cellular and computer networks, traffic prediction is also considered in the optical network scenario~\cite{gui2020optical}, with a solution combining GCN and GRU.
To sum up, the papers in other wired network scenarios are listed in Table~\ref{tab:wired_other}.
\begin{table}[!htb]
\centering
\caption{List of the papers specified in other wired network scenarios.}
\label{tab:wired_other}
\begin{tabular}{|p{3cm}|p{3cm}|p{1.5cm}|p{3cm}|p{2.5cm}|}
\hline
Scenario & Problem & Paper & Solution & GNN \\
\hline
Blockchain Platform & Encrypted Traffic Classification & \cite{shen2021accurate} & GNN-based DApps Fingerprinting & GIN~\cite{xu2019powerful} \\
\hline
Data Center Network & Traffic Optimization & \cite{li2020traffic} & GNN-based Optimizer & GN~\cite{battaglia2018relational} \\
\hline
Optical Network & Resource Allocation & \cite{gao2020resource} & GNN & GNN~\cite{henaff2015deep} \\
\hline
Optical Network & Routing & \cite{almasan2019deep} & DRL with GNN & MPNN~\cite{gilmer2017neural} \\
\hline
Optical Network & Traffic Prediction & \cite{gui2020optical} & GCN-GRU & GCN~\cite{kipf2017semi} \\
\hline
\end{tabular}
\end{table}
\section{Software-defined Networks}
\label{sec:sdn}
Software-defined networking (SDN) emerges as a promising solution for revolutionizing how networks are built. Based on the white paper released by the Open Networking Foundation (ONF), the explosion of mobile devices and content, server virtualization, and the advent of cloud services are among the trends driving the networking industry to reexamine traditional network architectures~\footnote{\url{https://opennetworking.org/sdn-resources/whitepapers/software-defined-networking-the-new-norm-for-networks/}}. While SDN was proposed back in 1996, its concept has gone through many changes since then. Based on a widely used definition~\cite{sezer2013we}, in the SDN architecture, the control and data planes are decoupled, network intelligence and state are logically centralized, and the underlying network infrastructure is abstracted from the applications.
The central control ability of SDN becomes the basis of network optimization in many scenarios and raises several problems that are within the scope of graph-based deep learning methods. Based on the studies surveyed in this paper, there is a growing trend of using GNNs with SDN, or with the SDN concept in specific network scenarios. The benefits of this combination are two-fold. For GNNs, SDN provides the ability to measure network performance, which produces the data for training GNNs. For SDN, GNNs are the natural option for exploiting the network topology information when modeling and optimizing the networks. In recent years, many graph-based solutions have been proposed for various problems under the SDN concept.
Based on topology, routing, and input traffic, an MPNN-based network model is shown to produce accurate estimates of the per-source/destination per-packet delay distribution and loss, with a worst-case Mean Relative Error (MRE) of 15.4\%, and the estimation can be further used for efficient routing optimization and network planning~\cite{rusek2019unveiling, rusek2020routenet}. The decoupling of the control plane and data plane provides more computing power for routing optimization. Based on this observation, an intelligent routing strategy based on graph-aware neural networks is designed in~\cite{zhuang2019toward}, in which a novel graph-aware convolution structure is constructed to learn topological information efficiently. In another study on routing optimization, a GN-based solution is proposed for maximum bandwidth utilization, which achieves satisfactory accuracy and a prediction time 150 times faster than a Genetic Algorithm (GA)~\cite{sawada2020network}.
In SDN, network virtualization is a powerful way to efficiently utilize the network infrastructure. Virtual Network Functions (VNFs) are virtualized network services running on physical resources. How to map VNFs into shared substrate networks has become a challenging problem in SDN, known as Virtual Network Embedding (VNE) or VNF placement, which is proven to be NP-hard. To solve this problem efficiently, a number of heuristic algorithms have been proposed in the literature. Recently, graph-based models have also been used for this problem~\cite{mijumbi2016connectionist, mijumbi2017topology, kim2020graph1, kim2021graph, habibi2020accelerating, kim2020graph, sun2020combining}, obtaining near-optimal solutions in a short time. To predict future resource requirements for VNFs, a GNN-based algorithm using the VNF forwarding graph topology information is proposed in~\cite{mijumbi2016connectionist, mijumbi2017topology}. Deployed in a virtualized IP multimedia subsystem and tested with real VoIP traffic traces, the new algorithm achieves an average prediction accuracy of 90\% and improves call setup latency by over 29\%, compared with the case without using GNNs. A parallelizable VNE solution based on spatial GNNs is proposed for accelerating the embedding process in~\cite{habibi2020accelerating}, which improves the revenue-to-cost ratio by about 18\%, compared to other simulated algorithms. Similarly, GNN-based algorithms are proposed for VNF resource prediction and management in a series of studies~\cite{kim2020graph1, kim2021graph, kim2020graph}. In another direction, DRL is often combined with GNNs for automatic virtual network embedding~\cite{jalodia2019deep, yan2020automatic, sun2020deepmigration, sun2020efficient}. An asynchronous DRL enhanced GNN is proposed in~\cite{jalodia2019deep} for topology-aware VNF resource prediction in dynamic environments. An efficient algorithm combining DRL with GCN is proposed in~\cite{yan2020automatic}, with up to 39.6\% and 70.6\% improvements in acceptance ratio and average revenue, compared with the existing state-of-the-art solutions. A more specific problem, i.e., traffic flow migration among different network function instances, is considered in~\cite{sun2020deepmigration, sun2020efficient}, in which GNN is used for migration latency modeling and DRL is used for deploying dynamic and effective flow migration policies.
Last but not least, Service Function Chaining (SFC) is considered in several studies~\cite{heo2020reinforcement, heo2020graph, rafiq2020service, wang2021drl}. SFC uses SDN's programmability to create a service chain of connected virtual network services, resulting in a service function path that provides an end-to-end chain and traffic steering through it. The graph-structured properties of network topology can be extracted by GNNs, which outperform DNNs for SFC~\cite{heo2020graph, rafiq2020service}. However, most of the existing studies for SFC use a supervised learning approach, which may not be suitable for dynamic VNF resources, varying requests, and changing topologies. To solve this problem, DRL is applied for training models on various network topologies with unlabeled data in~\cite{heo2020reinforcement}; it achieves remarkable flexibility on new topologies without re-designing and re-training, while preserving a similar level of performance compared to the supervised learning method. DRL is also used for adaptive SFC placement to maximize the long-term average revenue~\cite{wang2021drl}.
To sum up, the papers in the SDN scenario are listed in Table~\ref{tab:sdn}.
\begin{table}[!htb]
\centering
\caption{List of the papers in the SDN scenario.}
\label{tab:sdn}
\begin{tabular}{|p{3.5cm}|p{2cm}|p{3.3cm}|p{3cm}|}
\hline
Problem & Paper & Solution & GNN \\
\hline
Network Modeling & \cite{rusek2019unveiling, rusek2020routenet} & RouteNet & MPNN~\cite{gilmer2017neural} \\
\hline
Routing & \cite{zhuang2019toward} & Revised Graph-aware Neural Networks & A Novel Graph-aware Convolution Structure \\
\hline
Routing Optimization, Bandwidth Utilization Maximization & \cite{sawada2020network} & GN-based Model & GN~\cite{battaglia2018relational} \\
\hline
SFC & \cite{heo2020reinforcement} & DRL with GNN & GNN~\cite{scarselli2008graph} \\
\hline
SFC & \cite{heo2020graph} & GNN-based SFC & GCN~\cite{kipf2017semi} \\
\hline
SFC Deployment, Traffic Steering & \cite{rafiq2020service} & Knowledge-Defined Networking System with GNN & GNN~\cite{scarselli2008graph} \\
\hline
SFC Placement & \cite{wang2021drl} & DRL-SFCP & GCN~\cite{kipf2017semi} \\
\hline
Traffic Flow Migration in NFV & \cite{sun2020deepmigration, sun2020efficient} & DRL with GNN & GN~\cite{battaglia2018relational} \\
\hline
VNE & \cite{habibi2020accelerating} & GraphViNE Solution & GraphSAGE~\cite{hamilton2017inductive}, GE~\cite{pan2019learning} \\
\hline
VNE & \cite{yan2020automatic} & DRL with GCN & GCN~\cite{kipf2017semi} \\
\hline
VNF Deployment Prediction & \cite{kim2020graph1, kim2021graph} & GNN-based Algorithm & GNN~\cite{scarselli2008graph} \\
\hline
VNF Management & \cite{kim2020graph} & GNN-based Algorithm & GNN~\cite{scarselli2008graph} \\
\hline
VNF Placement & \cite{sun2020combining} & DRL with GNN & GN~\cite{battaglia2018relational} \\
\hline
VNF Resource Prediction & \cite{jalodia2019deep} & Asynchronous DRL enhanced GNN & GNN~\cite{scarselli2008graph} \\
\hline
VNF Resource Prediction & \cite{mijumbi2016connectionist, mijumbi2017topology} & GNN-based Algorithm & GNN~\cite{scarselli2008graph} \\
\hline
\end{tabular}
\end{table}
\section{Future Directions}
\label{sec:direction}
In this section, we discuss some future directions for graph-based deep learning in communication networks. Even though different network scenarios and applications are already covered in this survey, there are still many open research opportunities in this area.
The first research direction is the combination of GNNs with other artificial intelligence techniques. Some examples are already seen in this survey, e.g., the combination of GNN and GRU for traffic prediction~\cite{yang2020noval, gui2020optical}, and the combination of GNN and DRL for resource allocation~\cite{zhao2020graph}, routing~\cite{almasan2019deep}, and VNE~\cite{yan2020automatic}. The advantages of GNNs include the ability to learn topological dependencies and the generalization capability to unseen network topologies, but GNNs are not a panacea. For example, for cases that lack training data or where collecting real data is too expensive, Generative Adversarial Nets (GANs)~\cite{goodfellow2014generative} are a possible solution. Even though GANs have been widely used in other fields, e.g., image and video, the combination of GANs and GNNs~\cite{wang2018graphgan} has not yet been applied to communication networks, at least within the scope of this survey. Another example is the Automated Machine Learning (AutoML) technique~\cite{he2021automl}, which can be used for optimizing GNN parameters automatically.
Another research direction is to apply graph-based deep learning to larger networks. In most of the surveyed studies, the network topology is small compared with contemporary networks, e.g., fewer than 100 nodes. However, modeling larger networks would require huge computational resources. Graph partitioning and parallel computing infrastructures are two possible solutions to this problem: a larger network may be decomposed into smaller ones that are within the computing capacity. However, the optimal divide-and-conquer approach remains unknown and may vary across network scenarios. Another concern is whether achieving narrow performance margins is worth the increased computational burden of graph-based models, compared with traditional methods.
Finally, we believe this is still an early stage of research on graph-based deep learning for communication networks. There are many opportunities for applying novel GNNs to traditional networking problems in a wider range of network scenarios, especially those that have received little or no attention so far. The studies covered in this survey are only the beginning of this exciting research area. We will keep track of this area and update the progress and new publications in the public Github repository.
\section{Conclusion}
\label{sec:conclusion}
In this paper, a survey is presented on the application of graph-based deep learning in communication networks. The relevant studies are organized into three network scenarios, namely, wireless networks, wired networks, and software-defined networks. For each study, the problem and the GNN-based solution are listed in this survey. Future directions are further pointed out for follow-up research. We hope this survey can serve as a summary of the latest progress and a reference manual for newcomers to this emerging research topic.
\nocite{*}
A good \emph{learned representation} has many desiderata \citep{bengio2013representation}. Perhaps the most elementary constraint placed on learned representations is that a given observation $\vec{x}$ should have a \emph{unique} representation $\vec{z}$, at least in distribution. In practice this is ensured by letting the representation be given by the output of a function, $\vec{z} = g(\vec{x})$, often represented with a neural network. Having a unique representation for a given observation is important for downstream tasks, as otherwise the downstream loss function is complicated by having to be defined on sets \citep{zaheer2018deep}.
Uniqueness of representation is also important when a human investigator seeks information about the phenomenon underlying data, e.g.\@ through visualizations, as uniqueness is a ubiquitous assumption.
The \emph{autoencoder} \citep{rumelhart1986learning} is an example where uniqueness of representation is explicitly enforced, even if its basic construction does not suggest unique representations. In the most elementary form, the autoencoder consists of an \emph{encoder} $g_{\psi}: \R^D \rightarrow \R^d$ and a \emph{decoder}
$f_{\phi}: \R^d \rightarrow \R^D$, parametrized by $\psi$ and $\phi$, respectively.
These are trained by minimizing the \emph{reconstruction error} of the training
data $\{ \vec{x}_1, \ldots, \vec{x}_N \}$,
\begin{align}
\psi^*, \phi^* = \argmin_{\psi, \phi} \sum_{n=1}^N \| f(g(\vec{x}_n)) - \vec{x}_n \|^2.
\end{align}
Here $d$ is practically always smaller than $D$, such that the output of the encoder
is a low-dimensional latent representation of high-dimensional data. The data is assumed
to lie near a $d$-dimensional manifold $\M$ spanned by the decoder.
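For concreteness, a minimal implementation of this setup could look as follows; this is a PyTorch sketch where the architecture, dimensions, and optimization settings are illustrative assumptions, not the models used in our experiments.
\begin{verbatim}
# Minimal autoencoder sketch: encoder g maps R^D -> R^d, decoder f maps
# R^d -> R^D, and both are trained to minimize the reconstruction error.
import torch
import torch.nn as nn

D, d = 784, 2  # observation and latent dimensions (illustrative)
g = nn.Sequential(nn.Linear(D, 128), nn.Tanh(), nn.Linear(128, d))  # encoder
f = nn.Sequential(nn.Linear(d, 128), nn.Tanh(), nn.Linear(128, D))  # decoder

opt = torch.optim.Adam(list(g.parameters()) + list(f.parameters()), lr=1e-3)
x = torch.randn(256, D)  # stand-in for a batch of training data
for _ in range(100):
    opt.zero_grad()
    loss = ((f(g(x)) - x) ** 2).sum(dim=1).mean()  # reconstruction error
    loss.backward()
    opt.step()
\end{verbatim}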
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/reach-roll.pdf}
\hspace{9mm}
\includegraphics[width=0.4\columnwidth]{figures/global_reach.pdf}
\vspace{-3mm}
\caption{\emph{Left:} The projection of a point (yellow) onto a nonlinear manifold can take
unique (green) or multiple values (red) depending on the reach of the manifold. When training data is inside the reach, the encoder can match the projection resulting in more trustworthy representations.
\emph{Right:} The global reach defines a region around the manifold $\M$ consisting of all points below a certain distance to $\M$. This captures both local manifold curvature as well as global shape.
}
\label{fig:encoding_vs_projecting}
\end{figure}
For a given decoder, we see that the optimal choice of encoder is the
projection onto $\M$, i.e.
\begin{align}
g_{\text{optimal}}(\vec{x}) &= \proj_{\M}(\vec{x}) = \argmin_{\vec{z}} \| \vec{x} - f(\vec{z}) \|^2.
\end{align}
For any \emph{nonlinear} choice of decoder $f$, this optimal encoder does \emph{not} exist everywhere. That is, multiple best choices of latent representation may exist for a given point, as the projection is not unique everywhere. As the learned encoder enforces a unique representation, it will choose arbitrarily among the potential representations.
In this case, any analysis of the latent representations can be misleading, as it does not contain the information that another choice of representation would be equally as good.
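The non-uniqueness is easy to exhibit numerically: implementing the projection as direct minimization of $\| \vec{x} - f(\vec{z}) \|^2$ over $\vec{z}$ and restarting from different initializations can return different candidate representations for the same observation. The sketch below is our illustration of this, reusing a decoder $f$ as defined above.
\begin{verbatim}
# Numeric projection onto M = f(R^d): minimize ||x - f(z)||^2 over z.
# Different initializations z0 may converge to different local minima
# when x lies outside Unp(M), i.e. the representation is not unique.
import torch

def project(f, x, z0, steps=500, lr=1e-2):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        ((f(z) - x) ** 2).sum().backward()
        opt.step()
    return z.detach()
\end{verbatim}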
\textbf{In this paper} we investigate the \emph{reach} of the manifold $\M$ spanned
by the decoder $f$. This concept, predominantly studied in geometric measure theory, informs us
about regions of observation space where the projection onto $\M$ is unique,
such that trustworthy unique representations exist. If training data resides inside
this region we may have hope that a suitable encoder can be estimated, leading
to trustworthy representations. The classic \emph{reach} construction is global in nature,
so we develop a local generalization that gives a more fine-grained estimate of the
uniqueness of a specific representation. We provide a new local, numerical, estimator of
this reach, which allows us to determine which observations can be expected to have unique
representations, thereby allowing investigations of the latent space to disregard
observations with non-unique representations. Empirically we find that in large autoencoders, practically all data is outside the reach and risks not having a unique representation. To counter this, we design a reach-based regularizer that penalizes decoders for which unique representations of given data do not exist. Empirically, this significantly improves the guaranteed uniqueness of representations with only a small penalty in reconstruction error.
\section{Reach and uniqueness of representation}
Our starting question is \emph{which observations $\vec{x}$ have a unique representation $\vec{z}$ for a given decoder $f$?} To answer this, we first introduce the \emph{reach} \citep{federer:1959} of the manifold spanned by decoder $f$. This is a \emph{global} scalar that quantifies how far points can deviate from the manifold while retaining a unique projection. Secondly, we contribute a generalization of this classic geometric construct to characterize the local uniqueness properties of the learned representation.
\subsection{Defining reach}
The nearest point projection $\proj_{\M}$ is a well-defined function on all points for which there exists a unique nearest point. We denote this set
\begin{align*}
\Unp(\M) = \{\vec{x}\in \R^D : \vec{x}\text{ has a unique nearest point in }\M\},
\end{align*}
where $\M = f(\R^d)$ is the manifold spanned by mapping the entire latent space through the decoder. Observations that lie within $\Unp(\M)$ are certain to have a unique optimal representation, but there is no guarantee that the encoder will recover this. With the objective of characterizing the uniqueness of representation, the set $\Unp(\M)$ is a good starting point as here the encoder at least has a chance of finding a representation that is similar to that of a projection.
However, for an arbitrary manifold $\M$ it is generally not possible to explicitly find the set $\Unp(\M)$. Introduced by \citet{federer:1959}, the \emph{reach} of $\M$ provides us with an implicit way to understand which points are in and outside $\Unp(\M)$.
\begin{definition}\label{def:reach}
The \emph{global reach} of a manifold $\M$ is
\begin{align}
\reach(\M) = \inf_{\vec{x}\in \M} r_{\text{max}}(\vec{x}),
\end{align}
where
\begin{align}
r_{\text{max}}(\vec{x}) = \sup\{r> 0 : B_r(\vec{x})\subset \Unp(\M)\}.
\end{align}
Here $B_r$ denotes the open ball of radius $r$.
\end{definition}
Hence, $\reach(\M)$ is the greatest radius $r$ such that any open $r$-ball centered on the manifold lies in $\Unp(\M)$.
In the existing literature, the \emph{global reach} is referred to as the \emph{reach}; we emphasize the global nature of this quantity as we will later develop local counterparts.
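As a canonical example, let $\M$ be a circle of radius $\rho$ in $\R^2$. Every point except the center has a unique nearest point on $\M$, while the center, at distance exactly $\rho$, is equidistant to all of $\M$; hence $\reach(\M) = \rho$.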
Definition~\ref{def:reach} does not immediately lend itself to computation. Fortunately, \citet{federer:1959} provides a step in this direction, through the following result.
\begin{theorem}[\citet{federer:1959}]
Suppose $\M$ is a manifold, then
\begin{align}\label{eq:fed_reach_calc}
\reach(\M) = \inf_{\substack{ \vec{x}, \vec{y} \in \M\\ \vec{y}-\vec{x}\notin T_\vec{x}\M}} \frac{\norm{\vec{x}-\vec{y}}^2}{2\norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}},
\end{align}
where $P_{N_\vec{x}\M}$ is the orthogonal projection onto the normal space of $\M$ at $\vec{x}$. If $\vec{y}-\vec{x}\in T_\vec{x}\M$ for all pairs $\vec{x},\vec{y}\in \M$ we let $\reach(\M) = \infty$, as $\M$ will be flat and the projection unique everywhere.
\end{theorem}
%
For our objective of understanding which observations have a unique representation, i.e.\@ are inside $\Unp(\M)$, the global reach provides some information. Specifically, the set
\begin{align}
\M_r = \left\{ \vec{x} | \inf_{\vec{y} \in \M} \norm{\vec{y} - \vec{x}} < \reach(\M) \right\}
\end{align}
is a subset of $\Unp(\M)$. This implies that observations $\vec{x}$ that are inside $\M_r$ will have a unique projection, such that we can expect the representation to be unique. The downside is that since $\reach(\M)$ is a global quantity, $\M_r$ is an overly restrictive small subset of $\Unp(\M)$.
%
Fig.~\ref{fig:encoding_vs_projecting}(right) illustrates this issue. Note how the global reach in the example is determined by the \emph{bottleneck}\footnote{Not to be confused with \emph{bottleneck network architectures} or the \emph{information bottleneck}.} of the manifold. Even if this bottleneck only influences the uniqueness of projections of a single point, it determines the global reach of the entire manifold. This implies that many points exist outside the reach which nonetheless have a unique projection.
\subsection{Pointwise normal reach}
\begin{wrapfigure}[7]{r}{0.25\textwidth}
\vspace{-13.5mm}
\includegraphics[width=0.25\textwidth]{figures/proof.pdf}
\vspace{-4mm}
\caption{Notation for the proof of theorem~\ref{thm:r_n_in_normal_dir}.}
\label{fig:proof}
\end{wrapfigure}
%
In order to get a more informative notion of reach, we now develop a local version, which we, for reasons that will become clear, call the \emph{pointwise normal reach}. For ease of notation, denote
\begin{align}
R(\vec{x}, \vec{y}) = \frac{\norm{\vec{x}-\vec{y}}^2}{2\norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}}
\end{align}
for $\vec{x},\vec{y}$ with $\vec{y}-\vec{x}\notin T_\vec{x}\M$, else we let $R(\vec{x},\vec{y}) = \infty$.
We then define the pointwise normal reach as the local infimum of eq.~\ref{eq:fed_reach_calc}.
\begin{definition}[Pointwise normal reach]\label{def:local_normal_reach}
At a point $\vec{x} \in \M$, the pointwise normal reach is
\begin{align}
r_N(\vec{x}) = \inf_{\vec{y} \in M} R(\vec{x},\vec{y}).
\end{align}
\end{definition}
In theorem~\ref{thm:r_n_in_normal_dir} below we prove that the local estimate $r_N(\vec{x})$ describes how far we can move along a normal vector at $\vec{x}$ and still stay within $\Unp(\M)$. This is useful as we know that $\vec{x}$ will lie in the normal space of $\M$ at $\proj_\M(\vec{x})$ (\citet{federer:1959} Thm. 4.8).
%
\begin{theorem}\label{thm:r_n_in_normal_dir}
For all $x\in \M$
\begin{align}
B_{r_N(\vec{x})}(\vec{x}) \cap N_{\vec{x}} \M \subset \Unp(\M),
\end{align}
where $N_{\vec{x}} \M$ denotes the normal space at $\vec{x}$.
\end{theorem}
\begin{proof}
Suppose for the sake of contradiction that there exists $\vec{w}\in \left(B_{r_N(\vec{x})}(\vec{x}) \cap N_{\vec{x}}\M\right)\cap \Unp(\M)^c$. That is, there exists $\vec{y}_1,\vec{y}_2 \in \M$ such that
\begin{align}
d(\vec{w}, \M) = \norm{\vec{y}_1-\vec{w}} = \norm{\vec{y}_2-\vec{w}},
\end{align}
where $d(\vec{w}, \M) = \inf_{\vec{x}\in \M} \norm{\vec{x}-\vec{w}}$.
In particular, we know there exists $\vec{y}\in \M$ such that
\begin{align}
\norm{\vec{y}-\vec{w}}\leq \norm{\vec{x}-\vec{w}} < r_N(\vec{x}).
\end{align}
Now, let $\theta_1$ denote the (acute) angle between $T_\vec{x}\M$ and $\vec{y}-\vec{x}$, and let $\theta_2$ denote the angle between $\vec{y}-\vec{x}$ and $\vec{w}-\vec{x}$. The sum of $\theta_1$ and $\theta_2$ is a right angle, see Fig.~\ref{fig:proof}. Let $t$ be the distance from $\vec{x}$ to $\vec{y}$. The altitude through the vertex $\vec{w}$ divides $\vec{y}-\vec{x}$ into two line segments. Denote by $t_1$ the length of the segment from the foot of the altitude to $\vec{x}$, and by $t_2$ the length of the segment from the foot to $\vec{y}$. Note that $t_2$ will always be less than or equal to $t_1$, as $\norm{\vec{y}-\vec{w}}\leq \norm{\vec{x}-\vec{w}}$.
By the definition of cosine, $\cos \theta_2 = \frac{t_1}{\norm{\vec{w}-\vec{x}}} \geq \frac{t/2}{\norm{\vec{w}-\vec{x}}}$. At the same time $
\cos\theta_2 = \cos(\pi/2-\theta_1) = \sin \theta_1 = \frac{d}{t},
$
where $d = \norm{P_{\vec{w}-\vec{x}}(\vec{y}-\vec{x})}$, and as $\vec{w}-\vec{x}\in N_\vec{x}\M$, $d\leq \norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}$.
Thus, we have $\frac{t/2}{\norm{\vec{w}-\vec{x}}} < \frac{d}{t}< \frac{\norm{P_{N_\vec{x}\M} (\vec{y}-\vec{x})}}{t}$, implying $R(\vec{x},\vec{y})\leq \norm{\vec{w}-\vec{x}}$, which contradicts $r_N(\vec{x}) \leq R(\vec{x},\vec{y})$.
\end{proof}
In lemma~\ref{thm:local_reach_bouds} below we show that the pointwise normal reach bounds the reach. For this, we need theorem~4.8(7) from \citet{federer:1959}:
\begin{lemma}[\citet{federer:1959}]
Let $\vec{x},\vec{y}$ be points on $\M$ with $r_{\text{max}}(\vec{x}) > 0$, and let $\vec{n}$ be a normal vector in $N_{\vec{x}}\M$, then
\begin{align}
\inner{\vec{n}}{\vec{y}-\vec{x}} \leq \frac{\norm{\vec{y}-\vec{x}}^2\norm{\vec{n}}}{2r_{\text{max}}(\vec{x})}
\end{align}
\end{lemma}
\begin{lemma}\label{thm:local_reach_bouds}
For all $\vec{x} \in \M$ we have that
\begin{align}
\inf_{\vec{y}\in B_{2r_N(\vec{x})}(\vec{x})\cap \M} r_N(\vec{y}) \leq r_{\text{max}}(\vec{x}) \leq r_N(\vec{x}).
\end{align}
\end{lemma}
%
\begin{proof}\phantom{\qedhere}
Applying the result from Federer to the vector $\vec{n} = \frac{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}{\norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}}$ gives
\begin{align}
r_{\text{max}}(\vec{x})\leq \frac{\norm{\vec{x}-\vec{y}}^2\norm{\vec{n}}}{2\norm{\vec{n}}\norm{\vec{x}-\vec{y}}\cos\theta},
\end{align}
where $\theta$ is the angle between $\vec{x}-\vec{y}$ and $\vec{n}$. Hence, $\cos\theta = \frac{\norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}}{\norm{\vec{y}-\vec{x}}}$. Thus, for all $\vec{y}\in \M$
\begin{align}
r_{\text{max}}(\vec{x}) \leq \frac{\norm{\vec{x}-\vec{y}}^2}{2\norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}},
\end{align}
proving the right inequality.
%
Consider $B = B_{r_N(\vec{x})}(\vec{x})$. Suppose there exists $\vec{w} \in B$ with $\vec{w} \notin \Unp(\M)$. Then $\vec{w} \notin N_\vec{x}\M$. Hence there exists $\vec{y}_1,\vec{y}_2\in \M$ such that
$
d(\vec{w}, \M) = \norm{\vec{y}_1 - \vec{w}} = \norm{\vec{y}_2 - \vec{w}} < r_N(\vec{x}).
$
From \citet{federer:1959} theorem 4.8 we know that $\vec{w}\in N_{\vec{y}_1}\M, N_{\vec{y}_2}\M$. Combining this with lemma \ref{thm:r_n_in_normal_dir} gives that $r_N(\vec{y}_1),r_N(\vec{y}_2) \leq d(\vec{w},\M)$ and that $d(\vec{x},\M)< \norm{\vec{w}-\vec{x}}$. We also have that $\norm{\vec{y}_1-\vec{x}},\norm{\vec{y}_2-\vec{x}}\leq 2 r_N(\vec{x})$.
Combining these inequalities gives us that the distance from $\vec{x}$ to any point not in $\Unp(\M)$ is greater than $\inf_{\vec{y}\in B_{2r_N(\vec{x})}(\vec{x}) \cap \M} r_N(\vec{y})$, which implies that
\begin{equation}
\inf_{\vec{y} \in B_{2r_N(\vec{x})}(\vec{x}) \cap \M}r_N(\vec{y}) \leq r_{\text{max}}(\vec{x}).
\tag*{\qed}
\end{equation}
\end{proof}
We presented the theoretical analysis under the assumption that $\M = f(\R^d)$ is a manifold. Although the theoretical results can be extended to arbitrary subsets of Euclidean space, the experimental setup requires the Jacobian to span the entire tangent space. This might not be the case if $\M$ has self-intersections. The theory can be extended to handle such self-intersections, but this significantly complicates the algorithmic development. See the appendix for a discussion.
\subsection{Estimating the pointwise normal reach}
The definition of $r_N$ prompts us to minimize $R(\vec{x},\vec{y})$ over all of $\M$, which is generally infeasible, so approximations are in order. As a first step towards an estimator, assume that we are given a finite sample $\mat{S}$ of points on the manifold. We can then replace the infimum in definition~\ref{def:local_normal_reach} with a minimization over the samples. Using that the projection matrix onto $N_{\vec{x}}\M$ is given by $P_{N_\vec{x}\M} = \mat{I} - \mat{J}(\mat{J}\T \mat{J})\ensuremath{^{-1}} \mat{J}\T$, we get the following estimator
\begin{align}
\hat r_N(\vec{x}) = \min_{\vec{y}\in \mat{S}} \frac{\norm{\vec{y}-\vec{x}}^2}{2\norm{(\mat{I}-\mat{J}(\mat{J}\T\mat{J})\ensuremath{^{-1}} \mat{J}\T)(\vec{y}-\vec{x})}},
\label{eq:reach_est}
\end{align}
where $\mat{J} \in \mathbb{R}^{D \times d}$ is the Jacobian matrix of $f$ at $\vec{x}$. Note that since we replace the infimum with a minimization over a finite set, we have that $\hat r_N(\vec{x}) \geq r_N(\vec{x})$.
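A direct implementation of this estimator could look as follows; this is a sketch using PyTorch autograd, where function and variable names are ours. The Jacobian is evaluated at the latent code $\vec{z}$ with $\vec{x} = f(\vec{z})$, and the sampling set is assumed not to contain $\vec{x}$ itself.
\begin{verbatim}
# Sampling-based estimator of the pointwise normal reach at x = f(z):
# min over samples y of ||y-x||^2 / (2 ||(I - J(J^T J)^{-1} J^T)(y-x)||),
# with J the Jacobian of the decoder f at z.
import torch

def reach_est(f, z, samples, eps=1e-12):
    x = f(z)
    J = torch.autograd.functional.jacobian(f, z)    # (D, d)
    P_tan = J @ torch.linalg.solve(J.T @ J, J.T)    # projector onto T_x M
    diffs = samples - x                             # (num_samples, D)
    normal = diffs - diffs @ P_tan                  # rows: (I - P_tan)(y - x)
    num = (diffs ** 2).sum(dim=1)
    den = 2.0 * normal.norm(dim=1) + eps
    return (num / den).min()
\end{verbatim}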
There are different choices of sampling sets $\mat{S}$. Given a trained autoencoder, a cheap way to obtain samples is to use the reconstructed training data as the sampling set. This will generally be sufficient if the training data is dense on the manifold, but this is rarely the case in high data dimensions.
%
The following lemma provides us a way to restrict the area over which we must minimize.
%
\begin{lemma}
For any $\vec{x}, \vec{y} \in \M$
\begin{align}
R(\vec{x},\vec{y}) \geq \frac{1}{2} \norm{\vec{x}-\vec{y}}.
\end{align}
\end{lemma}
\begin{proof}
Recall that $\vec{y}-\vec{x} = P_{N_\vec{x}\M}(\vec{y}-\vec{x}) + P_{T_\vec{x}\M}(\vec{y}-\vec{x})$, as $\R^D = T_\vec{x}\M \oplus N_\vec{x}\M$. Hence $\norm{\vec{y}-\vec{x}} \geq \norm{P_{N_\vec{x}\M}(\vec{y}-\vec{x})}$. The statement, thus, follows from the definition of $R$.
\end{proof}
\begin{wrapfigure}[10]{r}{0.55\textwidth}
\begin{minipage}{0.55\textwidth}
\vspace{-8mm}
\begin{algorithm}[H]
\caption{Sampling-based reach estimator}\label{alg:sample}
\begin{algorithmic}
\STATE radius $\gets r_0$
\STATE reach $\gets \infty$
\FOR{$i \gets 1, \ldots, $ num\_batches}
\STATE samples $\gets$ \texttt{sample\_ball}($\vec{x}, \text{radius}, \text{batch\_size}$)
\STATE projected $\gets$ \texttt{decode}(\texttt{encode}(samples))
\STATE reach $\gets \min\left(\text{reach}, \texttt{reach\_est}(\vec{x}, \text{projected})\right)$
\STATE radius $\gets 2 \cdot$ reach
\ENDFOR
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
%
The lemma points towards a simple computational procedure for numerically estimating the pointwise normal reach, which is explicated in algorithm~\ref{alg:sample}. Here $\texttt{reach\_est}$ refers to the application of eq.~\ref{eq:reach_est}.
The algorithm samples uniformly inside a ball centered on $\vec{x}$ and repeatedly shrinks the radius of the ball as tighter estimates of the reach are recovered. We further use the autoencoding reconstruction as an approximation to the projection of $\vec{x}$ onto $\M$.
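One possible Python rendering of algorithm~\ref{alg:sample} is sketched below, reusing \texttt{reach\_est} from above; the batched behaviour of \texttt{encode} and \texttt{decode} and the particular ball-sampling scheme are assumptions of this sketch.
\begin{verbatim}
# One possible rendering of the sampling-based estimator.  encode/decode
# are assumed to accept batched inputs; decode must also accept a single
# latent vector (for the Jacobian inside reach_est).
import torch

def sample_ball(x, radius, batch_size):
    # Uniform sampling in the ball around x: random direction, radius
    # rescaled by u^(1/D) so that points are uniform in the ball.
    D = x.numel()
    direction = torch.randn(batch_size, D)
    direction = direction / direction.norm(dim=1, keepdim=True)
    radii = radius * torch.rand(batch_size, 1) ** (1.0 / D)
    return x + radii * direction

def sampling_reach(decode, encode, z, r0, num_batches=20, batch_size=256):
    x = decode(z)
    radius, reach = r0, float("inf")
    for _ in range(num_batches):
        samples = sample_ball(x, radius, batch_size)
        projected = decode(encode(samples))   # approximate projection onto M
        reach = min(reach, reach_est(decode, z, projected).item())
        radius = 2.0 * reach                  # shrink the search ball
    return reach
\end{verbatim}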
\subsection{Is a point within reach?}
Suppose that a point $\vec{x} \in \R^D$ is represented by a point on the manifold $f(\vec{z})$. From definition~\ref{def:reach} we know that $\vec{x}$ has a unique nearest point on the manifold if
\begin{align}\label{eq:r_max_boud}
\norm{\vec{x}-f(\vec{z})} < r_{\text{max}}(f(\vec{z})).
\end{align}
A point $\vec{x}$ which does not satisfy this inequality risks not having a unique nearest point, and hence no unique representation. From lemma~\ref{thm:local_reach_bouds} we know that $r_{\text{max}}(f(\vec{z})) \leq r_N(f(\vec{z}))$. So $\vec{x}$ risks not having a unique nearest point if
\begin{align}
\norm{\vec{x}-f(\vec{z})} \geq r_N(f(\vec{z})) \geq r_{\text{max}}(f(\vec{z})).
\end{align}
We note that to show that $\norm{\vec{x}-f(\vec{z})} \geq r_N(f(\vec{z}))$, it is enough to compute
\begin{align}
\hat r_N(f(\vec{z})) = \inf_{\substack{\vec{y}\in \M\cap B_{2\norm{\vec{x}-f(\vec{z})}}(f(\vec{z})) \\ \vec{y} \neq f(\vec{z})}} R(f(\vec{z}), \vec{y}),
\end{align}
i.e.\@ limit the search to a ball of radius $2\norm{\vec{x}-f(\vec{z})}$. Thus, when we only need to determine if a point is inside the pointwise normal reach, we can pick $r_0 = 2\norm{\vec{x}-f(\vec{z})}$ in Algorithm~\ref{alg:sample}.
Notice that, given any set of points on the manifold, the resulting estimate of $r_N$ will always be larger than the true value. It means that any point which lies outside the estimated normal reach will in fact lie outside the true normal reach. However, a point which lies inside the estimated normal reach risks lying outside the true normal reach, and thus not having a unique projection.
\subsection{Regularizing for reach}
The autoencoder minimizes an $l_2$ error which is directly comparable to the pointwise normal reach. This suggests a regularizer that penalizes if the $l_2$ error is larger than the pointwise normal reach. In practice, we propose to use
\begin{align}
\mathcal{R}(\vec{x}) &= \texttt{Softplus}\left( \norm{f\left(g(\vec{x})\right) - \vec{x}} - \hat r_N\left( f\left(g(\vec{x})\right)\right) \right).
\end{align}
The reach-regularized autoencoder then minimizes
\begin{align}
\mathcal{L} &= \sum_{n=1}^N \| f(g(\vec{x}_n)) - \vec{x}_n \|^2 + \lambda \sum_{n=1}^N \mathcal{R}(\vec{x}_n),
\end{align}
where we in practice use $\lambda=1$. We also experimented with a \texttt{ReLU} activation instead of \texttt{Softplus}, but found the latter to yield more stable training. When estimating the pointwise normal reach, $\hat r_N$, we apply Algorithm~\ref{alg:sample} with an initial radius of $r_0 = 2 \| f(g(\vec{x}_n)) - \vec{x}_n \|$.
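For illustration, the regularized objective could be sketched as follows; to keep the sketch simple, the reach estimate is treated as a per-sample constant, which is a simplification of ours rather than a property of the actual training code.
\begin{verbatim}
# A sketch of the reach-regularized objective; the reach estimate is held
# constant per sample for simplicity, and r0 is set to twice the
# reconstruction error, as described in the text.
import torch
import torch.nn.functional as F

def reach_regularized_loss(x, encode, decode, lam=1.0):
    recon = decode(encode(x))                        # f(g(x)), batched
    err = (recon - x).flatten(1).norm(dim=1)         # per-sample l2 error
    # encode is assumed to also accept a single sample here.
    r_hat = torch.tensor([
        sampling_reach(decode, encode, encode(xi), r0=2 * e.item())
        for xi, e in zip(x, err)])
    penalty = F.softplus(err - r_hat)                # the regularizer R(x)
    return (err ** 2 + lam * penalty).sum()
\end{verbatim}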
\section{Experiments}
Having established a theory and algorithm for determining when a representation can be expected to be unique, we next investigate its use empirically. We first compute the pointwise local reach across a selection of models to see if it provides useful information. We then carry on to investigate the use of reach regularization. \footnote{The code is available at \url{https://github.com/HeleneHauschultz/is_an_encoder_within_reach}.}
\begin{SCfigure}
\includegraphics[width=0.7\columnwidth]{figures/reach_reg.pdf}
\caption{\emph{Left:} An autoencoder trained on noisy points scattered along a circular arc. \emph{Right:} The manifold spanned by the decoder of an autoencoder trained with reach regularization. In both panels, the gray circles illustrate the estimated pointwise normal reach at points along the autoencoder curve.}
\label{fig:no_reg_reach}
\end{SCfigure}
\subsection{Analysing reach}
\subsubsection{Toy circle}\label{subsec:circle}
We start our investigations with a simple toy example to get an intuitive understanding. We generate observations along a circular arc with added Gaussian noise of varying magnitude. Specifically, we generate approximately $400$ points as $z \mapsto t\left(\sin(z), -\cos(z)\right) + 1.5\cos(z)\epsilon$, where $\epsilon \sim \mathcal{N}(0,1)$. On this, we train an autoencoder with a one-dimensional latent space. The encoder and decoder both consist of linear layers, with three hidden layers of $128$ nodes each and ELU non-linearities.
Figure~\ref{fig:no_reg_reach}(left) shows the data alongside the estimated manifold and its pointwise normal reach.
We observe that the manifold spanned by the decoder has areas with small reach, where the manifold curves to fit the noisy data. The pointwise normal reach reflects the curvature of the estimated manifold well. The plot illustrates how some of the points end up further away from the manifold than the reach. For some of the points, this is not a problem, as they still have a unique projection onto the manifold. However, some of the points are equally close to different points on the manifold, such that their representation cannot be trusted.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/celeba/celeba-val-recon2.png}
\vspace{-6mm}
\caption{CelebA validation set reconstructions.}
\label{fig:celeba-val-recon}
\end{figure}
\subsubsection{CelebA}
To investigate the reach on a non-toy dataset, we train a deep autoencoder on the CelebA face dataset \citep{celeba}. The dataset consists of approximately $200\,000$ images of celebrity faces.
We train a symmetric encoder-decoder pair that maps the $64 \times 64 \times 3$ images to a $128$ dimensional latent space, and back. The encoder consists of a single 2d convolution operation without stride followed by six convolution operations with stride 2, resulting in a $1 \times 1 \times C$ image. We use $C=128$ channels for all convolutional operations, a filter size of 5 and Exponential Linear Unit (ELU) non-linearities. The decoder is symmetric, using transpose convolutions with stride 2 to upsample and ending with a convolution operation mapping to $64 \times 64 \times 3$. The model is trained for 1M gradient updates on the mean square error loss, with a batch size of 128, using the Adam optimizer with a learning rate of $10^{-4}$.
Example reconstructions on the validation set are provided in Fig.~\ref{fig:celeba-val-recon}.
After training we estimate the reach of the validation set using the sampling based approach (Alg.~\ref{alg:sample}). Fig.~\ref{fig:plots}(left) plots the reconstruction error $\norm{\vec{x} - f(\vec{z})}$ versus the pointwise normal reach. We observe that almost all observations lie outside the pointwise normal reach, implying that we cannot guarantee a unique representation. This is a warning sign that our representations need not be trustworthy.
\begin{figure}
\includegraphics[width=\columnwidth]{figures/celeba/plots.pdf}
\vspace{-4mm}
\caption{
\emph{Left:} Estimated reach for CelebA validation samples plotted against the L2 error. Samples below the diagonal red line do not have a guaranteed unique encoding.
\emph{Center:} Normalized reach as a function of batches used to estimate the reach.
The normalized reach is the estimated pointwise normal reach divided by the estimated pointwise normal reach after the first batch.
The hyperball sampling reach estimator quickly converges.
\emph{Right:} Sensitivity analysis of the hyperball sampling reach estimator to the initial hyperball radius. The reach of the CelebA validation samples is estimated with initial radii $r_0=1.0$ and $r_0=0.01$ respectively, and the two final reach estimates after 100 batches are plotted against each other.
}
\label{fig:plots}
\end{figure}
Next we analyze the empirical convergence properties of our estimator on the CelebA autoencoder. Fig.~\ref{fig:plots}(center) shows the average pointwise normal reach over the validation set as a function of the number of iterations in the sampling based estimator. We observe that the estimator converges after just a few iterations, suggesting that the estimator is practical.
The estimator relies on an initial radius for its search. Fig.~\ref{fig:plots}(right) shows the estimated pointwise normal reach on the validation set, plotted for two different initial radii. We observe that the estimator converges to approximately the same value in both cases, suggesting that the method is not sensitive to this initial radius. However, initializing with a tight radius will allow for faster convergence.
\subsection{Reach regularization}
Having established that the pointwise normal reach provides a meaningful measure of uniqueness, we carry on to regularize accordingly.
\subsubsection{Toy circle}
Returning to the example from section~\ref{subsec:circle}, we train an autoencoder of the same architecture with the reach regularization. We pretrain the network $100$ epochs without regularization, and then $2000$ iterations with reach regularization. \looseness=-1
Fig.~\ref{fig:no_reg_reach}(right) shows that reach regularization gives a significantly smoother manifold than without regularization (left panel). The gray circles on the plot indicate that almost all the points are now within the pointwise normal reach, and arguably the associated representations are now more trustworthy.\looseness=-1
\subsubsection{MNIST}
Next we train an autoencoder on 5000 randomly chosen images from the classes 2, 4 and 8 from MNIST \citep{lecun1998gradient}. We use a symmetric architecture reducing to a two-dimensional representation through a sequence of $784\rightarrow500\rightarrow250\rightarrow150\rightarrow100\rightarrow50\rightarrow2$ linear layers with ELU non-linearities. We pretrain for 5000 epochs without any regularization, and proceed with reach regularization enabled. Fig.~\ref{fig:mnist_reg}(left) shows the percentage of points which lie within reach of the estimated manifold. We observe that reach regularization slightly increases the reconstruction error, as any regularization would, while significantly increasing the percentage of points that are known to have a unique representation. This suggests that reach regularization only minimally changes reconstructions while giving a significantly smoother model, which is more reliable.
Figure~\ref{fig:mnist_reg}(center) shows the latent representations given by the pretrained autoencoder without regularization, while Fig.~\ref{fig:mnist_reg}(right) shows the latent representations after an additional 200 epochs with reach regularization. The latent representations whose corresponding data points are outside reach, that is, where the reconstruction error is greater than the pointwise normal reach at the reconstructed point, are plotted in red. The points inside reach are plotted in green. We observe that after regularization, significantly more points can be expected to be unique and thereby trustworthy. Note that the latent configuration changes only slightly after reach regularization, which suggests that the expressive power of the model is largely unaffected by the reach regularization.
\begin{figure}
\includegraphics[width=\textwidth]{figures/mnist_latent.pdf}
\vspace{-4mm}
\caption{The effect of reach regularization on an MNIST model.
\emph{Left:} The plot shows that the percentage of points within reach increases, while the $l_2$-loss is nearly unchanged.
We plot the loss curve from the initial 5000 epochs without regularization, to show how the $l_2$-loss behaves when regularizing.
\emph{Center \& right:} Latent representations of the MNIST autoencoder before and after the reach regularization. The red numbers are outside the reach, while green are within. Reach regularization smoothens the decoder to increase reach with minimal changes to both reconstructions and latent configuration.
}
\label{fig:mnist_reg}
\end{figure}
\section{Related work}
Representation learning is a foundational aspect of current machine learning, and the discussion paper by \citet{bengio2013representation} is an excellent starting point. As is common, \citet{bengio2013representation} defines a representation as the output of a function applied to an observation, implying that a representation is unique. In the specific context of autoencoders, we question this implicit assumption of uniqueness as many equally good representations may exist for a given observation. While only studied here for autoencoders, the issue applies more generally when representations span submanifolds of the observation space.
In principle, probabilistic models may place multimodal distributions over the representation of an observation in order to reflect lack of uniqueness. In practice, this rarely happens. For example, the highly influential \emph{variational autoencoder} \citep{kingma:iclr:2014, rezende:icml:2014} amortizes the estimation of $p(\vec{z}|\vec{x})$ such that it is parametrized by the output of a function. Alternatives relying on Monte Carlo estimates of $p(\vec{z}|\vec{x})$ do allow for capturing non-uniqueness \citep{pmlr-v70-hoffman17a}, but this is rarely done in practical implementations. That Monte Carlo estimates provide state-of-the-art performance is perhaps indicative that coping with non-unique representations is important. Our approach, instead, aims to determine which observations can be expected to have a unique representation, which is arguably simpler than actually finding the multiple representations.
Our approach relies on the reach of the manifold spanned by the decoder. This quantity is traditionally studied in geometric measure theory as the reach is informative of many properties of a given manifold. For example, manifolds which satisfy that $\reach(\M)> 0$ are $C^{1,1}$, i.e.\@ the transition functions are differentiable with Lipschitz continuous derivatives. In machine learning, the reach is, however, a rarely used concept. \citet{Fefferman2016} investigates if a manifold of a given reach can be fitted to observed data, and develops the associated statistical test. Further notable exceptions are the multichart autoencoder by \citet{schonsheck2020chart}, and the adaptive clustering of \citet{besold2020adaptive}. Both works rely on the reach as a tool of derivation. Similarly, \citet{chae2021likelihood} relies on the assumption of positive reach when deriving properties of deep generative models. These works all rely on the global reach, while we have introduced a local generalization.
The work closest to ours appears to be that of \citet{aamari2019estimating} which studies the convergence of an estimator of the global reach \eqref{eq:fed_reach_calc}. This only provides limited insights into the uniqueness of a representation as the global reach only carries limited information about the local properties of the studied manifold. We therefore introduced the pointwise normal reach alongside an estimator thereof. This gives more precise information about which observations can be expected to have a unique representation.
\vspace{-3mm}
\section{Discussion}
\vspace{-2mm}
The overarching question driving this paper is \emph{when can representations be expected to be unique?}
Though commonly assumed, there is little mathematical reason to believe that the choice of optimal representation is generally unique. The theoretical implication is that enforcing uniqueness on non-unique representations leads to untrustworthy representations.
We provide a partial answer for the question in the context of autoencoders, through the introduction of the \emph{pointwise normal reach}. This provides an upper bound for a radius centered around each point on the manifold spanned by the decoder, such that any observation within the ball has a unique representation. This bound can be directly compared to the reconstruction error of the autoencoding to determine if a given observation might not have a unique representation. This is a step towards a systematic quantification of the reliability and trustworthiness of learned representations.
Empirically, we generally find that most trained models do not ensure that representations are unique. For example, on CelebA we found that almost no observations were within reach, suggesting that uniqueness was not ensured. This is indicative that the problem of uniqueness is not purely an academic question, but one of practical importance.
We provide a Monte Carlo estimator of the pointwise normal reach, which is guaranteed to upper bound the true pointwise normal reach. The estimator is easy to implement, with the main difficulty being the need to access the Jacobian of the decoder. This is readily accessible using forward-mode automatic differentiation, but it can be memory-demanding for large models.
It is easy to see that the sample-based pointwise normal reach estimator converges to the correct value in the limit of infinitely many samples. We, however, have no results on the rate of convergence. In practice we observe that the estimator converges in a few iterations for most models, suggesting the convergence is relatively fast. In practice, the estimator, however, remains computationally expensive.
While we can estimate the pointwise normal reach quite reliably even for large models within manageable time, the estimator is currently too expensive to use for regularization of large models. On small models, we observe significant improvements in the uniqueness properties of the representations at minimal cost in terms of reconstruction error. This is a promising result and indicative that it may be well-worth using this form of regularization. While more work is needed to speed up the estimating of pointwise normal reach, our work does pave a path to follow.
\begin{ack}
This work was supported by research grants (15334, 42062) from VILLUM FONDEN. This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 757360). This work was funded in part by the Novo Nordisk Foundation through the Center for Basic Machine Learning Research in Life Science (NNF20OC0062606). Helene Hauschultz is partly financed by Aarhus University Centre for Digitalisation, Big Data and Data Analytics (DIGIT).
\end{ack}
\section{\Large Introduction}
\label{introduction}
In this paper, we present a protocol to manage communications between tiles
of a tiling of the hyperbolic plane.
Why the hyperbolic plane? Because the geometry
of this space allows us to implement any tree structure there. The reason
is that the simplest tilings defined in this plane are spanned by a tree.
We refer to the author's papers and books on this topic, see~\cite{mmbook1,mmbook2}.
Tree structures are already intensively used in computer science, especially by
operating systems.
But the fact that trees are naturally embedded in this geometrical space,
especially in the tilings living there, has never been exploited.
This paper proposes to take advantage of this property.
In Section~\ref{geom}, we recall the notions of hyperbolic geometry needed
for the reader to understand the content of the paper. In
Section~\ref{navigation}, we sketch the navigation technique,
with a new aspect which was not used in~\cite{mmLNEEhongkong}. In
Section~\ref{scenario}, we present the protocol which allows us to improve
the system briefly mentioned in~\cite{mmLNEEhongkong}.
In Section~\ref{program}, we give an account of the simulation program and
in Section~\ref{experiment} we present the experiment which was performed by
running the simulation program. In Section~\ref{conclusion}, we
conclude with further possible developments of the scenario implemented by the
protocol.
\section{\Large Hyperbolic geometry and its tilings}
\label{geom}
\noindent
Our first sub-section recalls Poincar\'e's disc model, which allows us to
have a partial visualization of the hyperbolic plane. Our second sub-section defines
the simplest tilings which can be defined there, allowing us to construct
{\bf grids}.
In our third sub-section, we focus on the tiling on which our proposal is based,
the tiling $\{7,3\}$ of the hyperbolic plane which we call the {\bf heptagrid}.
\subsection{Hyperbolic geometry}
\label{hypgeom}
\noindent
Hyperbolic geometry appeared in the first half of the
19$^{\rm th}$ century, proving the independence of the parallel
axiom of Euclidean geometry. This was the end of a two-thousand-year search
to prove that the well known axiom about parallels in Euclid's
treatise is a consequence of his other axioms. The search is in itself a very
interesting story, full of deep teachings of high value for the philosophy of
sciences; we recommend the interested reader to have a look at~\cite{bonola}.
Hyperbolic geometry was the first instance of the notion of axiomatic independence.
It also raised the first doubts about the absolute power of our abstract mind.
It opened the way to the foundational works of mathematical logic, so that
computer science, the daughter of logic, appears to be a grand-child of
hyperbolic geometry. This paper hopes to show that this kind of relation
holds not only on philosophical grounds.
The search failed. This was proved in the second third of the 19$^{\rm th}$
century by the discovery of hyperbolic geometry. In this new geometry, all the
non-parallel axioms of Euclidean geometry hold and, from a point~$A$ out of a line~$\ell$
and in the plane defined by~$\ell$ and~$A$, there are two parallels to~$\ell$
passing through~$A$. Around forty years after the discovery, models of the
new geometry in the Euclidean one were found. We now turn to the most popular
model of the hyperbolic plane, Poincar\'e's model.
\subsection{The Poincar\'e's disc model}
\label{poincare}
\noindent
This model is represented by Figure~\ref{poincare_disc}.
In the model, the points of the hyperbolic plane are represented by the points
of an open disc fixed in advance called the {\bf unit disc}. The circle which
is the boundary of the disc is called the set of {\bf points at infinity} and we
denote it by~$\partial U$. These
points do not belong to the hyperbolic plane, but they play an important role
in this geometry. The lines of the hyperbolic plane are represented by the trace
in the disc of its diameters or by the trace in the disc of circles which are orthogonal
to~$\partial U$. It is not difficult to see that the lines are also characterized in the
model by their points at infinity, as diameters of the disc and circles orthogonal
to~$\partial U$ meet $\partial U$ exactly twice.
\vskip 7pt
\vtop{
\centerline{
\mbox{\includegraphics[width=150pt]{new_figure1.ps}}
}
\begin{fig}\label{poincare_disc}
\small
\hskip -5pt :
Poincar\'e's disc model. We can see that in the model, both lines~$p$ and~$q$
pass through~$A$ and that they are parallel to~$\ell$: $p$ and~$q$ touch~$\ell$ in
the model at~$P$ and~$Q$, points at infinity.
\end{fig}
}
\vskip 7pt
The figure also shows us an important feature of the new plane: besides parallel
and secant lines to a given line~$\ell$ passing through a point~$A$ not on~$\ell$,
there are also lines passing through~$A$ which do not cut~$\ell$, neither inside the disc,
nor on~$\partial U$, nor outside the disc: they are called {\bf non-secant} lines.
A last but not least property of the new plane is that there are no similarities: a figure
of this plane cannot be resized at another scale. Resizing necessarily changes the
shape of the figure.
\subsection{The tilings $\{p,q\}$ of the hyperbolic plane}
\label{pq}
\noindent
Consider the following process. We start from a convex regular polygon~$P$. We
replicate $P$~in its sides and, recursively, the images in their sides. If we cover
the plane without overlapping, then we say that $P$ {\bf tiles the plane by
tessellation}. We shall often say for short that $P$ {\bf tiles the plane}, here,
the hyperbolic
plane. A theorem proved by Poincar\'e in 1882 tells us that if~$P$ has $p$~sides and
if its interior angle is $\displaystyle{{2\pi}\over q}$, then $P$ tiles the hyperbolic
plane, provided that:
\vskip 7pt
\hbox to \hsize{\hfill$\displaystyle{1\over p}+\displaystyle{1\over q} <
\displaystyle{1\over 2}$.
\hfill(1)}
\vskip 7pt
\noindent
The numbers $p$ and~$q$ characterize the tiling, which is denoted $\{p,q\}$, and
the condition says that the considered polygons live in the hyperbolic plane.
This inequality entails that
there are infinitely many tessellations of the hyperbolic plane.
Note that we find the well known Euclidean tessellations
if we replace $<$ by~$=$ in the above expression.
We get, in this way, $\{4,4\}$ for the square, $\{3,6\}$ for the
equilateral
triangle and $\{6,3\}$ for the regular hexagon.
In~\cite{mmbook1,mmbook2}, the author provides the reader with a uniform treatment
of these tessellations. The basic feature is that each tessellation is spanned by
a tree whose structure obeys well defined properties. We shall illustrate these
points in our next sub-section where we focus on a particular tessellation: the
tiling $\{7,3\}$ of the hyperbolic plane which we call the {\bf heptagrid}.
\subsection{The heptagrid}
\label{hepta}
The heptagrid is illustrated by Figure~\ref{til73}.
The figure is very symmetric, but at this stage, it is difficult to
identify each tile of the figure, especially for the tiles which are outside the
first two rings around the central cell.
The tiles look very much like the hexagonal tiles of the corresponding tessellation of the
Euclidean plane, but the global organization is very different. It seems to us that
this figure indicates that we need something to locate the tiles. We turn to this point
in Section~\ref{navigation}.
\vskip 7pt
\vtop{
\centerline{
\mbox{\includegraphics[width=160pt]{tiling_7_3.ps}}
}
\begin{fig}\label{til73}
\small
The heptagrid: an illustrative representation.
\end{fig}
}
\vskip 7pt
\section{\Large Navigation in the heptagrid}
\label{navigation}
We have seen on the heptagrid that it is not easy to locate the tiles of this tiling,
especially as we go further and further from the centre of the disc.
This is the point to draw the reader's attention to two facts. The disc model
may be misleading if we forget that we look at it in the Euclidean plane and that some symmetries
of the figure have no counterpart in hyperbolic geometry. As an example, the central tile
seems to play an important role, in particular its centre. However, the hyperbolic plane
has no central point, just as the Euclidean plane has none. We decide to
fix an origin when we define coordinates in the Euclidean plane. It is the same here:
the central tile is simply a tile which we decide to take as the origin of our coordinate system.
The second point is that Poincar\'e's disc model gives us a local picture only. We see
the immediate neighbourhood of a point which we decided to place at the centre of the disc.
The model looks like a small window on the hyperbolic plane whose central part only
is well observable. From this lens effect, we conclude that walking on the hyperbolic
plane, we are in the situation of the pilot of a plane flying with instruments only.
And so, what are our instruments?
The first two pictures of Figure~\ref{tree_73} represent them. On the left-hand
side of the figure, we can see the {\bf mid-point lines} defined by the
following property: the line which joins the mid-points of two consecutive sides
of a heptagon cuts a heptagon of the tiling at the mid-point of a side only.
If we consider two rays issued from the meeting point of two secant mid-point
lines and defined by the smallest angle, we can define a set of tiles
for which all vertices but possibly one or two lie inside this angle. We call this
the restriction of the tiling to a sector. The remarkable property is that the
restriction of the tiling to a sector is spanned by a tree, the tree which is
illustrated by the right-hand side of the figure. In the middle of the figure,
we can see that the whole tiling can exactly be split into a central cell and
seven sectors dispatched around the central tile.
\vskip 7pt
\vtop{
\centerline{\hskip 10pt
\mbox{\includegraphics[width=100pt]{middle_global_0.ps}}
\hskip 10pt
\raise2.5pt\hbox{\mbox{\includegraphics[width=97.5pt]{new_eclate_7_3.ps}}}
\raise-2.5pt\hbox{\mbox{\includegraphics[width=130pt]{tree_sector73.ps}}}
}
\begin{fig}\label{tree_73}
\small
Left-hand side: the mid-point lines, the tool which shows how a sector spanned
by the tree is defined in the heptagrid.
\vskip 0pt
Middle: first part of the splitting, around a central tile, fixed in advance,
seven sectors. Each of them is spanned by the tree represented in the right-hand
side.
\vskip 0pt
Right-hand side: the tree which spans the tiling.
\end{fig}
}
\vskip 7pt
The tree which spans the tiling can be generated by very simple rules
indicating the exact connection between a node and its sons. There are two
kinds of nodes, the {\bf black} and the {\bf white} ones. The rules are,
in self-explaining notations:
\vskip 7pt
\hbox to\hsize{\hfill
$B\rightarrow BW$,\hskip 20pt $W\rightarrow BWW$
\hfill(2)}
\vskip 7pt
\noindent
From these rules, it can be proved that the
number of nodes which are on the level~$n$ of the tree is $f_{2n+1}$ where
$\{f_n\}_{n\in I\!\!N}$ is the Fibonacci sequence defined by $f_0=f_1=1$ and
the induction equation \hbox{$f_{n+2}=f_{n+1}+f_n$}. For this reason, the
tree is called the {\bf Fibonacci tree}.
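As a small illustration, which is not part of any program discussed in this paper, the following Python lines expand the rules level by level and check this count:
\begin{verbatim}
# Expanding the rules B -> BW, W -> BWW level by level and checking
# that level n carries f_{2n+1} nodes.
def next_level(level):
    return "".join("BW" if node == "B" else "BWW" for node in level)

def fib(n):                      # f_0 = f_1 = 1, f_{n+2} = f_{n+1} + f_n
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

level = "W"                      # the root of a sector is a white node
for n in range(12):
    assert len(level) == fib(2 * n + 1)
    level = next_level(level)
\end{verbatim}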
It is known that natural numbers
can be represented as sums of distinct terms of the Fibonacci sequence:
$n=\displaystyle{\sum\limits_{i=0}^ka_if_i}$, with $a_i\in\{0,1\}$, $a_k\not=0$ if
$n\not=0$. This representation is not unique, but it can be made unique by requiring
$k$~to be maximal in the above representation. This is the {\bf greedy Fibonacci}
representation of~$n$. It is characterized by the fact that considering the $a_i$'s
in their order as a word on $\{0,1\}^*$, there are no contiguous~1's in the word.
Presently, number the nodes of the tree level by level and, on each level, from the
left to the right, starting from the root which receives~1 as its number. Next,
call {\bf coordinate} of a node~$\nu$ the greedy Fibonacci representation of
its number. Then the coordinates of the nodes of the tree have this striking
property. If $[\nu]$ is the coordinate of~$\nu$, among the sons of~$\nu$ there
is a single node whose coordinate is~$[\nu]00$, which is called the
{\bf preferred son}. Also, we can rewrite~(2) as~(3)
\vskip 7pt
\hbox to\hsize{\hfill
$B\rightarrow \overline{B}W$,\hskip 20pt $W\rightarrow B\overline{W}W$
\hfill(3)}
\vskip 7pt
\noindent
where the bar indicates the position of the preferred son.
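The greedy representation is easily computed; the following Python sketch of ours uses the terms $f_1,f_2,\ldots$, skipping the duplicate $f_0$, which is an indexing choice of this sketch, and checks the absence of contiguous~1's:
\begin{verbatim}
# A small sketch of the greedy Fibonacci representation as a word
# over {0,1}; not part of the simulation program.
def greedy_fib_word(n):
    fibs = [1, 2]                        # the terms f_1, f_2, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                           # keep the terms <= n only
    word = []
    for f in reversed(fibs):
        if f <= n:                       # greedy: take the largest term
            word.append("1")
            n -= f
        else:
            word.append("0")
    return "".join(word)

print(greedy_fib_word(1))                # '1': the root of the tree
for n in range(1, 500):
    assert "11" not in greedy_fib_word(n)
\end{verbatim}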
A {\bf path} between a tile~$A$ and a tile~$B$ is a sequence $\{T_i\}_{i\in[0..n]}$
with $T_0=A$, $T_n=B$ and, for each $i\in[1..n]$, $T_{i-1}$ and~$T_i$ have a
common side. We say that $n$~is the length of the path and we note that from
this definition, the path is oriented: its {\bf source} is the tile $T_0$ and its
{\bf target} is the tile~$T_n$. We also say that the path goes from~$A$ to~$B$.
Clearly, a path from~$B$ to~$A$ is obtained by reversing the numbering of the $T_i$'s
defining the path from~$A$ to~$B$.
For complexity reasons, it is convenient to take a {\bf shortest} path between~$A$
and~$B$: it is a path between~$A$ and~$B$ whose length is minimal. Note that in general
there is no unique shortest path but, by the very definition, such a path exists.
An important particular case is when both tiles are on the same branch of a tree:
this part of the branch is a shortest path between the tiles.
From the properties of the preferred son, we obtain:
\begin{thm}\label{pathlin} {\rm(see\cite{mmASTC,mmbook1})}
There is an algorithm which, from the coordinate of a node of the Fibonacci tree,
computes the path from the node to the root in time linear in the size of the
coordinate.
\end{thm}
From this algorithm, it is easy to compute a path between two nodes which is most
often almost a shortest path between the nodes: first, go from the nodes to the
root and connect the two paths at the root. Moreover, the computation is linear
in the size of the two coordinates. However, in certain situations, this is not
the shortest way. Now, in~\cite{mmbook2}, we proved a refinement of
Theorem~\ref{pathlin}. In order to
state it properly, we have to define coordinates for the tiles of the heptagrid.
To do this, we look at the middle picture of Figure~\ref{tree_73}. We
increasingly number the sectors from~1 up to~7 by counter-clockwise turning around
the central cell, fixing the sector which receives number~1 once and for all. Now,
the coordinate of a tile is defined by~0 for the central tile and, for any other
tile~$T$, by the couple~$(\sigma,\nu)$ where $\sigma\in\{1..7\}$ defines the sector
which contains the tile and where~$\nu$ is the coordinate of~$T$ in the tree which
spans the sector.
Now, we can state the result:
\begin{thm}\label{strongpathlin} {\rm(see\cite{mmbook2})}
There is an algorithm which computes a shortest path between two tiles of the
heptagrid which is linear in the size of the coordinates of the tiles.
\end{thm}
Another consequence of Theorem~\ref{pathlin} is the computation of the
coordinates of the neighbours of a tile, where a neighbour of the tile~$T$ is a
tile~$N$ which shares a side with~$T$. Clearly, $T$ is also a neighbour of~$N$
and we shall often say that~$N$ and~$T$ are neighbours.
The computation of the coordinates of the neighbours relies on two functions of~$n$,
the number of a tile: $f(n)$ which is the number of the father of the tile and
$\sigma(n)$ which is the number of the preferred son of the tile. According to the
previous notations, it is also interesting to consider the function~$[f]$ such that
\hbox{$[f]([n]) = [f(n)]$} and the function $[\sigma]$ such that
\hbox{$[\sigma]([n]) = [\sigma(n)]$}. From the proof of Theorem~\ref{pathlin},
there is an algorithm which allows us to compute $[f]$ and~$[\sigma]$ in time linear in
the size of~$[n]$. However, note that in the computation of the path, the application
of the algorithm at one step takes in fact constant time. With the help of these functions,
the coordinates of the neighbours of a tile are given by Tables~\ref{neigha}
and~\ref{neighb}.
In the table, the neighbours of a tile~$T$ are increasingly numbered from~1 to~7
while counter-clockwise turning around~$T$. Neighbour~1 is the father of~$T$. The
father of a root is the central tile. For the central tile, the neighbour~$i$ is
the tile associated to the root of the tree in the sector~$i$. If~$i$ is the number
of a neighbour of~$T$, we say that the side shared by~$T$ and this neighbour is
also numbered by~$i$.
\def\lignetaba #1 #2 #3
\hbox to \hsize
\ttV\hbox to 25pt{\hfill#1\hfill}
\ttV\hbox to 50pt{\hfill#2\hfill}
\ttV\hbox to 50pt{\hfill#3\hfill}\ttV
}
}
\vtop{
\begin{tab}\label{neigha}
\rm\small
\hskip -5pt :
The numbers of the neighbours for a tile~$\nu$ which is inside the tree. The tile
may be black or white.
\end{tab}
\hbox to\hsize{\hfill
\vtop{\offinterlineskip\hsize=133pt
\ttH
\lignetaba {$\tau$} {black} {white}
\ttH
\ttH
\lignetaba 1 {$f(\nu)$} {$f(\nu)$}
\ttH
\lignetaba 2 {$f(\nu)$$-$1} {$\nu$$-$1}
\ttH
\lignetaba 3 {$\nu$$-$1} {$\sigma(\nu)$$-$$1$}
\ttH
\lignetaba 4 {$\sigma(\nu)$} {$\sigma(\nu)$}
\ttH
\lignetaba 5 {$\sigma(\nu)$+1} {$\sigma(\nu)$+1}
\ttH
\lignetaba 6 {$\sigma(\nu)$+2} {$\sigma(\nu)$+2}
\ttH
\lignetaba 7 {$\nu$+1} {$\nu$+1}
\ttH
}
\hfill}
\vskip 12pt
}
Table~\ref{neigha} considers the general case, when the node associated to the
tile is inside the tree. This means that the corresponding node always has its
father in the tree and that all its neighbours are also in the tree. In
Table~\ref{neighb}, we have the exceptional cases: the nodes on the
leftmost branch, the root excepted, and those on the rightmost branch,
the root excepted. The nodes of the rightmost branch are white and the root is
also white. The nodes on the leftmost branch, the root excepted, are black.
It remains to indicate that, in the case of a heptagon~$H$ which is on the
left- or the rightmost branch, it is easy to define the number of the sector
to which the neighbours which do not belong to the tree of~$H$ belong.
Indeed, let $\sigma$~be the number of the sector in which $H$~lies. If $H$~is
a black node, its neighbours~2 and~3 are in the sector
$\sigma\ominus1$, where $\sigma\ominus1=\sigma$$-$$1$ when $\sigma>1$ and
$1\ominus1=7$. If $H$~is a white node, its neighbours~6 and~7
are in the sector $\sigma\oplus1$ with $\sigma\oplus1=\sigma$+$1$ when $\sigma<7$
and $7\oplus1=1$. Note that for the root of the
sector~$\sigma$, its neighbour~2 is in the sector $\sigma\ominus1$ and its
neighbour~1 is the central
\def\lignetabb #1 #2 #3 #4
\hbox to \hsize
\ttV\hbox to 25pt{\hfill#1\hfill}
\ttV\hbox to 50pt{\hfill#2\hfill}
\ttV\hbox to 50pt{\hfill#3\hfill}
\ttV\hbox to 50pt{\hfill#4\hfill}\ttV
}
}
\vtop{
\begin{tab}\label{neighb}
\rm\small
\hskip -5pt :
The numbers of the neighbours for a tile~$\nu$ which is either the root of the tree,
or which belongs to an extremal branch the leftmost or the rightmost ones. The numbers
are given in the columns {\tt root}, {\tt left} and {\tt right} respectively.
\end{tab}
\hbox to \hsize{\hfill
\vtop{\leftskip 0pt\offinterlineskip\hsize=185.6pt
\ttH
\lignetabb {$\tau$} {left} {right} {root}
\ttH
\ttH
\lignetabb 1 {$f(\nu)$} {$f(\nu)$} 0
\ttH
\lignetabb 2 {$\nu$$-$1} {$\nu$$-$1} 1
\ttH
\lignetabb 3 {$\sigma(\nu)$$-$$1$} {$\sigma(\nu)$$-$$1$} {$\sigma(\nu)$$-$$1$}
\ttH
\lignetabb 4 {$\sigma(\nu)$} {$\sigma(\nu)$} {$\sigma(\nu)$}
\ttH
\lignetabb 5 {$\sigma(\nu)$+1} {$\sigma(\nu)$+1} {$\sigma(\nu)$+1}
\ttH
\lignetabb 6 {$\sigma(\nu)$+2} {$\nu$+1} {$\nu$+1}
\ttH
\lignetabb 7 {$\nu$+1} {$f(\nu)$+$1$} 1
\ttH
}
\hfill}
\vskip 7pt
}
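For the reader's convenience, Tables~\ref{neigha} and~\ref{neighb} translate directly into code; in the following Python sketch of ours, the functions $f$ and~$\sigma$ are assumed to be given:
\begin{verbatim}
# A direct transcription of the two neighbour tables; the functions
# f (father) and s (preferred son) on node numbers are assumed given.
def neighbours(nu, position, f, s):
    # position: 'black' or 'white' for an inner node, 'left', 'right'
    # or 'root' for the exceptional cases.  For 'left' the neighbours
    # 2 and 3, for 'right' the neighbours 6 and 7, lie in the adjacent
    # sector, as explained in the text.
    if position == "black":
        return [f(nu), f(nu) - 1, nu - 1, s(nu), s(nu) + 1, s(nu) + 2, nu + 1]
    if position in ("white", "left"):
        return [f(nu), nu - 1, s(nu) - 1, s(nu), s(nu) + 1, s(nu) + 2, nu + 1]
    if position == "right":
        return [f(nu), nu - 1, s(nu) - 1, s(nu), s(nu) + 1, nu + 1, f(nu) + 1]
    if position == "root":
        return [0, 1, s(nu) - 1, s(nu), s(nu) + 1, nu + 1, 1]
    raise ValueError(position)
\end{verbatim}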
Both Theorems~\ref{pathlin} and~\ref{strongpathlin} give an algorithm to
compute new coordinates if we change the place of the central tile to another tile.
In Section~\ref{program}, we briefly give an explicit pseudo code implementing
an algorithm satisfying Theorem~\ref{strongpathlin}.
The computation of the functions $f$ and~$\sigma$ used in the computation of the
coordinates of the neighbours of a tile is also used for this purpose and,
based on these considerations and on the previous results, we have that:
\begin{thm}\label{chgcoord} {\rm(see~\cite{mmbook2})}
Consider a system of coordinate and a tile~$T$. Assume that we take~$T$ as the central
tile and that we fix its new side~$1$. There is an algorithm which, for each tile~$T'$
computes the coordinates of~$T'$ in the new system in linear time in the size of the
coordinate in the initially given system.
\end{thm}
It is important to remark that in order to obtain the linearity of the algorithm,
we do not compute the functions~$f$ and~$\sigma$ but $[f]$ and~$[\sigma]$ applied
to~$[n]$. This means that in order to compute $\sigma(n)$$-$$1$ for instance,
we need an algorithm for computing $[m$$-$$1]$ from~$[m]$. A similar remark holds for
$[m$+$1]$. The needed algorithms can be found in~\cite{mmbook1}.
\section{\Large The communication protocol}
\label{scenario}
In Section~\ref{introduction}, we mentioned that in~\cite{mmLNEEhongkong},
we already proposed a communication protocol for the tiles of the heptagrid.
This protocol was based on a specific system of coordinates, inherited
from~\cite{mmJCAcomm,mmbook2}. For the convenience of the reader, we briefly
describe this system in Sub-section~\ref{absolrel}. Then, in
Sub-section~\ref{protocol} we define the new protocol.
\subsection{Absolute and relative systems}
\label{absolrel}
The {\bf absolute} system is based on a numbering of the sides of the tiles of the
heptagrid. For each tile, we number the sides from~1 to~7{} in this order while
counter-clockwise turning around the tile. Now, how to fix side~1? We again take
the situation of the left-hand side of Figure~\ref{tree_73}: a central cell surrounded
by seven sectors, each one spanned by a copy of the Fibonacci tree. Now, side~1 is
fixed once and for all for the central tile. For the other tiles, side~1 is the
side shared by the tile and its father, considering that the central cell is the
father of the root for each copy of the Fibonacci tree spanning the sectors.
Now, a side always belongs to two tiles and so it receives two numbers.
This is why this numbering is called {\bf local}. However, the association
between both numbers is not arbitrary. There is a correspondence between them
although it is not one to one. When one number is known, the status of the tile
and the fact that the corresponding node whether lies or not on an extremal
branch of the tree are also needed to determine the other number.
This correspondence is given by Table~\ref{sidenumbers} and Table~\ref{thepairs}
lists all the couples used by the sides: note that we are far from using all
possible couples.
The absolute system consists in first fixing the local numbering once and for all.
Table~\ref{sidenumbers} will still be used to determine the two numbers of a side
of a heptagon.
\def\lignetabii #1 #2 #3 #4
\hbox to \hsize
\ttV\hbox to 40pt{\hfill#1\hfill}
\ttV\hbox to 40pt{\hfill#2\hfill}
\ttV\hbox to 40pt{\hfill#3\hfill}
\ttV\hbox to 40pt{\hfill#4\hfill}\ttV
}
}
\vtop{
\begin{tab}\label{sidenumbers}
\rm\small
\hskip -5pt :
Correspondence between the numbers of a side shared by two heptagons,
$H$ and~$K$. Note that if $H$~is white, the other number of side~$1$
may be~$4$ or~$5$ when $K$~is white and that it is always~$5$ when
$K$~is black.
\end{tab}
\hbox to \hsize{\hfill
\vtop{\offinterlineskip\hsize=170.6pt
\ttH
\hbox to\hsize{\ttV\hfill\hbox to 80pt{\hfill black $H$\hfill}
\hfill\ttV\hfill
\hbox to 80pt{\hfill white $H$\hfill}\hfill\ttV}
\ttH\ttH
\lignetabii {in $H$} {in $K$} {in $H$} {in $K$}
\ttH\ttH
\lignetabii 1 {3$^{wK}, $4$^{bK}$} 1 {4$^{wK}$, 5}
\ttH
\lignetabii 2 6 2 7
\ttH
\lignetabii 3 7 3 1
\ttH
\lignetabii 4 1 4 1
\ttH
\lignetabii 5 1 5 1
\ttH
\lignetabii 6 2 6 2
\ttH
\lignetabii 7 2 7 {2$^{wK}$, 3$^{bK}$}
\ttH
\vskip 7pt
}
\hfill}
}
Next, we remark that the local numbering gives a way to encode a path between
two tiles~$A$ and~$B$. Let $\{T_i\}_{i\in[0..n]}$ be a shortest path from~$A$
to~$B$ and denote by $s_i$ the side shared by $T_i$ and~$T_{i+1}$ for
$i\in[0..n$$-$$1]$. Let $a_i$ be the number of $s_i$ in~$T_i$ and $b_i$ be its
number in $T_{i+1}$. Then we say that the sequence $\{(a_i,b_i)\}_{i\in[0..n-1]}$
is an {\bf address} of~$B$ {\bf from}~$A$. The reverse sequence gives an
address of~$A$ from~$B$. However, from the above sequence alone,
we do not know that $(a_{n-1},b_{n-1})$ is the last side.
In order to remedy this, we slightly change the association of the numbers: for $T_i$
belonging to the path, we denote by~$en_i$ the side shared with $T_{i-1}$ and
by~$ex_i$ the side shared with~$T_{i+1}$. For $T_0$, as $en_0$ cannot be defined
as the number of a neighbour of~$T_0$, we put $en_0=0$ and, similarly, $ex_n=0$
for the target tile. This time we say that
the sequence $\{(en_i,ex_i)\}_{i\in[0..n]}$ is the {\bf coordinate} of $B$ from~$A$.
Similarly, the sequence $\{(ex_{n-i},en_{n-i})\}_{i\in[0..n]}$ is the coordinate
of~$A$ from~$B$. And so, the correspondence between the address and the coordinate
is easy: $a_i = ex_i$ and $b_i = en_{i+1}$ for $i\in[0..n$$-$$1]$, which contains
the definitions of~$ex_0$ and~$en_n$.
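This correspondence can be written down directly; the following Python lines are an illustration of ours, not an excerpt from the simulation program:
\begin{verbatim}
# The correspondence between an address and a coordinate.
def address_to_coordinate(address):
    # address = [(a_0, b_0), ..., (a_{n-1}, b_{n-1})]
    en = [0] + [b for (_, b) in address]     # en_0 = 0, en_{i+1} = b_i
    ex = [a for (a, _) in address] + [0]     # ex_i  = a_i, ex_n = 0
    return list(zip(en, ex))

def reverse_coordinate(coord):
    # The coordinate of A from B, obtained by reversing the path.
    return [(ex, en) for (en, ex) in reversed(coord)]
\end{verbatim}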
Now, how to define a shortest path between $A$ and~$B$?
There are two ways: the first way is given by Theorem~\ref{strongpathlin}. We
apply the algorithm defined in Section~\ref{program} in order to find the
coordinate of $A$ from~$B$.
The second way consists in the following. If $A$~sends messages to every tile,
it considers itself as the central tile, taking its own number~1 as the number~1
of the central cell. Remember that all tiles have the same size, the same shape and
the same area; this is why each tile may feel `equal' to the others. When it sends
the message to its neighbours, it also sends them
the information that it is the central cell and it sends $(0,i)$ to its
neighbour~$i$. And so, the neighbour receives its coordinate from~$A$.
By induction, we assume that each tile~$T$ which receives the message from~$A$
also receives its coordinate from~$A$ and its status in the relative tree to~$A$
in which $T$~is. From this information, and as~$T$ knows from which
neighbour it receives the message, $T$ knows which of its neighbours are its relative
sons and so, it can append the element $(en_T,ex_T)$ to the address it conveys to
the corresponding son together with the relative status of the son. And so,
we proved that each tile receiving a message from~$A$ also receives its address
from~$A$ and its relative status with respect to~$A$. In fact, we have an
implementation
of the local numbering attached to~$A$ as a central tile. This local numbering is
called the {\bf relative} system. Now, from its coordinate from~$A$, $T$~may
compute the coordinate of~$A$ from~$T$ in time linear in the size of the
coordinate, as follows from what we have already noticed. And so, if
$T$~wishes to reply to the message sent by~$A$ it can do it easily.
Moreover, from the properties we have seen in Section~\ref{navigation},
we can see that, proceeding in the way just described, a public message is sent
to every tile exactly once, which is an important feature.
\subsection{The protocol}
\label{protocol}
\noindent
We have now the tools to describe the protocol of communication between the tiles.
For this protocol, we distinguish two types of messages, {\bf public} ones and
{\bf private} ones. By definition, a public message is a message sent by a tile to all
the other tiles. A private message is a message sent by a tile to a single other one.
This distinction is made by the sender of the message.
In this protocol, we assume that we have a global clock defining a discrete
time and that a message leaving a tile~$T$ at time~$t$ can reach only a neighbour
of~$T$ at time~$t$+1. We say that the maximal speed for a message is~1.
The public message makes use of the relative system of the sender.
However,
in the coordinates which are constructed by the tiles which relay the message,
the numbers $en_i$ and~$ex_i$ computed by the relaying tile are defined according
to the absolute system, as the tile does not know where the sender is and
as its own local numbering is defined by the absolute system.
A private message is either a reply to a message, whether public or private,
or a message sent to a single tile according to the following procedure. Each
tile~$T$ has direct access to the managing system. Given the coordinates of a tile~$N$
as defined in Section~\ref{navigation}, the central cell being that of the absolute
system, the managing system gives to~$T$ a shortest path from~$T$ to~$N$
which is a coordinate of~$N$ from~$T$. And so, a private message is defined by
the fact that it has the address of the receiver.
\def\lignetab #1 #2
\hbox to \hsize
\ttV\hbox to 30pt{\hskip 10pt#1\hfill}
\ttV\hbox to 40pt{\hfill #2\hfill}\ttV
}
}
\vtop{
\begin{tab}\label{thepairs}
\small
\hskip -5pt :
The pairs $(i,j)$ of numbers of a side of a heptagon. It is assumed that
the first number denotes the side in the heptagon under consideration.
\end{tab}
\vspace{-10pt}
\hbox to\hsize{\hfill
\vtop{\offinterlineskip\hsize=74.1pt
\ttH
\lignetab 1 {(1,3)}
\lignetab {} {(1,4)}
\lignetab {} {(1,5)}
\ttH
\lignetab 2 {(2,6)}
\lignetab {} {(2,7)}
\ttH
\lignetab 3 {(3,7)}
\lignetab {} {(3,1)}
\ttH
}
\hfill
\vtop{\offinterlineskip\hsize=74.1pt
\ttH
\lignetab 4 {(4,1)}
\ttH
\lignetab 5 {(5,1)}
\ttH
\lignetab 6 {(6,2)}
\ttH
\lignetab 7 {(7,2)}
\lignetab {} {(7,3)}
\ttH
}
\hfill}
\vskip 19pt
}
In order to deliver the information efficiently, a private message from~$A$
to~$B$ stores the coordinate of~$B$ as two stacks~$a$ and~$r$. The stack~$a$ is
for the direct run from~$A$ to~$B$, the stack~$r$ is for the way back. Each tile~$T$
on the path conveys the message to the next one~$N$ on the path, in the direction
from~$A$ to~$B$. To perform this, $T$ reads the top of~$a$, say $(en,ex)$. It knows
that~$ex$ is the number of~$N$ from itself. Just before sending~$N$ the message and
the stacks, $T$~pops the top $(en,ex)$ of~$a$ and pushes $(ex,en)$ on the top of~$r$.
In this way, $T$ knows that it is the receiver if $ex=0$. When this is the case,
$T$~pops the top $(en,ex)$ of~$a$, pushes $(ex,en)$ on~$r$ but does nothing else.
Note that at this moment $a$~is empty. When~$T$ is ready to answer, it exchanges~$a$
and~$r$ and so, the same process allows the message to reach~$A$ together with a
coordinate of~$B$ from~$A$. This is illustrated by Figure~\ref{BobandAlice} on
a toy example.
It is easy to see that this process is linear in time with respect to the
coordinate of the receiver, assuming that all messages travel at maximal speed~1.
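The relay step translates into a few lines; in this Python illustration of ours, reusing \texttt{address\_to\_coordinate} from above, stacks are lists whose last element is the top, and it is not the ADA95 code of the simulation:
\begin{verbatim}
# A sketch of the relay step for a private message.
def relay(a, r):
    en, ex = a.pop()             # read and pop the top (en, ex) of a
    r.append((ex, en))           # push (ex, en) on the way-back stack r
    return ex                    # side towards the next tile; 0 at the receiver

# A toy 2-step path; the side pairs are taken from the table of pairs.
coord = address_to_coordinate([(4, 1), (7, 2)])
a, r = list(reversed(coord)), []          # first pair on top of a
while relay(a, r) != 0:
    pass                                   # the message moves one tile further
a, r = r, a                                # the receiver exchanges the stacks
\end{verbatim}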
Now, we shall see that, in the process of a public message, the new information
which the relaying tile has to append to the message
is easily computed. Here too, we have two stacks, but the stack~$r$ is always
empty.
Consider a relaying tile~$T$. Let $(en_0,ex_0)$ be the top of the stack~$a$. By
construction, $ex_0$~is the side of the neighbour~$N$ through which~$T$ received the
message. In order to facilitate the computation, $T$ also receives the number~$en_1$
in~$T$ of the side numbered~$ex_0$ in~$N$. Let~$s$ be the number of the relative son
of~$T$ in the relative tree. We know that $s\in[3..5]$ if $T$ is white in the relative
tree and that $s\in[4,5]$ if~$T$ is black in the relative tree. This index
corresponds to a position of the father at the absolute index~1. Now, the absolute
index~$ex_1$ of the son defined by~$s$ is given by:
\vskip 7pt
\hbox to\hsize{\hfill
$ex_1 = 1 + ((en_1$$-$$1)+s$$-$$1)$ mod~7,
\hfill(4)}
\vskip 7pt
Once $ex_1$ is known, the other absolute number of the side defined by~$ex_1$,
say $en_2$, may be determined by Table~\ref{thepairs}. Now,
we know that when $ex_1\in\{1,2,3,7\}$, $en_2$~is not uniquely defined. The value
of~$en_2$ depends on the absolute status of~$T$. The simplest solution is to
assume that all pairs~$(i,j)$ for $i\in\{1..7\}$ are known for each tile~$T$
in a table~{\it output} which is a sub-table of Table~\ref{thepairs}, see
Section~\ref{program} for the implementation of this important point. Then
we have:
\vskip 7pt
\hbox to\hsize{\hfill
$en_2=output(ex_1)$.
\hfill}
\vskip 7pt
For implementation, note that the root of the relative sector~1 for a sender is
its absolute neighbour~1. Also note that formula~(4) is different from the formula
given in~\cite{mmbook2}. In formula~(4), we do not need to know the relative status
of the tile, contrary to the formula used in~\cite{mmbook2}. This is automatically
given by the relative indices used for the relative sons.
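As an illustration of ours, formula~(4) translates directly into code:
\begin{verbatim}
# Formula (4) as code: en1 is the absolute number, in the relaying tile,
# of the side through which the message arrived, and s is the relative
# index of the son (s in [3..5] for a white tile, [4..5] for a black one).
def exit_side(en1, s):
    return 1 + ((en1 - 1) + s - 1) % 7

# The second number of that side is then read from the tile's table
# `output', a sub-table of the table of pairs: en2 = output[ex1].
\end{verbatim}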
\vtop{
\vspace{-40pt}
\centerline{
\mbox{\includegraphics[width=200pt]{commBobAlice_a8.ps}}
}
\vspace{-35pt}
\begin{fig}
\label{BobandAlice}\small
Illustration of the protocol of communication between tiles of the pentagrid or of the
heptagrid. The circled tiles indicate a shortest path from Alice to Bob. Note the
numbering of the sides shared by two tiles according to the definition of this
sub-section.
\end{fig}
}
However, we cannot assume that all tiles send messages at any time. This would not
be realistic. Also, we cannot assume that public messages are sent forever to cover
the whole plane, which would also not be realistic. Indeed, in case of public messages
sent without stopping, the number of messages at any tile at each time would
increase to infinity at an exponential rate with time.
In order to limit the scope of a public message, we define a {\bf radius} of
its propagation. This means that if a public message is sent from~$A$, it will
reach any tile whose distance from~$A$ is at most the radius. The distance
between two tiles~$A$ and~$B$ is the length of a shortest path between~$A$
and~$B$. Of course, the message could also bring with it the delay which could
be decremented by~1 each time it reaches a new tile, and the message would destroy
itself when the delay reaches~0. The defect of this solution is that we have to
transport the delay to all tiles within the radius and that at each time, we have
to perform this decrementing at each tile.
There is another solution. When $A$ sends a public message with radius~$r$,
the message is not sent at maximal speed~1 but at a speed~$\displaystyle{1\over2}$.
As we have a global clock, we shall distinguish between odd and even times. A
public message travels at odd times and remains on the tile at even times.
Consider the message~$\mu$ sent from~$A$ at time~1. Now, $A$ must remember~$\mu$
and this is implemented as another message~$\mu_e$ which is not sent immediately.
The new message has the minimal information. It has to destroy~$\mu$ and only~$\mu$.
To this purpose, all messages are identified by a unique number given by the system.
And so, $\mu_e$ contains the number of~$\mu$. Next, $A$ keeps~$\mu_e$ during
$r$~tops of the clock. In fact, $\mu_e$ also keeps a delay which is~$r$ when
$\mu_e$ is created and which is decremented at each top of the clock. When the
delay is~0, $\mu_e$ is sent to all tiles, according to the same process as a public
message, but at speed~1. Also, another important difference is that~$\mu_e$
does not need to transport a stack as it is not sent in order to get any reply.
As $\mu_e$ travels at speed~1, at time~$2r$, it reaches $\mu$ and it destroys it.
Later on we shall say that $\mu_e$ is an {\bf erasing} message.
With this, we completed the description of the process concerning public
messages. Private messages always travel at speed~1.
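The timing argument can be checked on a toy example; the following sketch of ours uses the simplified convention that the public message, sent at time~1, is at distance $(t+1)$ div~$2$ at time~$t$:
\begin{verbatim}
# A toy check of the timing of the erasing message; not part of the
# simulation program.
r = 5                                    # radius of propagation
for t in range(1, 2 * r + 1):
    front = (t + 1) // 2                 # public message: moves at odd times
    eraser = t - r                       # erasing message: leaves A after r tops
    if t > r and eraser >= front:
        print("erased at time", t, "at distance", front)   # t = 2r, distance r
        break
\end{verbatim}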
A last point for the simulation: as a tile does not send a message at any
time, we have to decide when it sends a message. For this purpose,
we use a Poisson generator, both for the decision of sending a message and,
in the case of a public message, for defining the radius of the propagation of
the message. For each parameter, we use a different coefficient for the generator.
We shall see the values in Section~\ref{experiment}.
\section{\Large The simulation program}
\label{program}
As usual for the implementation of a theoretical model, the simulation program
results from many choices decided by the programmer for the implementation of various
features, in particular the structure representing the space of the simulation;
we shall see this point in Subsection~\ref{data}. Subsection~\ref{auxil}
is devoted to auxiliary computations connected with the implementation
of the basic algorithms. Subsection~\ref{scenar} describes the exact
implementation of the scenario described in Section~\ref{protocol}.
The simulation program was written in ADA95. Where relevant, we mention
facilities given by the programming language.
\subsection{Data structures}
\label{data}
In a first stage of the experiments described in Section~\ref{experiment}, the
heptagrid was implemented as a table {\tt space} with two entries: one in 0..7 and the
other in 0..{\tt maxsize}, where {\tt maxsize} is the number of tiles of a sector
represented by the simulation, the sector~$i$, with $i\in[1..7]$ being represented
by {\tt space(i,*)}. From Section~\ref{navigation}, it is not difficult to
compute that up to the level~$n$, the number of tiles in a sector is
$f_{2n+2}$$-$$1$.
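This count is easily checked; the following Python snippet is an illustration of ours, not part of the program:
\begin{verbatim}
# Checking that a sector truncated at level n contains f_{2n+2} - 1 tiles.
def fib(n):                     # f_0 = f_1 = 1, f_{n+2} = f_{n+1} + f_n
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def sector_size(n):             # levels 0..n carry f_{2k+1} tiles each
    return sum(fib(2 * k + 1) for k in range(n + 1))

for n in range(10):
    assert sector_size(n) == fib(2 * n + 2) - 1
\end{verbatim}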
The elements of the table are a small table of 8 records indexed from~0 to~7.
Record~0 represents the tile {\tt space}$(i,j)$: it is the tile~$T$ of
coordinate $(i,j)$ with $i\in[1..7]$ and
\hbox{$j\in[1..${\tt maxsize}$]$}. Record~$v$, with $v\in[1..7]$ represents the
neighbour~$v$ of~$T$. The fields of the record contain the information needed
to minimize the computation time. Apart from the coordinates of the tile and of its
neighbours, the record is assumed to contain information of constant size.
Presently, in order to avoid a quick memory overflow caused by the manipulation
of the table, the space of the experiment is implemented by stacks. There is a basic
table {\tt space} with a single index in 0..7 which represents the central cell
and the roots of the Fibonacci trees.
Each element of the table is a pointer, {\tt addresstile},
which points at a tile. The pointer in {\tt space(0)} points at the central cell;
{\tt space($i$)}, with \hbox{$i \in 1..7$}, points at the root of the sector~$i$.
The tiles themselves are represented by a record whose fields are:
\vskip 7pt
\vtop{\leftskip 60pt\parindent -20pt\hsize=300pt
{\tt num}: the number of the node in its tree,\vskip 0pt
{\tt sect}: the number of the sector,\vskip 0pt
{\tt neighbour}: a table of seven pointers, {\tt neighbour($i$)} pointing
at the neighbour~$i$ of the tile,\vskip 0pt
{\tt associate}: a table of seven numbers, the absolute number of side~$i$
in the neighbour~$i$,\vskip 0pt
{\tt branch}, with the self-explanatory values about the position of the tile
with respect to the borders of the sector:
\vskip 0pt
\hskip 20pt{\tt left}, {\tt right}, {\tt middle}, {\tt root}
and {\tt centre},\vskip 0pt
{\tt status}, the status of the tile: {\tt central}, {\tt white} or {\tt black},
\vskip 0pt
{\tt outer}, boolean: indicates whether the considered neighbour is outside the
simulation space,\vskip 0pt
{\tt border}, boolean: indicates whether the tile is on the border of the simulation space,
\vskip 0pt
{\tt message\_stack0}, {\tt message\_stack1}: two pointers to the stacks of
messages,\vskip 0pt
{\tt last\_message0}, {\tt last\_message1}: two pointers to the last element of
each stack, {\it i.e.} the most recent message.
}
The stack of messages contains what is needed for the communication system.
Each element of the stack holds a record with the following fields:
\vskip 7pt
\vtop{\leftskip 60pt\parindent -20pt\hsize=300pt
{\tt next} : pointer for handling the stack,\vskip 0pt
{\tt relative\_father}, an integer in 1..7, 0 if not defined,\vskip 0pt
{\tt relative\_status}, with the values: {\tt black}, {\tt white}, {\tt centre},
\vskip 0pt
{\tt number}, an integer: the number given to the message,\vskip 0pt
}
\vtop{\leftskip 60pt\parindent -20pt\hsize=300pt
{\tt the\_type}, with the values: {\tt public}, {\tt nonpublic}, {\tt erasing},
\vskip 0pt
{\tt wait}, an integer: the counter for an erasing message,
\vskip 0pt
{\tt direct, wayback}, pointers to messages.
}
\vskip 7pt
Now, why two pointers at the stack of messages?
This is to perform the simulation in the following way. At each tile,
we represent the stack of messages by two disjoint stacks: {\tt stack0},
accessed through {\tt message\_stack0} and {\tt last\_message0}, and
{\tt stack1}, accessed through the pointers {\tt message\_stack1} and
{\tt last\_message1}.
We consider that {\tt stack0} represents the stack at the tile at the time~$t$
while {\tt stack1} represents the stack at the same tile but at the
time~$t$+1. The disjunction of {\tt stack0} from {\tt stack1}
avoids any confusion of pointers. This disjunction is needed because a tile
possibly receives contributions from its neighbours, and these contributions are
processed sequentially by the simulation, whence the distinction between the two
configurations of the same stack at the time~$t$ and at the time~$t$+1:
the same tile receives contributions from its neighbours at different steps of the
sequential process within the steps of computation which represent the turning
from the time~$t$ to the time~$t$+1.
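To make this double buffering concrete, here is a minimal sketch, not taken
from the actual program, of how one step of the simulation could move the
messages of a tile from {\tt stack0} to {\tt stack1} and then exchange the
roles of both stacks; all names are merely indicative, and the test
{\tt cancelled} stands for the scan described in Subsection~\ref{scenar}.
\vtop{
\begin{algo}\label{sketchstacks}
\small
A minimal sketch, not taken from the actual program, of one update step of a
tile based on the two stacks.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
procedure update\_tile(t : in out tiletype) is
   cursor : messageptr := t.message\_stack0;
begin
   while cursor /= null
   loop
      -- a message reached by its erasing signal
      -- is simply not copied onto stack1:
      if not cancelled(t, cursor)
      then replicate(cursor, ontile => t);
      end if;
      cursor := cursor.next;
   end loop;
end update\_tile;
-- once update\_tile has run on every tile, stack1 holds
-- the configuration at time t+1; for each tile, the
-- roles of the two stacks are then exchanged:
procedure swap\_stacks(t : in out tiletype) is
begin
   t.message\_stack0 := t.message\_stack1;
   t.message\_stack1 := null;
   t.last\_message0 := t.last\_message1;
   t.last\_message1 := null;
end swap\_stacks;
\par}
\demitrait
\vskip 10pt
}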
\subsection{Auxiliary computations}
\label{auxil}
The computation of the shortest path is given by a simple algorithm which
hides more involved computations, although they remain within a linear estimate
with a small coefficient, see Algorithm~\ref{shortest}. Among the shortest paths from the
tile~$A$ to the tile~$B$, there are two extremal ones: if we consider~$A$
as the central cell, $B$ lies in a sector~$i$, and the path from~$A$ to~$B$
which passes through the root of the sector and goes along the branch of the
tree from the root to~$B$ is one of these extremal paths: there is no other shortest
path on the left-hand side of this path when looking from the root towards~$B$,
with $B$ below the root. If we start from~$B$ as the central cell, we get the other
extremal path: no shortest path lies on the right-hand side of this one. Now, when we have
at our disposal a shortest path~$\pi$, say from~$A$ to~$B$, it is possible to
define the leftmost shortest path from~$A$ to~$B$. This path is produced
by a function {\tt leftmost} which computes the leftmost shortest path when it
is given a shortest path. It relies on a procedure {\tt measure} which computes
the distance between two paths, and on a function {\tt pathroot} which computes
the path from a node to the root out of the coordinates of the node.
The function {\tt pathroot} is a significantly improved version of the
algorithm given in~\cite{mmbook2}, giving a much simpler proof of
Theorem~\ref{pathlin}. The key point was to notice that there is a kind
of propagation of the carry when, for the first time, two contiguous~1's are
detected: this is the test in the {\tt else}-branch of the main {\tt if} of the loop.
When starting the execution of the function {\tt shortest}, both paths start
from the central cell, as indicated by the instructions which initialize
{\tt Lcursor} and {\tt Rcursor}. As suggested by the identifiers, {\tt Lcursor}
points at the leftmost tile among {\tt tile1} and {\tt tile2}. The
function {\tt theleftmost} looks whether the tiles belong to the same sector
or not. If not, the absolute difference between the indices of the sectors
allows us to know the leftmost tile. If they are in the same sector,
then the function follows both paths from the root to the nodes
until the paths diverge, as we assume that the tiles are distinct.
At the tile where both paths diverge, it is easy to
determine which tile is on the left-hand side of the other.
\vtop{
\begin{algo}\label{shortest}
\small
The computation of the shortest path. Given {\tt tile1} and {\tt tile2}
with the assumption that \hbox{\tt tile1 $\not =$ tile2},
\hbox{\tt tile1 $\not = \emptyset$} and
\hbox{\tt tile2 $\not= \emptyset$} as well.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
Ltile := theleftmost(tile1,tile2);
if tile1 = Ltile
then Rtile := tile2;
else Rtile := tile1;
end if;
Lcursor := chain\_translation(pathroot(Ltile));
Rcursor := chain\_translation(leftmost(pathroot(Rtile)));
loop
measure(distance,Lcursor,Rcursor);
if Lcursor.next = null then exit; end if;
if Rcursor.next = null then exit; end if;
exit when distance > 1;
Lcursor := Lcursor.next;
Rcursor := Rcursor.next;
end loop;
-- we go out of the loop when distance $\geq$~2
-- the distance is between Lcursor.next and Rcursor.next
ladresse := connect (Lcursor,Rcursor);
return ladresse;
\par}
\demitrait
\vskip 10pt
}
When the leftmost tile is determined, the function first computes
the path from {\tt Ltile} to the central cell and then the leftmost path
from {\tt Rtile} to the central cell. In this way, we already know that
the distance between the paths remains at most~1 for as long as possible. In the
loop, which starts from the central tile, the distance between the tiles
of the paths at the current stage is measured by the procedure
{\tt measure}, which updates {\tt distance} with the computed value.
If one path is completely
traversed or if the distance becomes bigger than~1, the loop is completed
and both pointers point at the furthest tiles from the central tile
where the distance is at most~1. Then the function {\tt connect} establishes
the necessary path in order to join both remaining parts of the paths in the shortest
way: it is a finite selection of cases in each of which the construction is easy.
This point raises no difficulty.
The procedure~{\tt measure} is detailed in Algorithm~\ref{measure}. It takes into
account that the leftmost path which joins the central cell to the rightmost tile
may lie on the left-hand side of the path joining the leftmost tile. This is why we
distinguish three cases, denoted by {\tt equal}, when the
distance is~0, {\tt normal}, when the distance is~1 and the tile on {\tt Lmark}
is on the left-hand
side of the tile on {\tt Rmark}, and {\tt opposite}, when
the distance is~1 and the tile on
{\tt Lmark} is on the right-hand side of the
tile on {\tt Rmark}. Such cases do happen. Note that the case {\tt opposite}
is structurally very similar to the case {\tt normal}. The only difference
is that in the former case we look at {\tt Lmark.status} while in the
latter we look at {\tt Rmark.status}. The reason is that in both cases we have
to consider the status of the rightmost tile.
\vtop{
\begin{algo}\label{pathroot}
\small
The computation of the path from the root to a node of the tree.
Given \hbox{{\tt tile} $=$ $(${\tt tile}$(0)$,{\tt tile}$(1))$},
the number of the sector and the number of the node respectively.
Also, \hbox{{\tt representation} = $[${\tt tile}$(1)]$} is an array of $0$'s and $1$'s
indexed from~$0$. The index {\tt cursor} starts from~$1$ and it is on the
lowest digit.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
stage := new T\_pairs;
stage.next := thepath;
stage.ingate := 1;
stage.outgate := 0; -- characterizes an end
thepath := stage;
while cursor < representation'last
loop
stage := new T\_pairs;
stage.next := thepath;
stage.ingate := 1;
if cursor+1 > representation'last
then
stage.outgate := 4 + representation(cursor);
else
stage.outgate := 4 - representation(cursor+1)
+ representation(cursor);
if representation(cursor+1) = 1
then if cursor+2 <= representation'last
then representation(cursor+2) := 1;
end if;
end if;
end if;
thepath := stage;
cursor := cursor+2;
end loop;
-- finalization :
stage := new T\_pairs;
-- inversion
stage.outgate := tile(0);
stage.ingate := 0;
stage.next := thepath;
thepath := stage;
return thepath;
\par}
\demitrait
\vskip 10pt
}
This allows us to briefly mention that the function {\tt leftmost}
works in a similar way, also based on the computation of the distance
between the given path and the constructed one, which should be the leftmost one.
During the construction, the algorithm tries to keep the distance between the
current tile on the given path and the current tile of the constructed path
exactly equal to~1. As long as this is possible, the algorithm goes on in this
way. If it can hold the condition until the last connection to the target of the
path, we are done. But it may happen, and this indeed does happen, that
at the next step after the current tile, the distance must be at least~2. This
means that the constructed path went too far to the left and that the
computation must be resumed
\ligne{\hfill}
\vtop{
\vspace{-20pt}
\begin{algo}\label{measure}
\small
The computation of the distance between two tiles, assuming that
the distance between their fathers is at most~$1$.
Given {\tt Lmark},{\tt Rmark}, two pointers on the considered paths.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
distance0 := distance;
case side is
when equal => -- distance = 0
if Lmark.ingate = 0 then -- initialisation
maxim := maxi(Lmark.outgate,Rmark.outgate);
minim := mini(Lmark.outgate,Rmark.outgate);
ecart1 := maxim - minim;
ecart2 := minim + 7 - maxim;
ecart := mini(ecart1,ecart2);
else -- ordinary situation: distance0 = 0
if (Lmark.outgate /= 0) and (Rmark.outgate /= 0) then
distance := maxi(Lmark.outgate,Rmark.outgate)
- mini(Lmark.outgate,Rmark.outgate);
else distance := distance0;
end if;
end if;
if Lmark.outgate /= Rmark.outgate then
if Lmark.outgate = theleftmost(Lmark.outgate,
Rmark.outgate) then
side := normal;
else side := opposite;
end if;
end if;
when normal => -- distance0 = 1
if (Lmark.outgate /= 0) and (Rmark.outgate /= 0) then
distance := 5 - Lmark.outgate + Rmark.outgate - 3;
if Rmark.status = white then
distance := distance+1;
end if;
if distance = 0 then side := equal; end if;
else distance := distance0;
end if;
when opposite =>
if (Lmark.outgate /= 0) and (Rmark.outgate /= 0) then
distance := 5 - Lmark.outgate + Rmark.outgate - 3;
if Lmark.status = white then
distance := distance+1;
end if;
if distance = 0 then side := equal; end if;
else distance := distance0;
end if;
end case;
\par}
\demitrait
\vskip 10pt
}
\noindent
from a previously reached tile. If it were needed
to go back to the central tile each time the computation has to be resumed,
the algorithm would be quadratic.
Fortunately, a careful analysis of the construction
shows that it is possible to find key tiles so that, each time we have
to resume the computation, we have to go back only to the last fixed key tile, so that
adding all the traversed parts of the path leads to a total length which is at most
twice the length of the given path. The details of this part of the program
exceed the room available in this paper. However, the overall computation of the
function {\tt leftmost} is still linear in time with respect to the length of the
given path.
As a last auxiliary function, we just recall the algorithm for the Poisson
generator, taken from~\cite{knuth}. We can rewrite it as follows:
\vtop{
\begin{algo}\label{poisson}\small
The Poisson generator for an integer valued function.
Here, {\tt random} is a uniform random integer-valued variable in the
range $0$..{\tt p\_rand}. The function {\tt random} is an algorithmic random
generator. It can be constructed as indicated also in~{\rm\cite{knuth}}.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
compte : integer := 0;
pois : double := 1.0;
begin
loop
-- multiply uniform draws in (0,1] until the product
-- falls below exp(-lambda), counting the draws:
pois := pois*double(random)/double(p\_rand);
exit when pois < exp(-lambda);
compte := compte+1;
end loop;
return compte; -- compte follows a Poisson law of mean lambda
end;
\par}
\demitrait
\vskip 10pt
}
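In the program, a call such as {\tt poissonrandom(lambda)} returns one value of
this generator; as can be seen in Algorithm~\ref{action}, a positive returned
value is what triggers the corresponding event.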
\subsection{Implementing the protocol}
\label{scenar}
The simulation is controlled by the procedure {\tt execute}, see
Algorithm~\ref{exec}. As there are many variables to collect partial results
for later analysis, we use records in order to make the program more readable.
The partial results concern the various kinds of messages, so that the
fields of the records are {\tt public}, {\tt reply}, {\tt write},
{\tt nonpublic}, {\tt erase}.
The function {\tt init\_config} initializes the table {\tt space}. In particular,
for each tile, it provides the information about the number of the tile, its sector,
as well as the similar information for its seven neighbours. It also computes
the table {\tt associate} of the numbers of the sides of the tile in the neighbour
sharing each side.
In Algorithm~\ref{exec}, note that we perform additions on records thanks
to the facility offered by $ADA$ to overload the usual operation signs '+',
'$-$', '$\times$', as well as relations such as '$=$' and '$>=$'.
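As an illustration, here is a possible sketch of such an overloading for the
records of counters; the field names are those listed above, the rest is merely
indicative and not taken from the actual program.
\vtop{
\begin{algo}\label{sketchplus}
\small
A sketch of the overloading of '+' for the records of counters.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
type counts is record
   public, reply, write, nonpublic, erase : integer := 0;
end record;
function "+" (a, b : counts) return counts is
   c : counts;
begin
   c.public    := a.public    + b.public;
   c.reply     := a.reply     + b.reply;
   c.write     := a.write     + b.write;
   c.nonpublic := a.nonpublic + b.nonpublic;
   c.erase     := a.erase     + b.erase;
   return c;
end "+";
-- afterwards, totals := totals + collect\_at\_t;
-- adds the counters fieldwise
\par}
\demitrait
\vskip 10pt
}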
We develop the function {\tt transition} in Algorithm~\ref{transit}.
The actual content of the function {\tt transition} is performed by the
procedure {\tt action\_in} which implements the exact choices of the simulation,
see Algorithm~\ref{action}.
As indicated in Section~\ref{scenario}, public messages are sent and conveyed at
odd times. Due to the copying process from {\tt stack0} onto {\tt stack1} on which
the execution is based, the information about a public message has to be copied onto
{\tt stack1} at even times: otherwise, the message would be erased. This is what
the procedure {\tt replicate} performs. The same procedure is also used for the
erasing messages during the delay they observe at the tile which emitted the public
message. Of course, at each replication, the delay is decreased by~1, until it
reaches~1. At this moment, the erasing message is sent to catch the emitted message.
\vtop{
\vspace{-4pt}
\begin{algo}\label{exec}\small
The procedure {\tt execute}.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
auxil := init\_config;
collect(auxil,0,collect\_at\_t,nb\_max\_msg\_t);
totals := collect\_at\_t;
the\_max\_msg := nb\_max\_msg\_t;
themax\_at\_t := collect\_at\_t;
for t in 1..duration
loop
space := transition(auxil,t);
auxil := space;
collect(auxil,t,collect\_at\_t,nb\_max\_msg\_t);
totals := totals + collect\_at\_t;
themax\_at\_t := maxi(themax\_at\_t,collect\_at\_t);
the\_max\_msg := maxi(the\_max\_msg,nb\_max\_msg\_t);
end loop;
\par}
\demitrait
\vskip 10pt
}
\vtop{
\vspace{-4pt}
\begin{algo}\label{transit}\small
The function {\tt transition}.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
-- central tile:
action\_in(0,0);
-- the other tiles:
for sect in 1..space'last(1)
loop
for i in 1..space'last(2)
loop
action\_in(sect,i);
end loop;
end loop;
return copy(new\_space);
\par}
\demitrait
\vskip 10pt
}
Let us describe this procedure in more detail.
Let $\mu$ be a public message issued at time~$2t$$-$1, with $t>0$, by the procedure
{\tt send}. At the same time, the procedure {\tt send} creates an erasing message
$\mu_e$ with the same number as~$\mu$. In {\tt send}, the Poisson random generator
is called with the parameter {\tt poisson\_radius} in order to define the radius
of propagation of~$\mu$. This initializes the field {\tt wait} attached to~$\mu_e$.
By construction, the radius is always positive. Note that there is no condition on
the time for the management of an erasing message. As long as its delay is greater
than~1, the erasing message remains in the tile~$T$ where it was created. When the
delay is~1, it is sent to the neighbours with its field {\tt wait} set to~0.
The procedure {\tt convey}, called at each time by {\tt action\_in}, then transmits
the erasing message, as its delay is now always~0, see Algorithm~\ref{action}.
\vtop{
\vspace{-4pt}
\begin{algo}\label{action}\small
The procedure {\tt action\_in}.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines\leftskip 0pt
\obeyspaces\global\let =\ \tt\parskip=-2pt
cursor := space(sect,i)(0).message\_stack0;
while (cursor /= null)
loop
case cursor.the\_type is
when public =>
if (time mod 2) /= 0 then
if cursor.direct = null then
send(from => (sect,i), cursor => cursor);
else convey (from => (sect,i), cursor => cursor);
end if;
else -- even time:
replicate(cursor, ontile => tile1);
if poissonrandom(poisson\_reply) > 0 then
reply(from => (sect,i), place => cursor,
num => cursor.number);
end if;
end if;
when nonpublic =>
convey (from => (sect,i), cursor => cursor);
when erasing => -- cursor.direct = null, always
if cursor.wait = 1 then
send(from => (sect,i), cursor => cursor);
elsif cursor.wait = 0 then
convey (from => (sect,i), cursor => cursor);
else -- cursor.wait > 1
replicate(cursor, ontile => tile1);
tile1.last\_message.wait := cursor.wait-1;
end if;
end case;
cursor := cursor.next;
compte := compte+1;
end loop;
if poissonrandom(poisson\_write) > 0 then
write (from => (sect,i));
end if;
if (time mod 2) = 0 then
if poissonrandom(poisson\_public) > 0 then
init\_cell(sect,i,new\_space,public);
end if;
end if;
\par}
\demitrait
\vskip 10pt
}
It is not difficult to see that if~$\mu$ is sent at the time~$t$ and $\mu_e$
is sent at the time~$t$+$r$, then $\mu$ and~$\mu_e$ are at the same tile at the
time~$t$+$2r$.
More precisely, $t$+$2r$ has the same parity as~$t$, which is odd, as $\mu$ is a
public message, while $\mu_e$ is sent as soon as $\hbox{\tt wait} = 1$.
And so, the coincidence is detected
by the procedure {\tt convey} at the even time $t$+$2r$+1:
$\mu$, which reached the tile~$\tau$ at~$t$+$2r$, is still there at $t$+$2r$+1.
The situation is given in a schematic way in Table~\ref{erasing} and it is
illustrated by Figure~\ref{fig_erasing}.
\vtop{
\vspace{-4pt}
\begin{algo}\label{erasing}\small
A schematic trace of the emission of a public message ({\tt X}) and of its
erasing signal ({\tt o}); {\tt Z} marks the tile and the time at which they meet.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines\leftskip 0pt
\obeyspaces\global\let =\ \tt\parskip=-2pt
--
-- sector 1 1 1 5 5 5
-- tile 7 3 1 0 1 9 24
-- wait 0 1 2 3 4 5 6
--
-- 0 X
-- 1 X 6
-- 2 X 5
-- 3 X 4
-- 4 X 3
-- 5 X 2
-- 6 o X 1
-- 7 o X 0
-- 8 o X 0
-- 9 o X 0
-- 10 o X 0
-- 11 o X 0
-- 12 Z 0
--
\par}
\demitrait
\vskip 10pt
}
\vtop{
\vspace{-25pt}
\centerline{
\mbox{\includegraphics[width=300pt]{fig_msgssys_erase.ps}}
}
\vspace{-5pt}
\begin{fig}\label{fig_erasing}
\small
The emission of a public message and of its erasing signal.
\end{fig}
}
\vskip 7pt
In the program, the cancellation of a message reached by its erasing signal
is performed by the procedure {\tt convey}. When dealing with a public message,
the first task of the procedure is to scan the stack in order to possibly detect
an erasing signal with the same number as the message. When this happens, the
message, which is scanned in {\tt stack0}, is simply not copied onto {\tt stack1}.
When the procedure examines an erasing signal, it performs a similar scan:
if a public message bears the same number as the signal, the signal is not copied
onto {\tt stack1}. As the cancellation has to occur when both signals are present,
this simple device is enough to perform the action, and there is no need
to connect the two decisions of not copying the information onto
{\tt stack1}. This is guaranteed by the disjunction between {\tt stack0} and
{\tt stack1}.
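As an illustration, here is a possible sketch of this scan, with indicative
names and not taken from the actual program:
\vtop{
\begin{algo}\label{sketchcancel}
\small
A sketch of the test deciding whether the message under {\tt cursor} must be
dropped instead of being copied onto {\tt stack1}.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
function cancelled(t : tiletype; cursor : messageptr)
   return boolean is
   scan : messageptr := t.message\_stack0;
begin
   while scan /= null
   loop
      if scan.number = cursor.number then
         -- a public message and its erasing signal
         -- cancel each other:
         if (cursor.the\_type = public
             and scan.the\_type = erasing)
         or (cursor.the\_type = erasing
             and scan.the\_type = public)
         then return true;
         end if;
      end if;
      scan := scan.next;
   end loop;
   return false;
end cancelled;
\par}
\demitrait
\vskip 10pt
}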
\section{\Large The experiment}
\label{experiment}
The experiment was performed by running the program on a simple laptop.
The laptop is a {\tt Lenovo} one, with two Intel processors, both working
at 2~GHz, and Linux Mandriva as operating system. The $ADA$ compiler used
belongs to the {\tt gnu} family, version 4.4.1. In the first sub-section, we
describe the experiment and we give an account of the results. In the second
sub-section, we give an interpretation of the results.
\subsection{Description of the experiment and of the results}
\label{rawdata}
The program was run for six values of the depth of the Fibonacci tree,
ranging from~5 to~10.
Denote by $\cal S$ the observation space.
The size of~$\cal S$ is defined by the depth of the Fibonacci tree which spans the
seven sectors displayed around the central cell. We consider the tiles whose
level in the tree is at most {\tt depth} which takes values in [5..10] in our
experiments. This means that $\cal S$ contains 1625 tiles when
$\hbox{\tt depth} = 5$ and 200593 tiles when $\hbox{\tt depth} = 10$.
Increasing the depth by~1 means multiplying the number of tiles by
a coefficient which quickly tends to
$\displaystyle{{3+\sqrt5}\over2}\approx 2.618034$. This factor of
a bit more than~2.6 can be observed in Table~\ref{statnb1}, which indicates
the number of tiles of~$\cal S$ for the different values of {\tt depth}.
The second parameter which we also varied
is the radius of propagation of the public messages. As indicated in
Section~\ref{scenario}, the radius is an integer-valued random variable following
a Poisson law. The coefficient is fixed to~5{} in one series of experiments and
to~10{} in the second one. The mean of the variable is this
coefficient; however, the value may range from~0 to twice the coefficient with,
from time to time, bigger values. Table~\ref{statnb1} indicates the number of messages
emitted in the observed area under these conditions. The remaining parameters
are the following. Each tile is given the possibility to
emit a message. Again we assume that the probability of such an event is
given by a Poisson law. The coefficient is 0.005 for a public message, taking into
account that such messages are emitted at odd times only. Also, the tiles which
are on the border of the space and which are the most numerous, more than 60\%{}
of the overall number of tiles in~$\cal S$, are given an additional possibility,
again with a Poisson law whose coefficient is 0.0025. This is also meant to reflect the
possibility for the tiles of~$\cal S$ to receive messages from outside~$\cal S$.
As for private messages, they are caused either as a reply to a public message,
or as a single message sent to a particular tile via the consultation of the
directory. These events also follow a Poisson law; the coefficient
is 0.0025 for the reply to a public message, and 0.001 for the consultation
of the directory. In the case of a message sent after consulting the directory,
it is assumed that the coordinates of the tile given by the directory fall
within~$\cal S$ and that both numbers constituting the coordinates are uniformly
distributed in their respective ranges.
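In terms of the program, these choices amount to a handful of coefficients. The
following sketch summarizes them; {\tt poisson\_radius}, {\tt poisson\_public},
{\tt poisson\_reply} and {\tt poisson\_write} appear as such in the algorithms of
Section~\ref{program}, while {\tt poisson\_border} is an indicative name for the
additional coefficient of the border tiles.
\vtop{
\begin{algo}\label{sketchcoef}
\small
A sketch of the coefficients used in the experiment.
\end{algo}
\vspace{-12pt}
\grostrait
\vskip 4pt
{\obeylines
\obeyspaces\global\let =\ \tt\parskip=-2pt
poisson\_radius : constant double := 5.0;
                  -- 10.0 in the second series
poisson\_public : constant double := 0.005;
                  -- emission, at odd times only
poisson\_border : constant double := 0.0025;
                  -- additional chance for border tiles
poisson\_reply  : constant double := 0.0025;
                  -- reply to a public message
poisson\_write  : constant double := 0.001;
                  -- message via the directory
\par}
\demitrait
\vskip 10pt
}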
\newdimen\thelarge\thelarge=45pt
\def\lignure #1 #2 #3 #4 #5 #6 #7 {%
\ligne{\hfill
\hbox to \thelarge{\hfill#1\hfill}
\hbox to \thelarge{\hfill#2\hfill}
\hbox to \thelarge{\hfill#3\hfill}
\hbox to \thelarge{\hfill#4\hfill}
\hbox to \thelarge{\hfill#5\hfill}
\hbox to \thelarge{\hfill#6\hfill}
\hbox to \thelarge{\hfill#7\hfill}
\hfill}
}
\newdimen\thelargebis\thelargebis=45pt
\newdimen\thestart\thestart=60pt
\def\lignurebis #1 #2 #3 #4 #5 #6 {%
\ligne{\hskip 10pt
\hbox to \thestart{\hskip 5pt#1\hfill}
\hbox to \thelargebis{\hfill#2\hfill}
\hbox to \thelargebis{\hfill#3\hfill}
\hbox to \thelargebis{\hfill#4\hfill}
\hbox to \thelargebis{\hfill#5\hfill}
\hbox to \thelargebis{\hfill#6\hfill}
\hfill}
}
\vtop{
\begin{tab}\label{statnb1}
\small The number of tiles and the total number of messages. The number after
{\tt sent} is the radius of propagation. The {\tt time} lines indicate the number of
iterations during which the program was executed; a value marked with~$^*$ was
obtained with a duration shorter than~$168$. Under the line indicating the
time for radius~$5$, we indicate the ratio between the numbers of messages
for consecutive depths as long as the overall duration is the same.
The lines {\tt mean} indicate the mean of the number of messages up to~$t$
divided by~$t$ for $t\in[1..T]$ where $T$ is indicated by the line {\tt time}.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignure {depth} 5 6 7 8 9 {10}
\lignure {tiles} {1625} {4264} {11173} {29261} {76616} {200593}
\lignure {sent, 5} {1101} {3308} {8636} {21797} {49295$^*$} {60453$^*$}
\lignure {time, 5} {168} {168} {168} {168} {142} {69}
\lignurebis {ratio} {3.00454} {2.61064} {2.52396} {} {}
\lignure {mean, 5} {6.81949} {19.13115} {50.10053} {128.31127} {342.79960} {877.83673}
\lignurebis {ratio} {2.80536} {2.61879} {2.56108} {2.67162} {2.56079}
\lignure {sent, 10} {2173} {8289} {13687$^*$} {13784$^*$} {19167$^*$} {30164$^*$}
\lignure {time, 10} {168} {168} {92} {41} {30} {24}
\lignure {mean, 10} {11.21332} {40.57043} {101.35430} {197.08219} {405.77815}
{965.53752}
\lignurebis {ratio} {3.618057} {2.50356} {1.94449} {2.058929} {2.37947}
\vspace{-2pt}
\demitrait
\vskip 10pt
}
\vtop{
\begin{tab}\label{statnb2}
\small The number of tiles and the total number of messages at time~$24$.
The conventions are those of Table~{\rm\ref{statnb1}}. Under each line
indicating the number of messages sent at time~$24$, we have the ratio
between two consecutive numbers.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignure {depth} 5 6 7 8 9 {10}
\lignure {tiles} {1625} {4264} {11173} {29261} {76616} {200593}
\lignure {sent, 5} {169} {423} {1158} {3017} {7888} {20556}
\lignurebis {ratio} {2.50296} {2.73759} {2.60535} {2.61452} {2.60599}
\lignure {sent, 10} {204} {582} {1538} {4509} {11413} {30164}
\lignurebis {ratio} {2.85294} {2.64261} {2.94129} {2.53116} {2.64295}
\vspace{-2pt}
\demitrait
\vskip 10pt
}
Another important feature regarding private messages is that, in the experiment,
it was assumed that once a communication has started, it goes on endlessly: if
$A$~replies to a message sent by~$B$, either public or private, then $B$~replies
to~$A$, which again replies to~$B$, and this process goes on periodically. Moreover,
the reply was always assumed to be immediate.
Table~\ref{statnb1} indicates the number of messages issued during the whole
time of the simulation, measured by the number of iterations of the
procedure {\tt execute}. The number of iterations is also given by the table.
It can be noticed that this number is always 168 for the small values of the depth.
This number was fixed for the experiment, and it can be noticed that
\hbox{168 = 7$\times$24}.
For a fixed value of the mean radius of propagation, we notice that the
number of iterations becomes lower and lower. This is a limitation caused by
the system and the machine under which the program was run. It can be noticed that
the decay of the number of iterations corresponds to the increase of the
number of tiles. Accordingly, this alters the number of messages which were issued.
Table~\ref{statnb1} indicates the ratio between the numbers of messages sent
when the radius of propagation is~5 for the depths~5,
6 and~7, as for these depths we have the same duration~168. In Table~\ref{statnb2},
we indicate the overall number of sent messages at time~24, as we have data at this
time for each depth of the experiment. This allows us to compute the ratio between
numbers associated to consecutive depths.
\vtop{
\begin{tab}\label{statmax1}
\small The number of tiles and the maximal number of messages passing through
a tile at a time during the interval of observation.
The number after {\tt max} is the radius of propagation. The rest of the conventions
are those of Table~{\rm\ref{statnb1}}.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignure {depth} 5 6 7 8 9 {10}
\lignure {tiles} {1625} {4264} {11173} {29261} {76616} {200593}
\lignure {max, 5} {18} {39} {58} {104} {232$^*$} {192$^*$}
\lignurebis {ratio} {2.16667} {1.48718} {1.79310} {} {}
\lignure {time, 5} {168} {168} {168} {168} {142} {69}
\lignure {max, 10} {63} {169} {204$^*$} {197$^*$} {315$^*$} {694$^*$}
\lignure {time, 10} {168} {168} {92} {41} {30} {24}
\vspace{-2pt}
\demitrait
\vskip 10pt
}
\vtop{
\begin{tab}\label{statmax2}
\small The number of tiles and the maximal number of messages at time~$24$.
The conventions are those of Table~{\rm\ref{statnb1}}. Under each line
indicating the maximal number of messages passing through a tile at time~$24$,
we have the ratio between two consecutive numbers.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignure {depth} 5 6 7 8 9 {10}
\lignure {tiles} {1625} {4264} {11173} {29261} {76616} {200593}
\lignure {max, 5} {11} {16} {25} {34} {54} {91}
\lignurebis {ratio} {1.45455} {1.56250} {1.36000} {1.58824} {1.68519}
\lignure {max, 10} {17} {30} {61} {140} {315} {694}
\lignurebis {ratio} {1.76471} {2.03333} {2.295082} {2.25000} {2.20317}
\vspace{-2pt}
\demitrait
\vskip 10pt
}
Table~\ref{statnb1} also reports another measurement performed by the program:
if $n_t$ is the number of messages emitted up to the time~$t$, with $t\in [1..T]$,
where $T$ is the duration of the experiment, {\it i.e.} the number of iterations,
then the lines {\tt mean} give the mean value of the numbers
$\displaystyle{{n_t}\over t}$. These values are computed when the propagation
radius is~5 and when it is~10.
Another interesting piece of information is the maximal number of messages passing
through a tile. The data are given in Table~\ref{statmax1}, under the same conditions
as in Table~\ref{statnb1}, and they are summarized in Tables~\ref{statmax1}
and~\ref{statmax2}.
Below, Tables~\ref{statcompo1} and~\ref{statcompo2} give information on
the decomposition of the number of emitted messages between the public messages
and the private ones
\newdimen\thelargea\thelargea=45pt
\def\lignurea #1 #2 #3 #4 #5 #6 {%
\ligne{\hfill
\hbox to \thelargea{\hfill#1\hfill}
\hbox to \thelargea{\hfill#2\hfill}
\hbox to \thelargea{\hfill#3\hfill}
\hbox to \thelargea{\hfill#4\hfill}
\hbox to \thelargea{\hfill#5\hfill}
\hbox to \thelargea{\hfill#6\hfill}
\hfill}
}
\vtop{
\begin{tab}\label{statcompo1}
\small This table is a refinement of Table~{\rm\ref{statnb1}}. It indicates
how the number of messages emitted in~$\cal S$ are distributed between the
public and the private messages and, among the latter ones, between the replies
to a public message or a direct message to a single tile via the directory.
This table gives the data for radiuses~$5$ and~$10$ for the propagation of
the public messages, upper and lower halves of the table, respectively.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignurea {depth} {time} {public} {reply} {write} {total}
\vspace{7pt}
\lignurea {5} {168} {636} {211} {254} {1101}
\lignurea {ratio} {} {0.577} {0.192} {0.231} {}
\lignurea {6} {168} {1840} {783} {685} {3308}
\lignurea {ratio} {} {0.556} {0.237} {0.207} {}
\lignurea {7} {168} {4669} {2285} {1682} {8636}
\lignurea {ratio} {} {0.541} {0.264} {0.195} {}
\lignurea {8} {168} {11982} {5295} {4520} {21797}
\lignurea {ratio} {} {0.550} {0.243} {0.207} {}
\lignurea {9} {142} {27099} {12488} {9708} {49295}
\lignurea {ratio} {} {0.550} {0.253} {0.197} {}
\lignurea {10} {69} {34536} {13467} {12450} {60453}
\lignurea {ratio} {} {0.571} {0.223} {0.206} {}
\vspace{-2pt}
\demitrait
\vspace{4pt}
\lignurea {5} {168} {654} {1281} {238} {2173}
\lignurea {ratio} {} {0.301} {0.589} {0.110} {}
\lignurea {6} {168} {1791} {5848} {650} {8289}
\lignurea {ratio} {} {0.217} {0.705} {0.078} {}
\lignurea {7} {92} {2472} {10279} {936} {13687}
\lignurea {ratio} {} {0.181} {0.751} {0.068} {}
\lignurea {8} {41} {3094} {9603} {1087} {13784}
\lignurea {ratio} {} {0.224} {0.697} {0.079} {}
\lignurea {9} {30} {6107} {10937} {2123} {19167}
\lignurea {ratio} {} {0.318} {0.571} {0.111} {}
\lignurea {10} {24} {12935} {12954} {4275} {30164}
\lignurea {ratio} {} {0.429} {0.429} {0.142} {}
\vspace{-2pt}
\demitrait
\vskip 10pt
}
\newdimen\thelargea\thelargea=45pt
\def\lignureb #1 #2 #3 #4 #5 {%
\ligne{\hfill
\hbox to \thelargea{\hfill#1\hfill}
\hbox to \thelargea{\hfill#2\hfill}
\hbox to \thelargea{\hfill#3\hfill}
\hbox to \thelargea{\hfill#4\hfill}
\hbox to \thelargea{\hfill#5\hfill}
\hfill}
}
\noindent
and, among the latter, between replies to a public message
and direct messages to a tile whose coordinates are delivered by the directory.
As for Table~\ref{statnb1}, Table~\ref{statcompo1} gives the information for
each area of~$\cal S$ defined by the depth of the Fibonacci tree. The upper half
of the table concerns a propagation of the public messages characterized by
radius~5, while the lower half concerns radius~10.
\vtop{
\begin{tab}\label{statcompo2}
\small This table is a refinement of Table~{\rm\ref{statnb2}}. It indicates
how the number of messages emitted in~$\cal S$ are distributed between the
public and the private messages and, among the latter ones, between the replies
to a public message or a direct message to a single tile via the directory.
All data are taken at iteration~$24$. In the upper half of the table, the radius
of propagation for the public messages is~$5$. In the lower half, the radius
is~$10$.
\end{tab}
\vspace{-12pt}
\grostrait
\vspace{4pt}
\lignureb {depth} {public} {reply} {write} {total}
\vspace{7pt}
\lignureb {5} {109} {24} {36} {169}
\lignureb {ratio} {0.577} {0.192} {0.231} {}
\lignureb {6} {277} {53} {93} {423}
\lignureb {ratio} {0.556} {0.237} {0.207} {}
\lignureb {7} {738} {200} {220} {1158}
\lignureb {ratio} {0.541} {0.264} {0.195} {}
\lignureb {8} {1856} {504} {657} {3017}
\lignureb {ratio} {0.550} {0.243} {0.207} {}
\lignureb {9} {5021} {1268} {1599} {7888}
\lignureb {ratio} {0.550} {0.253} {0.197} {}
\lignureb {10} {13026} {3181} {4349} {20556}
\lignureb {ratio} {0.571} {0.223} {0.206} {}
\vspace{-2pt}
\demitrait
\lignureb {5} {98} {78} {28} {204}
\lignureb {ratio} {0.480} {0.382} {0.138} {}
\lignureb {6} {273} {205} {104} {582}
\lignureb {ratio} {0.469} {0.352} {0.179} {}
\lignureb {7} {672} {614} {252} {1538}
\lignureb {ratio} {0.437} {0.399} {0.164} {}
\lignureb {8} {1919} {1945} {645} {4509}
\lignureb {ratio} {0.426} {0.431} {0.143} {}
\lignureb {9} {5010} {4720} {1683} {11413}
\lignureb {ratio} {0.439} {0.414} {0.147} {}
\lignureb {10} {12935} {12954} {4275} {30164}
\lignureb {ratio} {0.429} {0.429} {0.142} {}
\vspace{4pt}
\demitrait
\vskip 10pt
}
In Table~\ref{statcompo2}, the data are attached to the same time, defined by
24 iterations. This gives a direct comparison between all the data, but it does not
cover a long enough time period.
We turn to the next sub-section, where we try to extract general information
from these data and from a few others for which there is no room in this
paper.
\subsection{Interpretation}
\label{interpret}
Several conclusions can be drawn from the results presented in
Subsection~\ref{rawdata}.
The first one concerns the ratios between the number of messages
for consecutive depths of the Fibonacci trees. These ratios are close to the ratio
between the area of~$\cal S$ for consecutive values of the depth of the spanning
tree. It seems that we may conclude that these experimental data support
Assumption~\ref{proparea}.
\begin{hypo}\label{proparea}
For any $t$, the number of messages issued at the time~$t$ in~$\cal S$ is
proportional to the number of tiles belonging to~$\cal S$.
\end{hypo}
Indeed, the coefficient of the Poisson law is a kind of mean of the random
variable indicating whether a message is sent or not. Owing to the homogeneous
nature of the space, and as the decision of one tile is independent from that of
its neighbours, it can be expected that the observed number of issued messages
is proportional to the area. This conclusion is strengthened by the following
consideration. As the actual radius of propagation of the public messages is
bounded, the contribution of far tiles is ruled out, starting from a certain
distance from a tile, and this distance can be uniformly bounded for all the tiles.
Values of the radius exceeding, say, twice its mean (the radius being a random
variable which we assumed to follow a Poisson law) are events
of very small probability, so that an infinite repetition of exceptionally long
radiuses can be considered as an event of probability~0. The same remark applies if we
relax a bit the conditions on the coordinates provided by the directory. We may
assume that the number of the sector is uniformly distributed and that the number
of the tile in the tree is an integer-valued random variable following a Poisson law
whose mean is of the same order as the size of~$\cal S$. And so, relaxing a bit the
condition on the directory as just suggested does not alter the argument in
favour of Assumption~\ref{proparea}.
\vskip 10pt
No clear statement can be inferred from
Tables~\ref{statmax1} and~\ref{statmax2},
except the fact that the maximal number of messages passing through a tile
of~$\cal S$ seems to increase with time. This can also be seen in the experiment
for each depth: as the number of iterations increases, the maximal number of
messages passing at a tile also increases.
It is also interesting to look at where the maximal number of messages appears.
We do not have room to reproduce the relevant information computed by the program.
For each iteration, the program indicates a tile at which the number of
passing messages is maximal. It also splits the information according to where
the maximal number is obtained for passing public messages, for private messages
replying to a public message, and for private messages written to a single tile.
It is interesting to notice that most often the tiles are not the same for the
different kinds of messages. Also, the position of this maximum is generally
the central tile or one of its neighbours. However, for the emission of the
public messages, the maximal number of passages at a tile may be obtained a bit
further from the central tile, while the replies to these messages seem
to be maximal most often at the central tile or its immediate neighbours.
Tables~\ref{statcompo1} and~\ref{statcompo2} show a very interesting difference
between the cases when the radius of propagation of the public messages is~5 and
when it is~10. In both tables, we can see that the proportion of public messages
is higher when the radius is~5. This is particularly striking in
Table~\ref{statcompo1}, but it is already noticeable in Table~\ref{statcompo2},
showing that this difference appears quickly and that it tends to increase a
bit with time. It is also interesting to see that the relative 'loss'
of the public messages 'benefits' their replies. Indeed, in both tables,
the proportion of direct messages to a single tile does not grow when the
depth of the Fibonacci tree increases. Of course, the Poisson coefficient
for triggering a public message is 0.005, while that of a reply is 0.0025
and that of a direct private message is 0.001. However, the public messages are
triggered at odd times only and the replies occur only at even times, while
the direct private messages can be sent at any time. So there should be no big
difference between direct private messages and replies to a public message.
In fact, the explanation lies in the geometry of the space. Indeed, the reply
is proposed to any tile visited by the propagation wave, which covers all the tiles
within the radius fixed at the time when the public message was emitted.
This additional solicitation explains the importance of the replies.
At this point, we can also indicate why we have chosen a Poisson law to model
this message system. The reason is that we have to take into account the geometry
of the space. A uniform distribution would give much more weight to distant tiles
by the simple fact that their number increases exponentially: the farther they
are, the more messages they would send to the centre. This is also the reason why
we decided to limit the propagation of the public messages. If no limitation
were imposed, the number of messages received at any point would grow exponentially
with time, by the argument just mentioned. Accordingly, the limitation restricts
this possibility. Now, the Poisson law still allows values which are big with
respect to the mean; simply, such extremal values are very rare, and the higher
the value, the rarer it is.
\section{\Large Conclusion}
\label{conclusion}
This is the place to discuss how these results can be
interpreted in a more qualitative way. The number of iterations suggests that
the unit of time is an hour. The tiles can be interpreted either as individuals
or as groups of individuals in a given constant area, the one defined by the
area of a tile. Remember that in the space we consider, all tiles have the same
area. The limitation of the public messages can be interpreted as a natural
limit due to the conditions in which the message is sent, also depending
on the intentions of the sender.
Three important points should be noted. The first one is the property of the
public messages to cover all the tiles of a given area, each tile receiving
the message exactly once. The second point is the mechanism which limits the
propagation of a public message. This mechanism needs no centralization: it is
monitored by the sender and, a priori, each tile can be a sender. Being a sender
is determined by a probability which is the same for every tile.
The third point is that, in some sense, the indicated scenario is a worst-case
one with respect to the traffic load supported by each tile. Indeed,
the fact that a communication, once established between two tiles, goes on
endlessly contributes to increasing the traffic over time. There is room here
to tune the modelling by introducing various ways to delay answers or to
limit the number of contacts of a tile with others: here also, we could consider
that this number is a Poisson random variable whose mean can be fixed uniformly
or depending on other criteria which we have not considered here.
A last point is the possible improvement of the program in order to obtain more
data and to go further in the exploration of the simulation space. Note that
the file which records all the communications when the depth is~5, the
radius of propagation is~10 and the number of iterations is~168 has a size
of around 164 megabytes. As mentioned in Section~\ref{experiment}, increasing the
depth by~1 multiplies the area
by around 2.618. Accordingly, the depth which
defines the area of~$\cal S$ cannot be extended very much. Already depth~10
with radius~10 requires a machine more powerful than a simple laptop.
We are convinced that there is further work ahead: to better analyze the
data already obtained, and to improve the program in order to go further in the
exploration of the simulation space. There is also room to tune the basic
parameters in order to get a picture closer to real networks such as, for instance,
social networks.
\section{Introduction}
What reduced states are compatible with a
quantum state of a composite system? The study of
this question has in fact a long tradition -- as the natural
quantum analogue of the marginal problem in
classical probability theory. Very recently,
this problem, now coined the
{\it quantum marginal problem},
has seen a revival of interest,
motivated by applications in the context of quantum information
theory \cite{Higuchi,Higuchi2,Bravyi,Han,Discrete,Franz}.
In fact, in the quantum information setting,
notably in quantum channel capacity expressions,
in assessments
of quantum communication protocols, or in the separability
problem, one often encounters
questions of compatibility of reductions with global quantum states
\cite{Squashed,THL,NK,JE,Daftuar}.
Since it is only natural to look at the full orbit under local unitary
operations, the quantum marginal problem immediately translates to a
question of the compatibility of spectra of quantum states. The
{\it mixed quantum marginal problem} then amounts to the following question:
Is there a state $\rho$ of a quantum system with $n$ subsystems, each
with a reduction $\rho_k$, that is
consistent with
\begin{eqnarray}
\text{spec} (\rho) &=&r,\\
\text{spec} (\rho_k) &= & r_k
\end{eqnarray}
for $k=1,\dots, n$, $r$ and $r_k$ denoting the respective
vectors of spectra.
In the {\it pure marginal problem}, one assumes $\rho=|\psi\rangle\langle\psi|$
to be pure.
In the condensed-matter context \cite{Hall,NRep},
related questions are also of
interest: for example, if one could classify all possible two-qubit
reductions of translationally invariant quantum states,
then one would
be able to obtain the ground state energy of any nearest-neighbor
Hamiltonian of a spin chain. The quantum marginal problem
was solved in several steps:
Higuchi et al. \cite{Higuchi}
solved the pure quantum marginal problem for qubits.
Subsequently, Bravyi \cite{Bravyi} was able to solve the mixed state case
for two qubits, followed
by Franz \cite{Franz} and Higuchi \cite{Higuchi2}
for a three-qutrit system. The general solution of the
quantum marginal problem for finite-dimensional systems was found
in the celebrated work of Klyachko \cite{Discrete}, see also Refs.\
\cite{Christandl,PhDC}. This is indeed
a closed-form solution. Yet the number of constraints grows
extremely rapidly with the system size, rendering the
explicit check of whether the conditions are satisfied
infeasible even for relatively small systems.
In this work, we introduce the Gaussian version of the
quantum marginal problem. Gaussian states play a
key role in a number of contexts, specifically
whenever bosonic modes and quadratic Hamiltonians become relevant; these are ubiquitous in
quantum optical systems,
free fields, and condensed-matter lattice systems. For general infinite-dimensional systems, the marginal problem may well be intractable. However,
given that in turn these Gaussian states can be described by merely their
first and second moments \cite{Survey,Peter},
one could reasonably hope that it could be
possible to give a full account
of the {\it Gaussian quantum marginal problem}.
This gives rise, naturally, not to a condition on spectra of
quantum states, but on symplectic spectra, as explained
below. For the specific case of three modes, the result is
known \cite{Alessio}, see also Ref.\ \cite{Rev}.
In this work we will show that this program of
characterizing the reductions of Gaussian states
can be achieved in generality, even concerning both
necessary and sufficient conditions. This means that one
can give a complete answer to the question what reductions entangled
Gaussian states can possibly have.\footnote{We refer here to
the marginal problem for Gaussian states, which are quantum
states fully defined by their first and second moments of canonical
coordinates. However, clearly, our result equally applies to general
and hence non-Gaussian states, in that it fully answers the question
what local second moments are consistent with global second
moments of quantum states of several modes.}
Equivalently, we can describe this Gaussian marginal
problem as a problem of compatibility of temperatures
of standard harmonic systems: Given a state $\rho$, what
{\it local temperatures} -- or equivalently for single modes, what
{\it local entropies} -- are compatible with this
joint state? Of course, one can always take the temperatures to
be equal. But if they are different, they constrain each other in
a fairly
subtle way, as we will see. In a sense, the result gives rise to the
interesting situation that by looking at local temperatures, one can assess whether these reductions
may possibly originate from a joint system in a
pure state. Finally, it is important to note, since sufficiency of the conditions is always proven by an
explicit construction, the result also implies a recipe for
{\it preparing multi-mode continuous-variable entangled
states}.
\begin{figure}
\includegraphics[width=8.5cm]{Marginals.eps}
\caption{Solution of the Gaussian marginal problem.
The set of possible reductions with symplectic spectra
$(c_1,\dots, c_n)$ of a correlated or entangled
Gaussian state $\rho$ with
symplectic spectrum $(d_1,\dots, d_n)$ is characterized
by the given remarkably simple necessary and sufficient set of
$n+1$ inequalities. For example,
from local measurements, one can hence
infer about the consistency with the
purity of the joint state. It also governs the sharing of
correlations in Gaussian states.}
\end{figure}
\section{Main result}
We consider states on $n$
modes, and consider reductions to single modes. Gaussian states are
represented by the matrix of second moments, the
$2n\times 2n$ covariance matrix $\gamma$
of the system, together with the vector $\mu $
of first moments. For a definition and a survey of properties,
see Refs.\ \cite{Survey,Peter}.
In this language, the vacuum state of a standard oscillator becomes
$\gamma = \mathbbm{1}_2$, as the $2\times 2$ identity matrix. The canonical
commutation relations are embodied in the symplectic matrix
\begin{equation}\label{SymplecticMatrix}
\sigma=\bigoplus_{k=1}^n
\left[
\begin{array}{cc}
0 & 1\\
-1 & 0\\
\end{array}
\right]
\end{equation}
for $n$ modes. The {\it covariance matrices} of $n$ modes
are exactly those real matrices satisfying
\begin{equation}
\gamma + i\sigma\geq 0,
\end{equation}
which is simply a statement of the Heisenberg uncertainty
principle. The first moments can always
be made zero locally, and are hence not
interesting for our purposes here. Note also that the
set of Gaussian states is closed under reductions,
so reduced states of Gaussian states are always
Gaussian as well.
Real matrices that leave the
symplectic form invariant, $S\sigma S^T = \sigma$,
form the real symplectic group Sp$(2n,{R})$.
In the same way as symmetric matrices $M$
can be diagonalized
with orthogonal matrices to a diagonal matrix
$O M O^T =D$, one can diagonalize strictly
positive matrices using such $S\in \text{Sp}(2n,{R})$,
according to
\begin{equation}
SMS^T = D.
\end{equation}
The main diagonal elements of~$D$, each counted
once, then form the {\it symplectic
spectrum}
of $M$, and the collection of symplectic eigenvalues can be
abbreviated as $\text{sspec}(M)=(d_1,\dots, d_n)$,
\begin{equation}
D=\text{diag} (d_1,d_1,\dots, d_n,d_n).
\end{equation}
This procedure is nothing but
the familiar {\it normal mode decomposition}.
In turn, by definition, the
symplectic eigenvalues are given by the square roots of the eigenvalues
of the matrix $-M\sigma M\sigma$. Again, for the vacuum, the symplectic
eigenvalues are all given by unity. In a mild abuse of notation,
we will refer to the symplectic spectrum of a Gaussian state
as the symplectic spectrum of the respective covariance matrix.
Finally, for a given covariance matrix $\gamma$, and in fact any
strictly positive real matrix, we refer to the
{\it symplectic main diagonal elements} $(c_1,\dots, c_n)$
as the symplectic eigenvalues of the $2\times 2$ main
diagonal blocks. This is the natural analogue of main diagonal
elements.
Equivalently, the symplectic main diagonal elements
are the main diagonal elements after the main diagonal
$2\times 2$ blocks have been brought into the form
\begin{equation}\label{mdf}
\gamma_k =\left[
\begin{array}{cc}
c_k & 0\\
0 & c_k\end{array}
\right].
\end{equation}
We are now in a position to state
our main result, see Fig.\ 1.
It relates the symplectic spectrum of
composite systems to the ones of the reductions. We will first
state it as a mere matrix constraint, then as the actual
Gaussian marginal problem, and finally for the important
special case of having a pure joint state.
\begin{theorem}[Necessary and sufficient conditions] Let
$(d_1,\dots, d_n)$ and $(c_1,\dots, c_n)$ be two vectors
of positive numbers in non-decreasing
order. Then there exists a
strictly positive real $2n\times 2n$-matrix
$\gamma$ such that
$(d_1,\dots, d_n)$ are its symplectic eigenvalues and $(c_1,\dots, c_n)$ the
symplectic main diagonal elements
if and only if the $n+1$ conditions
\begin{eqnarray}\label{C1}
\sum_{j=1}^k c_j
&\geq& \sum_{j=1}^k d_j ,\,k=1,\dots, n\\
\label{C2}
c_n - \sum_{j=1}^{n-1} c_j&\leq&
d_n - \sum_{j=1}^{n-1} d_j
\end{eqnarray}
are satisfied.
\end{theorem}
This set of inequalities may be conceived as a general analogue of the
Sing-Thompson theorem \cite{Sing,Tho1,Tho}, see below.
More physically speaking, this means the following:
\begin{corollary}[Gaussian marginal problem]\label{MixedProblem}
Assume that $\rho$ is a Gaussian state of $n$ modes satisfying
$\text{sspec}(\rho) = (d_1,\dots, d_n)$.
Then the possible reduced states $\rho_k$ to
each of the individual modes $k=1,\dots, n$
are exactly those Gaussian states with
\begin{equation}
\text{sspec}(\rho_k) = c_k
\end{equation}
satisfying Eq.\ (\ref{C1}) and (\ref{C2}).
\end{corollary}
These conditions hence fully characterize the possible reduced marginal states.
For two modes, $n=2$, for example, the given conditions read
\begin{eqnarray}
c_1+c_2 &\geq& d_1+d_2,\\
c_2-c_1&\leq & d_2-d_1,
\end{eqnarray}
for $c_2\geq c_1 $ and $d_2\geq d_1$. The constraint
$c_1\geq d_1$ is then automatically satisfied.
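As an illustration, the familiar two-mode squeezed vacuum state is pure, so that
$d_1=d_2=1$, and its $2\times 2$ main diagonal blocks are
$\gamma_1=\gamma_2=\cosh(2r)\mathbbm{1}_2$, so that $c_1=c_2=\cosh(2r)$.
Indeed, $c_1+c_2 = 2\cosh(2r)\geq 2 = d_1+d_2$ and
$c_2-c_1 = 0\leq 0 = d_2-d_1$, with the first inequality being tight exactly
for $r=0$.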
For pure Gaussian states, the above conditions
take a specifically
simple form. Quite strikingly,
we will see that the resulting conditions very much resemble the situation
of the marginal problem for qubits.
\begin{corollary}[Pure Gaussian marginal problem]\label{pureprob}
Let $\rho=|\psi\rangle\langle\psi|$
be a pure Gaussian state of $n$ modes.
Then the set $(b_1+1,\dots, b_n+1)$
of symplectic eigenvalues
\begin{equation}
\text{sspec}(\rho_k) = b_k+1
\end{equation}
$k=1,\dots, n$, of the reduced states $\rho_k$
of each of the $n$ modes is given by the set
defined by
\begin{equation}\label{Cond}
b_{j} \leq \sum_{k\neq j} b_k
\end{equation}
for all $j$, for $b_j\geq 0$.
\end{corollary}
To reiterate, these conditions are necessary and sufficient for
the local symplectic spectra being consistent with the global state
being a pure Gaussian state.
Equivalently, this can be put as follows:
let $\gamma$ be the covariance matrix of a pure Gaussian state
with reductions
\begin{equation}\label{Form}
\gamma_k=\left[
\begin{array}{cc}
b_k +1& 0 \\
0 & b_k+1
\end{array}
\right],
\end{equation}
$k=1,\dots ,n$. Then Eq.\ (\ref{Cond}) defines the
local temperatures $T_k$
per mode consistent with the whole system being in a
pure Gaussian state, according to
\begin{equation}
b_k =2 (\exp(1/T_k)-1)^{-1},
\end{equation}
for the standard harmonic oscillator (an oscillator with unit mass and
frequency).
The above condition hence determines the {\it temperatures} that modes
can have, given that a composite system is in a pure Gaussian state.
The form
of Eq.\ (\ref{Form}) can always be achieved by means
of local rotations and squeezings in phase space. One can hence equally
think in terms of local symplectic spectra or local temperatures.
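To continue the example of the two-mode squeezed vacuum state, each reduction
has $b=\cosh(2r)-1 = 2\sinh^2(r)$, so that
$e^{1/T} = 1+2/b = \coth^2(r)$, i.e., $T = (2\ln\coth(r))^{-1}$:
the local temperature grows with the squeezing, recovering the well-known
thermal character of these reductions.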
It is instructive to compare the results for the pure Gaussian
marginal problem with the one for {\it qubits} as solved in
Ref.\ \cite{Higuchi}. There, it has been found that for a
system consisting of $n$ qubits, one has
\begin{equation}
\lambda_j \leq \sum_{k\neq j} \lambda_k
\end{equation}
for the spectral values $r_k=(\lambda_k,1-\lambda_k)$,
$\lambda_k\in [0,1]$.
Moreover, these
conditions
are both necessary and sufficient. It is remarkable that this
form is identical with the result for $n$ single modes
\begin{equation}
b_{j} \leq \sum_{k\neq j} b_k,
\end{equation}
$b_k\geq 0$,
as necessary and sufficient conditions.
Again,
the admissible symplectic eigenvalues are defined by a cone the base of
which is formed by a simplex.
Note that the methods used in Ref.\ \cite{Higuchi} to arrive at
the above result are entirely different. Once again, a
striking formal similarity between the case of
qubit systems and Gaussian states is encountered.
Finally, from the perspective of matrix analysis,
the above result can be seen as a general
analogue of the {\it Sing-Thompson Theorem} \cite{Sing,Tho1,Tho}
(or {\it Horn's Lemma} \cite{Horn}
in case of Hermitian matrices), first posed in Ref.\
\cite{Mirsky}, where the role of
singular values is taken by the symplectic eigenvalues. \\
\noindent
{\bf Sing-Thompson Theorem (\cite{Sing,Tho1,Tho})}\label{Thompson}
{\it Let $(x_1,\dots, x_n)$ be complex
numbers such that $|x_k|$ are non-increasingly ordered
and let $(y_1,\dots, y_n)$ be non-increasingly ordered
positive numbers. Then an $n\times n$ matrix exists with
$x_1,\dots, x_n$ as its main diagonal and $y_1,\dots, y_n$
as its singular values if and only if
\begin{eqnarray}\label{T}
\sum_{j=1}^k |x_j| &\leq&
\sum_{j=1}^k y_j,\,\, k=1,\dots, n,\\
\sum_{j=1}^{n-1} |x_j| - |x_n|
& \leq & \sum_{j=1}^{n-1} y_j- y_{n}.
\end{eqnarray}
}
It is interesting to see that -- although the symplectic
group $\text{Sp}(2n,{R})$
is not a compact group -- there is so much formal
similarity concerning the implications on main
diagonal elements of matrices. Note, however, that the ordering
of singular values
and symplectic eigenvalues, respectively,
is different in Theorem 1 and in the Sing-Thompson theorem.
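In the same spirit as the numerical sketch given earlier, the Sing-Thompson inequalities can be checked on the main diagonal and the singular values of a random complex matrix (our own illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
x = np.sort(np.abs(np.diag(M)))[::-1]    # |main diagonal|, non-increasing
y = np.linalg.svd(M, compute_uv=False)   # singular values, non-increasing
assert np.all(np.cumsum(x) <= np.cumsum(y) + 1e-9)
assert x[:-1].sum() - x[-1] <= y[:-1].sum() - y[-1] + 1e-9
\end{verbatim}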
\section{Proof}
As a preparation of the proof, we will identify
a simple set of necessary conditions that
constrains the possible reductions that are consistent with
the assumption that the state is pure and Gaussian.
These simple conditions derive from a connection between
the symplectic trace and the trace of the covariance matrix.
Quite surprisingly, we will see that they already define the full
set of possible marginals consistent with a Gaussian state
of $n$ modes. We shall start by stating the condition to the
reductions.
\begin{lemma}[Symplectic trace]\label{ST}
Let $\gamma$ be a strictly positive real $2n\times 2n$-matrix
such that its main diagonal $2\times 2$ blocks
are given by Eq.\ (\ref{mdf}) for $c_k\in[1,\infty)$.
Then the symplectic eigenvalues $(d_1,\dots,d_n)$ of the
matrix $\gamma$ satisfy
\begin{equation}\label{Cond2}
\sum_{k=1}^n d_k \leq \sum_{k=1}^n c_k.
\end{equation}
\end{lemma}
{\it Proof:} Note that the right hand side of Eq.\ (\ref{Cond2}) is
nothing but half the trace of the covariance matrix
$\gamma$,
whereas the left hand side is the {\it symplectic trace}
$\text{str}(\gamma)$
of $\gamma$, so
\begin{equation}
\text{str}(\gamma) = \sum_{j=1}^n d_j
\end{equation}
if $\text{sspec}(\gamma) = (d_1,\dots, d_n)$,
see, e.g., Ref.\ \cite{Hyllus}.
We arrive at
this relationship by making use of a property of the
trace-norm. The symplectic eigenvalues $d_1,\dots,d_n$
of $\gamma$ are given by the
square roots of the simply counted
eigenvalues of the matrix
$ (i \sigma) \gamma(i \sigma)
\gamma$ \cite{Survey,Peter}.
Hence, the symplectic spectrum is just given by the
spectrum of the matrix
\begin{equation}
M= |\gamma^{1/2} (i \sigma) \gamma^{1/2}| ,
\end{equation}
where $| \cdot |$ denotes the matrix absolute
value \cite{MAV}.
So
we have that
\begin{eqnarray}
2
\sum_{k=1}^n d_k &=&
\text{tr}(M)
=
\| \gamma^{1/2} (i \sigma) \gamma^{1/2}\|_1,
\end{eqnarray}
where $\| \cdot \|_1$ is the trace norm. The property we
wish to prove then immediately follows from the fact that the
trace-norm is a unitarily invariant norm: this implies that
\begin{eqnarray}
2 \sum_{k=1}^n d_k =
\| \gamma^{1/2} (i \sigma) \gamma^{1/2}\|_1
\leq \|(i\sigma) \gamma\|_1 ,
\end{eqnarray}
as $\|AB\|_1 \leq \|B A\|_1$ for any matrices
$A$, $B$ for which
$AB$ is Hermitian. This
inequality holds for any unitarily invariant norm whenever
$AB$ is a normal operator \cite{Bhatia}.
Now, since any covariance
matrix is positive, $\gamma\geq 0$, and the
largest singular value of $i\sigma$ is clearly given by
unity, we can finally
conclude that
\begin{eqnarray}
2 \sum_{k=1}^n d_k \leq
\| \gamma \|_1 = \text{tr}(\gamma) =2 \sum_{k=1}^n c_k,
\end{eqnarray}
which is the statement that we intended to show.
\hfill\rule{2mm}{2mm} \\
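Numerically, the chain of inequalities of this proof boils down to $2\,\text{str}(\gamma)\leq \text{tr}(\gamma)$, which can be verified at once, reusing \texttt{symplectic\_eigenvalues} from the first sketch (our own illustration on a random strictly positive test matrix):
\begin{verbatim}
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
gamma = A @ A.T + 0.1 * np.eye(6)                 # strictly positive test matrix
str_gamma = symplectic_eigenvalues(gamma).sum()   # symplectic trace
assert 2.0 * str_gamma <= np.trace(gamma) + 1e-9
\end{verbatim}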
This observation implies as an immediate
consequence a necessary condition for the
possible reductions, given a Gaussian state of an
$n$-mode system: Let $\gamma$ be the covariance matrix of a
Gaussian pure state of $n$ modes, with reductions as above.
We can think of the state as a bi-partite
state between a distinguished mode labeled $k$, without
loss of generality being the last mode $k=n$, and the rest
of the system.
We can in fact Schmidt decompose this pure state with
respect to this split using Gaussian unitary operations
\cite{Giedke,Schmidt1,Holevo}. This means that we can find
symplectic transformations
$S_A\in \text{Sp}(2(n-1),{R})$ and
$S_B\in \text{Sp}(2,{R})$ such that
\begin{equation}
(S_A\oplus S_B) \gamma
(S_A \oplus S_B)^T =
\left[
\begin{array}{cc}
A & C\\
C^T & B\\
\end{array}
\right],
\end{equation}
where
\begin{eqnarray}
A &=& \text{diag}(1,\dots,1 ,a_n,a_n),\\
B &=& \text{diag}(a_n,a_n),
\end{eqnarray}
with some $2(n-1)\times 2$-matrix $C$.
The symplectic eigenvalues of modes $1,\dots , n-1$
are hence given by $1,\dots, 1,a_n$. The above statement
therefore implies the inequality
\begin{equation}
n-2 + a_n \leq a_1+ \dots + a_{n-1},
\end{equation}
or, by substituting $b_k=a_k-1$ for all $k=1,\dots ,n$,
\begin{equation}
b_n \leq b_1+b_2+\dots + b_{n-1}.
\end{equation}
This must obviously hold for all distinguished modes and not
only the last one, and hence,
we arrive at the following simple necessary conditions:
\begin{corollary}[Necessary conditions for
pure states]\label{Necessary}
Let $\gamma$ be the covariance matrix of
a pure Gaussian state with thermal
reductions
\begin{equation}\label{The}
\gamma_k=\left[
\begin{array}{cc}
b_k +1& 0 \\
0 & b_k+1
\end{array}
\right],
\end{equation}
$k=1,\dots ,n$. Then, for all $j$,
\begin{equation}\label{Sim}
b_{j} \leq \sum_{k\neq j} b_k.
\end{equation}
\end{corollary}
That is, the largest value of $b_j$ cannot exceed the sum of
all the other ones.
So far, we have assumed the global state $\rho$ to be a pure state.
In the full problem, however, we may of course allow
$\rho$ to be any
Gaussian state, and hence a mixed one, with symplectic spectrum
\begin{equation}
\text{sspec}(\rho) = (d_1,\dots, d_n)\geq (1,\dots, 1),
\end{equation}
instead of being $(1,\dots, 1)$. This is the Gaussian analogue of
the mixed marginal problem. For this mixed state case, we
provide necessary conditions for the main reductions, in form
of $n$ inequalities on partial sums, and one where the largest
symplectic eigenvalue of a reduction plays an important role.
The first set of $n$ conditions is up to the different ordering
a weak majorization relation for
symplectic eigenvalues, which is in fact essentially
a corollary of a
result from Ref.\ \cite{Hir} due to Hiroshima.
The second statement, the
$(n+1)$-th condition, as well as showing sufficiency of the
general conditions, will turn out to be significantly more involved.
\begin{lemma}[Necessity of the first $n$ conditions]\label{Majorization}
Let $(d_1,\dots, d_n)$,
and $(c_1,\dots, c_n)$ be defined
as in Corollary \ref{MixedProblem}. For any given
$(d_1,\dots, d_n)$, the admissible $(c_1,\dots, c_n)$
satisfy
\begin{equation}\label{PS}
\sum_{j=1}^k c_j
\geq \sum_{j=1}^k d_j
\end{equation}
for all $k=1,\dots, n$.
\end{lemma}
{\it Proof:} Let $S\in \text{Sp}(2n,{R})$ be the matrix
from the symplectic group
that brings $\gamma$ into
diagonal form, so
\begin{equation}\label{Williamson}
S\gamma S^T = \text{diag}(d_1,d_1,\dots, d_n,d_n).
\end{equation}
The main diagonal elements of $\gamma$, in turn,
again without loss of generality in non-decreasing order,
are given by $(c_1,\dots, c_n)$. Now according to Ref.\
\cite{Hir}, we have that
\begin{equation}\label{TheCond}
\min
\text{tr}(T \gamma T^T) = 2\sum_{j=1}^k d_j
\end{equation}
for $k=1,\dots, n$, where the minimum is taken over all real
$2k\times 2n$-matrices
$T$ for which
\begin{equation}\label{SF}
T\sigma_n T^T = \sigma_k.
\end{equation}
Here, $\sigma_k$
denotes the symplectic matrix on $k$ modes as defined
in Eq.\ (\ref{SymplecticMatrix}).
Now we can simply choose $T$ to consist of the first $2k$ rows of
the identity $\mathbbm{1}\in \text{Sp}(2n,{R})$; such a $T$
satisfies Eq.\ (\ref{SF}). Since this submatrix
does not necessarily correspond to a minimum in
Eq.\ (\ref{TheCond}), we find
\begin{eqnarray}
2\sum_{j=1}^k c_j = \text{tr}(T \gamma T^T)
\geq 2\sum_{j=1}^k d_j ,
\end{eqnarray}
for any $k=1,\dots, n$.
\hfill\rule{2mm}{2mm} \\
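The partial-sum conditions of this lemma can be tested in the same way (our sketch, continuing with the helper functions and the $6\times 6$ test matrix \texttt{gamma} from the previous sketch):
\begin{verbatim}
c = np.sort(symplectic_diagonal(gamma))   # non-decreasing order
d = symplectic_eigenvalues(gamma)         # non-decreasing order
assert np.all(np.cumsum(c) >= np.cumsum(d) - 1e-9)
\end{verbatim}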
We will now prove the necessity of the $(n+1)$-th
inequality constraint in Corollary \ref{MixedProblem}.
\begin{lemma}[Necessity of the last condition] \label{FullLemma}
Let $(d_1,\dots, d_n)$,
and $(c_1,\dots, c_n)$ be defined
as in Corollary \ref{MixedProblem}. For any given
vector of symplectic eigenvalues
$(d_1,\dots, d_n)$, the admissible
$(c_1,\dots, c_n)$ satisfy
\begin{equation}\label{Tha}
c_n - \sum_{j=1}^{n-1} c_j \leq
d_n -
\sum_{j=1}^{n-1} d_j .
\end{equation}
\end{lemma}
{\it Proof:} We will define the function
$f:{\cal S}_n\rightarrow{R}$, where ${\cal S}_n$ is
the set of strictly positive real $2n\times 2n$-matrices,
as follows: We define the
vector $c=(c_1,\dots, c_n)$ as
\begin{equation}
c_j = (\gamma_{2j-1,2j-1} \gamma_{2j,2j} - \gamma_{2j-1,2j}^2 )^{1/2}
\end{equation}
$j=1,\dots, n$,
as the usual vector of symplectic
spectra of each of the $n$ modes,
and then set
\begin{equation}
f(\gamma) := 2 \max(c) - \sum_{j=1}^n c_j.
\end{equation}
For a diagonal matrix
$D=\text{diag}(d_1,d_1,\dots, d_n,d_n)$
with entries in non-decreasing order,
we have
\begin{equation}\label{Before}
f(D) = d_n-\sum_{j=1}^{n-1} d_j.
\end{equation}
We will now investigate the
orbit of this function $f$ under the
symplectic group,
\begin{equation}
\tilde f =\sup \left\{x\in {R}: x= f(SDS^T), \, S\in \text{Sp}(2n,{R})\right\},
\end{equation}
and will see that the supremum is actually attained
as a maximum for $S=\mathbbm{1}$. Each of the
matrices $\gamma= SDS^T$ has by construction the same
symplectic spectrum as $D$. This is a variation over
$2n^2+n$ real parameters, as any
$S\in \text{Sp}(2n,{R}) $ can be decomposed
according to the {\it Euler decomposition} as
\begin{equation}
S= O Q V,
\end{equation}
where $O,V\in K(n):= \text{Sp}(2n,{R})\cap O(2n)$ and
\begin{equation}
Q\in \left\{
\text{diag}(z_1,1/z_1,\dots, z_n,1/z_n):
z_k \in {R}\backslash\{0\}
\right\}.
\end{equation}
That is, $O,V$ reflect passive operations,
whereas $Q$ stands for a squeezing operation.
We will now see that the maximum of this function
$f$ -- which exists, even though the group is non-compact --
is actually attained when the matrix is already diagonal. This means that in general, we have
that
\begin{equation}
\tilde f = 2 \max \text{sspec}(\gamma) - \text{str}(\gamma).
\end{equation}
For any global maximum, any local variation will not increase
this function further. Let us start from
some $\gamma = SDS^T$. For any such covariance
matrix $\gamma$ we can find a $T\in \text{Sp}(2(n-1),{R})$
such that
\begin{equation}\label{SimpleForm}
(T\oplus \mathbbm{1}_2)\gamma (T\oplus \mathbbm{1}_2)^T =
\left[
\begin{array}{cc}
E & F\\
F^T & G
\end{array}
\right] =:\gamma',
\end{equation}
where
\begin{equation}
E=\text{diag}(c_1',c_1',\dots, c_{n-1}',c_{n-1}')
\end{equation}
is a $(2n-2)\times (2n-2)$ matrix and $G$ is a
$2\times 2$ matrix. Using Lemma \ref{ST} again, we find that
\begin{equation}
\sum_{j=1}^{n-1} c_j' \leq \sum_{j=1}^{n-1} c_j,
\end{equation}
so
\begin{equation}
f(\gamma')\geq f(\gamma).
\end{equation}
In other words, it does not restrict generality to assume the
final covariance matrix to be of the form as in the right hand
side of Eq.\ (\ref{SimpleForm}), and we will use the notation
\begin{equation}\label{Prepa2}
\gamma = S D S^T = \left[
\begin{array}{cc}
E & F\\
F^T & G
\end{array}
\right]
\end{equation}
with $E=\text{diag}(c_1,c_1,\dots, c_{n-1},c_{n-1})$
and $G=\text{diag}(c_{n},c_{n})$.
We can now investigate submatrices of $\gamma$ associated with
modes $m$ and $n$, $1\leq m<n$,
\begin{equation}
M_{m,n}= \left[
\begin{array}{cc}
c_m \mathbbm{1}_2 & C_{n,m}\\
C^T_{n,m} & c_n \mathbbm{1}_2
\end{array}
\right].
\end{equation}
This we can always bring to a diagonal form, using symplectic diagonalization, only affecting
the main diagonal elements of modes $n$ and $m$, and leaving the other main diagonal
elements invariant. This brings this submatrix into the form
\begin{equation}
M_{m,n}'= \left[
\begin{array}{cc}
c_m' \mathbbm{1}_2 & 0\\
0 & c_n' \mathbbm{1}_2
\end{array}
\right],
\end{equation}
with $c_n'\geq c_m'$. From Lemma \ref{Difference} we know that
\begin{equation}
c_n' - c_m' \leq c_n - c_m,
\end{equation}
so we have increased the function $f$, whenever
$C_{n,m}\neq 0$. Hence, for global and hence local
optimality, we have to have $C_{n,m}=0$. However, $C_{n,m}=0$
holds for all $m=1,\dots, n-1$ exactly
if the matrix $\gamma$ is already diagonal.
What remains to be shown is that the function $f$ is bounded
from above, to exclude the
case that the maximum does not even exist. One way to
show this is to make use of the upper bound in Lemma
\ref{FromPure} to have
for every covariance matrix $\gamma$ with
symplectic spectrum $(d_1,\dots, d_n)$
in non-decreasing order
\begin{equation}
f(\gamma) \leq \sum_{j=2}^n d_j +(3-2 n) d_1,
\end{equation}
which shows that $f$ is always bounded from above. If
$\gamma$ is merely a strictly positive real matrix, but not a
covariance matrix, an upper bound follows from a
rescaling with a positive number.
\hfill\rule{2mm}{2mm} \\
We now prove the upper bound required for the proof of Lemma \ref{FullLemma}.
\begin{lemma}[Upper bound]\label{FromPure}
Let $(d_1,\dots, d_n)$,
and $(c_1,\dots, c_n)$ be defined
as in Corollary \ref{MixedProblem}, and $\gamma$
be additionally a $2n\times 2n$ covariance matrix.
For any given
$(d_1,\dots, d_n)$, the admissible $(c_1,\dots, c_n)$
satisfy
\begin{equation}\label{PureConditionsMixed}
c_n - \sum_{j=1}^{n-1} c_j \leq
\sum_{j=2}^n d_j +(3-2 n) d_1.
\end{equation}
\end{lemma}
{\it Proof:}
We start from a $4n\times 4n$-covariance matrix
\begin{equation}\label{OCM}
\gamma =\left[
\begin{array}{cc}
A & C\\
C^T & A
\end{array}
\right] ,
\end{equation}
corresponding to a pure Gaussian state,
where
\begin{eqnarray}
A &=& \bigoplus_{k=1}^n
\left[
\begin{array}{cc}
d_{k} & 0\\
0 & d_{k}
\end{array}
\right],\\
C&=& \bigoplus_{k=1}^n
\left[
\begin{array}{cc}
(d_{k}^2-1)^{1/2} & 0\\
0 & -(d_{k}^2-1)^{1/2}
\end{array}
\right]
\end{eqnarray}
are real $2n\times 2n$-matrices. Physically,
this means that we start from a
collection of $n$ two mode squeezed
states, with the property that the reduction to the first $n$
modes is just a diagonal covariance matrix with symplectic
eigenvalues $(d_1,\dots,d_n)$, again in non-decreasing order.
Let us first assume that $d_1=1$; this assumption will be relaxed
later. Let us now consider
\begin{equation}\label{Tr}
\left[
\begin{array}{cc}
S_1 & 0\\
0 & \mathbbm{1}
\end{array}
\right]\gamma
\left[
\begin{array}{cc}
S_1^T & 0\\
0 & \mathbbm{1}
\end{array}
\right] = \left[
\begin{array}{cc}
S_1AS^T_1 & S_1 C\\
C^T S_1^T& A
\end{array}
\right],
\end{equation}
for $S_1\in \text{Sp}(2n,{R})$. Obviously, the
set we seek to characterize is the set ${\cal B}$
of main diagonals
of the upper left block
\begin{equation}
U=S_1AS^T_1
\end{equation}
of this matrix. We can always
start from a diagonal matrix having the symplectic
eigenvalues on the main diagonal, and consider the
orbit under all symplectic transformations $S\in \text{Sp}(4n,{R})$.
We will now relax the problem by allowing all
$S\in \text{Sp}(4n,{R})$ instead of
symplectic transformations of the form $S=S_1\oplus\mathbbm{1}$,
$S_1\in \text{Sp}(2n,{R})$. We hence consider the
full orbit under all symplectic transformations.
This set ${\cal C} \supset {\cal B}$
is characterized by the
reductions to single modes of
\begin{equation}\label{BS}
\gamma' =
S\gamma S^T = \left[
\begin{array}{cc}
A' & C'\\
{C'}^T & A
\end{array}
\right]
\end{equation}
for some $S\in \text{Sp}(4n,{R})$, such that again
$A= \text{diag}(d_1,d_1,\dots, d_n,d_n)$. This includes
the case of Eq.\ (\ref{Tr}).
We are now in the position to make use of
the statement that we have established
before: From exploiting the Schmidt decomposition on
the level of second moments, and using Lemma \ref{ST}
relating the trace to the symplectic trace,
we find
\begin{equation}\label{Full}
c_n - \sum_{j=1}^{n-1} c_j \leq
\sum_{j=2}^n d_j +3-2 n,
\end{equation}
as $d_1=1$ was
assumed.
Let us now consider the case of $d_1>1$. We will apply
the previous result, after appropriately rescaling the covariance matrix.
Indeed, we
can construct a covariance matrix $\tilde\gamma$
as in Eq.\ (\ref{OCM}) for
\begin{equation}
(\tilde d_1,\dots, \tilde d_n)
= (1,d_2/d_1,\dots, d_n/d_1).
\end{equation}
We then investigate
the orbit of
$\tilde \gamma $
under the symplectic group, and
look at the main diagonal elements of
$S\tilde \gamma S^T $.
By construction, we have that
$\tilde \gamma+i\sigma\geq 0$. We can hence
apply Eq.\ (\ref{Full}) to this case.
Multiplying both sides of Eq.\ (\ref{Full}) by $d_1$
gives rise to the condition in Eq.\ (\ref{PureConditionsMixed}).
\hfill\rule{2mm}{2mm} \\
\begin{lemma}[Solution to two-mode problem]\label{Difference}
There exists a strictly positive real $4\times4$-matrix
$\gamma$ with main diagonal blocks
$\text{diag}(c_1,c_1),\text{diag}(c_2,c_2)$
and symplectic eigenvalues $(d_1,d_2)$ if and only
if
\begin{eqnarray}
c_1+c_2 &\geq& d_1+d_2,\\
c_2-c_1 &\leq & d_2-d_1,
\end{eqnarray}
assuming $c_2\geq c_1$ and $d_2\geq d_1$.
Moreover, $c_1-c_2 = d_1-d_2$ if and only if
the $2\times 2$ off diagonal
block of $\gamma$ vanishes.
\end{lemma}
{\it Proof:} The necessity of the condition $|c_1-c_2|\leq |d_1-d_2|$
is a consequence of Lemma \ref{FromPure}. The
necessity of the conditions
$c_1+c_2\geq d_1+d_2$ and $c_1\geq d_1$
has been previously shown in
Lemma \ref{Majorization}. It remains to show that these
conditions can in fact be achieved. This can be done
by considering
\begin{equation}
\gamma=\left[
\begin{array}{cccc}
c_1 & 0 &e & 0 \\
0 & c_1 & 0 & f \\
e & 0 & c_2 & 0\\
0 & f & 0& c_2
\end{array}
\right] = S\text{diag}(d_1,d_1,d_2,d_2)S^T.
\end{equation}
The relationship between $c_1,c_2,e,f$ and $d_1,d_2$
is given by
\begin{eqnarray}
d_{1/2}^2 &= &\bigl(c_1^2 + c_2^2+ 2 ef
\nonumber\\
&\pm&
(c_1^4 + c_2^4 + 4 ef c_2^2 - 2 c_1^2 (c_2^2 - 2 ef)
+ 4
c_1 c_2 (e^2 + f^2))^{1/2}\bigr)/2,
\end{eqnarray}
as $d_1,d_2$ are the square roots of the
eigenvalues of $-\sigma\gamma\sigma\gamma$ \cite{Survey},
compare also Ref.\ \cite{Twomodes}.
An elementary analysis shows that the above
inequalities can always be achieved. Also, the extremal values
are achieved if and only if $e=f=0$.
\hfill\rule{2mm}{2mm} \\
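The closed-form expression for $d_{1/2}$ can be cross-checked against a direct numerical computation of the symplectic spectrum (our own sketch; the parameter values are arbitrary admissible choices, and \texttt{symplectic\_eigenvalues} is the helper defined earlier):
\begin{verbatim}
c1, c2, e, f = 1.2, 1.8, 0.3, -0.2       # hypothetical admissible values
gamma = np.array([[c1, 0, e, 0],
                  [0, c1, 0, f],
                  [e, 0, c2, 0],
                  [0, f, 0, c2]])
disc = np.sqrt(c1**4 + c2**4 + 4*e*f*c2**2
               - 2*c1**2*(c2**2 - 2*e*f) + 4*c1*c2*(e**2 + f**2))
d_closed = np.sqrt((c1**2 + c2**2 + 2*e*f + np.array([-disc, disc])) / 2)
assert np.allclose(d_closed, symplectic_eigenvalues(gamma))
\end{verbatim}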
What we finally need to show is that the conditions that we have
derived are in fact sufficient. This will be the most involved
statement.
\begin{lemma}[Sufficiency of the conditions]
For any vectors $(c_1,\dots, c_n)$ and $(d_1,\dots, d_n)$
satisfying the conditions (\ref{C1}) and (\ref{C2}) there
exists a $2n\times 2n$ strictly positive real matrix with
$(c_1,\dots, c_n)$ as its symplectic
main diagonal
elements and $(d_1,\dots, d_n)$ as its symplectic eigenvalues.
\end{lemma}
{\it Proof:} The argument will essentially be an argument by
induction, in several ways resembling the argument put forth
in Refs.\ \cite{Sing,Tho1,Tho}. The underlying
idea of the proof is essentially as follows: On using the
given constraints, one constructs an appropriate
$2(n-1)\times 2(n-1)$-matrix, in a
way that it can be combined into the
desired $2n\times 2n$-matrix by means of an
appropriate $S\in \text{Sp}(4,{R})$ acting on a $4\times4$
submatrix only. Note, however, that
compared to Ref.\ \cite{Tho},
we look at
variations over the non-compact symplectic group $\text{Sp}(2n,{R})$,
and not the compact $U(2n)$.
For a single mode, $n=1$, there
is nothing to be shown. For two modes, Lemma \ref{Difference}
provides the sufficiency of the conditions. Let us hence assume
that we are given vectors $(c_1,\dots, c_n)$
and $(d_1,\dots, d_n)$ as above, and that we have already
shown that for $2(n-1)\times 2(n-1)$-matrices,
the conditions (\ref{C1}) and (\ref{C2})
are indeed sufficient. We complete the proof by explicitly
constructing a $2n\times 2n$-matrix with the stated property.
We have that $c_1\geq d_1$ by assumption. We could also
have $c_1\geq d_j$ for some $2\leq j\leq n$, so let
$k\in\{1,\dots,n\}$
be the largest index such that
\begin{equation}
c_1\geq d_k.
\end{equation}
Let us first consider the case that $k\leq n-2$, and we will
consider the cases $k=n-1$ and $k=n$ later.
Then we can set
$x:= d_k + d_{k+1}- c_1$, which means
that $x\geq 0$, and that all conditions
\begin{eqnarray}
c_1 + x &\geq& d_k + d_{k+1},\label{c1}\\
c_1 - x &\geq& d_k - d_{k+1},\label{c2}\\
- c_1 + x &\geq& d_k - d_{k+1}\label{c3}
\end{eqnarray}
are satisfied: (\ref{c1}) by definition, (\ref{c2}) because
$c_1\geq d_k$ and (\ref{c3}) as $d_{k+1}\geq c_1$.
This means that we can find a matrix
of the form
\begin{eqnarray}
\gamma' &:=&
\left[
\begin{array}{cc}
c_1 \mathbbm{1}_2 & C\\
C^T & x\mathbbm{1}_2
\end{array}
\right],
\end{eqnarray}
for some $2\times 2$-matrix $C$, with symplectic eigenvalues
$(d_k,d_{k+1})$, using Lemma \ref{Difference}. Therefore, the
matrix
\begin{eqnarray}
\gamma'' =
\gamma'
\oplus
\text{diag}(d_1,d_1,d_2,d_2,\dots, d_{k-1},d_{k-1},
d_{k+2},d_{k+2},\dots, d_n,d_n)
\end{eqnarray}
has the symplectic spectrum
$(d_1,\dots, d_n)$.
We will now show that we can
construct a $2(n-1)\times 2(n-1)$ matrix $\gamma'''$
with symplectic eigenvalues $(d_1,\dots, d_{k-1},x,d_{k+2},
\dots, d_n)$ and main diagonal elements
$(c_2,c_2,\dots, c_n,c_n)$, by invoking the induction
assumption. This matrix $\gamma'''$
we can indeed construct, as we have
\begin{eqnarray}
c_2+ \dots + c_l &\geq& d_1 +\dots + d_{l-1},\,l=2,\dots, k,\\
c_2+ \dots + c_{k+1} &\geq& d_1 +\dots + d_{k-1}+ x,\\
c_2+ \dots + c_{s} &\geq& d_1 +\dots + d_{k-1}+ x
+
d_{k+2} + \dots + d_{s}, s=k+2,\dots, n,
\end{eqnarray}
as one can show using $d_k\leq c_1 \leq d_{k+1}$ and
$x=d_k + d_{k+1}- c_1$. Also, we have
\begin{equation}
c_n - c_2- \dots - c_{n-1}\leq d_n
- d_1- \dots - d_{k-1} - x -
d_{k+2} - \dots - d_n,
\end{equation}
fulfilling all of the conditions that we need to invoke the
induction assumption to construct $\gamma'''$. This
matrix has the same symplectic eigenvalues as the
right lower $2(n-1)\times 2(n-1)$ submatrix $\gamma''''$
of $\gamma''$.
Therefore, there exists an $S\in \text{Sp}(2(n-1),{R})$ such that
\begin{equation}
\gamma'''' = S\gamma''' S^T.
\end{equation}
So the matrix
\begin{equation}
\gamma := (\mathbbm{1}_2\oplus S) \gamma'' (\mathbbm{1}_2\oplus S)^T
\end{equation}
has the symplectic spectrum $(d_1,\dots, d_n)$
and symplectic
main diagonal elements $(c_1, \dots, c_n )$.
Hence, by invoking the induction assumption, we have been
able to construct the desired matrix with the appropriate
symplectic spectrum and main diagonal elements. Note that
only two-mode operations have been needed in order to achieve
this goal.
We now turn to the two remaining cases, $k=n$ and $k=n-1$.
In both cases this means that we have
$c_1\geq d_{n-1}$, as $d_n\geq d_{n-1}$, and both
cases can be treated in exactly the same manner.
Obviously, this implies that also
$c_n\geq c_1 \geq d_{n-1}$. We can now define again an
$x$, by means of a set of inequalities. This construction is
very similar to the one in Ref.\ \cite{Tho}.
We can require on the one hand
\begin{eqnarray}
x &\geq& \max \{ d_{n-1}, d_{n-1} + d_n - c_n,
d_{n-1} - d_n + c_n,\nonumber\\
&&d_1+ \dots + d_{n-2} + c_{n-1}
- c_1 - \dots - c_{n-2}\}.
\end{eqnarray}
On the other hand, we can require
\begin{eqnarray}
x &\leq& \min \{ d_n - d_{n-1} + c_n,
c_1+ \dots + c_{n-1} - d_1 - \dots- d_{n-2}\}.
\end{eqnarray}
Both these conditions can be simultaneously satisfied, making
use of $c_n \geq c_{n-1}$ and $c_n\geq d_{n-1}$.
This in turn means that we have
\begin{eqnarray}
c_n + x \geq d_{n-1} + d_n,\\
c_n - x \geq d_{n-1} - d_n,\\
x - c_n \geq d_{n-1} - d_n,
\end{eqnarray}
where the latter two inequalities mean that
$|x -c_n| \leq |d_{n-1} - d_n|$.
Moreover, we satisfy all the inequalities
\begin{eqnarray}
c_1+ \dots + c_l \geq d_1+ \dots + d_l,\, l=1,\dots, n-2,\\
c_1 + \dots + c_{n-1} \geq d_1 + \dots + d_{n-2} +x,
\end{eqnarray}
and
\begin{equation}
c_{n-1} - c_1 - \dots - c_{n-2}
\leq x - d_1 - \dots - d_{n-2}.
\end{equation}
Again, we can hence invoke the induction assumption, and
construct in the same way as before the desired
covariance matrix with symplectic spectrum $(d_1,\dots, d_n)$
and symplectic main diagonal elements $(c_1,\dots, c_n)$.
This ends the proof of sufficiency of the given conditions.
\hfill\rule{2mm}{2mm} \\
\section{Physical implications of the result and outlook}
The results found in this work can also be read as a full
specification of what multipartite Gaussian states may be
prepared: Since the argument is constructive it readily provides
a recipe of how to construct {\it multi-mode Gaussian entangled
states} with all possible local entropies: For pure states, starting from
squeezed modes, all that is needed is a network of passive operations.
Applied to optical systems of several modes,
notably, this gives rise to
a protocol to prepare multi-mode pure-state entangled light of
all possible entanglement structures from squeezed light, using
passive linear optical networks, via
\begin{equation}
\gamma = OPO^T,
\end{equation}
with $P= \text{diag}(z_1,1/z_1,\dots, z_n,1/z_n)$, $z_k\in {R}\backslash\{0\}$,
and $O\in K(n)$. $P$ is the covariance matrix of squeezed
single modes, whereas $O$ represents the passive optical
network. The latter can readily be broken down to
a network of beam splitters and phase
shifters, according to Ref.\ \cite{Reck}.
Hence, our result also generalizes
the preparation of Ref.\ \cite{Alessio} from
the case of three modes
to any number of modes. Similarly, for mixed states, the given
result readily defines a preparation procedure, but now using also squeezers in general.
The above statement also settles the
question of the {\it sharing of entanglement}
of single modes versus
the rest of the system in a multi-mode system:
For a pure Gaussian state
with $d_1=\dots = d_n=1$, the
entanglement entropy
$E_{j|\{1,\dots, n\}\backslash \{j\}}$
of a mode labeled $j$ with respect to the rest
of the system is given by
\begin{eqnarray}\label{edef}
E_{j|\{1,\dots, n\}\backslash \{j\}}:=
S(\rho_j) = s(c_j):= \frac{c_j+1}{2}\log_2 \frac{c_j+1}{2}
- \frac{c_j-1}{2}\log_2 \frac{c_j-1}{2},
\end{eqnarray}
where $s:[1,\infty)\rightarrow [0,\infty)$ is a monotone
increasing, concave function.
\begin{corollary}[Entanglement sharing in pure Gaussian states]
For pure Gaussian states, the set of all
possible entanglement
values of a single
mode with respect to the system is given by
\begin{equation}
\left(
E_{1 |\{2,\dots, n\} },\dots,
E_{n|\{1,\dots, n-1\}
}
\right) \in
\left\{
(s(c_1),\dots, s(c_n)):
c_j-1\leq\sum_{k\neq j}
(c_k -1 ),\,c_j\geq 1
\right\}.
\end{equation}
\end{corollary}
This result is an immediate consequence of the above
pure marginal problem, Corollary \ref{pureprob}.
In fact, this is for pure Gaussian states
more than a monogamy inequality: it constitutes a full
characterization
of the complete set of consistent degrees of entanglement.
A further practically useful application of our result is
the following: It tells us how {\it pure} a state must have been,
based on the information available from
{\it measuring local properties} like local
photon numbers. This is expected to be a very desirable
tool in an experimental context: In optical systems, such
measurements are readily available with homodyne
or photon counting measurements.
\begin{corollary}[Locally measuring global purity in non-Gaussian states] Let us assume
that one has acquired knowledge about the local
symplectic eigenvalues $c_1,\dots, c_n$ of a global state
$\rho$. Then one can infer
that the global von-Neumann entropy $S(\rho)$ of $\rho$
satisfies
\begin{equation}
S(\rho)\leq s\biggl(\sum_{k=1}^n c_k\biggr).
\end{equation}
This estimate is true regardless whether the state
$\rho$ is a Gaussian state or not.
\end{corollary}
{\it Proof:} Let us denote with $\omega$ the Gaussian
state with the same covariance matrix $\gamma\geq 0$
as the (unknown) state $\rho$. The vectors
$(d_1,\dots, d_n)$ and $(c_1,\dots, c_n)$ are the
symplectic eigenvalues and symplectic main diagonal
elements of $\omega$, respectively. From the fact that
$\text{diag}(d_1,d_1,\dots,d_n,d_n)$ reflects a tensor
product of Gaussian states, we can conclude that
\begin{equation}
S(\omega)=\sum_{j=1}^n s(d_j).
\end{equation}
In turn, from Lemma \ref{ST} we find that
\begin{equation}
\sum_{j=1}^n c_j \geq \sum_{j=1}^n d_j.
\end{equation}
By means of an extremality property of the von-Neumann
entropy (see, e.g., Refs.\
\cite{Holevo,Channels}), namely that a Gaussian state has the
largest von-Neumann entropy for fixed second moments,
we find that $S(\rho) \leq S(\omega)$.
Since the function $s:[1,\infty)\rightarrow [0,\infty)$
defined in Eq.\ (\ref{edef}) is concave and monotone increasing,
we have that
\begin{eqnarray}
S(\rho)\leq \sum_{j=1}^n s(d_j)\leq s(d_1+\dots+d_n)
\leq s(c_1+\dots+c_n).
\end{eqnarray}
This is the statement that we intended to prove. Clearly, this bound is
tight, as is obvious when applying the inequality to the
Gaussian state with covariance matrix
$\text{diag}(d_1,d_1,\dots, d_n,d_n)$ itself.
\hfill\rule{2mm}{2mm} \\
For example, if one obtains
$c_1=c_2=3/2$ and $c_3=2$ in local measurements
on the local photon number, then
one finds that the global state
necessarily satisfies $S(\rho)\leq s(5)$. This is a powerful
tool when local measurements in optical systems
are more accessible than
global ones, for example, when no phase reference is
available, or bringing modes together is a difficult task.
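For concreteness, the bound in this example can be evaluated with a few lines of code (our own sketch, implementing $s$ exactly as defined in Eq.\ (\ref{edef})):
\begin{verbatim}
import numpy as np

def s(c):
    # s(c) from Eq. (edef); s(1) = 0 by continuity
    if c <= 1.0:
        return 0.0
    return ((c + 1) / 2) * np.log2((c + 1) / 2) \
        - ((c - 1) / 2) * np.log2((c - 1) / 2)

print(s(3/2 + 3/2 + 2))   # s(5), about 2.75 bits
\end{verbatim}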
To finally turn to the role of Gaussian operations in this work:
Our result
highlights an observation that has been encountered already
a number of times in the literature: That global Gaussian
operations applied to many modes at once
are often hardly more powerful than when applied to
pairs of modes. This resembles to some extent the situation in
the distillation of entangled Gaussian states by means of
Gaussian operations \cite{Survey,Op,Fiurasek,GiedkeOperations}.
In this work, we have given a complete characterization of
reductions of pure or mixed Gaussian states. In this way,
we have also given a general picture of the possibility
of sharing quantum correlations in a continuous-variable
setting. Since our proof is constructive, it also gives rise to a
general recipe to generate multi-mode entangled states with
all possible reductions. Formally, we established a connection
to a compatibility argument of symplectic spectra, by means
of new matrix inequalities fully characterizing the set in question.
These matrix inequalities formally resemble the well-known
Sing-Thompson Theorem relating singular values to main
diagonal elements. It is the
hope that this work can provide a significant insight into
the achievable correlations
in composite quantum systems of many modes.
\section{Acknowledgments}
We would like to thank V.\ Buzek,
P.\ Hyllus, and M.M.\ Wolf
for valuable
discussions on the subject of the paper, and especially
K.\ Audenaert for many constructive and helpful
comments concerning the
presentation of the results, and A.\ Serafini and G.\ Adesso
for further remarks on the manuscript.
This work has been supported by the DFG
(SPP 1116, SPP 1078), the EPSRC, the QIP-IRC, iCORE,
CIAR,
Microsoft Research, and
the EURYI Award.
\section{Introduction}
In quantum chromodynamics (QCD), the fact that color charged particles are not directly observed is explained by the phenomenon called \textit{confinement}, which forbids the existence of isolated color charged particles. Thus, because of confinement, quarks and gluons must appear combined in the form of hadrons to be stable. Despite the simplicity and frequent use of this assertion, it is a difficult task to obtain confinement as a direct consequence of the postulates of QCD. The aim of this letter is to provide a sufficient condition for the existence of confinement, phrased as a condition on the structure of the QCD $\theta$-vacuum and on the nature of the gauge group. From this point of view (the structure of the $\theta$-vacuum), we will begin by considering the role of instantons, even if similar conclusions can be obtained from the topological structure of the gauge group, as we will briefly mention below.
The original motivation to study the role of instantons in QCD was to describe the transition from weak to strong coupling (see \cite{Callan1979,Luscher1978,Callan1978}). Instantons were not aimed at solving color confinement. Instead, the project was, describing strong interactions within QCD and assuming confinement, to show that everything which was known about QCD was consistent with the notion that instantons bridge the gap between weak- and strong-coupled physics \cite{Callan1978}. Now we can say that this project has been successful. Indeed, now
it is a fact that instantons, on the one hand, and pictures of confinement (or confinement criteria), on the other, have been revealed as two fundamental ingredients \cite{Deur2016,Boucaud2004} to study the coupling $\alpha_s (Q^2 )$. This function sets the strength of the interactions involving quarks and gluons in QCD, as a function of the momentum transfer $Q$, over the complete $Q^2$ range, in order to describe hadronic interactions at both long and short distances. That is, instanton effects could carry the theory all the way into the strong-coupling regime \cite{Callan1979,Luscher1978,Callan1978}.
Going back to confinement in QCD, let us recall that there is not yet a proof of color confinement in any non-abelian gauge theory. The phenomenon can be qualitatively explained by assuming that the gluon field between a pair of color charges forms a narrow flux tube between them. From this stems the idea behind string-fragmentation models, which were invented to account for the fact that when quarks are produced in a particle accelerator, instead of the individual quarks, many color-neutral particles are detected. This process is called hadronization, fragmentation, or string breaking. One of the most studied models of this type is the Lund string fragmentation model \cite{Andersson1998}.
In this letter we consider the relation between both concepts, instantons and confinement, and wonder how the absence of instanton configurations in the pure gauge case may affect Euclidean QCD. We will show below how, if we exclude instanton configurations from the picture in that case, we may have a vacuum structure different from the $\theta$-vacuum of QCD. If this happens (see below), the Cluster Decomposition Property (CDP) no longer holds. In that case, because of a theorem discussed in \cite{Lowdon2016}, and under some additional technical requirements, confinement must appear.
The structure of the letter is as follows. In the next section we recall the role of winding numbers in the vacuum of QCD. Then, in the second and third sections we present the main contributions of the letter, analyzing the relations between instantons, $\theta$-vacua, the Cluster Decomposition Property and Lowdon's theorem. From that analysis we conclude, in the final section, that if we impose conditions on the topological structure of the vacuum and the gauge group, confinement must appear.
\section{ Winding numbers: $\theta$--vacua and instantons}
\label{sec:winding}
Instantons are solutions of the non-abelian Yang-Mills equations in Euclidean space with non-vanishing first Chern class. Let us briefly recall now the main definitions and properties of these notions and the necessity of including vacua labelled by these integer numbers.
Our argument is essentially semiclassical in nature, matching most of the usual arguments about $\theta$-vacua available in the literature. It is also possible to derive the content of this section from the topology of the gauge group \cite{Strocchi2019}, even though we will not pursue that approach in this paper. A great account of this discussion can be found in \cite{Gomes2020}.
\subsection{Why winding numbers?}
The QCD Lagrangian is
\begin{equation}
\label{eq:lcolor}
\mathcal{L}=-\frac{1}{4} \operatorname{Tr}[F^{\mu \nu} F_{\mu \nu}]+\sum_{\alpha} \bar{\psi}_{j}^{(\alpha)}\left(i \slashed{D}_{j k}-m^{(\alpha)} \delta_{j k}\right) \psi_{k}^{(\alpha)}
\end{equation}
where the second term represents the fermion part and $F^{\mu \nu}$ is the $\mathfrak{su}(3)$-valued coordinate expression of the curvature $F$ of an Ehresmann connection on an $SU(3)$ principal bundle $\pi:P\to \mathbb{R}^{1,3}$. In particular, a single chart is enough to cover the whole Minkowski spacetime and the connection is fully specified by a gauge potential $A$.
Let us consider for now the pure gauge sector of the model. For physical reasons, some conditions must be imposed on the gauge potential to make the action associated with the Lagrangian $\mathcal{L}$ finite. In particular, $A$ must vanish at spatial infinity $S_{\infty}^2$ while approaching a curvature free configuration over the asymptotic past and future Cauchy hypersurfaces $\Sigma_{\pm}$.
In geometrical terms, this implies that the first integral Chern class $\mathrm{Ch}[P]$ must be an integer:
\begin{equation}
\label{eq:cherntermint}
\mathrm{Ch}[P]=\frac{1}{8 \pi^{2}} \int_{\mathbb{R}^{1,3}} \operatorname{Tr}[F\wedge F]
\end{equation}
A crucial fact about this number is that it is the integral of a local operator
\begin{equation}
\label{eq:chernterm}
\frac{1}{8 \pi^{2}}\operatorname{Tr}[F\wedge F]
\end{equation}
that can be included in the Lagrangian \eqref{eq:lcolor} without breaking Lorentz invariance. Moreover, on one chart, the form is exact and hence $\frac{1}{8 \pi^{2}}\operatorname{Tr}[F\wedge F]=d \operatorname{cs}_A $ with $ \operatorname{cs}_A$ the Chern-Simons form
\begin{equation}
\label{eq:ChernSimonsDensity}
\operatorname{cs}_A= \frac{1}{8 \pi^{2}} \operatorname{tr}\left(A \wedge \mathrm{d} A+\frac{2}{3} A \wedge A \wedge A\right).
\end{equation}
This allows us to write the Chern class as an integral over the boundaries
\begin{equation}
\label{eq:chwinding}
\mathrm{Ch}[P]= \int_{\Sigma_+}cs_A-\int_{\Sigma_{-} }cs_A=n_+-n_{-},
\end{equation}
where $n_\pm$ are winding numbers for field configurations on the asymptotic past and future of the theory. Because of the asymptotic behavior imposed on the fields, the winding number can be cast in more familiar terms by means of the Wess-Zumino invariant that, over $\Sigma_{\pm}$, takes the form
\begin{equation}
\label{eq:windingNumber}
n=\frac{i}{24 \pi^{2}} \int_\Sigma \epsilon^{i j k} \operatorname{Tr}\left({A}_{i} {A}_{j} {A}_{k}\right)
\end{equation}
From this analysis one can deduce (see \cite{Gomes2020}) that there are asymptotic past and future vacuum states $\lvert n \rangle $ indistinguishable from the point of view of local observables but different in the global winding number quantity $n$.
A very important property of the
action obtained from \eqref{eq:lcolor} is its invariance under the so-called large gauge transformations. These are gauge transformations not obtained directly from exponentiation of the Lie algebra variables, i.e., they are not continuously connected to the identity. See \cite{Jackiw,Treiman:1986ep} for further study of those transformations. The large transformations thus relate two configurations of $A$ which cannot be related by homotopy. Therefore the winding numbers of the field configuration before and after the transformation must differ. On the vacuum states we may introduce this via a unitary operator
\begin{equation}
\label{eq:largeGenerator}
U_{1}\lvert n \rangle= \lvert n+1 \rangle,\ U_m=U^m_1\textrm{ and }U_{-1}=U^\dagger_1,
\end{equation}
that represents the action of a large gauge transformation that increases the winding number by 1.
Regarding the physical meaning, it is important to remember that even though the winding number by itself is a meaningless quantity, tunneling between sectors of different winding number does have physical relevance. In the path integral formalism it amounts to adding to the Lagrangian \eqref{eq:lcolor} the CP violating term \cite{Weinberg1996}
\begin{equation}
\label{eq:thetatem}
\mathcal{L}_{\theta}= \frac{\theta}{32 \pi^{2}} \operatorname{Tr}[ F \wedge {F}]
\end{equation}
which can be measured when coupled to massive fermions (quarks). Current measurements are compatible with $|\theta| \leq 10^{-10}$, raising the so-called strong CP problem.
\subsection{$\theta$-vacua from locality and instantons}
If we assume that tunneling occurs in Nature, we may study it via instanton configurations of the Euclidean action. Remember that instanton configurations are configurations of the gauge fields $A$ with nonvanishing Chern class, labelled by an integer which is often called the winding number of the instanton. All instantons with identical winding number are related by gauge transformations belonging to the component of $SU(3)$ connected to the identity, also called small gauge transformations. By opposition, we will call those not connected with the identity large gauge transformations.
Following \cite{Coleman1986} (Chap.\ 7, Sec.\ 3), we may assume that, because of the locality of \eqref{eq:chernterm}, an instanton of winding number $n$ on a Euclidean spacetime region $\Omega_{[0,T]}$ that comprises a transition between times $0$ and $T$ can be decomposed, for sufficiently large $T$, into two instantons of winding numbers $n_1+n_2=n$ in disjoint regions $\Omega_{[0,T_1]}\cup\Omega_{[T_1,T]}$. From the path integral perspective, let the Euclidean action over the specified spacetime region be $S_{\Omega}=\int_{\Omega} d^4x\, \mathcal{L}_{E}(A)$; then we may compute the transition matrix
\begin{equation}
\label{eq:transition}
F(T,n)=N \int_n [DA] \mathrm{e}^{-S_{\Omega_{[0,T]}}}
\end{equation}
where the subindex $n$ in the integral means that we must integrate over instanton configurations of winding number $n$. Then it follows that
\begin{equation}
\label{eq:transitiondesc}
F(T,n)=\sum_{n_1+n_2=n}F(T_1,n_1)F(T-T_1,n_2).
\end{equation}
If we take the Fourier transform $F(T,\theta)=\sum_{n}e^{-i\theta n}F(T,n)$, convolutions become multiplications, $F(T,\theta)=F(T_1,\theta)F(T-T_1,\theta)$. Since $F(T,\theta)$ is to be interpreted as a transition matrix from an initial to a final state, and we have shown that it fulfills the same composition law as a time-evolution exponential, we may interpret it as a transition matrix for the evolution of an eigenstate $\lvert \theta \rangle$ of the Hamiltonian, as
$F(T,\theta) \propto\left\langle\theta\left|\mathrm{e}^{-H T}\right| \theta\right\rangle$, where
\begin{equation}
\label{eq:thetavev}
\begin{aligned}
F(T, \theta)
=N^{\prime} \int [DA] \mathrm{e}^{-S_{\Omega_{[0,T]}}} \mathrm{e}^{ \frac{i\theta}{8 \pi^{2}} \int_{\Omega_{[0,T]}} \operatorname{Tr}[F\wedge F] }.
\end{aligned}
\end{equation}
The locality of the operator \eqref{eq:chernterm}, together with the assumption that tunneling exists, leads (see \cite{Coleman1986}) to the $\theta$-vacuum
\begin{equation}
\label{eq:thetavacuum}
\lvert \theta \rangle=\sum_{n=-\infty}^\infty e^{-in\theta}\lvert n\rangle.
\end{equation}
This state is invariant under large gauge transformations because $U_{1}\lvert\theta\rangle = e^{i\theta} \lvert\theta\rangle$, i.e., on it they act proportionally to the identity, with a quantum mechanically irrelevant constant phase factor.
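Explicitly, relabelling the summation index as $m=n+1$ makes the phase factor evident,
\begin{equation}
U_1\lvert\theta\rangle=\sum_{n=-\infty}^{\infty}e^{-in\theta}\lvert n+1\rangle
=\sum_{m=-\infty}^{\infty}e^{-i(m-1)\theta}\lvert m\rangle
=e^{i\theta}\lvert\theta\rangle .
\end{equation}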
In summary, then, if we consider only the gauge part of the theory, the topological structure of the $\theta$-vacuum becomes a requirement derived from the existence of instanton solutions of non-trivial winding number, together with the more fundamental condition of locality of the operator \eqref{eq:chernterm}.
\section{The cluster decomposition principle (CDP) and confinement: Lowdon theorem}
\subsection{ $\theta$-vacuum implies CDP and vice versa}
The Cluster Decomposition Principle (or property) uses the locality of a theory to require the independence of measurements done in causally-disconnected regions of spacetime. This is equivalent to requiring the factorizability of expectation values of local operators on disconnected compact domains.
Following Weinberg (\cite{Weinberg1996}, section 23.6), if we consider a non-abelian Yang-Mills theory in Euclidean spacetime and assume the existence of instanton configurations with different winding numbers $n$, then it follows that the CDP holds if and only if the vacuum of the theory is given by \eqref{eq:thetavacuum}.
Indeed, if we compute the expectation value of an observable $\mathcal{O}$ located within a Euclidean spacetime region $\Omega$, with Euclidean Lagrangian density $\mathcal{L}(A)$ and action $S_{\Omega}=\int_\Omega d^4x\, \mathcal{L}(A)$, we obtain:
\begin{equation}
\label{eq:cluster}
\langle \mathcal{O} \rangle_{\Omega}= \frac{\sum_{n}\omega(n) \int_n [D A] e^{-S_\Omega} \mathcal{O}(A)}{\sum_{n}\omega(n) \int_n [D A] e^{-S_\Omega}},
\end{equation}
where $\omega(n)$ are arbitrary weight factors for each instanton configuration of winding number $n$.
Now if we let $\mathcal{O}$ be located in a smaller region $\Omega_1$ such that $\Omega_1 \cup \Omega_2= \Omega$ with $\Omega_1\cap\Omega_2=\emptyset$, then, because of the locality of the winding number operator exploited in the previous section, it follows
\begin{equation}
\label{eq:cluster2}
\langle \mathcal{O} \rangle_{\Omega}= \frac{
\sum_{n,m}\omega(n+m)
\int_n [D A] e^{-S_{\Omega_1}} \mathcal{O}(A)
\int_m [D A] e^{-S_{\Omega_2}}
}
{
\sum_{n,m}\omega(n+m)
\int_n [D A] e^{-S_{\Omega_1}}
\int_m [D A] e^{-S_{\Omega_2}}
}.
\end{equation}
Now assume that the CDP holds; then region $\Omega_2$ should not contribute to the integral and therefore we must ensure $\langle \mathcal{O} \rangle_\Omega=\langle \mathcal{O} \rangle_{\Omega_1}$, which only happens if $\omega(n)=e^{-in\theta}$ with $\theta$ an arbitrary parameter. Therefore, we conclude that from the existence of nontrivial configurations plus the Cluster Decomposition Principle it follows that the vacuum of QCD must be given by \eqref{eq:thetavacuum}.
On the other hand, if we take \eqref{eq:thetavacuum} as given, then $\omega(n)=e^{-in\theta}$ and $\langle \mathcal{O} \rangle_\Omega=\langle \mathcal{O} \rangle_{\Omega_1}$, which is the statement of the CDP.
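In other words, the factorization of Eq.\ \eqref{eq:cluster2} into independent sums over $n$ and $m$ amounts to the functional equation
\begin{equation}
\omega(n+m)=\omega(n)\,\omega(m)
\quad\Longrightarrow\quad
\omega(n)=\omega(1)^{n}=e^{-in\theta},
\end{equation}
where in the last step we write $\omega(1)=e^{-i\theta}$, with $\theta$ real for weights of unit modulus.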
This implies that the Cluster Decomposition Principle can be derived from the existence of instantons with non-trivial winding number which, as we saw in the previous section, leads to the $\theta$-vacuum. But it also implies that the $\theta$-vacuum is the only vacuum state compatible with the CDP if such instanton configurations are present.
\subsection{Lowdon's theorem}
Following \cite{Lowdon2016} we will assume no mass gap in the locally quantized linear space of QCD $\mathcal{V}$, i.e.\ no mass gap before implementing Becchi-Rouet-Stora-Tyutin (BRST) reduction to the physical degrees of freedom of the theory. From such a space, we construct the physical Hilbert space, which we will denote as $\mathcal{V}_{phys}$. With these assumptions, Lowdon \cite{Lowdon2016} shows that the violation of the CDP implies that the correlator strength between clusters of gluons increases at large distances. This is a sufficient condition for confinement.
It is important to notice that the absence of a mass gap in $\mathcal{V}$ does not exclude a mass gap in the physical space $\mathcal{V}_{phys}$. Nonetheless, there is no strong evidence that supports or disproves the existence of a mass gap in $\mathcal{V}$, while it is commonly accepted that $\mathcal{V}_{phys}$ does have a mass gap. Under these assumptions, certain lattice results (negativity of the Schwinger functions for the quark and gluon propagators) presented in the aforementioned paper \cite{Lowdon2016} are understood as evidence for confinement.
\section{Conclusion: forbidding tunneling implies confinement}
Let us now combine all the different properties discussed above.
If tunneling between vacua with different winding numbers were not present in our theory, we could exclude instanton configurations and therefore restrictions on the form of the $\theta$ vacuum would disappear. Hence vacua different from \eqref{eq:thetavacuum} would be possible, such as
\begin{equation}
\label{eq:thetavacuumgen}
\lvert \theta \rangle=\sum_{n=-\infty}^\infty \omega(n)\lvert n\rangle,
\end{equation}
with more general weights $\omega(n)$.
It is important to notice that these vacua would not be invariant under the so-called large gauge transformations, i.e., we should exclude $U_{n}$ as a gauge symmetry, leaving only the so-called small gauge transformations (i.e., those connected to the identity) as legitimate gauge transformations for the model.
In such a situation, the restrictions on the topological structure of the vacuum derived from the CDP would no longer hold, and therefore under the hypothesis made in the previous section, the vacuum may be different and this mechanism would lead to QCD confinement.
It is important to notice that while our analysis so far has been based on the pure gauge sector of the theory, our proposal would also have implications on the fermion sector of the theory, the most relevant one being the problem of the $U(1)$--anomaly. While in the limit of massless quarks it is known that the tunneling rate is zero \cite{Vainshtein:1981wh}, it is also known that in the case of heavy fermions, instantons (or the nontrivial topology of the gauge group) are an efficient tool to explain the $U(1)$-anomaly (see \cite{Strocchi2019}).
Assuming this limitation, we conclude that the change of the $\theta$-vacuum structure, for which the absence of tunneling (or, equivalently, the non-trivial topology of the gauge group) is required, is a sufficient condition for confinement under the hypotheses made in \cite{Lowdon2016}, since with more general vacua the CDP would fail, as we saw above. The change in the vacuum state implies that the large gauge transformations of the form \eqref{eq:largeGenerator} are no longer considered a symmetry and the gauge group is reduced to the subset of transformations connected with the identity.
When we were finishing this work we encountered \cite{dvali2022}, in which the existence of axions appears as a consistency requirement imposed by the S-matrix formulation with gravity. Axions are explained in this formalism by promoting the Chern-Simons form \eqref{eq:ChernSimonsDensity} to a field without propagating degrees of freedom. From this perspective this is a massless theory with gauge symmetry. Nonetheless, if we add a two-form field $B$, which plays the (dual) role of the axion, there is a Higgs phase in which the gauge redundant form $\operatorname{cs}_A$ acquires a mass by `eating up' $B$. This axion mechanism leads to an expected value of the Chern class \eqref{eq:chernterm} which is unambiguously zero.
In \cite{dvali2022} the discussion leads to $\theta=0$, as is the goal of any axion mechanism. Here one could speculate and consider a step forward: since the expected value of \eqref{eq:chernterm} is unambiguously zero, one may assume that this, effectively, excludes instantons from the semiclassical picture. This treatment would promote axions to a potential mechanism to forbid tunneling, triggering the chain of implications exposed in this paper.
\begin{acknowledgments}
\textbf{Acknowledgments}
The authors would like to thank V. Azcoiti, J. L. Cortés, E. Follana, V. Laliena, A. Cherman and J. J. Ruiz Lorenzo for very useful discussions. Special thanks are due to Georgi Dvali for the correspondence on essential points of our work and his. C.B. and D.M. acknowledge financial support by Gobierno de Aragón through the grants defined in ORDEN IIU/1408/2018 and ORDEN CUS/581/2020 respectively.
\end{acknowledgments}
\section{Introduction}
Calculating distances to cosmological objects remains one of the most important steps required for probing cosmology. These distances are given by the distance-redshift relation, and hence one needs very accurate measures of redshift to be confident in the inferred distances. Ideally, high resolution spectra would be obtained for every object, enabling a precise measurement of the redshift. However, with current and future surveys such as the Dark Energy Survey (DES) \citep{dark2005dark, DES_more}, Euclid \citep{amendola2018cosmology}, and the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) \citep{tyson2003lsst, Ivezi__2019}, even with large spectroscopic surveys such as the Dark Energy Spectroscopic Instrument (DESI) \citep{flaugher2014dark, desi_2}, only tens of millions of galaxies will have spectroscopy performed, despite hundreds of millions of galaxies being observed.
In the absence of real spectroscopic measurements, estimating the photometric redshifts (Photo-Z) is the only viable route available for scientists. There are two major techniques used for photometric redshift estimation: template fitting (e.g.\ \citet{benitez2000bayesian}) and machine learning (ML) (e.g.\ \citet{collister2004annz}). Both methods rely on the photometric information produced by the survey, usually given as magnitudes in different colour bands. These magnitudes act as approximate measures of the underlying spectral energy distribution (SED) of the observed object, and by appropriately reconstructing the SED, a corresponding redshift can be inferred \citep{bolzonella2000photometric}.
Template fitting methods use a small and fixed set of template spectra for the estimations, and inherently rely on the assumption that the best fitting SED template provides the true representation of the observed SED. There are benefits of template methods, such as the ability to incorporate physical information, like dust extinction, into the model. However, embedding such physical constraints requires very precise calibration and an accurate model~\citep{benitez2000bayesian}.
Machine learning techniques, on the other hand, do not have any explicit model for capturing the physical information of the objects or of the estimation process. Instead, ML techniques rely on a training dataset with spectroscopic redshifts from observed or simulated (or a combination of both) data for inferring an estimation model. More specifically, supervised learning models rely on the guiding principle that, with sufficient examples of input-output pairs, an estimation model can be inferred by understanding the latent variables of the process. In other words, ML methods derive a suitable functional mapping between the photometric observations and the corresponding redshifts.
The learning process relies on a labelled dataset consisting of a set of magnitudes in each wavelength band (the inputs) and corresponding true values of the spectroscopic redshifts (the output labels or ground-truth). The learning model, such as a random forest or neural network, learns the mapping, which can be non-linear. It has been shown that the functional mapping learned through supervised learning outperforms the template-based methods~\citep{abdalla2011comparison}.
Although the usage of ML in handling this problem has become very common~\citep{pasquet2019photometric, d2018photometric, hoyle2016measuring}, there is still no comprehensive study outlining the overall understanding of different ML methods in handling the Photo-Z problem. In fact, this is a common problem across all domains of science, and as such, the notion of AI benchmarking is an upcoming challenge for the AI and scientific community. This is particularly true in light of recent developments in the ML and AI domains, such as the deep learning revolution~\citep{sejnowski2018deep}, technological developments in surveys~\citep{dewdney2009square}, the ability to generate or simulate synthetic data~\citep{springel2005cosmological}, and finally the progress in the computer architecture space, such as the emergence of GPUs~\citep{kirk2007nvidia}.
The notion of benchmarking~\citep{dongarra2003linpack} has conventionally been about how a given architecture (or an aspect of a given architecture) performs for a given problem, such as the LINPACK challenge~\citep{dongarra1979linpack}. However, in our case, the focus is broader than just performance. Our motivation here is many-fold, including understanding how different ML models compare when estimating the redshifts, how these techniques perform when the available training data is scaled, and finally how these techniques scale for inference. Furthermore, one of the key challenges here is the identification of appropriate metrics or figures of merit for comparing these models across different cases.
We intend to answer some of these questions in this paper by introducing this as a representative AI benchmarking problem from the astronomical community. The benchmarks will include several baseline reference implementations covering different ML models and address the challenges outlined above. The rest of this paper is organised as follows: in Section~\ref{sec:data} we describe the dataset used and include discussions on the features selected. In Section~\ref{sec:method} we briefly describe the machine learning models that were evaluated in the study, followed by descriptions of the optimisation and benchmarking processes and the different metrics that are part of our analyses. The results are then presented in Section~\ref{sec:results} along with our observations, and we conclude the paper in Section~\ref{sec:conclusion} with directions for further work.
\section{Data}
\label{sec:data}
The data used in our analysis comes entirely from the Sloan Digital Sky Survey (SDSS) \citep{york2000sloan}. Using its dedicated 2.5 meter telescope at Apache Point Observatory \citep{gunn20062}, SDSS is one of the largest public surveys with over 200 million photometric galaxies and 3 million useful galaxy spectra as of data release 12 (DR12) \citep{alam2015eleventh}.
In this work we downloaded 1,639,348 of these galaxies with spectroscopic data available to be used by the machine learning algorithms. The spectroscopic redshift was required as it was taken to be the ground truth for the redshift that the algorithms were trying to predict using the magnitudes of each galaxy. SDSS took images using five different optical filters (u, g, r, i, z), and as a result of these different wavelength bands, there were five magnitudes for each observed galaxy \citep{eisenstein2011sdss}.
The 1.6 million galaxies used in this investigation came from a cleaned dataset in which all five magnitudes were required to have been measured. In many galaxy observations there can be a missing value in one of the filters, which would negatively impact the redshift estimation. By only using galaxies with complete photometry we ensured that our comparison of methods was not also being affected by the kind of galaxies within the different sized datasets.
Furthermore, the redshift range of the galaxies used was constrained to only have galaxies with a redshift, $z < 1$. As there are far fewer galaxies with measured spectroscopic redshifts greater than 1, we kept within this range to ensure that the training set would be representative and allow for reliable estimates to be generated. This meant that the benchmarking performed could be carried out without also having to take into account the effects that an unclean dataset might have had on the different machine learning algorithms.
The main features of the data used by the machine learning algorithms were the five magnitudes, which could also be combined to give the four colours that are simply the differences in magnitudes between neighbouring wavelength bands (u-g, g-r, r-i, and i-z). There were additional feature columns contained in the SDSS data which could have been added, such as the subclass of galaxy or the Petrosian radius \citep{petrosian1976surface, Soo}. However, adding these additional features would not have had a large impact on the results and could have added more issues due to incompleteness if a feature was not recorded for every galaxy. Instead, it was decided to use only the information from the five magnitudes as features, which we knew to be complete.
Finally, we also scaled each feature by subtracting its mean and dividing by its standard deviation to give unit variance. This ensured that the machine learning algorithms used were not being influenced by the absolute size of the values, where a difference in a feature's variance could result in it being seen as more important than other features. By randomly splitting the full dataset to form the training and testing sets, the subsets created kept the same distribution of redshift and were representative of the overall dataframe.
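As an illustration, this preprocessing step can be sketched with scikit-learn as follows (the input file names are hypothetical and stand in for our actual data loading):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: X holds the five magnitudes (u, g, r, i, z)
# per galaxy, y the spectroscopic redshifts, both already cut to z < 1.
X = np.load("magnitudes.npy")   # shape (n_galaxies, 5)
y = np.load("zspec.npy")        # shape (n_galaxies,)

# A random split keeps the redshift distribution of the subsets
# representative of the full dataframe.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Scale each feature to zero mean and unit variance, fitting the
# scaler on the training set only to avoid information leakage.
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
\end{verbatim}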
\section{Methodology} \label{sec:method}
With the data prepared, the first step of the machine learning process was to split the entire dataset to create a training set, testing set, and validation set, whereby the test and validation sets were kept unseen by the machine learning algorithms until after they had been trained using the training data. As part of the benchmarking process, the machine learning algorithms (described in Sec~\ref{ml_descriptions}) were trained and tested on many different sizes of datasets, and to do this the data was split randomly for each size of training and testing set required.
During training, the algorithms were also optimised by changing the hyperparameters. These are the parameters of the models that control how the machine learning algorithms create their mappings from the features to the redshift. The most complete way of optimising would be to perform brute force optimisation, where every combination of a defined grid of hyperparameters is tested. However, this is far more computationally intensive than random optimisation, which instead tests a random subset of the hyperparameter grid and provides a good estimate of the best hyperparameters. The grids of hyperparameters tested for each algorithm are given in Table~\ref{tab:hyperparameters}, along with the selected parameters; a sketch of this procedure is given below.
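A minimal sketch of this random search for one of the models, using scikit-learn's \texttt{RandomizedSearchCV} with an abridged version of the Random Forest grid from Table~\ref{tab:hyperparameters}, might read:
\begin{verbatim}
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Abridged Random Forest grid; ranges follow Table 1.
param_distributions = {
    "n_estimators": randint(1, 201),
    "max_features": randint(1, 6),
    "min_samples_leaf": randint(1, 101),
    "min_samples_split": randint(2, 101),
}

# Random search over 1000 sampled configurations with 3-fold
# cross validation, minimising the mean squared error.
search = RandomizedSearchCV(
    RandomForestRegressor(), param_distributions, n_iter=1000,
    scoring="neg_mean_squared_error", cv=3, random_state=0)
search.fit(X_train, y_train)
print(search.best_params_)
\end{verbatim}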
\begin{table*}
\centering
\caption{Grids of hyperparameters that were searched to test and compare each machine learning algorithm, along with the hyperparameters that were selected by the random optimisation. The arrays of hyperparameters were chosen to give a good overview of different possible configurations of the algorithms, and by varying the parameters which had the greatest impact on the algorithms, we ensured that we found a good representation of the `best' performing algorithms.}
\label{tab:hyperparameters}
\begin{tabular}{cccc}
\hline
Classifier & Hyperparameter & Array of Values Searched & Selected Value \\
\hline
LR & ``fit intercept" & [True, False] & \textbf{True}\\
& ``normalize"& [True, False] & \textbf{True} \\
\hline
kNN & ``no. neighbors" & [1, 200] & \textbf{21}\\
& ``weights" & [``uniform", ``distance"] & \textbf{``distance"}\\
& ``leaf size" & [10, 100] & \textbf{27}\\
& ``p" & [1, 4] & \textbf{2} \\
\hline
DT & ``max. features" & [1, 5, ``auto"] & \textbf{``auto"}\\
& ``min. samples split" & [2, 100] & \textbf{38}\\
& ``min. samples leaf" & [1, 100] & \textbf{64}\\
& ``min. weight fraction leaf" & [0, 0.4] & \textbf{0}\\
& ``criterion" & [``mse", ``mae"] & \textbf{mse}\\
\hline
BDT & ``no. estimators" & [1, 200] & \textbf{88}\\
& ``loss" & [``ls", ``lad", ``huber", ``quantile"] & \textbf{``lad"}\\
& ``max. features" & [1, 5] & \textbf{4} \\
& ``max. depth" & [1, 20] & \textbf{17} \\
& ``min. samples split" & [2, 100] & \textbf{46} \\
& ``min weight fraction leaf" & [0, 0.4] & \textbf{0} \\
\hline
RF & ``no. estimators" & [1, 200] & \textbf{94}\\
& ``max. features" & [1, 5] & \textbf{4} \\
& ``min. samples leaf" & [1, 100] & \textbf{8}\\
& ``min. samples split" & [2, 100] & \textbf{13} \\
& ``min weight fraction leaf" & [0, 0.4] & \textbf{0} \\
& ``criterion" & [``mse", ``mae"] & \textbf{mae}\\
\hline
ERT & ``no. estimators" & [1, 200] & \textbf{147}\\
& ``max. features" & [1, 5] & \textbf{4} \\
& ``min. samples leaf" & [1, 100] & \textbf{3}\\
& ``min. samples split" & [2, 100] & \textbf{87} \\
& ``min weight fraction leaf" & [0, 0.4] & \textbf{0} \\
& ``criterion" & [``mse", ``mae"] & \textbf{mse}\\
\hline
MLP & ``hidden layer sizes" & [(100, 100, 100), (100, 100), 100] & \textbf{(100, 100, 100)}\\
& ``activation" & [``tanh", ``relu"] & \textbf{``tanh"}\\
& ``solver" & [``sgd", ``adam"] & \textbf{``adam"}\\
& ``alpha" & [0.00001, 0.0001, 0.001, 0.01] & \textbf{0.01}\\
& ``tol" & [0.00001, 0.0001, 0.001, 0.01] & \textbf{0.00001}\\
& ``learning rate" & [``constant",``adaptive"] & \textbf{``constant"}\\
\hline
\end{tabular}
\end{table*}
To be able to optimise the algorithms, the decision first had to be made of which metric to optimise for. There are three main metrics used for regression problems such as this: the mean squared error (MSE), the mean absolute error (MAE), and the R squared score ($R^2$). The formulae for calculating each of these metrics are given below, where, for the $i$-th sample out of $n_{\text{samples}}$ in total, $\hat{z}_i$ is the predicted value and $z_i$ is the true value.
\begin{equation}
\text{MSE}(z, \hat{z}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} (z_i - \hat{z}_i)^2
\end{equation}
\begin{equation}
\text{MAE}(z, \hat{z}) = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}}-1} \left| z_i - \hat{z}_i \right|
\end{equation}
\begin{equation}
R^2(z, \hat{z}) = 1 - \frac{\sum_{i=0}^{n_{\text{samples}}-1} (z_i - \hat{z}_i)^2}{\sum_{i=0}^{n_{\text{samples}}-1} (z_i - \bar{z})^2}
\end{equation}
There are three additional metrics defined below that are commonly used to determine the performance of photometric redshift estimations: the bias (the average separation between prediction and true value), the precision (also $1.48 \times$ the median absolute deviation (MAD), which gives the expected scatter), and the outlier fraction (the fraction of predictions where the error is greater than a set threshold, here chosen to be $0.10$). Each of these metrics was also calculated, and the results are given in Table~\ref{tab:mag_results}.
\begin{equation}
\text{Bias} = \langle z_{\text{pred}} - z_{\text{spec}} \rangle
\end{equation}
\begin{equation}
\text{Precision} = 1.48 \times \text{median}\left(\frac{|z_{\text{pred}} - z_{\text{spec}}|}{1 + z_{\text{spec}}}\right)
\end{equation}
\begin{equation}
\text{Outlier Fraction} = \frac{N(|\Delta z| > 0.10)}{N_{\text{total}}}
\end{equation}
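These six metrics can be computed directly in NumPy; the following sketch assumes \texttt{z\_spec} and \texttt{z\_pred} are arrays of true and predicted redshifts:
\begin{verbatim}
import numpy as np

def photoz_metrics(z_spec, z_pred, threshold=0.10):
    """Compute the six metrics defined above."""
    diff = z_pred - z_spec
    mse = np.mean(diff ** 2)
    mae = np.mean(np.abs(diff))
    r2 = 1.0 - np.sum(diff ** 2) / np.sum((z_spec - z_spec.mean()) ** 2)
    bias = np.mean(diff)
    precision = 1.48 * np.median(np.abs(diff) / (1.0 + z_spec))
    outlier_fraction = np.mean(np.abs(diff) > threshold)
    return mse, mae, r2, bias, precision, outlier_fraction
\end{verbatim}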
As well as deciding which metric to optimise for, we introduced an extra stage in the optimisation which allowed for a time-considered optimisation (see Sec~\ref{time-considered optimisation}). We optimised the machine learning algorithms for MSE (aiming to minimise the MSE) and used a random optimisation with 1000 iterations to ensure a good estimate of the best hyperparameters for each algorithm. Furthermore, we used a 3-fold cross validation \citep{breiman1992submodel} to ensure that the algorithms were not overfitting (which could mean that the algorithms were able to perform well for the training data used but then fail to generalise), and that the results would be valid for any given dataset. Once optimised, each algorithm was then retrained and tested to give the final results presented in Sec~\ref{sec:results}, along with the benchmarking results, where the benchmarking process used is described in Sec~\ref{benchmarking}.
\subsection{Descriptions of Machine Learning Algorithms Tested}
\label{ml_descriptions}
The following algorithms were selected for testing as they are some of the most widely used machine learning algorithms, and all are available through the python package Scikit-Learn \citep{pedregosa2011scikit}. While a simple neural network (Multi-layer Perceptron) was included, we did not include any other examples of deep learning. This decision was made as deep learning algorithms perform best with many features (often thousands), and there is only so much information that the photometry could provide with the five magnitude features. Furthermore, it has been shown by \cite{hoyle2016measuring} that ``traditional" algorithms can perform equally well as deep learning methods, and that it might only be beneficial to use more computationally expensive deep learning models when directly using images as the training data \citep{pasquet2019photometric}.
\subsubsection{Linear Regression}
Linear Regression, or Ordinary Least Squares Regression, fits a linear model to the data with coefficients that act to minimize the sum of the squared residuals between the observations and the predictions from the linear approximation. The linear model requires independent features, as features that are correlated will give estimates that are very sensitive to random errors in the observations, resulting in a large variance \citep{hastie2009linear}.
\subsubsection{K-Nearest Neighbours}
K-Nearest Neighbours uses a predefined number, k, of data points in the training sample which are closest in Euclidean distance to the new point, whose value is then predicted based on those. This is an example of an instance based algorithm, where there is no general model used to make predictions but where the training data is stored. Although one of the simplest methods, being non-parametric can make it very successful, especially in cases with an irregular decision boundary. Increasing the value of k acts to reduce the effects of noise; however, it also makes the decision boundary less distinct, which can lead to underfitting \citep{knn}.
\subsubsection{Decision Trees}
Decision Trees \citep{Breiman1983ClassificationAR} are non-parametric algorithms whereby the data features are used to learn simple decision rules. The decision rules are basic if-then-else statements and are used to split the data into branches. The tree is then trained by recursively selecting the best feature split, which is taken to be the split which gives the highest information gain, or greatest discrepancy between the two classes. Typically, decision trees can produce results with high accuracy; however, they often generalise poorly, as they tend to be complex and overfitted. Instead, they can be combined in ensembles such as Boosted Decision Trees, Random Forests, or Extremely Randomised Trees.
\subsubsection{Boosted Decision Trees}
Boosted Decision Trees were the first ensemble method we considered. In boosting, the machine learning algorithm is repeatedly fitted to the same dataset, with the weights of the objects with higher errors increased at each iteration. This aims to result in an algorithm that can better handle the less common cases than a standard decision tree. The boosting can be generalised by using an arbitrary differentiable loss function which is then optimised \citep{BDT1}, and we found that for this problem the least absolute deviation loss function produced the best results.
\subsubsection{Random Forests}
Random Forests also take many decision trees to build an ensemble method, averaging the predictions of the individual trees to result in a model with lower variance than in the case of a single decision tree. This is done by adding two elements of randomness, the first of which is using a random subset of the training data which is sampled with replacement \citep{Bagging}. Second, the feature splits are found from a random subset of the features rather than using the split which results in the greatest information gain. This randomness can yield decision trees with higher errors, however, by averaging the predictions, the errors cancel out and the variance reduction yields a greatly improved model as well as removing the typical overfitting that occurs with single decision trees \citep{RF1}.
\subsubsection{Extremely Randomised Trees}
Extremely Randomised Trees \citep{ERT} is an algorithm very similar to Random Forests but with an additional step to increase randomness. Not only are the feature splits found from a random subset of the features, but the thresholds are also picked at random for each candidate feature, with the best of these random thresholds then used for the decision rules, instead of simply using the thresholds which give the greatest information gain. This acts to further reduce the variance compared to a Random Forest; however, it also results in a slightly greater bias.
\subsubsection{Multi-layer Perceptron}
The Multi-layer Perceptron is the simplest example of a fully connected, deep neural network, with at least three layers of nodes. It consists of the input layer, the output layer, and a minimum of one hidden layer, although more can be added. In the way that the perceptron learns to map the inputs to the target vector it is similar to logistic regression; however, it differs by the addition of one or more non-linear hidden layers, which allow it to approximate any continuous function \citep{WERBOS1988339, MLP}.
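For reference, all seven algorithms can be instantiated from Scikit-Learn as sketched below (only a few of the selected hyperparameters from Table~\ref{tab:hyperparameters} are shown; the full training configuration is as described above):
\begin{verbatim}
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor,
                              ExtraTreesRegressor)
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

models = {
    "LR": LinearRegression(),
    "kNN": KNeighborsRegressor(n_neighbors=21, weights="distance"),
    "DT": DecisionTreeRegressor(min_samples_leaf=64),
    "BDT": GradientBoostingRegressor(n_estimators=88),
    "RF": RandomForestRegressor(n_estimators=94),
    "ERT": ExtraTreesRegressor(n_estimators=147),
    "MLP": MLPRegressor(hidden_layer_sizes=(100, 100, 100),
                        activation="tanh"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, mean_squared_error(y_test, model.predict(X_test)))
\end{verbatim}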
\begin{table*}
\centering
\caption{Results of testing the seven machine learning algorithms described in Sec~\ref{ml_descriptions}. Each algorithm was trained using 10000 galaxies and tested using 5-fold cross validation to obtain the quoted standard deviation.}
\label{tab:mag_results}
\begin{tabularx}{\textwidth}{XXXXXXXX}
\toprule
\thead{} & \thead{Linear \\ Regression \\ (LR)} & \thead{k-Nearest \\Neighbours \\ (kNN)} & \thead{Decision \\ Tree \\ (DT)} & \thead{Boosted \\ Decision Tree \\ (BDT)} & \thead{Random \\ Forest \\ (RF)} & \thead{Extremely\\ Randomised \\ Trees (ERT)} & \thead{Multi-layer \\ Perceptron \\ (MLP)} \\
\midrule
\addlinespace[0.2cm]
MSE & 0.005714 $\pm$0.000577 & 0.004438 $\pm$0.000417 & 0.004631 $\pm$0.000407 & 0.004277 $\pm$0.000394 & 0.004221 $\pm$0.000423 & 0.004327 $\pm$0.000419 & 0.004701 $\pm$0.000499 \\
\addlinespace[0.2cm]
\midrule
\addlinespace[0.2cm]
MAE & 0.050931 $\pm$0.001679 & 0.040881 $\pm$0.001626 & 0.041827 $\pm$0.001452 & 0.038757 $\pm$0.001514 & 0.038504 $\pm$0.001484 & 0.040459 $\pm$0.001537 & 0.051260 $\pm$0.008874 \\
\addlinespace[0.2cm]
\midrule
\addlinespace[0.2cm]
$R^2$ & 0.865198 $\pm$0.009009 & 0.895208 $\pm$0.007215 & 0.890677 $\pm$0.006415 & 0.899017 $\pm$0.006822 & 0.900366 $\pm$0.007373 & 0.897861 $\pm$0.007198 & 0.871507 $\pm$0.014329 \\
\addlinespace[0.2cm]
\midrule
\addlinespace[0.2cm]
Bias & 0.039742 $\pm$0.000920 & 0.031109 $\pm$0.000977 & 0.032030 $\pm$0.000895 & 0.029428 $\pm$0.000893 & 0.029334 $\pm$0.000854 & 0.030927 $\pm$0.000947 & 0.034577 $\pm$0.002209\\
\addlinespace[0.2cm]
\midrule
\addlinespace[0.2cm]
Precision & 0.043421 $\pm$0.000578 & 0.031836 $\pm$0.000845 & 0.032895 $\pm$0.001137 & 0.028986 $\pm$0.000264 & 0.029279 $\pm$0.000766 & 0.031945 $\pm$0.000609 & 0.040837 $\pm$0.003310 \\
\addlinespace[0.2cm]
\midrule
\addlinespace[0.2cm]
Outlier Fraction & 0.060500 $\pm$0.005187 & 0.034800 $\pm$0.007033 & 0.033400 $\pm$0.007439 & 0.029300 $\pm$0.005183 & 0.029700 $\pm$0.004389 & 0.033400 $\pm$0.006304 & 0.037600 $\pm$0.002709 \\
\addlinespace[0.2cm]
\bottomrule
\end{tabularx}
\end{table*}
\subsection{Time-Considered Optimisation} \label{time-considered optimisation}
In the normal process of optimising machine learning algorithms, a single metric is chosen to minimise. If brute force optimisation is used, this produces an algorithm configured with the hyperparameters from the defined grid which give the best result for the metric (e.g. the lowest MSE). Although this algorithm by definition would have the best result for that metric, it is not necessarily the most useful or suitable model. The hyperparameters selected to minimise the error likely also act to increase the computational time required both in training and inference, resulting in a much slower model.
Rather than minimising a single metric, in time-considered optimisation we also consider the time taken by the models, both in training and inference. By setting an error tolerance we allow the model selection to suggest an alternative to the `best' model (the model which minimises the error metric), instead providing a model which has a higher error, kept below the tolerance, but in return also has faster training and inference times. In certain cases, such as training the Decision Trees, it was possible to achieve a two order of magnitude increase in efficiency while increasing the error by $< 10\%$.
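The selection step can be sketched as follows; the tuple structure is hypothetical and only illustrates the idea, not our actual implementation:
\begin{verbatim}
def time_considered_select(results, tolerance):
    """Pick the fastest configuration whose error lies within a
    relative `tolerance` of the best error found by the search.

    `results` is a list of (hyperparameters, error, runtime) tuples.
    """
    best_error = min(error for _, error, _ in results)
    admissible = [r for r in results
                  if r[1] <= best_error * (1.0 + tolerance)]
    # Among the admissible configurations, return the fastest one.
    return min(admissible, key=lambda r: r[2])
\end{verbatim}
For example, \texttt{time\_considered\_select(results, 0.10)} would return the fastest configuration within $10\%$ of the best error.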
For the purpose of benchmarking the machine learning algorithms in this paper, we set the error tolerance to machine precision (usually $10^{-16}$, effectively zero), resulting in the `best' model in terms of error. This decision was made as these optimised algorithms correspond to those most commonly used in other machine learning studies, where time-considered optimisation has not been implemented.
\subsection{Benchmarking} \label{benchmarking}
The benchmarking was performed by recording the system state (described by the time, CPU usage, memory usage, and disk I/O) throughout the process of running the machine learning algorithms. This allowed us to compare the efficiency of both the training and inference performance of the machine learning models and, when combined with the regression errors obtained, allowed for a complete description of the performance of the different methods.
The main focus of our benchmark was to investigate how training and testing times varied with different sizes of dataframes, and how the final redshift estimations would be affected. As such, we incrementally changed both the training and testing datasets and recorded the times taken, which allowed us to produce the plots shown in Figures \ref{fig:benchmarks_training_times} - \ref{fig:benchmarks_mse}.
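A minimal sketch of this instrumentation, assuming the \texttt{psutil} package (our actual benchmarking harness differs in detail), is:
\begin{verbatim}
import time
import psutil
from sklearn.ensemble import RandomForestRegressor

def benchmark(fn, *args):
    """Run fn(*args), recording wall time, CPU, memory and disk I/O."""
    proc = psutil.Process()
    io0 = psutil.disk_io_counters()
    mem0 = proc.memory_info().rss
    t0 = time.perf_counter()
    result = fn(*args)
    stats = {
        "time_s": time.perf_counter() - t0,
        # CPU utilisation of this process since the previous call.
        "cpu_percent": proc.cpu_percent(),
        "memory_mb": (proc.memory_info().rss - mem0) / 1e6,
        "disk_read_mb": (psutil.disk_io_counters().read_bytes
                         - io0.read_bytes) / 1e6,
    }
    return result, stats

# Example: training times for incrementally larger training sets.
for n in [100, 1000, 10000, 100000]:
    model = RandomForestRegressor(n_estimators=94)
    _, stats = benchmark(model.fit, X_train[:n], y_train[:n])
    print(n, stats["time_s"])
\end{verbatim}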
\begin{figure*}
\begin{multicols}{2}
\begin{adjustwidth}{-0.7cm}{}
\includegraphics[scale=0.34]{figures/training_times_benchmarks.png}\par
\end{adjustwidth}
\caption{Graph of the training time plotted against the number of galaxies used in the training set to show how each algorithm scales with different sizes of training datasets.
We saw that the simpler LR, kNN, and DT algorithms all begin as the fastest to train; however, the DT scaled very poorly and for large training sets became one of the slowest algorithms. Conversely, the ERT and MLP algorithms began as two of the slowest algorithms to train, but scaled much better than the rest and could be more useful for massive training datasets.}
\label{fig:benchmarks_training_times}
\includegraphics[scale=0.34]{figures/inference_times_with_training_benchmarks.png}\par
\caption{Graph of the inference time plotted against the number of galaxies used in the training set to show how each algorithm scaled with different sizes of training datasets (and a constant test set of 327870 galaxies).
We saw all algorithms other than LR and MLP exhibit a training bloat, whereby the inference time increased with the number of galaxies included in the training set; however, the algorithms' inference times generally increased by only a factor of $10$ despite the training dataset increasing by a factor of $10^4$.}
\label{fig:benchmarks_inference_times_training}
\end{multicols}
\begin{multicols}{2}
\begin{adjustwidth}{-0.7cm}{}
\includegraphics[scale=0.34]{figures/inference_times_with_testing_benchmarks.png}\par
\end{adjustwidth}
\caption{Graph of the inference time plotted against the number of galaxies used in the testing set to show how each algorithm scales with different sizes of testing datasets (and a constant training set of 983608 galaxies).
In inference we saw all algorithms scaling very similarly, the main difference being the RF and ERT, for which, between $10^2$ and $10^5$ galaxies in the test set, the inference time did not increase despite the number of galaxies requiring estimates growing by a factor of $10^3$. This meant that both algorithms ended up being faster at providing redshift estimations for larger test sets.}
\label{fig:benchmarks_inference_times_testing}
\includegraphics[scale=0.342]{figures/compare_scores_training_points.png}\par
\caption{Graph of the Mean Squared Error (MSE) plotted against the number of galaxies used in the training set to show how each algorithm's performance scales with different sizes of training datasets (and a constant test set of 327870 galaxies).
As expected, in general we saw all algorithms (other than LR) achieving a lower MSE as the number of galaxies included in the training set was increased. However, this improvement in error quickly plateaued, and past $10^4$ galaxies in the training set there was very little further reduction in error. }
\label{fig:benchmarks_mse}
\end{multicols}
\end{figure*}
\section{Results} \label{sec:results}
The results given in Table~\ref{tab:mag_results} show how the seven machine learning algorithms performed at producing photometric redshift estimations. Furthermore, Figure \ref{fig:redshifts} displays the true spectroscopic redshifts plotted against the photometric redshift estimates for each machine learning algorithm. We also plotted the distributions of the redshift estimations for each of the algorithms, as well as the true spectroscopic redshift, in a violin plot in Figure~\ref{fig:violin} to quickly see which algorithms were able to capture the correct distribution.
\begin{figure*}
\centering
\includegraphics[scale = 0.6]{figures/contour_plot_smaller.jpg}
\caption{Graphs of photometric redshift estimates against the true spectroscopic redshift where the lighter shaded contours display the more densely populated regions. From top left to bottom right - Linear Regression (LR), k-Nearest Neighbours (kNN), Multi-layer Perceptron (MLP), Decision Tree (DT), Boosted Decision Tree (BDT), Random Forest (RF), and Extremely Randomised Trees (ERT).}
\label{fig:redshifts}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale = 0.45]{figures/violin_plot_2.png}
\caption{Violin plots showing the kernel density estimation of the underlying distributions of photometric redshift estimates of each algorithm along with the true spectroscopic redshift. From left to right - True spectroscopic redshift (zspec), Linear Regression (LR), k-Nearest Neighbours (kNN), Decision Tree (DT), Boosted Decision Tree (BDT), Random Forest (RF), Extremely Randomised Trees (ERT), and Multi-layer Perceptron (MLP).}
\label{fig:violin}
\end{figure*}
From these results we saw that all algorithms were able to successfully provide photometric redshift estimations. Using the violin plots from Figure~\ref{fig:violin} we could see that the rough distribution was recovered by each algorithm, with the Multi-layer Perceptron (MLP) producing a slightly more similar shape to the true redshifts. However, from simply looking at the outputs shown in Figures~\ref{fig:redshifts} \& \ref{fig:violin} it would be very difficult to determine which algorithm would be best to use. While the Decision Tree (DT) might be excluded due to the estimates being put into bands at set redshifts, its errors were still found to be quite low and it outperformed both the Linear Regression (LR) and MLP algorithms.
Looking at the metrics in Table~\ref{tab:mag_results} alone, the Random Forest (RF) performed best, having the lowest errors with a mean absolute error $(MAE) = 0.0385$ and mean squared error $(MSE) = 0.0042$; however, the other algorithms k-Nearest Neighbours (kNN), Boosted Decision Tree (BDT), and Extremely Randomised Trees (ERT) all performed incredibly similarly, with $MAE < 0.042$ and $MSE < 0.0046$. Indeed, the BDT performed almost identically to the RF, with a slightly improved precision and outlier fraction, and with such close performances of all the other algorithms it was impossible to determine conclusively which algorithm would be the most useful. To be able to further differentiate between them and determine which would be the best algorithm to use, it was therefore necessary to use the benchmarking results.
The results of the benchmarking performed for each algorithm are plotted in Figure~\ref{fig:benchmarks_training_times} (that shows the speed of training with varying sizes of training datasets), Figures~\ref{fig:benchmarks_inference_times_training} \& \ref{fig:benchmarks_inference_times_testing} (that show the inference speeds with varying sizes of either training or testing datasets), and Figure~\ref{fig:benchmarks_mse} (that shows how the MSE varies as the number of galaxies in the training set increases). As shown by these figures, the fastest algorithm overall was LR, which remained the fastest both in training and inference with increasing sizes of training and testing datasets. This was perhaps not surprising as, out of the algorithms tested, it was the simplest model and as such required fewer computational resources both to train the model and to make its predictions. However, as LR also had by far the worst errors out of the algorithms tested (with errors around $30\%$ higher than those of the better performing algorithms), it seemed unlikely that it would ever be implemented for the problem of photometric redshift estimation.
Out of the other algorithms, the DT and MLP were the poorer performing in terms of error. The DT was the second fastest behind LR in terms of inference, using its simple decision rules to quickly obtain the redshift estimations; however, as it also resulted in only estimating certain redshift bands, the final estimates were not as useful as those of other algorithms. Furthermore, the DT was the worst scaling algorithm for training and became the second slowest algorithm to train on a million galaxies. The MLP was also one of the slowest algorithms tested, starting as the slowest to train with small training sets and also being one of the slowest in inference. Although it is the simplest example of deep learning, it suffered from being one of the more complex algorithms tested, and would perform better on even larger datasets with far more features, where it would have more chance to catch up to the other algorithms in both speed and error performance.
The remaining kNN, BDT, RF, and ERT algorithms all performed well in terms of error and were the hardest to differentiate between; however, using the benchmarking results it was possible to see how differently they scaled. kNN was the simplest of the four better performing algorithms, and using the nearest neighbours to produce its estimates resulted in the second fastest training times, only beaten by LR. Although kNN was very fast to train, it was the slowest in inference and exhibited a bad `training bloat' whereby the inference time increased as the number of galaxies in the training set was increased. While most other algorithms also displayed some level of this training bloat, it was worst for kNN due to the nature of its nearest neighbour search, which became more and more computationally expensive as more training points were added, and as such it would not be as useful an algorithm for giving estimates for large datasets.
Out of the three ensemble tree-based methods, the RF scaled the worst in terms of training, becoming the slowest algorithm to train on the 1 million galaxies. In contrast, the ERT scaled surprisingly well and became the third fastest algorithm in training, similar to kNN. In training the BDT was quite fast, scaling much better than the RF but worse than the ERT; however, when it came to inference the BDT scaled worse than both the ERT and RF and was the second slowest algorithm for large datasets. The RF and ERT scaled almost identically in inference, which made sense for such similar algorithms, both only being beaten by the much simpler LR and DT.
As a result, there seemed to be no clear best performing algorithm; rather, each algorithm could be useful in different situations. While the RF had the best error metrics, its poor scaling with increasing training data meant that it would only be the best algorithm for problems where it could be trained once, and it would be inefficient to use for problems which required the algorithm to be regularly retrained on large amounts of data. In that case the BDT, which had similar errors but was faster to train, could be a more useful alternative, and similarly, if the inference times were required to be lower, the ERT would be a good compromise.
\section{Conclusions} \label{sec:conclusion}
Producing reliable photometric redshift estimations will continue to be an incredibly important area of cosmology, and with future surveys producing more data than ever before it will be vital to ensure that the methods chosen to produce the redshifts can be run efficiently.
Here we showed how benchmarking can be used to provide a more complete view of how various machine learning algorithms' performances scale with differing sizes of training and testing datasets. By combining the benchmarking results with the regression metrics we were able to demonstrate how it is possible to distinguish between algorithms which appear to perform almost identically, and to suggest which could be better to implement in different scenarios. Furthermore, by suggesting a novel time-considered optimisation process which takes into account the benchmarking results during model selection, it was possible to provide additional insight into how machine learning algorithms can be fine-tuned to provide more appropriate models.
From our tests we determined that while the kNN, BDT, RF, and ERT methods all seemed to perform very similarly, each obtaining a good result of MSE $< 0.0046$, it was the RF which achieved the best metrics, and it was also one of the faster algorithms in inference. However, depending on which area of the pipeline an experiment requires to be faster, the RF method could also be inefficient, as it scaled worse than all other algorithms in training. Hence, for problems which require regular retraining of models on large datasets, one of the other algorithms such as the BDT or ERT could allow for a greater improvement. As large sky surveys producing enormous datasets will require the most efficient methods possible, it could also be necessary to investigate the use of deep learning neural networks, which could benefit the most when using even larger amounts of data with more features.
Further work could be done to include a wider range of machine learning algorithms, including more deep learning networks, and to test them on larger simulated datasets to confirm their scaling. By making use of the time-considered optimisation it would also be possible to further examine the trade-offs between minimising errors and the training/inference times in each individual algorithm. We could also run the benchmarks on a variety of computer architectures, making use of GPUs which have the potential to speed up the algorithms that are most parallelisable, as well as allowing us to examine the environmental impact of running such computationally expensive tasks.
\section*{Acknowledgements}
B.H. was supported by the STFC UCL Centre for Doctoral Training in Data Intensive Science (grant No. ST/P006736/1).
Authors also acknowledge the support from following grants: O.L.'s European Research Council Advanced Grant (TESTDE FP7/291329), STFC Consolidated Grants (ST/M001334/1 and ST/R000476/1),
J.T.'s UKRI Strategic Priorities Fund (EP/T001569/1), particularly the AI for Science theme in that grant and the Alan Turing Institute, Benchmarking for AI for Science at Exascale (BASE), EPSRC ExCALIBUR Phase I Grant (EP/V001310/1).
Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section*{Data Availability}
The data used in this paper came entirely from the Sloan Digital Sky
Survey data release 12 (SDSS-DR12), and is openly available from: \url{https://www.sdss.org/dr12/}.
\bibliographystyle{mnras}
\section{Introduction}\label{sec1}
\noindent
In the last few years, nonlocal operators have gained increasing relevance, because they arise in a number of applications, in fields such as game theory, finance, image processing, and optimization, see \cite{A, BV, LC, RO} and the references therein. \\
The main reason is that nonlocal operators are the infinitesimal generators of L\'{e}vy-type stochastic processes. A L\'{e}vy process is a stochastic process with independent and stationary increments; it represents the random motion of a particle whose successive displacements are independent and statistically identical over different time intervals of the same length.
These processes extend the concept of Brownian motion, where the infinitesimal generator is the Laplace operator.\\
\noindent The linear operator $L_K$ is defined for any sufficiently smooth function
$u:\mathbb{R}^n \rightarrow \mathbb{R}$ and all $x \in \mathbb{R}^n$ by
$$\mathit{L}_K u(x)= P.V. \int_{\mathbb{R}^n} (u(x)-u(y))K(x-y)\,dy,$$ where
$$K(y)=a\Bigl(\cfrac{y}{|y|}\Bigr)\cfrac{1}{|y|^{n+2s}}$$
is a singular kernel for a suitable function $a$.
\noindent The infinitesimal generator $L_K$ of any L\'{e}vy process is defined in this way, under the hypotheses that the process is symmetric and that the measure $a$ is absolutely continuous on $S^{n-1}$.
In the particular case $a\equiv 1$ we obtain the fractional Laplacian operator $(-\Delta)^s$. \\
Among all the nonlocal operators we choose the anisotropic type, because we want to consider
L\'{e}vy processes that are as general as possible.\\
\noindent In order to explain our choice, we observe that the \emph{nonlocal evolutive equation}
\[
u_t(x,t)+\mathit{L_K}u(x,t)=0
\]
naturally arises from a probabilistic process in which a particle moves randomly in the space subject to a probability that allows long jumps with a polynomial tail \cite{BV}. In this case, at each step the particle selects randomly a direction $v \in S^{n-1}$ with the probability density $a$, differently from the case of the fractional heat equation \cite{BV}.
Another probabilistic motivation for the operator $L_K$ arises from a \emph{pay-off} approach \cite{BV}-\cite{RO}.\\
\noindent In this paper we study the nonlinear Dirichlet problem
\[
\begin{cases}
L_K u = f(x,u) & \text{in $\Omega$ } \\
u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$,}
\end{cases}
\]
where $\Omega\subset\mathbb{R}^n$ is a bounded domain with a $C^{1,1}$ boundary, $n>2s$, $s\in(0,1)$, and $f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a Carath\'{e}odory function.\\
The choice of the functional setting $X(\Omega)$, which will be defined later on, is extremely delicate and is crucial for proving our results.
By the results of Ros Oton in \cite{RO}, if $a$ is nonnegative the Poincar\'{e} inequality and regularity results still hold, therefore they are used to solve linear problems; on the other hand, by results of Servadei and Valdinoci in \cite{SV1}, if $a$ is positive $X(\Omega)$ is continuously embedded in $L^q(\Omega)$ for all $q\in[1, 2^*_s]$ and compactly for all $q\in[1, 2^*_s)$, and these tools are necessary to solve nonlinear problems.
Here the fractional critical exponent is $2^*_s=\frac{2n}{n-2s}$ for $n>2s$. In analogy with the classical cases, if $n<2s$ then $X(\Omega)$ is embedded in $C^{\alpha}(\overline{\Omega})$ with $\alpha=\frac{2s-n}{2}$ \cite[Theorem 8.2]{DNPV}, while in the limit case $n=2s$ it is embedded in $L^q(\Omega)$ for all $q \geq 1$.
Therefore, due to Corollary 4.53 and Theorem 4.54 in \cite{DDE}, we can state that the results of this paper hold true even when $n\leq 2s$, but we only focus on the case $n>2s$, with subcritical or critical nonlinearities, to avoid trivialities (for instance, the $L^\infty$ bounds are obvious for $n<2s$).
Note that $n\leq2s$ requires $n=1$, hence this case falls into the framework of ordinary nonlocal equations.
In the limit case $n=1$, $s=\frac{1}{2}$ the critical growth for the nonlinearity is of exponential type, according to the fractional Trudinger-Moser inequality. Such case is open for general nonlocal operators, though some results are known for the operator $(-\Delta)^{\frac{1}{2}}$, see \cite{IS}.\\
An alternative way to preserve regularity results is taking kernels bounded between two positive constants, for instance considering $a \in L^{\infty}(S^{n-1})$, but in this way the operator $L_K$ behaves exactly as the fractional Laplacian and, in particular, $X(\Omega)$ coincides with the Sobolev space $H^s_0(\Omega)$; consequently there is no real novelty.
These reasons explain our assumptions on the kernel $K$.\\
\noindent A typical feature of this operator is the \emph{nonlocality}, in the sense that the value of $L_K u(x)$ at any point $x \in \Omega$ depends not only on the values of $u$ on a neighborhood of $x$, but actually on the whole $\mathbb{R}^n$, since $u(x)$ represents the expected value of a random variable tied to a process randomly jumping arbitrarily far from the point $x$.
This operator is said to be \emph{anisotropic}, because the role of the function $a$ in the kernel is to weight the different spatial directions differently.\\
Servadei and Valdinoci have established variational methods for nonlocal operators and they have proved an existence result for equations driven by the integrodifferential operator $L_K$, with a general kernel $K$ satisfying the \textquotedblleft structural properties\textquotedblright\ \eqref{P2}-\eqref{P3}-\eqref{P4} below.
They have shown that problem \eqref{P} admits a Mountain Pass type solution, not identically zero, under the assumptions that the nonlinearity $f$ satisfies a subcritical growth condition, the Ambrosetti-Rabinowitz condition, and is superlinear at $0$, see \cite{SV1}-\cite{SV2}.\\
Ros Oton and Valdinoci have studied the linear Dirichlet problem, proving existence of solutions and maximum principles and constructing some useful barriers; moreover, they focus on the regularity properties of solutions, under weaker hypotheses on the function $a$ in the kernel $K$, see \cite{RO}-\cite{ROV}.\\
\noindent In \cite{IMS} Iannizzotto, Mosconi and Squassina have studied problem \eqref{P} with the fractional Laplacian and they have proved that, for the corresponding functional $J$, being a local minimizer for $J$ with respect to a suitable weighted $C^0$-norm is equivalent to being an $H_0^s(\Omega)$-local minimizer. Such a result represents an extension to the fractional setting of the classic result by Brezis and Nirenberg for the Laplacian operator \cite{BN}.\\
We hope to make a contribution to the knowledge of nonlocal anisotropic operators, using tools already existing in the literature to prove new results, such as $L^{\infty}$-bounds and the principle of equivalence of minimizers. We have extended this minimizers principle to the case of the anisotropic operator $L_K$, considering a suitable functional analytical setting instead of $H_0^s$.
This last fact has allowed us to prove a multiplicity result: under suitable assumptions we show that problem \eqref{P} admits at least three nontrivial solutions, one positive, one negative and one of unknown sign, using variational methods and, in particular, Morse theory.\\
The paper has the following structure: in Section 2 we compare different definitions of the operator $L_K$, in Section 3 we recall the variational formulation of our problem, together with some results from critical point theory. In Section 4 we prove an $L^{\infty}$ bound on the weak solutions and the equivalence of minimizers in the two topologies $C_{\delta}^0(\overline{\Omega})$ and $X(\Omega)$, respectively. Moreover, we deal with an eigenvalue problem driven by the nonlocal anisotropic operator $L_K$ and we discuss some properties of its eigenvalues and eigenfunctions. In Section 5 we prove a multiplicity result, and in the Appendix we study a general Hopf's lemma where the nonlinearity is slightly negative.
\section{The nonlocal anisotropic operator $L_K$}\label{sec2}
\noindent
\begin{Def} \label{D1}
The linear operator $L_K$ is defined for any $u$ in the Schwartz space $\mathit{S}(\mathbb{R}^n)$ as
\begin{equation}
\begin{split}
\mathit{L}_K u(x) & = P.V. \int_{\mathbb{R}^n} (u(x)-u(y))K(x-y)\,dy \\
& =\lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n \setminus B_{\epsilon}(x)} (u(x)-u(y))K(x-y)\,dy,
\end{split}
\label{E1}
\end{equation}
where the singular kernel $K: \mathbb{R}^n \setminus \{0\} \rightarrow (0, +\infty)$ is given by
$$K(y)=a\Bigl(\cfrac{y}{|y|}\Bigr)\cfrac{1}{|y|^{n+2s}}, \qquad a \in L^1(S^{n-1}), \inf_{S^{n-1}} a>0, \text{even}.$$
Here P.V. is a commonly used abbreviation for \textquotedblleft in the principal value sense" (as defined by the latter equation).
\end{Def}
\noindent In general, the functions $u$ we will be dealing with do not belong to $\mathit{S}(\mathbb{R}^n)$, as the optimal regularity for solutions of nonlocal problems is only $C^s(\mathbb{R}^n)$. We will give a weaker definition of $L_K$ in Subsection 3.1.\\
We notice that the kernel of the operator $L_K$ satisfies some important properties for the following results, namely
\begin{align}
& m K \in L^1(\mathbb{R}^n), \text{ where } m(y)=\min\{|y|^2,1\}; \label{P2} \\
& \text{there exists } \beta>0 \text{ such that } K(y)\geq \beta |y|^{-(n+2s)} \text{ for any } y \in \mathbb{R}^n \setminus \{0\}; \label{P3} \\
&K(y)=K(-y) \text{ for any } y \in \mathbb{R}^n \setminus \{0\}. \label{P4}
\end{align}
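Indeed, for the kernel of Definition \ref{D1}, property \eqref{P3} holds with $\beta=\inf_{S^{n-1}} a>0$, property \eqref{P4} holds since $a$ is even, and \eqref{P2} follows by passing to polar coordinates:
$$\int_{\mathbb{R}^n} m(y)K(y)\,dy = \int_{S^{n-1}} a(\theta)\,d\sigma(\theta) \int_0^{+\infty} \frac{\min\{r^2,1\}}{r^{1+2s}}\,dr < \infty,$$
since $a \in L^1(S^{n-1})$ and $s \in (0,1)$.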
\noindent The typical example is $K(y)= |y|^{-(n+2s)}$, which corresponds to $L_K=(-\Delta)^s$, the fractional Laplacian.\\
We remark that we do not assume any regularity on the kernel $K(y)$. As we will see, there is an interesting relation between the regularity properties of solutions and the regularity of kernel $K(y)$.\\
We recall some special properties of the case $a \in L^{\infty}(S^{n-1})$.
\begin{Oss} \label{Oss1}
Due to the singularity at $0$ of the kernel, the right-hand side of \eqref{E1} is not well defined in general. In the case $s \in (0,\frac{1}{2})$ the integral in \eqref{E1} is not really singular near $x$. Indeed, for any $u \in \mathit{S}(\mathbb{R}^n)$, $a \in L^{\infty}(S^{n-1})$ we have
\begin{align*}
& \int_{\mathbb{R}^n} \cfrac{|u(x)-u(y)|}{|x-y|^{n+2s}} \; a\Bigl(\cfrac{x-y}{|x-y|}\Bigr)\,dy \\
&\leq C ||a||_{L^\infty} \int_{B_R} \cfrac{|x-y|}{|x-y|^{n+2s}}\,dy
+ C ||a||_{L^\infty} ||u||_{L^\infty} \int_{\mathbb{R}^n \setminus B_R} \cfrac{1}{|x-y|^{n+2s}}\,dy\\
&=C \left(\int_{B_R} \cfrac{1}{|x-y|^{n+2s-1}}\,dy +\int_{\mathbb{R}^n \setminus B_R} \cfrac{1}{|x-y|^{n+2s}}\,dy \right)< \infty,
\end{align*}
where $C$ is a positive constant depending only on the dimension and on the $L^{\infty}$ norms of $u$ and $a$, see \cite[Remark 3.1]{DNPV} in the case of the fractional Laplacian.
\end{Oss}
\noindent The singular integral given in Definition \ref{D1} can be written as a weighted second-order
differential quotient as follows (see \cite[Lemma 3.2] {DNPV} for the fractional Laplacian):
\begin{Lem}
For all $u \in \mathit{S}(\mathbb{R}^n)$ $L_K$ can be defined as
\begin{equation}
\mathit{L}_K u(x) = \frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz, \quad x \in \mathbb{R}^n.
\label{E2}
\end{equation}
\end{Lem}
\begin{Oss}
We notice that the expression in \eqref{E2} does not require the P.V. formulation since, for instance, taking $u \in L^\infty(\mathbb{R}^n)$ and locally $C^2$, $a \in L^{\infty}(S^{n-1})$, and using a Taylor expansion of $u$ in $B_1$, we obtain
\begin{align*}
& \int_{\mathbb{R}^n}\cfrac{|2u(x)-u(x+z)-u(x-z)|} {|z|^{n+2s}} \; a\Bigl(\cfrac{z}{|z|}\Bigr)\,dz \\
& \leq c ||a||_{L^\infty} ||u||_{L^\infty}\int_{\mathbb{R}^n \setminus B_1} \cfrac{1}{|z|^{n+2s}}\,dz + ||a||_{L^\infty} ||D^2u||_{L^\infty(B_1)} \int_{B_1} \cfrac{1}{|z|^{n+2s-2}}\,dz < \infty.
\end{align*}
\end{Oss}
\noindent We show that the two definitions are equivalent, hence we have
\begin{align*}
\mathit{L}_K u(x) & = \frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz \\
&=\frac{1}{2} \lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n \setminus B_{\epsilon}} (2u(x)-u(x+z)-u(x-z)) K(z)\,dz,\\
&=\frac{1}{2} \lim_{\epsilon \rightarrow 0^+} \left[\int_{\mathbb{R}^n \setminus B_{\epsilon}}(u(x)-u(x+z)) K(z)\,dz + \int_{\mathbb{R}^n\setminus B_{\epsilon}}(u(x)-u(x-z)) K(z)\,dz\right],
\end{align*}
we make the change of variables $\tilde{z}=-z$ in the second integral, use the evenness of $K$ (property \eqref{P4}), and relabel $\tilde{z}$ as $z$:
$$=\lim_{\epsilon \rightarrow 0^+}\int_{\mathbb{R}^n \setminus B_{\epsilon}}(u(x)-u(x+z)) K(z)\,dz,$$
then we make another change of variables $z=y-x$ and we obtain the first definition:
$$=\lim_{\epsilon \rightarrow 0^+}\int_{\mathbb{R}^n \setminus B_{\epsilon}(x)}(u(x)-u(y)) K(x-y)\,dy.$$
It is important to stress that this holds only if the kernel is even, more precisely if the function $a$ is even.\\
\noindent There exists a third definition of $L_K$ that uses the Fourier transform: we can define it as
$$\mathit{L_K}u(x)= \mathcal{F}^{-1}(S(\xi)(\mathcal{F}u))$$
where $\mathcal{F}$ is the Fourier transform and $S: \mathbb{R}^n \rightarrow \mathbb{R}$ is the multiplier $S(\xi)=\int_{\mathbb{R}^n} (1-\cos(\xi \cdot z)) K(z)\,dz$.
We consider \eqref{E2} and we apply the Fourier transform to obtain
\begin{align*}
\mathcal{F}(\mathit{L_K}u) & =\mathcal{F}\left(\frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz\right)\\
& =\frac{1}{2} \int_{\mathbb{R}^n} \mathcal{F}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz \\
& =\frac{1}{2} \int_{\mathbb{R}^n} (2-e^{i \xi \cdot z} -e^{-i \xi \cdot z}) (\mathcal{F}u)(\xi) K(z)\,dz\\
&=\frac{1}{2} (\mathcal{F}u)(\xi) \int_{\mathbb{R}^n} (2-e^{i \xi \cdot z} -e^{-i \xi \cdot z}) K(z)\,dz \\
& = (\mathcal{F}u)(\xi) \int_{\mathbb{R}^n} (1-\cos(\xi \cdot z)) K(z)\,dz.
\end{align*}
We recall that in the case $a\equiv 1$, namely for the fractional Laplacian (see \cite[Proposition 3.3] {DNPV}), $S(\xi)=|\xi|^{2s}$.\\
If $a$ is unbounded from above, $L_K$ is better dealt with by a convenient functional approach.
\section{Preliminaries}\label{sec3}
\noindent
In this preliminary section, we collect some basic results that will be used in the forthcoming sections. In the following, for any Banach space $(X,||.||)$ and any functional $J \in C^1(X)$ we will denote by $K_J$
the set of all critical points of $J$, i.e., those points $u \in X$ such that $J'(u)=0$ in $X^*$ (dual space of $X$), while for all $c \in \mathbb{R}$ we set
$$K_{J}^c=\{u \in K_J: J(u)=c\},$$
$$J^c=\{u \in X: J(u) \leq c\} \quad (c \in \mathbb{R}),$$
beside we set
$$\overline{B}_{\rho}(u_0)=\{u \in X: ||u-u_0||\leq \rho\} \quad (u_0 \in X, \rho >0).$$
Moreover, in the proofs of our results, $C$ will denote a positive constant (whose value may change case by case).\\
Most results require the following \emph{Cerami compactness condition} (a weaker version of the Palais-Smale condition):\\
\emph{Any sequence $(u_n)$ in $X$, such that $(J(u_n))$ is bounded in $\mathbb{R}$ and
$(1+||u_n||)J'(u_n)\rightarrow 0$ in $X^{*}$ admits a strongly convergent subsequence}.
\subsection{Variational formulation of the problem}\label{subsec31}
\noindent
Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with $C^{1,1}$ boundary $\partial \Omega$, $n>2s$ and $s\in (0,1)$. We consider the following Dirichlet problem
\begin{equation}
\begin{cases}
\mathit{L}_K u = f(x,u) & \text{in $\Omega$ } \\
u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$.}
\end{cases}
\label{P}
\end{equation}
We remark that the Dirichlet datum is given in $\mathbb{R}^n \setminus \Omega$ and not simply on $\partial \Omega$, consistently with the non-local character of the operator $\mathit{L_K}$.\\
The nonlinearity $f: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ is a Carath\'{e}odory function which satisfies the growth condition
\begin{equation}
|f(x,t)|\leq C(1 + |t|^{q-1}) \text{ a.e. in } \Omega, \forall t \in \mathbb{R} \; (C>0, q \in [1, 2_{s}^{*}])
\label{G}
\end{equation}
(here $2_{s}^{*}:= 2n/(n-2s)$ is the fractional critical exponent). Condition \eqref{G} is referred to as a subcritical or critical growth condition if $q<2_{s}^{*}$ or $q=2_{s}^{*}$, respectively.\\
\noindent The aim of this paper is to study nonlocal problems driven by $L_K$ and with Dirichlet boundary data via variational methods. For this purpose, we need to work in a suitable fractional Sobolev space: for this, we consider a functional analytical setting that is inspired by the fractional Sobolev spaces $H_0^s(\Omega)$ \cite{DNPV}
in order to correctly encode the Dirichlet boundary datum in the variational formulation.\\
We introduce the space \cite{SV1}
$$X(\Omega)=\{u \in L^2(\mathbb{R}^n): [u]_{K} < \infty, u=0 \text{ a.e. in } \mathbb{R}^n \setminus \Omega \},$$
with
$$[u]_{K}^2 := \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy.$$
$X(\Omega)$ is a Hilbert space with inner product
$$\left\langle u,v \right\rangle_{X(\Omega)} = \int_{\mathbb{R}^{2n}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy,$$
which induces a norm
$$||u||_{X(\Omega)}= \left(\int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy \right)^\frac{1}{2}.$$
\noindent (We indicate for simplicity $||u||_{X(\Omega)}$ only with $||u||$; when we consider a norm in different spaces, we will specify it.)
\noindent By the fractional Sobolev inequality and the continuous embedding of $X(\Omega)$ in $H^s_0(\Omega)$ (see \cite[Subsection 2.2] {SV1}), we have that the embedding $X(\Omega)\hookrightarrow L^q(\Omega)$ is continuous for all $q \in [1,2_s^*]$ and compact if $q \in [1,2_s^*)$ (see \cite[Theorem 6.7, Corollary 7.2]{DNPV}).
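For later use, we recall explicitly the fractional Sobolev inequality (see \cite{DNPV}): there exists a constant $C=C(n,s)>0$ such that, for all $u \in X(\Omega)$,
$$||u||_{L^{2_s^*}(\Omega)}^2 \leq C \int_{\mathbb{R}^{2n}} \frac{|u(x)-u(y)|^2}{|x-y|^{n+2s}}\,dxdy \leq \frac{C}{\beta}\, ||u||^2,$$
where the second inequality follows from \eqref{P3}.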
\noindent We set for all $u \in X(\Omega)$
$$J(u)=\frac{1}{2} \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy - \int_{\Omega} F(x,u(x))\,dx,$$
where the function $F$ is the primitive of $f$ with respect to the second variable, that is $$F(x,t)=\int_0^t f(x,\tau)\, d\tau, \quad x\in \Omega, t \in \mathbb{R}.$$
Then, $J \in C^1(X(\Omega))$ and all its critical points are weak solutions of \eqref{P}, namely they satisfy
\begin{equation}
\int_{\mathbb{R}^{2n}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy=\int_{\Omega} f(x,u(x))v(x)\,dx, \quad \forall v \in X(\Omega).
\label{Fd}
\end{equation}
\subsection{Critical groups}\label{subsec32}
\noindent
We recall the definition and some basic properties of critical groups, referring the reader to the monograph \cite{MMP} for a detailed account on the subject.
Let $X$ be a Banach space, $J \in C^1(X)$ be a functional, and let $u \in X$ be an isolated critical point of $J$, i.e., there exists a neighbourhood $U$ of $u$ such that $K_J \cap U = \{u\}$, and $J(u)=c$. For all $k \in \mathbb{N}_0$, the \emph{k-th critical group of $J$ at $u$} is defined as
$$C_k(J,u)=H_k(J^c \cap U, J^c \cap U \setminus \{u\}),$$
where $H_k(\cdot , \cdot)$ is the k-th (singular) homology group of a topological pair with coefficients in $\mathbb{R}$.\\
\noindent The definition above is well posed, since homology groups are invariant under excision, hence $C_k(J,u)$ does not depend on $U$. Moreover, critical groups are invariant under homotopies preserving isolatedness of critical points.
We recall some special cases in which the computation of critical groups is immediate ($\delta_{k,h}$ is the Kronecker symbol).
\begin{Pro}{\rm\cite[Example 6.45]{MMP}} \label{M}
Let $X$ be a Banach space, $J \in C^1(X)$ a functional and $u \in K_J$ an isolated critical point of $J$. The following hold:
\begin{itemize}
\item
if $u$ is a local minimizer of $J$, then $C_k(J,u)=\delta_{k,0} \mathbb{R}$ for all $k \in \mathbb{N}_0$,
\item
if $u$ is a local maximizer of $J$, then
$C_k(J,u)=
\begin{cases}
0 & \text{if $\mathrm{dim}(X)=\infty$} \\
\delta_{k,m} \mathbb{R} & \text{if $\mathrm{dim}(X)=m$}
\end{cases}$ for all $k \in \mathbb{N}_0$.
\end{itemize}
\end{Pro}
\noindent Next we pass to critical points of mountain pass type.
\begin{Def}{\rm\cite[Definition 6.98]{MMP}}
Let $X$ be a Banach space, $J \in C^1(X)$ and $u \in K_J$. Then $u$ is of mountain pass type if, for any open neighbourhood $U$ of $u$, the set $\{y \in U: J(y)<J(u)\}$ is nonempty and not path-connected.
\end{Def}
\noindent The following result is a variant of the mountain pass theorem \cite{PS} and establishes the existence of critical points of mountain pass type.
\begin{Teo}{\rm\cite[Theorem 6.99]{MMP}} \label{MPT}
If $X$ is a Banach space, $J \in C^1(X)$ satisfies the (C)-condition, $x_0,x_1 \in X$, $\Gamma:=\{\gamma \in C([0,1],X): \gamma(0)=x_0, \gamma(1)=x_1\}$,
$c:=\inf_{\gamma \in \Gamma} \max_{t \in [0,1]} J(\gamma(t))$, and $c>\max\{J(x_0),J(x_1)\}$, then $K_{J}^c \neq \emptyset$ and, moreover, if $K_{J}^c$ is discrete, then we can find
$u \in K_{J}^c $ which is of mountain pass type.
\end{Teo}
\noindent We now describe the critical groups for critical points of mountain pass type.
\begin{Pro}{\rm\cite[Proposition 6.100]{MMP}} \label{Gcr}
Let $X$ be a reflexive Banach space, $J \in C^1(X)$, and $u \in K_J$ isolated with $c:=J(u)$ isolated in $J(K_J)$. If $u$ is of mountain pass type, then $C_1(J,u)\neq 0$.
\end{Pro}
\noindent If the set of critical values of $J$ is bounded below and $J$ satisfies the (C)-condition, we define for all $k \in \mathbb{N}_0$ the \emph{k-th critical group at infinity of $J$} as
$$C_k(J,\infty)=H_k(X, J^a),$$
where $a < \inf_{u \in K_J} J(u)$.\\
\noindent We recall the \emph{Morse identity}:
\begin{Pro}{\rm\cite[Theorem 6.62 (b)]{MMP}} \label{MI}
Let $X$ be a Banach space and let $J \in C^1(X)$ be a functional satisfying the (C)-condition, such that $K_J$ is a finite set. Then, there exists a formal power series
$Q(t)=\sum_{k=0}^{\infty} q_k t^k \; (q_k \in \mathbb{N}_0 \; \forall k \in \mathbb{N}_0)$ such that for all $t \in \mathbb{R}$
$$\sum_{k=0}^{\infty} \sum_{u \in K_J} \mathrm{dim}\, C_k(J,u) t^k = \sum_{k=0}^{\infty} \mathrm{dim}\, C_k(J,\infty) t^k + (1+t) Q(t).$$
\end{Pro}
\section{Results}\label{sec4}
\noindent
This section is organized as follows: in Subsection 1 we prove an a priori bound for the weak solutions of problem \eqref{P}, in both the subcritical and the critical case, and we recall some preliminary results, including the weak and strong maximum principles and a Hopf lemma.
In Subsection 2 we prove the equivalence of minimizers in the $X(\Omega)$-topology and in the $C_{\delta}^0({\overline{\Omega}})$-topology, respectively; in Subsection 3 we consider an eigenvalue problem for the nonlocal, anisotropic operator $L_K$.
\subsection{$L^{\infty}$ bound on the weak solutions}\label{subsec41}
\noindent We prove an $L^{\infty}$ bound on the weak solutions of \eqref{P} (in the subcritical case such a bound is uniform), following \cite[Theorem 3.2]{IMS}.
\begin{Teo} \label{SL}
If $f$ satisfies the growth condition \eqref{G}, then for any weak solution $u \in X(\Omega)$ of \eqref{P} we have $u \in L^{\infty}(\Omega)$. Moreover, if $q<2_{s}^{*}$ in \eqref{G}, then there exists a function
$M \in C(\mathbb{R}_+)$, only depending on the constants $C$, $n$, $s$ and $\Omega$, such that $$||u||_{\infty} \leq M(||u||_{2_{s}^{*}}).$$
\end{Teo}
\begin{proof}
Let $u \in X(\Omega)$ be a weak solution of \eqref{P} and set $\gamma=(2_s^*/2)^{1/2}$ and $t_k=\mathrm{sgn}(t) \min\{|t|,k\}$ for all $t \in \mathbb{R}$ and $k>0$.
We define $v=u|u|_k^{r-2} \in X(\Omega)$ for all $r \geq 2$ and $k>0$.
By (\ref{P3}) and applying the fractional Sobolev inequality we have that
$$||u|u|_k^{\frac{r}{2}-1}||_{2_s^*}^2 \leq C ||u|u|_k^{\frac{r}{2}-1}||_{H_0^s}^2 \leq
\frac{C}{\beta} ||u|u|_k^{\frac{r}{2}-1}||^2.$$
By \cite[Lemma 3.1]{IMS} and taking $v$ as a test function in \eqref{Fd}, we obtain
$$||u|u|_k^{\frac{r}{2}-1}||_{2_s^*}^2 \leq C ||u|u|_k^{\frac{r}{2}-1}||^2
\leq \frac{C r^2}{r-1} \left\langle u,v\right\rangle_{X(\Omega)}
\leq C r \int_{\Omega} |f(x,u)| |v|\,dx,$$
for some $C>0$ independent of $r \geq 2$ and $k>0$. Applying \eqref{G} and Fatou's lemma as
$k \rightarrow \infty$ yields
$$||u||_{\gamma^2 r} \leq C r^{1/r} \left( \int_{\Omega} (|u|^{r-1} + |u|^{r+q-2})\,dx\right) ^{1/r}.$$
The rest of the proof follows arguing as in \cite{IMS}, using a suitable bootstrap argument, providing in the end $u \in L^{\infty}(\Omega)$. The main difference is that such bound is uniform only in the subcritical case and not in the critical case.
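\noindent (A heuristic sketch of the bootstrap in the subcritical case, for the reader's convenience: one applies the inequality above along a sequence of exponents defined by $r_1+q-2=2_s^*$ and $r_{k+1}+q-2=\gamma^2 r_k$, so that at each step the right-hand side is controlled by the norm obtained at the previous one; since $q<2_s^*$, one checks that $r_k \nearrow \infty$, and since $(C r)^{1/r} \rightarrow 1$ as $r \rightarrow \infty$ the constants accumulate into a convergent product, yielding
$$||u||_{\infty} = \lim_{k \to \infty} ||u||_{\gamma^2 r_k} < \infty.$$
We refer to \cite{IMS} for the detailed argument.)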
\end{proof}
\noindent Theorem \ref{SL} allows us to set $g(x):=f(x,u(x))$ and to
rephrase the problem as the linear Dirichlet problem
\begin{equation}
\begin{cases}
\mathit{L}_K u = g(x) & \text{in $\Omega$} \\
u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$,}
\end{cases}
\label{L}
\end{equation}
with $g \in L^\infty(\Omega)$.
\begin{Pro}{\rm\cite[Proposition 4.1, Weak maximum principle]{RO}} \label{WmP}
Let $u$ be any weak solution to \eqref{L}, with $g \geq 0$ in $\Omega$. Then, $u \geq 0$ in $\Omega$.
\end{Pro}
\noindent We observe that the weak maximum principle also holds when the Dirichlet datum is given by $u=h$, with $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$.\\
For problem \eqref{L}, the interior regularity of solutions depends on the regularity of $g$, but it also depends on the regularity of $K(y)$ in the $y$-variable. Furthermore, if the kernel $K$ is not regular, then the interior regularity of $u$ will in addition depend on the boundary regularity of $u$.
\begin{Teo}{\rm\cite[Theorem 6.1, Interior regularity]{RO}} \label{IR}
Let $\alpha>0$ be such that $\alpha + 2s$ is not an integer, and $u \in L^{\infty}(\mathbb{R}^n)$ be any weak solution to $L_K u=g$ in $B_1$. Then,
$$||u||_{C^{2s+\alpha}(B_{1/2})}\leq C(||g||_{C^{\alpha}(B_1)}+||u||_{C^{\alpha}(\mathbb{R}^n)}).$$
\end{Teo}
\noindent It is important to remark that the previous estimate is valid also in the case $\alpha =0$ (in which the $C^{\alpha}$ norms have to be replaced by $L^{\infty}$ norms).
With no further regularity assumption on the kernel $K$, this estimate is sharp, in the sense that the norm $||u||_{C^{\alpha}(\mathbb{R}^n)}$ cannot be replaced by a weaker one.
Under the extra assumption that the kernel $K(y)$ is $C^{\alpha}$ outside the origin, the following estimate holds
$$||u||_{C^{2s+\alpha}(B_{1/2})}\leq C(||g||_{C^{\alpha}(B_1)}+||u||_{L^{\infty}(\mathbb{R}^n)}).$$
\noindent We focus now on the boundary regularity of solutions to \eqref{L}.
\begin{Pro}{\rm\cite[Proposition 7.2, Optimal H\"{o}lder regularity]{RO}} \label{Opt}
Let $g \in L^{\infty}(\Omega)$, and $u$ be the weak solution of \eqref{L}. Then, $$||u||_{C^s(\overline{\Omega})} \leq C ||g||_{L^{\infty}(\Omega)},$$
for some positive constant $C$.
\end{Pro}
\noindent Finally, we conclude that the solutions to \eqref{L} are $C^{3s}$ inside $\Omega$ whenever $g \in C^s$, but only $C^s$ on the boundary, and this is the best regularity that we can obtain. For instance, we consider the following torsion problem
\[
\begin{cases}
\mathit{L}_K u = 1 & \text{in $B_1$} \\
u = 0 & \text{in $\mathbb{R}^n \setminus B_1$.}
\end{cases}
\]
The solution $u_0:= (1-|x|^2)_{+}^s$ belongs to $C^s(\overline{B_1})$, but $u_0 \notin C^{s+\epsilon}(\overline{B_1})$ for any $\epsilon > 0$; as a consequence, we cannot expect solutions to be better than $C^s(\overline{\Omega})$.
\noindent While solutions of fractional equations exhibit good interior regularity properties, they may have a singular behaviour on the boundary. Therefore, instead of the usual space $C^1(\overline{\Omega})$, it is convenient to work in the following weighted H\"{o}lder-type spaces $C_{\delta}^{0}(\overline{\Omega})$ and $C_{\delta}^{\alpha}(\overline{\Omega})$, defined here below.\\
We set $\delta(x)=\mathrm{dist}(x,\mathbb{R}^n \setminus \Omega)$ with $x \in \overline{\Omega}$ and we define
$$C_{\delta}^0(\overline{\Omega})=\{u\in C^0(\overline{\Omega}):u/\delta^s \in C^0(\overline{\Omega})\},$$
$$C_{\delta}^{\alpha}(\overline{\Omega})=\{u\in C^0(\overline{\Omega}):u/\delta^s \in C^{\alpha}(\overline{\Omega})\} \quad (\alpha \in (0,1)),$$
endowed with the norms
$$||u||_{0,\delta}= \left\|\cfrac{u}{\delta^s}\right\|_{\infty}, \quad
||u||_{\alpha,\delta}= ||u||_{0,\delta} +
\sup_{x \neq y} \frac{|u(x)/\delta^s(x) - u(y)/\delta^s(y)|}{|x-y|^{\alpha}},$$
respectively. For all $0 \leq \alpha < \beta <1$ the embedding $C_{\delta}^{\beta}(\overline{\Omega}) \hookrightarrow C_{\delta}^{\alpha}(\overline{\Omega})$ is continuous and compact. Moreover, the positive cone $C_{\delta}^0(\overline{\Omega})_{+}$
has a nonempty interior given by
$$\mathrm{int}(C_{\delta}^0(\overline{\Omega})_{+})=\left\{u \in C_{\delta}^0(\overline{\Omega}): \frac{u(x)}{\delta^s(x)}>0 \text{ for all } x \in \overline{\Omega} \right\}.$$
\noindent The function $\frac{u}{\delta^s}$ on $\partial \Omega$ sometimes plays the role that the normal derivative $\frac{\partial u}{\partial \nu}$ plays in second order equations.
Furthermore, we recall that another fractional normal derivative can be considered, namely the one in formula (1.2) of \cite{DROV}.
\begin{Lem}{\rm\cite[Lemma 7.3, Hopf's lemma]{RO}} \label{Hopf}
Let $u$ be any weak solution to \eqref{L}, with $g \geq 0$. Then, either
$$u \geq c \delta^s \qquad \text{in } \overline{\Omega} \text{ for some } \; c>0 \quad
\text{or} \quad u \equiv 0 \text{ in } \overline{\Omega}.$$
\end{Lem}
\noindent Furthermore, the quotient $\frac{u}{\delta^s}$ is not only bounded, but it is also H\"{o}lder continuous up to the boundary. Using the explicit solution $u_0$ and similar barriers, it is possible to show that solutions $u$ satisfy $|u| \leq C \delta^s$ in $\Omega$.
\begin{Teo}{\rm\cite[Theorem 7.4]{RO}} \label{Rap}
Let $s \in (0,1)$, and $u$ be any weak solution to \eqref{L}, with $g \in L^{\infty}(\Omega)$. Then,
$$\left\|\cfrac{u}{\delta^s}\right\|_{C^{\alpha}(\overline{\Omega})} \leq C ||g||_{L^{\infty}(\Omega)}, \quad \alpha \in (0,s).$$
\end{Teo}
\begin{Oss}
The results in \cite{RO} hold even if $a\geq0$ in the kernel $K$.
\end{Oss}
\noindent We observe that Hopf's lemma involves the strong maximum principle. In the appendix we will see a more general version of Hopf's lemma, where the nonlinearity is allowed to be slightly negative, but this requires higher regularity of $f$. Moreover, we recall \cite[Proposition 2.5]{DI} for the fractional Laplacian analogue.
\subsection{Equivalence of minimizers in the two topologies}\label{subsec42}
In Theorem \ref{Equiv} we present a useful topological result, relating the minimizers in the $X(\Omega)$-topology and in the $C_{\delta}^0({\overline{\Omega}})$-topology, respectively.
This is an anisotropic version of the result of \cite{IMS}, previously proved in \cite[Proposition 2.5]{BCSS}, which in turn is inspired by \cite{BN}. In the proof of Theorem \ref{Equiv} the critical case, i.e. $q=2_s^*$ in \eqref{G}, presents a twofold difficulty: a loss of compactness, which prevents direct minimization of $J$, and the lack of a uniform a priori estimate for the weak solutions of \eqref{P}.
\begin{Teo} \label{Equiv}
Let \eqref{G} hold, $J$ be defined as above, and $u_0 \in X(\Omega)$. Then, the following conditions are equivalent:\\
i) there exists $\rho>0$ such that $J(u_0 + v)\geq J(u_0)$ for all
$v \in X(\Omega) \cap C_{\delta}^{0}(\overline{\Omega})$, $||v||_{0,\delta} \leq \rho$;\\
ii) there exists $\epsilon>0$ such that $J(u_0 + v)\geq J(u_0)$ for all
$v \in X(\Omega)$, $||v|| \leq \epsilon$.
\end{Teo}
\noindent We remark that, contrary to the result of \cite{BN} in the local case $s=1$, there is no relationship between the topologies of $X(\Omega)$ and $C_{\delta}^{0}(\overline{\Omega})$.
\begin{proof}
We define $J \in C^1(X(\Omega))$ as in Subsection 3.1.\\
We argue as in \cite[Theorem 1.1]{IMS}.\\
\textbf{i)} $\Rightarrow$ \textbf{ii)}\\
We first suppose $u_0=0$; then we can rewrite the hypothesis as
$$\inf_{u \in X(\Omega) \cap \overline{B}_{\rho}^{\delta}} J(u)=0,$$
where $\overline{B}_{\rho}^{\delta}$ denotes the closed ball in $C_{\delta}^0(\overline{\Omega})$
centered at $0$ with radius $\rho$.\\
We argue by contradiction: we assume i) and that there exist sequences $(\epsilon_n)$ in $(0,\infty)$ and $(u_n)$ in $X(\Omega)$ such that $\epsilon_n \rightarrow 0$, $||u_n|| \leq \epsilon_n$, and $J(u_n) < J(0)$ for all $n \in \mathbb{N}$.\\
We consider two cases:
\begin{itemize}
\item
If $q<2_s^*$ in \eqref{G}, then, by the compact embedding $X(\Omega) \hookrightarrow L^q(\Omega)$, $J$ is sequentially weakly lower semicontinuous in $X(\Omega)$, hence we may assume
$$J(u_n)= \inf_{\overline{B}_{\epsilon_n}^X} J <0,$$
where $\overline{B}_{\epsilon_n}^X$ denotes the closed ball in $X(\Omega)$
centered at $0$ with radius $\epsilon_n$.\\
Therefore there exists a Lagrange multiplier $\mu_n \leq 0$ such that for all $v \in X(\Omega)$
$$\left\langle J'(u_n),v\right\rangle=\mu_n \left\langle u_n,v\right\rangle_{X(\Omega)},$$
which is equivalent to $u_n$ being a weak solution of
\[
\begin{cases}
\mathit{L}_K u = C_n f(x,u) & \text{in $\Omega$ } \\
u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$,}
\end{cases}
\]
with $C_n=(1-\mu_n)^{-1} \in (0,1]$. By Theorem \ref{SL}, $||u_n||_{\infty} \leq C$, hence by Proposition \ref{Opt} and by Theorem \ref{Rap} we have $u_n \in C_{\delta}^{\alpha}(\overline{\Omega})$ and $||u_n||_{\alpha,\delta} \leq C$.
By the compact embedding $C_{\delta}^{\alpha}(\overline{\Omega}) \hookrightarrow C_{\delta}^0(\overline{\Omega})$, passing to a subsequence, $u_n \rightarrow 0$ in $C_{\delta}^0(\overline{\Omega})$; consequently, for
$n \in \mathbb{N}$ big enough we have $||u_n||_{0,\delta}\leq \rho$ together with $J(u_n)<0$, a contradiction.
\item
If $q=2_s^*$ in \eqref{G}, then we use a truncated functional
$$J_k(u)=\frac{||u||^2}{2} - \int_{\Omega} F_k(x,u(x))\,dx,$$
with $f_k(x,t)=f(x, \mathrm{sgn}(t) \min\{|t|,k\})$ and $F_k(x,t)=\int_0^t f_k(x,\tau)\, d\tau$,
to overcome the lack of compactness and of a uniform $L^{\infty}$-bound.
\end{itemize}
\textbf{Case $u_0 \neq 0$.}\\
Since $C_c^{\infty}(\Omega)$ is a dense subspace of $X(\Omega)$ (see \cite[Theorem 6]{FSV}, \cite[Theorem 2.6]{MBRS}) and
$J'(u_0) \in X(\Omega)^*$,
\begin{equation}
\left\langle J'(u_0),v\right\rangle =0
\label{PS}
\end{equation}
holds for all $v \in X(\Omega) \cap C_{\delta}^0(\overline{\Omega})$ (in particular for all
$v \in C_c^{\infty}(\Omega)$), and hence, by density, for all $v \in X(\Omega)$, i.e., $u_0$ is a weak solution of \eqref{P}. By the $L^{\infty}$-bound of Theorem \ref{SL}, we have $u_0 \in L^{\infty}(\Omega)$, hence
$f(\cdot,u_0(\cdot)) \in L^{\infty}(\Omega)$. Now Proposition \ref{Opt} and Theorem \ref{Rap} imply that $u_0 \in C_{\delta}^0(\overline{\Omega})$.
We set for all $v \in X(\Omega)$
$$\tilde{J}(v)=\frac{||v||^2}{2} - \int_{\Omega} \tilde{F}(x,v(x))\,dx,$$
with for all $(x,t) \in \Omega \times \mathbb{R}$
$$\tilde{F}(x,t)=F(x, u_0(x)+t)-F(x,u_0(x))-f(x,u_0(x))t.$$
$\tilde{J} \in C^1(X(\Omega))$ and the mapping
$\tilde{f}: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ defined by
$\tilde{f}(x,t)= \partial_t \tilde{F}(x,t)$ satisfies a subcritical growth condition of the type \eqref{G}.
Besides, by \eqref{PS}, we have for all $v \in X(\Omega)$
$$\tilde{J}(v)=\frac{1}{2}(||u_0+v||^2 - ||u_0||^2)
- \int_{\Omega} (F(x,u_0+v)-F(x,u_0))\,dx = J(u_0+v) - J(u_0),$$
in particular $\tilde{J}(0)=0$.
The hypothesis i) thus rephrases as
$$\inf_{v \in X(\Omega) \cap \overline{B}_{\rho}^{\delta}} \tilde{J}(v)=0$$
and by the previous cases we obtain the conclusion.\\
\textbf{ii)} $\Rightarrow$ \textbf{i)} \\
By contradiction: we assume ii) and we suppose there exists a sequence $(u_n)$ in
$X(\Omega)\cap C_{\delta}^{0}(\overline{\Omega})$ such that
$u_n \rightarrow u_0$ in $C_{\delta}^{0}(\overline{\Omega})$ and $J(u_n)<J(u_0)$.
Then, since convergence in $C_{\delta}^{0}(\overline{\Omega})$ implies uniform convergence in $\overline{\Omega}$, we have $\int_{\Omega} F(x,u_n)\,dx \rightarrow \int_{\Omega} F(x,u_0)\,dx$, so $J(u_n)<J(u_0)$ yields
$$\limsup_n ||u_n||^2 \leq ||u_0||^2,$$
in particular $(u_n)$ is bounded in $X(\Omega)$, so (up to a subsequence) $u_n \rightharpoonup u_0$ in $X(\Omega)$, hence, by \cite[Proposition 3.32]{B}, $u_n \rightarrow u_0$ in $X(\Omega)$.
For $n\in \mathbb{N}$ big enough we have $||u_n-u_0||\leq \epsilon$, a contradiction.
\end{proof}
\subsection{An eigenvalue problem}\label{subsec43}
We consider the following eigenvalue problem
\begin{equation}
\begin{cases}
\mathit{L_K} u = \lambda u & \text{in $\Omega$ } \\
u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$.}
\end{cases}
\label{EP}
\end{equation}
\noindent We recall that $\lambda \in \mathbb{R}$ is an \emph{eigenvalue} of $L_K$ provided there exists a nontrivial solution $u \in X(\Omega)$ of problem \eqref{EP}, and, in this case, any solution will be called an \emph{eigenfunction} corresponding to the eigenvalue $\lambda$.
\begin{Pro}
The set of the eigenvalues of problem \eqref{EP} consists of a sequence $\{\lambda_k \}_{k \in \mathbb{N}}$ with
$$ 0 < \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_k \leq \lambda_{k+1} \leq \cdots \quad \text{and} \quad \lambda_k \rightarrow + \infty \quad \text{as} \quad k \rightarrow + \infty,$$
with associated eigenfunctions $e_1, e_2, \cdots, e_k, e_{k+1}, \cdots$
such that
\begin{itemize}
\item
the eigenvalues can be characterized as follows:
\begin{align}
\lambda_1 & = \min_{u \in X(\Omega),\; ||u||_{L^2(\Omega)}=1}
\int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy, \label{A1} \\
\lambda_{k+1} & = \min_{u \in \mathbb{P}_{k+1},\; ||u||_{L^2(\Omega)}=1}
\int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy \quad \forall k \in \mathbb{N}, \label{Ak}
\end{align}
where $\mathbb{P}_{k+1}:= \{u \in X(\Omega) \; \mathrm{s.t.} \; \left\langle u,e_j \right\rangle_{X(\Omega)}= 0 \; \forall j=1, \cdots, k\};$
\item
there exists a positive function $e_1 \in X(\Omega)$, which is an eigenfunction corresponding to $\lambda_1$, attaining the minimum in \eqref{A1}, with $||e_1||_{L^2(\Omega)}=1$; moreover, for any $k \in \mathbb{N}$ there exists a nodal function $e_{k+1} \in \mathbb{P}_{k+1}$, which is an eigenfunction corresponding to $\lambda_{k+1}$, attaining the minimum in \eqref{Ak}, with
$||e_{k+1}||_{L^2(\Omega)}=1$;
\item
$\lambda_1$ is simple, namely the eigenfunctions $u \in X(\Omega)$ corresponding to $\lambda_1$ are $u=\zeta e_1$, with $\zeta \in \mathbb{R}$;
\item
the sequence $\{e_k\}_{k \in \mathbb{N}}$ of eigenfunctions corresponding to $\lambda_k$ is an orthonormal basis of $L^2(\Omega)$ and an orthogonal basis of $X(\Omega)$;
\item
each eigenvalue $\lambda_k$ has finite multiplicity, more precisely, if $\lambda_k$ is such that
$$\lambda_{k-1} < \lambda_k = \cdots = \lambda_{k+h} < \lambda_{k+h+1}$$
for some $h \in \mathbb{N}_0$, then the set of all the eigenfunctions corresponding to $\lambda_k$ agrees with
$$\mathrm{span}\{e_k, \ldots, e_{k+h}\}.$$
\end{itemize}
\end{Pro}
\begin{Oss}
The proof of this result can be found in \cite{SV2}, with the following differences due to the kind of kernel considered.
For $L_K$ with a general kernel $K$ satisfying \eqref{P2}-\eqref{P3}-\eqref{P4}, the first eigenfunction $e_1$ is non-negative and every eigenfunction is bounded, and no better regularity results are available \cite{SV2}. For the particular kernel $K(y)= a(\frac{y}{|y|})\frac{1}{|y|^{n+2s}}$ considered here, instead, we stress that the first eigenfunction is positive and all eigenfunctions belong to $C^s(\overline{\Omega})$, as in the case of the fractional Laplacian. More precisely, $e_1 \in \mathrm{int}(C_{\delta}^{0}(\overline{\Omega})_{+})$, by applying Lemma \ref{Hopf} and Theorem \ref{Rap}.
\end{Oss}
\section{Application: a multiplicity result }\label{sec5}
\noindent
In this section we present an existence and multiplicity result for the solutions of problem \eqref{P}, under condition \eqref{G} plus some further conditions; Theorem \ref{Equiv} will play an essential part in the proof.
This application is an extension to the anisotropic case of a result on the fractional Laplacian
\cite[Theorem 5.2]{IMS}. By a truncation argument and minimization, we show the existence of two constant sign solutions; then we apply Morse theory to find a third nontrivial solution.
\begin{Teo} \label{MR}
Let $f: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ be a Carath\'{e}odory function satisfying \\
i) $|f(x,t)|\leq a(1+|t|^{q-1})$ a.e. in $\Omega$ and for all $t \in \mathbb{R}$ $(a>0, 1<q<2_s^{*})$;\\
ii) $f(x,t)t \geq 0$ a.e. in $\Omega$ and for all $t \in \mathbb{R}$;\\
iii) $\lim_{t \to 0} \frac{f(x,t)-b |t|^{r-2} t}{t}=0$ uniformly a.e. in $\Omega$ $(b>0, 1<r<2)$;\\
iv) $\limsup_{|t| \to \infty} \frac{2 F(x,t)}{t^2} < \lambda_1$ uniformly a.e. in $\Omega$.\\
\noindent Then problem \eqref{P} admits at least three non-zero solutions
$u^{\pm} \in \pm \ \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$, $\tilde{u} \in C_{\delta}^0(\overline{\Omega})\setminus \{0\}$.
\end{Teo}
\begin{Ese}
As a model for $f$ we can take the function
\[
f(t):=
\begin{cases}
b |t|^{r-2} t + a_1 |t|^{q-2} t, & \text{if } |t| \leq 1, \\
\beta_1 t, & \text{if } |t|>1,
\end{cases}
\]
with $1<r<2<q<2_s^*$, $a_1, b >0$, $\beta_1 \in (0,\lambda_1)$ such that $a_1 + b = \beta_1$.
\end{Ese}
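\noindent (A quick check of the hypotheses for this model, sketched for the reader's convenience: for $0<|t| \leq 1$,
$$\frac{f(t)-b|t|^{r-2}t}{t}=a_1 |t|^{q-2} \rightarrow 0 \quad \text{as } t \rightarrow 0,$$
since $q>2$, which gives iii); $f(t)t=b|t|^r+a_1|t|^q \geq 0$ for $|t| \leq 1$ and $f(t)t=\beta_1 t^2 \geq 0$ for $|t|>1$, which gives ii); finally, $F(t)=\frac{\beta_1}{2}t^2+O(1)$ as $|t| \rightarrow \infty$, so that $\limsup_{|t| \to \infty} \frac{2F(t)}{t^2}=\beta_1<\lambda_1$, which gives iv). The condition $a_1+b=\beta_1$ makes $f$ continuous at $|t|=1$.)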
\begin{proof}[Proof of Theorem \ref{MR}]
We define $J \in C^1(X(\Omega))$ as
$$J(u)=\frac{||u||^2}{2} - \int_{\Omega} F(x,u(x))\,dx.$$
Without loss of generality, we assume $q>2$; here $\epsilon, \epsilon_1, b_1, a_1, a_2$ denote positive constants.\\
From ii) we have immediately that $0 \in K_J$, but from iii) $0$ is not a local minimizer.
Indeed, let $0<t<\delta$ for $\delta>0$ small; by iii) we have $$\frac{f(x,t)-bt^{r-1}}{t} \geq - \epsilon,$$ and by integration $F(x,t) \geq b_1 t^r - \epsilon_1 t^2$ $(\epsilon_1 < b_1)$, while by i) $F(x,t) \geq -a_1 |t|-a_2 |t|^q$; hence, combining the two bounds, we obtain a.e. in $\Omega$ and for all $t \in \mathbb{R}$
\begin{equation}
F(x,t) \geq c_0 |t|^r - c_1 |t|^q \quad (c_0,c_1 >0).
\label{VA}
\end{equation}
We consider a function $u \in X(\Omega)$ with $u(x)>0$ a.e. in $\Omega$; for all $\tau >0$ we have
$$J(\tau u)= \frac{\tau^2 ||u||^2}{2} - \int_{\Omega} F(x,\tau u)\,dx
\leq \frac{\tau^2 ||u||^2}{2} - c_0 \tau^r ||u||_{L^r(\Omega)}^r + c_1 \tau^q ||u||_{L^q(\Omega)}^q,$$
and the latter is negative for $\tau >0$ close enough to $0$ (recall $r<2<q$); therefore, $0$ is not a local minimizer of $J$.
\noindent We define two truncated energy functionals
$$J_{\pm}(u):=\frac{||u||^2}{2} - \int_{\Omega} F_{\pm}(x,u)\,dx \quad
\forall u \in X(\Omega),$$
setting for all $(x,t) \in \Omega \times \mathbb{R}$
$$f_{\pm}(x,t)=f(x,\pm t_{\pm}), \; F_{\pm}(x,t)=\int_0^t f_{\pm}(x,\tau)\, d\tau, \; t_{\pm}=\max\{\pm t, 0\} \; \forall t \in \mathbb{R}.$$
In a similar way, by \eqref{VA}, we obtain that $0$ is not a local minimizer for the truncated functionals $J_{\pm}$.\\
\noindent We focus on the functional $J_+$: clearly $J_+ \in C^1(X(\Omega))$ and $f_+$ satisfies \eqref{G}. We now prove that $J_+$ is coercive in $X(\Omega)$, i.e.,
$$\lim_{||u|| \to \infty} J_+(u)=\infty.$$
Indeed, by iv), for all $\epsilon >0$ small enough, we have a.e. in $\Omega$ and for all
$t \in \mathbb{R}$
$$F_+(x,t) \leq \frac{\lambda_1 - \epsilon}{2} t^2 + C.$$
By the definition of $\lambda_1$, we have for all $u \in X(\Omega)$
$$J_+(u)\geq \frac{||u||^2}{2} - \frac{\lambda_1 - \epsilon}{2} ||u||_{L^2(\Omega)}^2 - C
\geq \frac{\epsilon}{2 \lambda_1} ||u||^2 - C,$$
and the latter goes to $\infty$ as $||u||\rightarrow \infty$. Consequently, $J_+$ is coercive in $X(\Omega)$.\\
Moreover, $J_+$ is sequentially weakly lower semicontinuous in $X(\Omega)$.
Indeed, let $u_n \rightharpoonup u$ in $X(\Omega)$; passing to a subsequence, we may assume $u_n \rightarrow u$ in $L^q(\Omega)$ and $u_n(x) \rightarrow u(x)$ for a.e.
$x \in \Omega$, moreover, there exists $g \in L^q(\Omega)$ such that $|u_n(x)| \leq g(x)$ for a.e. $x \in \Omega$ and all $n \in \mathbb{N}$ \cite[Theorem 4.9]{B}. Hence,
$$\lim_n \int_{\Omega} F_+(x,u_n)\,dx = \int_{\Omega} F_+(x,u)\,dx.$$
Besides, by convexity we have
$$\liminf_n \frac{||u_n||^2}{2} \geq \frac{||u||^2}{2},$$
as a result
$$ \liminf_n J_+(u_n) \geq J_+(u).$$
\noindent Thus, by the direct method, there exists $u^+ \in X(\Omega)$ such that
$$J_+(u^+)=\inf_{u \in X(\Omega)} J_+(u);$$
moreover $u^+ \neq 0$, since $0$ is not a local minimizer of $J_+$, so that $\inf_{X(\Omega)} J_+ < J_+(0)=0$.
\noindent By Proposition \ref{WmP} and by ii) we have that $u^+$ is a nonnegative weak solution to \eqref{P}.
By Theorem \ref{SL}, we obtain $u^+ \in L^{\infty}(\Omega)$, hence by Proposition \ref{Opt} and Theorem \ref{Rap} we deduce $u^+ \in C_{\delta}^0(\overline{\Omega})$.
Furthermore, by Hopf's lemma $\frac{u^+}{\delta^s}>0$ in $\overline{\Omega}$, and by
\cite[Lemma 5.1]{ILPS} $u^+ \in \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$. \\
Let $\rho >0$ be such that $B_{\rho}^{\delta}(u^+) \subset C_{\delta}^0(\overline{\Omega})_+$; then $u^+ +v \in C_{\delta}^0(\overline{\Omega})_+$ for all $v \in C_{\delta}^0(\overline{\Omega})$ with
$||v||_{0,\delta}\leq \rho$. Since $J$ and $J_+$ agree on $C_{\delta}^0(\overline{\Omega})_+ \cap X(\Omega)$ and $u^+$ is a global minimizer of $J_+$,
$$J(u^+ +v) \geq J(u^+) \qquad \forall v \in X(\Omega) \cap C_{\delta}^0(\overline{\Omega}), \; ||v||_{0,\delta}\leq \rho,$$
and by Theorem \ref{Equiv}, $u^+$ is a strictly positive local minimizer for $J$ in $X(\Omega)$.
Similarly, looking at $J_-$, we can detect another strictly negative local minimizer
$u^- \in - \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$ of $J$.
Now, by Theorem \ref{MPT}, besides the two minimizers there exists a third critical point $\tilde{u}$, which is of mountain pass type. We only have to show that $\tilde{u} \neq 0$;
to do this we use a Morse-theoretic argument.
First of all, in order to apply Morse theory, we prove that $J$ satisfies the Cerami condition ((C)-condition), which in this case is equivalent to the Palais-Smale condition.\\
Let $(u_n)$ be a sequence in $X(\Omega)$ such that $|J(u_n)| \leq C$ for all $n \in \mathbb{N}$ and $(1+||u_n||)J'(u_n)\rightarrow 0$ in $X(\Omega)^*$. Since $J$ is coercive, the sequence $(u_n)$ is bounded in $X(\Omega)$, hence, passing to a subsequence, we may assume
$u_n \rightharpoonup u$ in $X(\Omega)$, $u_n \rightarrow u$ in $L^q(\Omega)$ and $L^1(\Omega)$, and $u_n(x) \rightarrow u(x)$ for a.e. $x \in \Omega$, with some $u \in X(\Omega)$. Moreover, by
\cite[Theorem 4.9]{B} there exists $g \in L^q(\Omega)$ such that $|u_n(x)| \leq g(x)$ for all $n \in \mathbb{N}$ and a.e. $x \in \Omega$. Using such relations along with i), we obtain
\begin{align*}
||u_n-u||^2 & =\left\langle u_n, u_n-u\right\rangle_{X(\Omega)} - \left\langle u, u_n-u\right\rangle_{X(\Omega)} \\
& =J'(u_n)(u_n-u) + \int_{\Omega} f(x,u_n)(u_n-u)\,dx - \left\langle u, u_n-u\right\rangle _{X(\Omega)}\\
& \leq ||J'(u_n)||_* ||u_n-u||+ \int_{\Omega} a (1+|u_n|^{q-1})|u_n-u|\,dx
- \left\langle u, u_n-u\right\rangle_{X(\Omega)} \\
& \leq ||J'(u_n)||_* ||u_n-u|| + a (||u_n-u||_{L^1(\Omega)}+||u_n||_{L^q(\Omega)}^{q-1} ||u_n-u||_{L^q(\Omega)})
- \left\langle u, u_n-u\right\rangle_{X(\Omega)}
\end{align*}
for all $n \in \mathbb{N}$ and the latter tends to $0$ as $n\rightarrow \infty$.
Thus, $u_n\rightarrow u$ in $X(\Omega)$.\\
We may assume that $0$ is an isolated critical point (otherwise problem \eqref{P} admits infinitely many solutions and we are done); therefore we can determine the corresponding critical groups.\\
\textbf{Claim:} $C_k(J,0)=0 \quad \forall k \in \mathbb{N}_0$.\\
By iii), we have
$$\lim_{t \to 0} \frac{r F(x,t)-f(x,t)t}{t^2}=0,$$
hence, for all $\epsilon >0$ we can find $C_\epsilon >0$ such that a.e. in $\Omega$ and for all
$t \in \mathbb{R}$
$$\left| F(x,t) - \frac{f(x,t) t}{r}\right| \leq \epsilon t^2 + C_\epsilon |t|^q.$$
By the relations above, integrating and using the continuous embeddings $X(\Omega) \hookrightarrow L^2(\Omega), L^q(\Omega)$, and recalling that $q>2$ and that $\epsilon>0$ is arbitrary, we obtain
$$\int_{\Omega} \left(F(x,u) - \frac{f(x,u) u}{r} \right)\,dx= o(||u||^2) \qquad \text{as } ||u||\rightarrow 0.$$
For all $u \in X(\Omega) \setminus \{0\}$ such that $J(u)>0$ we have
$$\frac{1}{r} \frac{d}{d\tau} J(\tau u)|_{\tau =1}= \frac{||u||^2}{r} -
\int_{\Omega} \frac{f(x,u) u}{r}\,dx= J(u)+ \left(\frac{1}{r}-\frac{1}{2}\right) ||u||^2
+o(||u||^2) \qquad \text{as } ||u||\rightarrow 0.$$
Therefore we can find some $\rho >0$ such that, for all $u \in B_{\rho}(0)\setminus \{0\}$ with $J(u) > 0$,
\begin{equation}
\frac{d}{d\tau} J(\tau u)|_{\tau =1}>0.
\label{De}
\end{equation}
Using again \eqref{VA}, for all $u \in B_\rho(0)$ with $J(u)>0$ there exists $\tau(u) \in (0,1)$ such that $J(\tau u) <0$ for all $0<\tau<\tau(u)$ and $J(\tau(u) u)=0$;
by \eqref{De}, such $\tau(u)$ is uniquely determined.
We set $\tau(u)=1$ for all $u \in B_\rho(0)$ with $J(u)\leq0$, hence we have defined a map
$\tau: B_\rho (0)\rightarrow (0,1]$ such that for $\tau \in (0,1)$ and for all $u \in B_\rho (0)$ we have
\[
\begin{cases}
J(\tau u)<0 & \text{if $\tau<\tau(u)$} \\
J(\tau u)=0 & \text{if $\tau=\tau(u)$}\\
J(\tau u)>0 & \text{if $\tau>\tau(u).$}\\
\end{cases}
\]
\noindent By \eqref{De} and the Implicit Function Theorem, $\tau$ turns out to be continuous. We set for all
$(t,u) \in [0,1]\times B_\rho (0)$
$$h(t,u)=(1-t)u+t \tau(u)u,$$
hence $h: [0,1] \times B_\rho (0)\rightarrow B_\rho (0)$ is a continuous deformation and the set $B_\rho(0) \cap J^0 = \{\tau u: u \in B_{\rho}(0), \tau \in [0,\tau(u)]\}$ is a deformation retract of $B_\rho(0)$.
Similarly we deduce that the set $B_\rho (0) \cap J^0 \setminus \{0\}$ is a deformation retract of $B_\rho(0) \setminus \{0\}$.
Consequently, we have
$$C_k(J,0)=H_k(J^0 \cap B_\rho (0), J^0 \cap B_\rho (0) \setminus \{0\})=
H_k(B_\rho (0), B_\rho (0) \setminus \{0\})=0 \quad \forall k \in \mathbb{N}_0,$$
the last equality following from the contractibility of $B_\rho(0)\setminus\{0\}$, since $\mathrm{dim}(X(\Omega))=\infty$.\\
Since, by Proposition \ref{Gcr}, $C_1(J,\tilde{u})\neq 0$, while $C_k(J,0)=0$ for all $k \in \mathbb{N}_0$,
$\tilde{u}$ is a non-zero solution.
\end{proof}
\begin{Oss}
We remark that we can also use the Morse identity (Proposition \ref{MI}) to conclude the proof.
Indeed, we note that $J({u_\pm})< J(0)=0$; in particular, $0$ and $u_{\pm}$ are isolated critical points, hence we can compute the corresponding critical groups. By Proposition \ref{M}, since $u_{\pm}$ are strict local minimizers of $J$, we have $C_k(J,u_{\pm})=\delta_{k,0} \mathbb{R}$ for all
$k \in \mathbb{N}_0$.
We have already determined $C_k(J,0)=0$ for all $k \in \mathbb{N}_0$, and we already know
the k-th critical group at infinity of $J$. Since $J$ is coercive and sequentially weakly lower semicontinuous, $J$ is bounded below in $X(\Omega)$, then, by \cite[Proposition 6.64 (a)]{MMP},
$C_k(J,\infty)=\delta_{k,0} \mathbb{R}$ for all $k \in \mathbb{N}_0$.
Applying the Morse identity and choosing, for instance, $t=-1$, we obtain a contradiction; therefore there exists another critical point $\tilde{u} \in K_J \setminus \{0, u_{\pm}\}$. \\
But in this way we lose the information that $\tilde{u}$ is of mountain pass type.
\end{Oss}
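\noindent (Explicitly, assuming $K_J=\{0,u_+,u_-\}$: evaluating the left-hand side of the Morse identity at $t=-1$ gives $\dim C_0(J,u_+)+\dim C_0(J,u_-)=2$, since $C_k(J,0)=0$ for all $k \in \mathbb{N}_0$, while the right-hand side gives $\dim C_0(J,\infty)+(1-1)\,Q(-1)=1$, a contradiction.)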
\section{Appendix: General Hopf's lemma}\label{sec6}
\noindent
As stated before, we show that the weak and strong maximum principles and Hopf's lemma can be generalized to the case in which the sign of $f$ is unknown.
Now we focus on the following problem
\begin{equation}
\begin{cases}
\mathit{L_K} u = f(x,u) & \text{in $\Omega$ } \\
u = h & \text{in $\mathbb{R}^n \setminus \Omega$,}
\end{cases}
\label{DNO}
\end{equation}
where $h \in C^s(\mathbb{R}^n \setminus \Omega)$, and we keep the same assumptions on the function $f$; in addition, we assume
\begin{equation}
f(x,t) \geq -ct \quad \forall (x,t) \in \overline{\Omega} \times \mathbb{R}_{+} \quad (c>0).
\label{SF}
\end{equation}
\begin{Oss}
Since the Dirichlet datum in \eqref{DNO} is not homogeneous, the energy functional associated with problem \eqref{DNO} is
\begin{equation}
J(u)=\frac{1}{2} \int_{\mathbb{R}^{2n} \setminus \mathcal{O}} |u(x)-u(y)|^2 K(x-y)\,dxdy - \int_{\Omega} F(x,u(x))\,dx,
\label{NH}
\end{equation}
for all $u \in \tilde{X}:=\{u \in L^2(\mathbb{R}^n): [u]_K< \infty\}$ with $u=h$ a.e. in
$\mathbb{R}^n \setminus \Omega$, where $\mathcal{O}= (\mathbb{R}^n \setminus \Omega) \times (\mathbb{R}^n \setminus \Omega)$.
When $h$ is not zero, the term $ \int_{\mathcal{O}} |h(x)-h(y)|^2 K(x-y)\,dxdy$ could be infinite; this is the reason why $\mathcal{O}$ is removed from the domain of integration in \eqref{NH}, see \cite{RO}.
\end{Oss}
\noindent We begin with a weak maximum principle for \eqref{DNO}.
\begin{Pro}[Weak maximum principle]
Let \eqref{SF} hold and let $u$ be a weak solution of \eqref{DNO} with $h \geq 0$ in
$\mathbb{R}^n \setminus \Omega$. Then, $u \geq 0$ in $\Omega$.
\end{Pro}
\begin{proof}
Let $u$ be a weak solution of \eqref{DNO}, i.e.
\begin{equation}
\int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy = \int_{\Omega} f(x,u(x)) v(x)\,dx
\label{Sol}
\end{equation}
for all $v \in X(\Omega)$.
We write $u=u^+ - u^-$ in $\Omega$, where $u^+$ and $u^-$ stand for the positive and the negative part of $u$, respectively. We take $v=u^-$ as a test function and, arguing by contradiction, we assume that $u^-$ is not identically zero.\\
From hypotheses we have
\begin{equation}
\int_{\Omega} f(x,u(x)) v(x)\,dx = \int_{\Omega} f(x,u(x)) u^-(x)\,dx
\geq - \int_{\Omega} c u(x) u^-(x)\,dx = \int_{\Omega^{-}} c u(x)^2\,dx >0,
\label{Sgn}
\end{equation}
where $\Omega^{-}:=\{x \in \Omega : u(x)<0\}$.\\
On the other hand, we obtain that
\begin{align*}
&\int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy \\
&=\int_{\Omega \times \Omega} (u(x)-u(y))(u^-(x)-u^-(y))K(x-y)\,dxdy \\
&\quad +2\int_{\Omega \times (\mathbb{R}^n \setminus \Omega)} (u(x)-h(y)) u^-(x)K(x-y)\,dxdy.
\end{align*}
Moreover, $(u^+(x)-u^+(y))(u^-(x)-u^-(y)) \leq 0$, and thus
\begin{align*}
&\int_{\Omega \times \Omega} (u(x)-u(y))(u^-(x)-u^-(y))K(x-y)\,dxdy \\
& \leq - \int_{\Omega \times \Omega} (u^-(x)-u^-(y))^2 K(x-y)\,dxdy < 0.
\end{align*}
Since $h \geq 0$, we also have
$$\int_{\Omega \times (\mathbb{R}^n \setminus \Omega)} (u(x)-h(y)) u^-(x)K(x-y)\,dxdy \leq 0.$$
Therefore, we have obtained that
$$\int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy <0,$$
and this contradicts \eqref{Sol}-\eqref{Sgn}.
\end{proof}
\noindent The next step consists in proving a strong maximum principle for \eqref{DNO}. To do so we will need a slightly more restrictive notion of solution, namely that of pointwise solution, which is equivalent to that of weak solution under further regularity assumptions on the reaction $f$.
Therefore we add extra hypotheses on $f$, which, as we have seen previously, yield better interior regularity of the solutions; as a consequence, we can prove a strong maximum principle and a Hopf's lemma in this more general setting.
\begin{Pro}[Strong maximum principle] \label{SMP}
Let \eqref{SF} hold, $f(.,t) \in C^s(\overline{\Omega})$ for all $t\in \mathbb{R}$, $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$, $a \in L^{\infty}(S^{n-1})$, and
let $u$ be a weak solution of \eqref{DNO} with $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$. Then either $u(x)=0$ for all $x \in \Omega$ or $u > 0$ in $\Omega$.
\end{Pro}
\begin{proof}
The assumptions $f(.,t) \in C^s(\overline{\Omega})$ for all $t \in \mathbb{R}$ and $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$ imply that, whenever $u \in L^{\infty}(\mathbb{R}^n) \cap C^s(\mathbb{R}^n)$, the map $x \mapsto f(x,u(x))$ belongs to $C^s(\overline{\Omega})$.\\
We fix $x \in \Omega$; since $\Omega$ is open, there exists a ball $B_R(x) \subset \Omega$ such that $u$ satisfies $L_K u=f(\cdot,u)$ weakly in $B_R(x)$. Hence, by Theorem \ref{IR}, $u \in C^{3s}\bigl(B_{\frac{R}{2}}(x)\bigr)$ and, by Proposition \ref{Opt}, $u \in C^s(\mathbb{R}^n)$; then $u$ is a pointwise solution, namely the operator $\mathit{L_K}$ can be evaluated pointwise:
\begin{align*}
& \int_{\mathbb{R}^n} \frac{|u(x)-u(y)|}{|x-y|^{n+2s}} a\Bigl(\frac{x-y} {|x-y|}\Bigr)\,dy\\
& \leq C ||a||_{L^\infty} \int_{B_{\frac{R}{2}}(x)} \frac{|x-y|^{3s}}{|x-y|^{n+2s}}\,dy + C ||a||_{L^\infty} \int_{\mathbb{R}^n \setminus B_{\frac{R}{2}}(x)} \frac{|x-y|^s}{|x-y|^{n+2s}}\,dy \\
&=C \left(\int_{B_{\frac{R}{2}}(x)} \frac{1}{|x-y|^{n-s}}\,dy +\int_{\mathbb{R}^n \setminus B_{\frac{R}{2}}(x)} \frac{1}{|x-y|^{n+s}}\,dy \right)< \infty.
\end{align*}
Therefore, if $u$ is a weak solution of problem \eqref{DNO}, under these hypotheses, $u$ becomes a pointwise solution of this problem. \\
By the weak maximum principle, $u \geq 0$ in $\mathbb{R}^n$. We assume that $u$ does not vanish identically.\\
Now, we argue by contradiction. We suppose that there exists a point $x_0 \in \Omega$ such that $u(x_0)=0$; hence $x_0$ is a minimum point of $u$ in $\mathbb{R}^n$, and then
$$0=-cu(x_0) \leq L_Ku(x_0)= \int_{\mathbb{R}^n} (u(x_0)-u(y))K(x_0-y)\,\mathrm{d}y < 0,$$
a contradiction.
\end{proof}
\noindent Finally, by using the previous results, we can prove a generalised Hopf's Lemma for \eqref{DNO} with possibly negative reaction.
\begin{Lem}[Hopf's Lemma] \label{Hopf1}
Let \eqref{G} and \eqref{SF} hold, $f(.,t) \in C^s(\overline{\Omega})$ for all $t \in \mathbb{R}$, $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$, $a \in L^{\infty}(S^{n-1})$.
If $u$ is a solution of \eqref{DNO} and $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$, then either $u(x)=0$ for all $x \in \Omega$ or
$$ \liminf_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s} > 0 \quad \forall x_0 \in \partial\Omega.$$
\end{Lem}
\begin{proof}
The proof is divided in two parts: first we show the result in a ball $B_R$, $R>0$, and then in a general $\Omega$ satisfying an interior ball condition.
(Without loss of generality, we assume that $B_R$ is centered at the origin.)
We argue as in \cite[Lemma 1.2]{GS}.\\
\textbf{Case $\Omega=B_R$}\\
We suppose that $u$ does not vanish identically in $B_R$. By Proposition \ref{SMP} $u>0$ in $B_R$, hence for every compact set $K \subset B_R$ we have $\min_{K} u>0$.
We recall from \cite[Lemma 5.4]{RO} that $u_R(x)= C (R^2-|x|^2)_{+}^{s}$, for a suitable constant $C>0$, is a solution of
\[
\begin{cases}
\mathit{L}_K u_R =1 & \text{in $B_R$ } \\
u_R=0 & \text{in $\mathbb{R}^n \setminus B_R$,}
\end{cases}
\]
we define $v_m(x)= \frac{1}{m} u_R(x)$ for $x \in \mathbb{R}^n$ and all $m \in \mathbb{N}$; consequently, $L_K v_m=\frac{1}{m}$ in $B_R$.\\
\textbf{Claim}: There exists some $\bar{m} \in \mathbb{N}$ such that $u \geq v_{\bar{m}}$ in $\mathbb{R}^n$.\\
We argue by contradiction: we define $w_m=v_m-u$ for all $m \in \mathbb{N}$, and we suppose that, for every $m$, $w_m>0$ somewhere in $\mathbb{R}^n$. Since $v_m=0 \leq u$ in $\mathbb{R}^n \setminus B_R$,
there exists $x_m \in B_R$ such that $w_m(x_m)=\max_{\mathbb{R}^n} w_m >0$, hence we may write
$0<u(x_m)<v_m(x_m)$. As a consequence of this and of the fact that
\begin{equation}
v_m \rightarrow 0 \text{ uniformly in } \mathbb{R}^n,
\label{Cv}
\end{equation}
we obtain
\begin{equation}
\lim_{m \to +\infty} u(x_m)=0.
\label{Cu}
\end{equation}
This and the fact that $\min_{K} u>0$ for every compact $K \subset B_R$ imply $|x_m|\rightarrow R$ as $m \rightarrow + \infty$. Consequently, as long as $y$ ranges in the ball $\overline{B}_{\frac{R}{2}} \subset B_R$, the difference $x_m-y$ stays far from zero when $m$ is large. Therefore, recalling also Remark \ref{Oss1}, there exists a constant $C>1$, independent of $m$, such that
\begin{equation}
\frac{1}{C} \leq \int_{B_{\frac{R}{2}}} a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr) \frac{1} {|x_m-y|^{n+2s}}\,dy \leq C.
\label{S}
\end{equation}
By assumption and arguing as in the previous proof, the operator $L_K$ can be evaluated pointwise, hence we obtain
\begin{align}
\begin{split}
&-cu(x_m)\leq L_K u(x_m) = \int_{\mathbb{R}^n} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\
&=\int_{B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy + \int_{\mathbb{R}^n\setminus B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\
&= A_m + B_m. \label{GS}
\end{split}
\end{align}
We concentrate on the first integral. Since $\min_{\overline{B}_{\frac{R}{2}}} u = b$ for some constant $b>0$, by the previous estimates and by Fatou's lemma we have
$$\limsup_{m} A_m = \limsup_{m} \int_{B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \leq -\frac{b}{C} <0,$$
where we used \eqref{Cu} and \eqref{S}.\\
For the second integral we observe $u(x_m)-u(y)\leq v_m(x_m) - v_m(y)$, indeed we recall that $w_m(y) \leq w_m(x_m)$ for all $y \in \mathbb{R}^n$ (being $x_m$ the maximum of $w_m$ in $\mathbb{R}^n$), hence, passing to the limit, by \eqref{Cv} and \eqref{S} we obtain
\begin{align*}
B_m & \leq \int_{\mathbb{R}^n\setminus B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\
&= L_K v_m(x_m) - \int_{B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \\
&= \frac{1}{m} - \int_{B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \rightarrow 0 \quad \text{as } m \rightarrow \infty.
\end{align*}
Therefore, inserting these in \eqref{GS}, we obtain $0 \leq - \frac{b}{C}$, a contradiction.\\
Then $u \geq v_{\bar{m}}$ for some $\bar{m}$, therefore, for all $x \in B_R$,
$$u(x) \geq \frac{1}{\bar{m}} (R^2-|x|^2)^s=\frac{1}{\bar{m}} (R+|x|)^s (R-|x|)^s \geq \frac{R^s}{\bar{m}}(\mathrm{dist}(x, \mathbb{R}^n \setminus B_R))^s,$$
hence
$$ \liminf_{B_R \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s} \geq \frac{R^s}{\bar{m}} >0.$$
\noindent \textbf{Case of a general domain $\Omega$}\\
We define $\Omega_{\rho}=\{x \in \Omega: \delta_{\Omega}(x) < \rho\}$ with $\rho >0$; for all $x \in \Omega_{\rho}$ there exists $x_0 \in \partial \Omega$ such that $|x-x_0|=\delta_{\Omega}(x)$.
Since $\Omega$ satisfies an interior ball condition, there exists $x_1 \in \Omega$ such that
$B_{\rho}(x_1) \subseteq \Omega$, tangent to $\partial \Omega$ at $x_0$.
Then we have that $x \in [x_0,x_1]$ and $\delta_{\Omega}(x)=\delta_{B_{\rho} (x_1)}(x)$.\\
Since $u$ is a solution of \eqref{DNO}, by Proposition \ref{SMP} either $u\equiv 0$ in $\Omega$ or $u >0$ in $\Omega$. If $u>0$ in $\Omega$, in particular $u>0$ in $B_{\rho}(x_1)$ and $u\geq 0$ in $\mathbb{R}^n \setminus B_{\rho}(x_1)$, then $u$ is a solution of
\[
\begin{cases}
\mathit{L_K} u = f(x,u) & \text{in $B_{\rho}(x_1)$ } \\
u = \tilde{h} & \text{in $\mathbb{R}^n \setminus B_{\rho}(x_1)$,}
\end{cases}
\]
with
\[
\tilde{h}(y)=
\begin{cases}
u(y), & \text{if } y \in \Omega, \\
h(y), & \text{if } y \in \mathbb{R}^n \setminus \Omega.
\end{cases}
\]
Therefore, by the first case there exists $C=C(\rho, \bar{m}, s)>0$ such that
$u(y)\geq C \delta_{B_{\rho} (x_1)}^s (y)$ for all $y \in \mathbb{R}^n$, in particular we obtain $u(x)\geq C \delta_{B_{\rho} (x_1)}^s (x)$.\\
Then, by $\delta_{\Omega}(x)=\delta_{B_{\rho} (x_1)}(x)$, we have
$$ \liminf_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta_{\Omega}(x)^s}
\geq \liminf_{\Omega_{\rho} \ni x \rightarrow x_0} \frac{C \delta_{\Omega}(x)^s }{\delta_{\Omega}(x)^s} = C > 0 \quad \forall x_0 \in \partial\Omega.$$
\end{proof}
\begin{Oss}
We stress that in Lemma \ref{Hopf} we consider only weak solutions, while in Lemma \ref{Hopf1} we consider pointwise solutions. Moreover, the regularity of $u/\delta^s$ yields in particular the existence of the limit
$$\lim_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s}$$
for all $x_0 \in \partial\Omega$.
\end{Oss}
\vskip4pt
\noindent
{\small {\bf Acknowledgement.} S.F. would like to acknowledge Antonio Iannizzotto for many valuable discussions on the subject.}
Our knowledge of the structure functions of hadrons, and the parton density
functions (PDFs) derived from them, has improved
over time, due both to the steadily increasing quantity and precision
of a wide variety of measurements, and a more sophisticated theoretical
understanding of QCD. Structure functions, and PDFs, play a dual role:
they are a necessary input to predictions for high momentum transfer
processes involving hadrons, and they contain important information
themselves about the underlying physics of hadrons. Their study is an
essential element for future progress in the understanding of fundamental
particles and interactions.
Because of the ubiquitousness of structure functions, the activities of the
subgroup had significant and
productive overlap with several other subgroups, and were focused in a
number of different directions. This summary roughly follows
these directions. We start with the precision of our knowledge of the PDFs.
There was an attempt to define a `Snowmass convention' on PDF errors,
reviewing the experimental and theoretical input to the extraction of the PDFs
and an appraisal of what is left to do. Next, we explore the important
connection between the strong coupling constant, \mbox{$\alpha_s$}, and the structure
functions. One of the important inputs provided by the structure functions
is in the precise extraction of electroweak parameters at hadron colliders.
The systematic uncertainties in the
structure functions may be the limiting factor in the determination of
electroweak parameters, and this is discussed in the subsequent section.
There is then a review of some relevant aspects of heavy quark
hadroproduction. Finally, as a summary, we present an $\{x,Q^2\}$
map of what is known and what is to come.
\section{PRECISION OF PDFS AND GLOBAL ANALYSES}
The extraction of PDFs from measurements is a complex process, involving
information from different experiments and a range of phenomenological and
theoretical input.
\subsection{Experimental systematic errors}
Since the extraction of the PDFs usually requires using data from
different experiments, and since the most precise data are usually
limited by systematic, rather than statistical, errors, it is
important that the systematic errors are taken properly into account.
In particular, it is necessary to understand the correlations of different
systematic errors on the measurements within and across experiments. Several
groups have begun to make this information available in electronic and tabular
form. Contributions to these proceedings by Tim Bolton (NuTeV) and
Allen Caldwell (ZEUS) give the details.
\subsection{$\{x,Q^2\}$ Kinematic Map for PDFs}
\def\figdis{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figdis.eps}
\end{center}
\caption{Fixed target DIS data. Note the full $\{x,Q^2\}$ region
is clipped by the plot.
}
\label{fig:dis}
\end{figure}
}
\def\figdph{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figdph.eps}
\end{center}
\caption{Drell-Yan (E605), Direct Photon (E706, WA70, UA6),
and DY asymmetry (NA51) data.
}
\label{fig:dph}
\end{figure}
}
\def\fighera{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{fighera.eps}
\end{center}
\caption{$ep$ collider data. Note the full $\{x,Q^2\}$ region
is clipped by the plot.
}
\label{fig:hera}
\end{figure}
}
\def\figtev{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figtev.eps}
\end{center}
\caption{Hadron-hadron collider data.
}
\label{fig:tev}
\end{figure}
}
\newcommand{\figGluAB}
{
\begin{figure}[hbt]
\epsfxsize=\hsize
\centerline{\epsfbox{GluAB.eps}}
\caption{Comparison of gluons obtained with pre-1995 DIS
data (A-series) with those using current DIS data (B-series).
(Cf., Ref.~\protect\cite{cteq4}.)
}
\label{figGluAB}
\end{figure}
}
\newcommand{\figGluCdA}
{
\begin{figure}[hbt]
\epsfxsize=\hsize
\centerline{\epsfbox{GluC4A.eps}}
\caption{Comparison of gluons obtained without jet
data (C-series) with those obtained with jet data, D-series (CTEQ4Ax).
(Cf., Ref.~\protect\cite{cteq4}.)
}
\label{figGluC4A}
\end{figure}
}
\figdis
\figdph
\fighera
\figtev
\figGluAB
\figGluCdA
Global QCD analysis of lepton-hadron and hadron-hadron processes has made
steady progress in testing the consistency of perturbative QCD (pQCD)
within many different sets of data, and in yielding increasingly detailed
information on the universal parton
distributions.\footnote{PDF sets are available via WWW on the CTEQ page at
http://www.phys.psu.edu/$\sim$cteq/ and
on the Durham/RAL HEP Database at
http://durpdg.dur.ac.uk/HEPDATA/HEPDATA.html.
}
We present a detailed compilation of the kinematic ranges covered by
selected experiments from all high energy processes
relevant for the determination of the universal parton
distributions. This allows an overall view of the overlaps and the
gaps in the systematic determination of parton distributions; hence,
this compilation
provides a useful guide to the planning of future experiments and to
the design of strategies for global analyses.
These analyses incorporate diverse data sets including
fixed-target deeply-inelastic scattering (DIS) data\cite{one,DisExp} of
BCDMS, CCFR, NMC, E665;
collider DIS data of H1, ZEUS;
lepton pair production (DY) data of E605, CDF;
direct photon data of E706, WA70, UA6, CDF;
DY asymmetry data of NA51;
W-lepton asymmetry data of CDF; and
hadronic inclusive jet data\cite{JetExp} of CDF and D0.
The total number of data points from these experiments is $\sim 1300$,
and these cover a wide region in the kinematic $\{x,Q^2\}$ space
anchored by HERA data at small $x$ and Tevatron jet data at high $Q$.
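(For orientation we recall that, neglecting masses, the DIS kinematic variables are related by
$$Q^2 = x\,y\,s, \qquad 0<y<1,$$
so that the region accessible at a given center-of-mass energy is bounded by $Q^2 \leq x\,s$; this is why the $ep$ collider data extend the small-$x$ reach far beyond the fixed target experiments.)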
We now present the various experimental processes.
Note that while this is a comprehensive selection of
experiments, it is by no means exhaustive; we have attempted
to display those data which are characteristic of the
structure function determination.
In some cases, we have taken
the liberty of interpreting the data so as to facilitate comparison among
the diverse processes we
consider.\footnote{In particular,
since we have taken the data points from the
global fitting files, there is
a cut on the minimum value of $Q \sim 2\,\mathrm{GeV}$ to avoid the non-perturbative region.}
Also note that we have not attempted to deal with the different precision
of different measurements, or to separately consider the quark and gluon
determination; the reader should keep these points in mind when comparing
the figures.
The quark distributions inside the nucleon have been quite well
determined from precise DIS and other processes, cf., Fig.~\ref{fig:dis}:
\begin{equation}
\mu,\nu + N \rightarrow \mu,\nu + X
\ .
\end{equation}
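For orientation, we recall the standard one-photon-exchange form of the neutral current DIS cross section from which the structure functions are extracted (charged current neutrino scattering involves in addition the structure function $xF_3$):
$$\frac{d^2\sigma}{dx\,dQ^2}=\frac{4\pi\alpha^2}{xQ^4}\left[\left(1-y+\frac{y^2}{2}\right)F_2(x,Q^2)-\frac{y^2}{2}F_L(x,Q^2)\right],$$
with $F_2(x,Q^2)=\sum_q e_q^2\, x\left[q(x,Q^2)+\bar{q}(x,Q^2)\right]$ at leading order, which makes the sensitivity to the quark distributions explicit.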
Improved DIS data in the small-$x$ region are available from HERA, and
these are of sufficient precision to be sensitive to the indirect influence
of gluons via higher order processes.
The Drell-Yan process is related by crossing to DIS.
In lowest order QCD it is described by quark-antiquark annihilation:
\begin{equation}
q + \bar{q} \rightarrow \gamma^* \rightarrow \ell^+ + \ell^-
\ .
\end{equation}
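In the standard leading-order parton model form, the lepton pair mass and rapidity distribution directly probes the quark-antiquark luminosity,
$$\frac{d^2\sigma}{dM^2\,dy}=\frac{4\pi\alpha^2}{9\,M^2\,s}\sum_q e_q^2\left[q(x_1,M^2)\,\bar{q}(x_2,M^2)+\bar{q}(x_1,M^2)\,q(x_2,M^2)\right],
\qquad x_{1,2}=\frac{M}{\sqrt{s}}\,e^{\pm y},$$
so that, with the quark densities fixed by DIS, Drell-Yan data constrain the antiquark (sea) distributions.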
The kinematic coverage is shown in Fig.~\ref{fig:dph} and Fig.~\ref{fig:tev}.
Recent emphasis has focused on the more elusive gluon distribution,
$G(x,Q)$, which is strongly coupled to the
measurement of $\alpha_s$.
Direct photon production,
\begin{equation}
g + q \rightarrow \gamma + q
\qquad , \qquad
q + \bar q \rightarrow \gamma + g
\quad ,
\end{equation}
in particular from the high statistics fixed target
experiments, has long been regarded as the most useful source of
information on
$G(x,Q)$, cf., Fig.~\ref{fig:dph}.
However, there are a number of large theoretical uncertainties (e.g.,
significant scale dependence, and $k_T$ broadening of initial state
partons due to gluon radiation)\cite{CtqDph,TungJet} that need to be brought
under control before direct photon data can place a tight constraint on the
gluon distribution.
For example, the $k_T$ broadening due to soft gluon radiation is
essentially a higher twist effect (but with a large coefficient), and
should affect all hard scattering cross sections. The magnitude of the
correction to the cross section should be on the order of
$n(n+1) \langle k_T\rangle^2/(4
p_T^2)$, where $\langle k_T \rangle$ is the average $k_T$ in the hard
scatter, and $n$ is the exponent of the differential cross section with
respect to $p_T:$ ($d\sigma/dp_T \propto 1/p_T^n$). For the Tevatron
collider regime, the effect should fall off as $\sim 1/p_T^2$, as is
observed for example in direct photon production in CDF. For $p_T > 50\,
GeV$, the effect is negligible. For fixed target experiments, the
effective value of $n$ is large and changes rapidly with $p_T$ (due to the
rapidly falling parton distributions). The soft gluon radiation tends to
make the cross section steeper at low $p_T$ and at high $p_T$, and to cause an
overall normalization shift of a factor of 2.\cite{CtqDph}
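As a rough numerical illustration (with representative, assumed values rather than measured ones): at a fixed target scale of $p_T \simeq 4\, GeV$ with $\langle k_T \rangle \simeq 1\, GeV$ and a steep spectrum, $n \simeq 10$, the estimate gives $n(n+1)\langle k_T\rangle^2/(4 p_T^2) \simeq 110/64 \approx 1.7$, of the same order as the factor of 2 quoted above; at the Tevatron, with $p_T \simeq 50\, GeV$ and $n \simeq 5$, the same estimate gives $30/10^4 \approx 3\times 10^{-3}$, which is negligible.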
There are several approximate methods to predict the effects of
soft gluon radiation, as for example in gaussian $k_T$ smearing, or the
incorporation of parton showers into a NLO Monte Carlo. Further
understanding may await the development of a more formal treatment of the
effect. Several theoretical ideas are under development.
Inclusive jet production in hadron-hadron collisions,
\begin{equation}
\{g g,\ q \bar{q}\} \rightarrow \{g g,\ q \bar{q}\}
\quad , \quad
g+\{ q,\ \bar{q}\} \rightarrow g+\{ q,\ \bar{q}\}
\ ,
\end{equation}
is very sensitive to
$\alpha_s$ and $G(x,Q)$ (Fig.~\ref{fig:tev}). NLO inclusive jet cross
sections yield relatively small $\mu$ scale dependence for moderate to
large
$E_t$ values.\cite{JetTh}
High precision data on single jet production is now available over a wide
range of energies, $15\, GeV <E_t<450\, GeV$.\cite{JetExp}
For $E_t > 50\, GeV$, both the theoretical and experimental systematic
errors are felt to be under control.
Thus, it is natural to incorporate inclusive jet data in a global QCD
analysis.
In reviewing the figures we see the large kinematic range which is
explored by these processes.
It is a useful exercise to overlay the curves according to the separate
determination of the valence-quarks, light-sea-quarks, heavy-quarks, and
gluons. Although there is no room here for such a presentation, we leave
this as an exercise to the interested reader.
Obviously, when comparing such a wide range of processes, one must keep
in mind considerations beyond just the kinematic ranges. For example, the
DIS and Drell-Yan processes are useful in determining the quark
distributions, whereas the direct photon and photoproduction experiments
yield information about the gluon distributions--though {\it not} with
comparable accuracy; the determination of the gluon distribution is
subject to many more theoretical and experimental uncertainties.
Likewise, the systematics for hadron-hadron and lepton-hadron
processes are quite different.
Specifically, while the hadron-hadron colliders can in principle
determine parton distributions out to large $Q^2$, extractions of PDFs
from these data are only beginning.
DIS experiments probe small $x$ (HERA)
and high $x$ (NuTeV), and low-mass
Drell-Yan collider measurements yield complementary results at higher
$Q^2$. This combination of experiments improves the reliability of the PDFs,
allows for cross checks among the different experiments, and yields precise
tests of the QCD evolution of the parton distributions.
\subsubsection{Progress of PDFs}
As new global PDF fits are being updated and improved, it can be difficult to quantify
our progress as to how precisely we are measuring the hadronic structure.
To illustrate this progress we consider sets of global PDF fits using
various subsets of the complete data
set.\footnote{For the details of how these fits were performed,
see the original paper, Ref.~\cite{TungJet}.}
First, we compare the A- and B-series of fits shown in Fig.~\ref{figGluAB}.
The A-series shows a selection of gluon PDFs extracted from pre-1995 DIS
data using various values of $\alpha_s(M_Z^2)$ as indicated in the figure.
The B-series shows the same selection, but including the recent DIS data.
By comparing the A- and B-series of
fits, we found that recent DIS data
\cite{DisExp} of NMC, E665, H1 and ZEUS considerably narrow
down the allowed range of the parton distributions.
Next, we compare the B- and C-series of fits shown in Fig.~\ref{figGluAB} and
Fig.~\ref{figGluC4A}.
These fits were performed with the same data set, but the C-series fit used
a more generalized parametrization with additional degrees of freedom.
By contrasting the B- and C-series we see that we must be careful to ensure
that our parameterization of the initial PDFs at $Q_0$ is not restricting the
extracted distributions.
Finally, we compare the C- and D-series (CTEQ4Ax) of fits shown in Fig.~\ref{figGluC4A}.
For the D-series fits, the Tevatron jet data was used, whereas this was
excluded from the C-series fits.
The jet data has a significant effect in more
fully constraining $G(x,Q)$ as compared to the C-series.
The quality of the final D-series fits (CTEQ4Ax) is indicative of
the progress that has been made in this latest round
of global analysis.
\subsection{High $E_t$ Jets and Parton Distributions}
\newcommand{\figJetFit}
{
\begin{figure}[htbp]
\epsfxsize=\hsize
\centerline{\epsfbox{JetFit.eps}}
\caption{CDF and D0 data compared to NLO QCD using a) CTEQ4M and b) CTEQ4HJ.
{\it Cf.}, Refs.~\protect\cite{TungJet,cdfIa,cdfIb,d0Iab}.
}
\label{figJetFit}
\end{figure}
}
\newcommand{\figHiEtGluon}
{
\begin{figure}[htbp]
\epsfxsize=3.5in
\centerline{\epsfbox{hjglue.eps}}
\caption{(a) The CTEQ4HJ gluon distributions are compared to that of
CTEQ3M: (b) the ratio of the CTEQ4HJ gluons to
CTEQ3M. {\it Cf.}, Ref.~\protect\cite{TungJet}.
}
\label{fig:higl150}
\end{figure}
}
\newcommand{\tblChiSq}
{
\begin{table}[htbp]
\begin{center}
\caption{Total $\chi^2$ ($\chi^2/point$) values and their distribution among
the DIS and DY experiments for CTEQ4M and CTEQ4HJ.
{\it Cf.}, Ref.~\protect\cite{cteq4}.
}
\vskip 10pt
\label{tblChiSq}
\begin{tabular}{|c|c||c|c|}
\hline
Experiment & \#pts & CTEQ4M & CTEQ4HJ \\
\hline\hline
DIS-Fixed Target & 817 & 855.2(1.05) & 884.3(1.08) \\ \hline
DIS-HERA & 351 & 362.3(1.03) & 352.9(1.01) \\ \hline
DY rel. & 129 & 102.6(0.80) & 105.5(0.82) \\ \hline
\hline
Total & 1297 & 1320 & 1343 \\ \hline
\end{tabular}
\end{center}
\end{table}
}
\figJetFit
\figHiEtGluon
\tblChiSq
High statistics inclusive jet production measurements at the Tevatron
have received much attention recently because the high jet $E_t$
cross-section\cite{cdfIa,cdfIb,d0Iab} is larger than expected from NLO
QCD calculations.\cite{JetTh} A comparison of the inclusive jet data
of CDF and D0 with NLO QCD results is given in Fig.~\ref{figJetFit}a. We see
that there is a discernible rise of the data above the fit curve
(horizontal axis) in the high $E_t$ region. The essential question is
whether the high $E_t$ jet data can be explained in the conventional
theoretical framework, or require the presence of ``new
physics''.\cite{mrsd0,GMRS,TungJet}
Although inclusive jet data was included in the global fit of the PDF,
it is understandable why the new parton distributions (e.g.,
CTEQ4M) still underestimate the experimental cross-section: these data
points have large errors, so they do not carry much statistical weight
in the fitting process, and the simple (unsigned) total $\chi^2$ is
not sensitive to the pattern that the points are uniformly higher in
the large $E_t$ region. A recent study investigated the feasibility
of accommodating these data in the conventional QCD framework by
exploiting the flexibility of $G(x,Q)$ at higher values of $x$ (where
there are few independent constraints), while maintaining good
agreement with other data sets in the global analysis.\cite{TungJet}
A result of this study is the CTEQ4HJ parton sets which are tailored
to accommodate the high $E_t$ ($>200$ GeV) jets,\footnote{This set is
tailored to accommodate the high $E_t$ jets by artificially decreasing
the errors in the fit. See Ref.\cite{TungJet} for details. The
$\chi^2$ of Table~\ref{tblChiSq} is computed using the true errors.}
as well as the other data of the global fit.\cite{TungJet}
Fig.~\ref{figJetFit}b compares predictions of CTEQ4HJ with the results
of both CDF and D0.\footnote{For this comparison, an overall
normalization factor of 1.01(0.98) for the CDF(D0) data set is found
to be optimal in bringing agreement between theory and experiment.}
Results shown in Fig.~\ref{fig:higl150} and Table~\ref{tblChiSq}
quantify the changes in $\chi^2$ values due to the requirement of
fitting the high $E_t$ jets. Compared to the best fit CTEQ4M, the
overall $\chi ^2$ for CTEQ4HJ is indeed only slightly
higher.\cite{cteq3,cteq4} Thus the price for accommodating the high
$E_t$ jets is negligible.
The much discussed high $E_t$ inclusive jet cross-section has been
shown to be compatible with all existing data within the framework of
conventional pQCD {\it provided} flexibility is given to the
non-perturbative gluon distribution shape in the large-$x$ region.
Presently, we note that the direct photon data from the Fermilab
experiment E706 are sensitive to the same $x$ range that affect the
Tevatron high $E_t$ jet data. A more quantitative theoretical
treatment of soft gluons may allow the direct photon data to probe
this question more precisely. We will need such accurate, independent
measurements of the large-$x$ gluons to verify if the high-$E_t$ jet
puzzle is resolved, or whether we have only absorbed the ``new
physics" into the PDFs.
Nevertheless, this episode provides an instructive lesson: the
precision with which we know the PDFs is not indicated from a simple
comparison of different global fit sets. These fits proceed from
similar assumptions and procedures, so their relative agreement should
not be taken as assurance of our knowledge of the PDFs. In the
present case, the uncertainty on the gluon density was naively estimated
to be less than 10--20\% (in the $x$ kinematic range relevant for high $E_t$ jet
production) from a simple comparison of different PDF sets.
Surprisingly, a large change was eventually required (and accommodated)
by the data (assuming the Tevatron result is not an indication of some
new physics).
\subsection{Challenges for Global Fitting}
Global fitting of PDFs is a highly complex procedure which
is both an art and a science. This requires fitting a large number
of data points from diverse experiments with differing systematics.
Furthermore, the data are compared with NLO theory which introduces
additional complications on the theoretical side.
There was extensive discussion as to how
to determine the uncertainty of the PDFs. We note that one of the
most important uncertainties for the PDFs is the choice of
$\alpha_S$, since this affects the gluon distribution directly
as well as the singlet quarks. Both MRS and CTEQ now provide
different PDFs with different choices of $\alpha_S$; this is a significant
improvement toward determining the uncertainty of PDFs. But this
group did not succeed in answering all the questions related to the
goal of a true one-standard-deviation covariance matrix of
PDF uncertainties, although we did focus on some points that
deserve further study. We list some of these below.
\begin{enumerate}
\item
A reminder: when adding two experiments, you simply add their
$\chi^2$'s, and $\Delta\chi^2=1$ of the total $\chi^2$ is one
$\sigma$.
\begin{enumerate}
\item
Due to direct photon theory $\mu$-scale and $k_T$
uncertainties, there is no way to define one standard deviation
for these data. The handling of the $\mu$-scale is done differently
in different groups and can lead to somewhat different gluon distributions.
\item
Other ``choices'' can lead to significant
differences in $\chi^2$ ($\Delta \chi^2 \approx 50-100$ units is typical).
These choices include which data sets to use, the starting $Q_0$ value,
etc. One example is the small-$x$ CCFR neutrino data, which disagree
with the electron/muon DIS data. This difference is unlikely
to be caused entirely by parton distributions, and how this
is handled in the global fits can cause significant changes
in the global $\chi^2$.
\end{enumerate}
\item
Many experiments do not provide correlation matrices, and we've
never seen a correlation matrix for a theoretical uncertainty.
Without both of these for every experiment, one cannot expect
$\Delta\chi^2=1$ to work.
\item
In principle we should add in LEP/tau/lattice
constraints on $\alpha_{S}$. But if they are treated as only a
single data point, they will be swamped by the other 1200 points.
(This would not be true if $\Delta\chi^2=1$ were valid.)
\item
What to do about the charm mass in DIS? It will change $F_2$
predictions, but the resulting parton distributions then
have a different definition of ``heavy quark in the proton'' and
this must be accounted for in the theoretical calculations.
\item
When the CDF W asymmetry and NA51 data were added (the change
between CTEQ2 and CTEQ3), they gave a consistent picture
of $\bar{u}$ and $\bar{d}$. But the $\chi^2$ went up for the rest of the
experiments by 30! Once again $\Delta\chi^2=1$ is invalid.
The choice was to accept the larger $\chi^2$'s to incorporate the
new data presented.
\item
What about higher twists? Should a higher twist theoretical
uncertainty be added to DIS data?
\end{enumerate}
\subsection{Choice of Parametrization} \label{subsec:parm}
\hyphenation{ha-dro-pro-duc-tion re-nor-mal-i-za-tion ex-ci-ta-tion }
\def\gsim{\, \lower0.5ex\hbox{$\stackrel{>}{\sim}$}\, }
\def\lsim{\, \lower0.5ex\hbox{$\stackrel{<}{\sim}$}\, }
\def\alpha_S{\alpha_S}
\def\figMRSg{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{glue.eps}
\end{center}
\caption{a) The gluon distribution $x G(x)$ at
$Q_0 = 1.6\,{\rm GeV}$ using the MRS and CTEQ parametrizations.
The two curves are indistinguishable in this plot.
b) Fractional deviation for gluon of the
CTEQ and MRS parametrizations.
Note the full range of the y-axis is $\pm 1\%$.
}
\label{fig:MRSg}
\end{figure}
}
\def\figMRSuv{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{uval.eps}
\end{center}
\caption{a) The u-valence distribution $x u_v(x)$ at
$Q_0 = 1.6\,{\rm GeV}$ using the MRS and CTEQ parametrizations.
The two curves are indistinguishable in this plot.
b) Fractional deviation for u-valence of the
CTEQ and MRS parametrizations.
Note the full range of the y-axis is $\pm 1\%$.
}
\label{fig:MRSuv}
\end{figure}
}
\def\figMRSdv{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{dval.eps}
\end{center}
\caption{a) The d-valence distribution $x d_v(x)$ at
$Q_0 = 1.6\,{\rm GeV}$ using the MRS and CTEQ parametrizations.
The two curves are indistinguishable in this plot.
b) Fractional deviation for d-valence of the
CTEQ and MRS parametrizations.
Note the full range of the y-axis is $\pm 2\%$.
}
\label{fig:MRSdv}
\end{figure}
}
\def\figMRSev{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{evol.eps}
\end{center}
\caption{The ratio of a) $c(x)/s(x)$ and b) $b(x)/c(x)$
for a range of $Q$. For increasing $Q$, the evolution reduces any
difference between the distributions.
}
\label{fig:MRSev}
\end{figure}
}
\figMRSg
\figMRSuv
\figMRSdv
\figMRSev
The choice of boundary conditions for the PDFs at the initial $Q_0$
has received increasing attention as the accuracy of data
improves.\cite{cteq4,GMRS,cteq3,Yndurain} Although the DGLAP evolution
equation clearly tells us how to relate PDFs at differing scales, the
form of the distribution at $Q_0$ cannot yet be derived from first
principles, and must be extracted from data. For this purpose, it is
practical to choose a parametrization for the PDFs at the initial
$Q_0$ with a small number of free parameters that can be fit to the data.
A question that was repeatedly raised in the workshop is the extent to
which the choice of parametrization limits the extracted PDFs of the
global fit. It is important to note that the evolution equation for
the global fits is solved numerically on an $\{x,Q^2\}$ grid.
Therefore the issue of the parametrization is only relevant at $Q_0$.
For $Q>Q_0$, the parametrized form is replaced by a discrete
$\{x,Q^2\}$ grid.
To approach these questions in a quantitative manner, we performed a
simple exercise to examine the potential difference of PDFs that can
be attributed to different choices of parametrizations.
Specifically, we investigated the difference between the
MRS\cite{GMRS} and CTEQ\cite{cteq3} parametrizations, which take the
general form:
\noindent
MRS:
\begin{equation}
f(x,Q) = a_0 x^{a_1} (1-x)^{a_2} (1+a_3 \sqrt{x} + a_4 x)
\label{eq:parm}
\end{equation}
CTEQ:
\begin{equation}
f(x,Q) = b_0 x^{b_1} (1-x)^{b_2} (1+b_3 x^{b_4})
\label{eq:parm2}
\end{equation}
We used the CTEQ3M PDF set at $Q_0 = 1.6\,{\rm GeV}$ (which is naturally
described by the CTEQ parametrization shown above), and performed a
fit to describe the same PDF set using the MRS parametrization. Note
that this is an academic exercise that does not fit data, but rather
explores the flexibility of the parametrizations.
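As an illustration, such an exercise can be sketched in a few lines of code
(a hypothetical sketch; the parameter values are illustrative and this is
not the actual fit of Ref.~\cite{cteq3}):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# CTEQ-style form: f(x) = b0 x^b1 (1-x)^b2 (1 + b3 x^b4)
def cteq_form(x, b0, b1, b2, b3, b4):
    return b0 * x**b1 * (1 - x)**b2 * (1 + b3 * x**b4)

# MRS-style form: f(x) = a0 x^a1 (1-x)^a2 (1 + a3 sqrt(x) + a4 x)
def mrs_form(x, a0, a1, a2, a3, a4):
    return a0 * x**a1 * (1 - x)**a2 * (1 + a3 * np.sqrt(x) + a4 * x)

# Tabulate a reference PDF on a log-spaced x grid (illustrative
# parameters, not the actual CTEQ3M values).
x = np.logspace(-4, np.log10(0.99), 200)
target = cteq_form(x, 1.0, -0.3, 4.0, 5.0, 0.8)

# Fit the MRS form to the tabulated values; weighting by the target
# makes the fit sensitive to the fractional deviation.
popt, _ = curve_fit(mrs_form, x, target,
                    p0=[1.0, -0.3, 4.0, 1.0, 1.0],
                    sigma=target, maxfev=20000)
frac = mrs_form(x, *popt) / target - 1.0
print("max fractional deviation: %.2e" % np.abs(frac).max())
\end{verbatim}
The fractional deviations discussed below are precisely this quantity,
evaluated flavor by flavor.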
If we can accurately describe the CTEQ3 PDFs with the MRS form, then
it is plausible that the particular parametrization choice for the
PDFs has little consequence. However, if we cannot accurately
describe the CTEQ3 PDFs with the MRS form, we will need to
investigate more thoroughly whether the parametrization introduces a
strong bias as to the possible PDFs which will come from the global
fitting procedure.
In Figs.~\ref{fig:MRSg}, \ref{fig:MRSuv}, and \ref{fig:MRSdv}, we plot
both the CTEQ3 PDFs and the fit to the CTEQ3 PDFs using the MRS
parametrized form, Eq.~\ref{eq:parm}. We only show the gluon,
u-valence, and d-valence; the results for the sea-quarks will be
similar to the gluon. First we plot $x\, f(x)$ at $Q_0 = 1.6\,{\rm GeV}$ on
a Log-Log scale. The two separate curves are indistinguishable in
this plot. To better illustrate the differences, we plot the
fractional difference between the two PDFs. Observing that the scale
on this plot is $\leq 2\%$, we see that the variation over the range
$x=[10^{-4},1]$ is relatively small. We find larger deviations in the
high $x$ region, but the significance of this is diminished by the
fact that the PDFs are small in this region.
Although we do not claim that this is an exhaustive investigation,
this simple exercise appears to indicate that the PDFs extracted from
a global fit should be insensitive to the choice of the above
parametrizations (Eqs.~\ref{eq:parm} and \ref{eq:parm2}). One might speculate that the
same conclusion would also hold for other parametrizations; however,
such an exercise has yet to be performed.
Furthermore, since the QCD evolution is stable as one evolves up to
higher values of $Q$, any small differences at $Q_0$ will decrease for
$Q>Q_0$. We can roughly see this effect by examining the ratios of
$c(x)/s(x)$ and $b(x)/c(x)$ as shown in Fig.~\ref{fig:MRSev}. For
example, at $Q = 10\,{\rm GeV}$, the b-quark is less than half of the
c-quark distribution; at $Q = 100\,{\rm GeV}$ the b-quark distribution is
significantly closer to the c-quark.\cite{collinstung86} This
observation suggests that the small differences we observed at
$Q_0 = 1.6\,{\rm GeV}$ will quickly wash out as we evolve upwards.
The above observations, however, only apply in regions of $x$ where PDFs
are well-determined; and they cannot be taken literally without qualification.
An important example which illustrates the importance of exercising caution
is the behavior of the gluon distribution at large $x$ brought to focus by the
high $E_t$ jet data, as discussed in the last section. Whereas GMRS \cite{GMRS},
using a ``conventional'' parametrization of the form of Eq.~(\ref{eq:parm}),
found it impossible to fit the jet results along with the rest of the global data, CTEQ
showed that allowing for a more flexible parametrization of the gluon
distribution at large $x$ can accommodate both. To accomplish this, one will
need a functional form such as Eq.(\ref{eq:parm2}), with $b_4$ substantially
bigger than 1,
or equivalently, $e^{b_4x}$ in place of $x^{b_4}$. (Since $0<x<1$, and the
whole expression is multiplied by $(1-x)^{b_2}$ which is steeply falling,
$G(x,Q)$ is still well-behaved.) The difference in the size of $G(x,Q)$
resulting from these parametrizations can be as much as 100\% at $x=0.5$, as
shown in Fig.~\ref{fig:higl150}.
\section{STRUCTURE FUNCTIONS AND $\alpha_{S}$} \label{subsec:alphas}
Structure functions are important in that they give us
information on the value of $\alpha_s$, and also in that they are
often inputs to many different measurements, some of which themselves
are used to determine $\alpha_s$. In this summary of the work of the
joint $\alpha_s$-structure function groups we investigate
how structure functions themselves give us direct information on
$\alpha_s$, and the expected uncertainties of possible new measurements of
structure functions at future proposed machines. There are two
categories of structure function analyses which result in an
$\alpha_s$ measurement: $Q^2$ evolution of structure functions, and
measurements of sum rules, which pertain to the integrals of specific
structure functions over $x$. Since the theoretical and experimental
errors are comparable for some of these analyses, this report
examines how improvements might be made in both areas.
On the experimental side of the study, we consider a
$\mu p$ collider or an $e p$ collider, and also a neutrino scattering
experiment at a $\mu^+ \mu^-$
collider. Given the current level of
error in $\alpha_s$ measurements, we consider here only analyses which may
result in few per cent or less error on $\alpha_s(M_Z^2)$. To address the
theoretical issues within the scope of this report we can at best point
out the largest problems and how they are currently being investigated,
in hopes of inspiring theorists to devote more time to them.
\subsection{Evolution of Structure Functions}
When looking at the $Q^2$ evolution of structure
functions, one can use the Dokshitzer--Gribov--Lipatov--Altarelli--Parisi (DGLAP) equations to find
$\alpha_s$ \cite{dglap}.
In the case of the non-singlet structure functions the evolution as a
function of $Q^2$ is simply related to $\alpha_s$ and the
non-singlet structure function itself. In the case of the singlet
structure functions, the evolution is related to $\alpha_s$, the
structure function itself, AND the gluon distribution, which
complicates matters. In either case care must be taken to avoid large higher
twist effects, which are particularly important at low \mbox{$Q^2$}.
Non-singlet structure functions can be measured in both
neutrino and charged lepton scattering experiments.
One way is by taking the average of
$xF_3^{\nu N}$ and $xF_3^{\overline \nu N}$, where $\nu N$ in the
superscript indicates the presence of an isoscalar
target. Similarly, averaging $xW_3^{l^+d}$ and $xW_3^{l^-d}$ also results
in a pure non-singlet structure function, where the lepton is either
an electron or muon, and the scattering center is a deuteron. Finally,
one can use the structure function $F_2^{\nu N}$ or $F_2^{l^\pm N}$ at
high $x$, since there are virtually no sea quarks at high $x$.
Many high-statistics determinations of $\alpha_s$ have been performed to date,
using a variety of techniques. By fitting only $xF_3$ or $F_2$ at high
$x$, one can do a pure non-singlet fit to the evolution, with no dependences
on the gluon distribution. Given the wealth of precise data in charged lepton
scattering structure functions, however, there are also determinations of
\mbox{$\alpha_s$}\ from fitting $F_2$ at all $x$, but including a contribution to the
evolution from the gluons. These two different kinds of determinations
do not show any systematic difference in the final result, as is shown
in table \ref{tab:disresults}.
\begin{table}[h]
\begin{center}
\caption{A selection of $\alpha_s$ measurements from structure functions, and
the total error on $\alpha_s(M_Z^2)$.}
\label{tab:disresults}
\begin{tabular}{lccl}
Method & Experiment & \mbox{$Q^2$}\ (GeV$^2$) & \mbox{$\alpha_s$}(\mbox{$M_Z^2$}) \\
\hline\hline
$xF_3$ only & CCFR \cite{newccfr} & 25 & $.118\pm.007$ \\
$xF_3$ and $F_2$ & CCFR \cite{newccfr} & 25 & $.119\pm.0055$ \\
$F_2$ low $x$ & NMC \cite{nmc} & 7 & $.118\pm.015$ \\
$F_2$ high $x$ & SLAC$/$BCDMS \cite{virmil}& 50 & $.113\pm.005$ \\
$F_2$ low $x$ & HERA \cite{hera}& 4-100 & $.120\pm.010$ \\
\end{tabular}
\end{center}
\end{table}
The errors listed in Table~\ref{tab:disresults} are deceptive, however,
because in fact they are all dominated by either experimental or theoretical
systematic errors. In the remainder of this section we consider the
largest two systematic errors, and how new machines (and new calculations)
could hopefully reduce these errors.
\subsubsection{Experimental Errors on \mbox{$\alpha_s$}\ and possible improvements}
The dominant experimental systematic error in the measurements listed
in the table comes from energy uncertainties. These
can come from spectrometer resolution, calibration uncertainty in the
detector, or overall detector energy scale. The key to improving the
overall experimental error in these measurements is not higher
statistics or higher energies, but better calorimetry, and better
calibration techniques. The challenge in determining the energy scale
in deep inelastic scattering experiments is in finding some ``standard
candle'' from which to calibrate. For example, if there were some way
of measuring the known mass of some particle decaying in the system,
or if the initial beam energy was very well known because of
accelerator constraints, this could substantially improve the energy
scale determination over previous experiments.
A number of machines with a variety of energies and initial particles
have been proposed at this workshop. While it is true that machines
(and experiments) are not proposed these days to do precise QCD
measurements alone, there are some interesting possibilities that may arise
from these machines.
Because of other considerations (namely the rise of $F_2$ at low $x$)
a lepton/hadron collider is an attractive possibility. Currently,
however, the HERA \mbox{$\alpha_s$}\ experimental error is dominated by
uncertainties in the $x$ distribution of the structure functions
measured (particularly that of the gluon).
In order to do a DGLAP-style evolution measurement in a
lepton hadron collider, one would need to have both $\ell^+p$ and
$\ell^-p$ collisions, measure the different cross sections, and
extract a non-singlet structure function. The statistics needed for a
precise structure function measurement at the energies that have been
proposed would be well above current HERA expectations, and the higher
in \mbox{$Q^2$}\ these machines operate, the lower the cross section, and the
smaller the effect one is trying to measure.
Another intriguing possibility would be a neutrino experiment at a
muon collider. A 2 TeV muon collider could (with considerable
engineering) make very high-rate 800 GeV neutrino and antineutrino
beams. If one knew the muon beam energy very well (taking as an
example how well the LEP energy scale is now known after much work!)
then a neutrino beam coming from muon decays would be at a very
well-understood energy as well. There would be negligible production
uncertainty from a neutrino beam coming from a muon beam, and the
rates for such a beam would be astronomical simply starting with the
current proposals for muon intensities in the accelerator.
\subsubsection{Theoretical Errors on \mbox{$\alpha_s$}\ and possible improvements}
Currently the renormalization and factorization scale uncertainties
dominate the theoretical error on \mbox{$\alpha_s$}\ from structure function
evolution. This is true for both singlet and non-singlet structure
functions evolution. By assuming the factorization and
renormalization scales were $k_1\mbox{$Q^2$}$ and $k_2\mbox{$Q^2$}$ respectively, and
varying $k_1$ and $k_2$ between 0.10 and 4, Virchaux and Milsztajn
arrive at an error of $\delta(\mbox{$\alpha_s$}(\mbox{$M_Z^2$}))= .004$.\cite{virmil}
They claim that the overall $\chi^2$ of the fit did not increase
significantly when these variations were made. Similar or larger QCD
scale errors apply to the other \mbox{$\alpha_s$}\ measurements listed in table
\ref{tab:disresults}. Certainly the most straightforward (and perhaps
naive) way to reduce this error would be to calculate the next
higher-order term in the DGLAP equations.
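To get a feel for the size of such scale variations, one can run a fixed
$\alpha_s(M_Z^2)$ down to $\mu^2 = k\,Q^2$ using the one-loop evolution
(a rough sketch only; the analyses quoted above are NLO and include flavor
thresholds, which are ignored here):
\begin{verbatim}
import numpy as np

def alpha_s_one_loop(mu2, alpha_mz=0.118, mz2=91.187**2, nf=5):
    """One-loop running of alpha_s from M_Z to scale mu2
    (fixed nf; flavor thresholds ignored in this sketch)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
    return alpha_mz / (1.0 + alpha_mz * b0 * np.log(mu2 / mz2))

Q2 = 50.0  # GeV^2, typical of the SLAC/BCDMS analysis
for k in (0.1, 1.0, 4.0):
    print("k = %4.1f  alpha_s(k*Q^2) = %.3f" % (k, alpha_s_one_loop(k * Q2)))
\end{verbatim}
The spread of these values at the measurement scale is what, after
evolution to $M_Z$, translates into the quoted scale error on
$\alpha_s(M_Z^2)$.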
Still another method of reducing these errors is to actually fit for
$k_1$ and $k_2$, and see what the resulting error on these values is
within the fit. By floating those constants, however, one is assuming
QCD works, and getting a good fit for one consistent value of
\mbox{$\Lambda_{\overline{MS}}$}\ in the experiment can no longer be claimed by itself as a test
of QCD. If $k_1$ and $k_2$ are floated, one does not test QCD until
one compares one experiment's \mbox{$\alpha_s$}\ value with another experiment's
value. Furthermore, if the fit prefers values of $k_1$ and $k_2$ far
away from one, one would also question the validity of the
measurement.
\subsection{Some Sum Rules are Better than Others}
The two sum rules that have thus far been used to measure
$\alpha_s$ are the Gross Llewellyn Smith sum rule \cite{gls} and the
Bjorken sum rule \cite{bjsr}, which are related to $xF_3$, and the
polarized structure functions $g_n(x)$ and $g_p(x)$ respectively.
Since these methods of determining \mbox{$\alpha_s$}\ are far less mature than the
structure function analysis, the corresponding experimental errors on
\mbox{$\alpha_s$}\ are much larger. Since both sum rules are fundamental
theoretical predictions, and the higher order corrections to the sum
rules are so straightforward to compute, the QCD scale error on
these measurements is much smaller than those of the evolution
measurements. Table \ref{tab:errglsbj} gives a list of systematic and
statistical errors for both sum rules.
\begin{table}[h]
\caption{Table of errors on $\alpha_s(M_Z^2)$ from Sum Rules.
$^\dagger$~From the E142 result; E154 claims it should be higher.}
\label{tab:errglsbj}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
& \multicolumn{2}{|c|}{$\delta\mbox{$\alpha_s$}(\mbox{$M_Z^2$})$} \\
Error Source & GLS & Bjorken \\
\hline
Statistical & .004 & $<.001$ \\
\hline
Low $x$ extrapolation & .002 & .005$^\dagger$ \\
\hline
Overall Normalization & .003 & .002 \\
\hline
Experimental Systematics & .004 & .006 \\
\hline
Higher Twist & .005 & .003-.008 \\
\hline
QCD Scale Dependence &.001 & .002 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Low $x$ Uncertainties}
The largest uncertainties in sum rule measurements come from the
fact that they involve integrals from $x=0$ to $x=1$. Of course no
experiment can measure all the way down to $x=0$, and the closer to
0 one can reach the smaller the error in extrapolating from the lowest
data point to zero will be. To extrapolate to $x=0$, one usually
assumes a functional form: the data are either fit to that form, with the
resulting parameters checked against theory, or, if the data lack the
statistical precision for a fit, a functional form is simply assumed.
While for the GLS sum rule the
data seem to agree with simple quark counting arguments for the form
of $xF_3$ at low $x$, the newest data from SLAC E154 (shown after
Snowmass'96 at ICHEP96) do not fit to a function whose integral
converges as $x$ goes to 0. The collaboration does not yet
report a measurement of \mbox{$\alpha_s$}\ from their new data, saying that the low
$x$ behavior of the integral is too uncertain; however,
this analysis is in progress. For future improvement
on the Bjorken Sum Rule one will need to go to lower $x$ than what has
currently been reached ($x=.015$). If the Bjorken integral is not finite
then much more is called into question than the validity of QCD!
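As a concrete (hypothetical) illustration of the extrapolation issue,
suppose $xF_3 = A\,x^{b}$ below the lowest measured point; the missing
piece of the GLS integral is then finite for any $b>0$, but blows up as
$b\to0$:
\begin{verbatim}
import numpy as np

# Hypothetical low-x form: xF3(x) = A * x**b
# (quark counting suggests b ~ 0.5; A is illustrative).
A, b = 1.2, 0.5
x_min = 0.015  # lowest measured point quoted in the text

# The GLS integrand is F3 = xF3/x = A * x**(b-1), so the unmeasured
# tail is  int_0^{x_min} F3 dx = A * x_min**b / b  (finite for b > 0).
tail = A * x_min**b / b
print("low-x tail of the GLS integral: %.4f" % tail)
\end{verbatim}
If instead the data prefer a form with $b \le 0$ (the situation feared
for the Bjorken integral after the E154 data), the same tail diverges and
no value of \mbox{$\alpha_s$}\ can be extracted without further input.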
Another uncertainty associated with the low $x$ region is that one also
needs to go to low \mbox{$Q^2$}\ to measure low $x$. At low \mbox{$Q^2$}\ higher twist
uncertainties become important, and these higher twist contributions have
never been measured for these sum rules. The present state
of higher twist calculations for DIS sum rules is given in reference
\cite{bktwist}, which discusses results from many models of higher
twist calculations, including QCD Sum Rules,
and a non-relativistic quark model. Again, there is more trouble associated
with the Bjorken Sum Rule than the GLS sum rule, because the different
models predict very different higher twist contributions to the former,
while agreeing at the $50\%$ level for the latter. So, whether one takes
as the higher twist error the spread of theoretical predictions or the
error on one such prediction (shown in the table above) one can arrive
at very different errors. In either case that error is significant at
the currently relevant \mbox{$Q^2$}\ region. Unless a proven agreed-upon
method of higher twist calculations arises the best bet in the future
will be to simply fit the sum rule results for a higher
twist contribution and an $\alpha_s$ contribution. This will require
much higher statistical precision in the structure function
measurements themselves than what is currently available.
\subsubsection{Normalization Uncertainties}
Finally, if one proposes to improve these measurements by going to a
higher \mbox{$Q^2$}, the next most important error (assuming one has solved the
problem of extrapolating to low $x$) will be the overall normalization
error. Since the effect one is measuring is
proportional to $1-\mbox{$\alpha_s$}$ and not \mbox{$\alpha_s$}, as \mbox{$Q^2$}\ gets larger and
\mbox{$\alpha_s$}\ gets smaller, an
overall $1\%$ error on the normalization of the structure functions (and
hence the integral itself) turns into a larger fractional error on
\mbox{$\alpha_s$}. This is shown
quantitatively in figure \ref{fig:wheregls}, which shows the effects
of the higher twist error as a function of \mbox{$Q^2$}\ and a $1\%$
normalization error on the structure functions as a function of \mbox{$Q^2$} .
The sum of the two errors in quadrature show that measuring the sum
rules at a \mbox{$Q^2$}\ above 100\,GeV$^2$ will not reduce the overall error
for even an ambitious normalization error of $0.25\%$. The current
normalization error on the overall $\nu$-nucleon cross section is
$1\%$, and the error on the ratio of $\nu$ and ${\overline \nu}$ cross
sections is another $1\%$, which translates into presently a total
$xF_3$ normalization of $1.4\%$. There are currently no plans to
improve this measurement; one would need a tagged neutrino beam (which
might be possible with a muon beam at a muon collider) to do so.
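To see the scaling explicitly, keep only the leading term of the GLS sum
rule (a schematic estimate, not the full analysis):
\begin{equation}
\int_0^1 F_3\, dx = 3\left[1-\frac{\alpha_s(Q^2)}{\pi}+O(\alpha_s^2)\right]
\;\Rightarrow\;
\delta\alpha_s(Q^2) \simeq \frac{\pi}{3}\,
\delta\!\left(\int_0^1 F_3\, dx\right) \simeq \pi\,\epsilon
\quad ,
\end{equation}
for a fractional normalization error $\epsilon$. A $1\%$ normalization
error thus costs $\delta\alpha_s \simeq 0.03$ at the measurement scale;
propagating with $\delta\alpha_s(M_Z^2)\simeq
\delta\alpha_s(Q^2)\left[\alpha_s(M_Z^2)/\alpha_s(Q^2)\right]^2$ gives
roughly $0.003$ for \mbox{$Q^2$}\ of a few GeV$^2$, consistent with
Table~\ref{tab:errglsbj}. Since the measured deviation from $3$ shrinks
like \mbox{$\alpha_s$}\ while the normalization error on the integral does not, the
same $\epsilon$ costs more at higher \mbox{$Q^2$}.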
\begin{figure}[b]
\leavevmode
\centerline{\epsfxsize=6cm \epsfbox{wheregls.eps}}
\caption{Variation of (a) higher twist, (b) $1\%$ normalization errors,
and (c) the sum in quadrature of the two as a function of \mbox{$Q^2$}\ for the
GLS sum rule.}
\label{fig:wheregls}
\end{figure}
\subsection{\mbox{$\alpha_s$}\ conclusions }
Structure functions and QCD provide us with the possibility
of two complementary measurements of \mbox{$\alpha_s$}; the \mbox{$Q^2$}\
evolution and sum rules. The current errors on \mbox{$\alpha_s$}\ from structure function
evolution are in the $4-5\%$ range, and will be improved only with
a reduction of the renormalization and factorization scale uncertainties.
For this, next to next to leading order (NNLO) corrections to the
DGLAP equations must be computed. By far the most important experimental
uncertainty in evolution measurements comes from how precisely experiments
know their energy scale and resolution. Sum Rules have very different
outstanding issues, namely the low $x$ uncertainty and measurement, and
also the higher twist terms. The best way to eliminate higher twist
uncertainties would be to simply measure their contributions in the
lowest \mbox{$Q^2$}, yet still have enough statistics at higher \mbox{$Q^2$}\ for a
measurement of \mbox{$\alpha_s$}. While the sum rule analyses would benefit from
much higher statistics, in general, to arrive at new measurements of \mbox{$\alpha_s$}\
from structure functions we must do more than simply raise the energies of
the experiments and run them longer!
\section{STRUCTURE FUNCTION INPUTS TO PRECISION ELECTROWEAK MEASUREMENTS}
Structure functions are inputs to many precision electroweak
measurements--a few examples are $\sin^2\theta_W$ measured in
$\nu$N scattering and global electroweak fits which include $\alpha_s$
from structure function data along with other fundamental parameters.
A measurement expected to have significant experimental improvement
in the future such that the structure function uncertainty becomes
important relative to other uncertainties is the W
mass ($M_W$) measurement from on-shell production at collider experiments.
Even at the current level of precision of this analysis there are
outstanding questions about how that uncertainty is evaluated, and
whether this could be improved, even before new experiments come around.
At a hadron collider experiment, the W mass itself cannot be directly
measured on an event by event basis, because the clean signatures
of $W$ production contain a charged lepton and therefore also contain a
neutrino. Furthermore, the initial center of mass
energy of the partons which interact to give a $W$ is not known, so one
cannot simply require the total momenta to balance to give the energy
of the outgoing neutrino. One can use the constraint that the total
initial transverse momentum is zero, however. In practice the mass is
measured by computing the transverse mass,
$M_T=\sqrt{2p_t^\ell p_t^\nu (1-\cos \phi^{\ell \nu})}$, where
$p_t^{\ell,\nu}$ are the transverse momenta of the charged lepton and
neutrino, and $\phi^{\ell \nu}$ is the angle between the charged lepton
and neutrino in the transverse plane. The shape of the $M_T$ distribution
is then extremely sensitive to $M_W$, but is also dependent on the
parton distributions used in the Monte Carlo simulation, in particular,
the transverse component of $u-d$.
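For reference, the transverse mass is trivial to compute from the lepton
and neutrino transverse momenta (a sketch with illustrative numbers):
\begin{verbatim}
import numpy as np

def transverse_mass(pt_lep, pt_nu, dphi):
    """M_T = sqrt(2 pt_lep pt_nu (1 - cos dphi)), same units as pt."""
    return np.sqrt(2.0 * pt_lep * pt_nu * (1.0 - np.cos(dphi)))

# Back-to-back lepton and neutrino, each with pt = 40 GeV:
print(transverse_mass(40.0, 40.0, np.pi))   # -> 80.0
\end{verbatim}
The distribution of this quantity has a Jacobian edge near $M_W$, which is
why its shape is so sensitive to the $W$ mass.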
Table \ref{tab:werr} gives the uncertainty in $M_W$ from CDF and D0 from
direct production, and measurement of the transverse mass \cite{youngkee}.
\begin{table}[b]
\caption{Table of uncertainties for both the CDF and D0 $W$ mass
measurements, for different final states ($e\nu$ or $\mu\nu$) and
different run periods (Ia,Ib).}
\label{tab:werr}
\begin{tabular}{|l||cc||c|c||} \hline\hline
Source & \multicolumn{2}{c}{CDF} & D0 & D0 \\ \hline
& \multicolumn{2}{c}{Ia} & Ia & Ib \\
& $e$ & $\mu$ & $e$ & $e$ \\ \hline\hline
Statistics & 145 & 205 & 140 & 70 \\ \hline
Lepton Scale & 120 & 50 & 160 & 80 \\\hline
Lepton Resolution & 80 & 60 & 85 & 50 \\ \hline
Lepton Efficiency & 25 & 10 & 30 & 20 \\ \hline
$P_T^W$, PDF & 65 & 65 & 65 & 65 \\ \hline
$P_T^{\rm\textstyle Recoil}$ Model & 60 & 60 & 100 & 55 \\ \hline
Underlying Event & & & & \\
in Lepton Towers & 10 & 5 & 35 & 30 \\ \hline
Background & 10 & 25 & 35 & 15 \\ \hline
Trigger Bias & 0 & 25 & - & - \\ \hline
QCD Higher Order Terms & 20 & 20 & - & - \\ \hline
QED Radiative Corrections & 20 & 20 & 20 & 20 \\ \hline
Luminosity Dependence & - & - & - & 70 \\ \hline\hline
Total & \multicolumn{2}{c}{180} & 270 & 170 \\ \hline\hline
\end{tabular}
\end{table}
Currently the structure function uncertainty is estimated by doing the
analysis with several different sets of parton distribution functions,
and comparing the results, using the W asymmetry measurement as
another constraint. Figure \ref{fig:wasym} shows the measured W
asymmetry from CDF and the predictions from various PDFs \cite{wasym}.
Given that most of these PDFs come from the same input data (deep inelastic
structure functions), the spread of the
predictions represents an error in the technique of parametrizing the
distribution which accounts for the W asymmetry, not the error on
the distribution itself. By requiring a PDF to reproduce the measured
W asymmetry, one is choosing a more appropriate parametrization, but one
must go further to assign errors on that specific parametrization.
\begin{figure}[tbph]
\vspace{6cm}
\special{psfile=wasym.eps hscale=50 vscale=50 hoffset=-20
voffset=-80}
\vspace{1cm}
\caption{The W asymmetry as measured in CDF and the prediction of various
different parton distribution functions.}
\label{fig:wasym}
\end{figure}
Figure \ref{fig:wmass_pdferr} shows the resulting
change in $M_W$ for different PDFs, and how many standard deviations
each PDF is from predicting the $W$ asymmetry \cite{wmass}.
\begin{figure}[th]
\vspace{6cm}
\special{psfile=wmass_pdferr.eps hscale=90 vscale=90 hoffset=-10
voffset=-290}
\vspace{1cm}
\caption{The change in W mass versus the signed standard deviation
of agreement with the measured W charge asymmetry for different PDFs.}
\label{fig:wmass_pdferr}
\end{figure}
The problem with estimating this uncertainty by comparing different
PDFs is the following: if all of these PDFs are simply different
parametrizations which come from the same sets of deep inelastic
scattering data, then two different PDFs do not necessarily encompass
the uncertainty on whatever quark distributions are relevant. There
must be errors on the PDFs in order for the correct error on the
W mass uncertainty to be evaluated. Of course, since at the present
time there are no errors given with PDFs, this is not possible.
There was much discussion at Snowmass about the difficulties associated
with assigning errors to PDFs, and one should refer to that section
of this write-up, and a separate submission by Tim Bolton on this
topic. Given that the job of assigning those errors is one that is
far from completion, a temporary solution was suggested at this
meeting. Namely, a PDF-generator could produce a set of PDFs
that span the range of the possible values of the distribution in
question. For example, for the jet $E_T$ analysis, a set of PDFs
with different values of \mbox{$\alpha_s$}\ has been generated. Similarly,
a set of PDFs with the acceptable range of $u-d$, which is important
for the $W$ mass measurement (and also the $W$ asymmetry measurement) could
also be generated. Then the $M_W$ analysis could simply compare the
different PDFs in one set provided for an estimate on the $M_W$ error
from uncertainty in the PDFs, and compare different parametrizations
for the uncertainty on the parametrization.
Furthermore, much care must be taken when
using the measured $W$ asymmetry to constrain the $W$ mass error from
PDFs. Since the $W$ mass is measured using a
distribution dependent mostly on transverse quantities,
and the asymmetries depend on longitudinal differences
between the $u$ and $d$ quark distributions, the correlations (and/or
lack of correlations) must be taken into account appropriately.
In order to use PDFs to their full potential, and also make precision
measurements at hadron collider experiments, collaboration between
PDF-generators and experimenters is essential. The $W$ mass illustrates
where this would be useful probably better than any other precision
electroweak measurement. Given that the future seems to be evolving towards
higher energy hadron colliders, the necessity of errors on parton distribution
functions can only increase, as will the care required in using these
functions correctly.
\section{HEAVY QUARK HADROPRODUCTION}
\def\GeV{{\rm GeV}}
\def\figHQdata{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in \epsfbox{figD0hq.eps}\\
\epsfxsize=3in \epsfbox{figCDFhq.eps}
\end{center}
\caption{
Heavy quark hadroproduction data.
{\it Cf.}, Ref.~\protect\cite{hqdata}.
}
\label{fig:figHQdata}
\end{figure}
}
\def\figcacciari{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfxsize=3in \epsfbox{cacciari.eps}
\end{center}
\caption{
Scale dependence of the heavy quark hadroproduction cross section
as a function of $\mu = \xi \mu_{ref}$ at $y=0$ and $p_t= 80\, {\rm GeV}$.
The NDE curve is the calculation of
Ref.~\protect\cite{nde}.
The {\it fragm., funct.} and {\it born} curves are the calculation of
Ref.~\protect\cite{Greco}. }
\label{fig:figcacciari}
\end{figure}
}
\def\figProd{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\hbox{
\epsfxsize=0.45\textwidth \epsfbox{figProd1.eps}
}
\end{center}
\caption{
a)~Generic leading-order diagram for flavor-excitation (LO-FE), $gQ\to gQ$.
b)~Subtraction diagram for flavor-excitation (SUB-FE),
${}^1f_{g\to Q} \otimes \sigma(gQ\to gQ)$.
c)~Next-to-leading-order diagram for flavor-creation (NLO-FC).
\null\hfill\null}
\label{fig:figProd}
\end{figure}
}
\def\figDecay{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\hbox{
\epsfxsize=0.45\textwidth \epsfbox{figDecay1.eps}
}
\end{center}
\caption{
a)~Generic leading-order diagram for flavor-fragmentation (LO-FF),
$\sigma(gg\to gg) \otimes D_{g\to Q}$.
b)~Subtraction diagram for flavor-fragmentation (SUB-FF),
$\sigma(gg\to gg) \otimes {}^1d_{g\to Q}$.
c)~Next-to-leading-order diagram for flavor-creation (NLO-FC).
\null\hfill\null}
\label{fig:figDecay}
\end{figure}
}
\def\figFeSub{
\begin{figure}[htbp]
\begin{center}
\leavevmode
\hbox{
\epsfxsize=0.20\textwidth \epsfbox{figCS1.eps}
\hfill
\epsfxsize=0.25\textwidth \epsfbox{figCS2.eps}
}
\end{center}
\caption{
The differential cross section $d^2 \sigma/dp_T^2/dy_1$ at
$p_T=20,\, 40\,{\rm GeV}$ and $y_1=0$ in $(pb/GeV^2)$ {\it vs.} $\mu$.
The lower curves (thin line) are the heavy quark
production cross sections {\it ignoring}
flavor-excitation (FE) and flavor-fragmentation (FF).
The upper curves (thick line) are the heavy quark
production cross sections {\it including}
FE and FF. {\it Cf.}, Ref.~\protect\cite{cost}.
\null\hfill\null}
\label{fig:figfesub}
\end{figure}
}
\figHQdata
\figcacciari
\figProd
\figDecay
\figFeSub
Improved experimental measurements of heavy quark hadroproduction have
increased the demand on the theoretical community for more precise
predictions.\cite{hqdata} The first Next-to-Leading-Order (NLO)
calculations of charm and bottom hadroproduction cross sections were
performed some years ago.\cite{nde} As the accuracy of the data
increased, the theoretical predictions displayed some shortcomings:
1) the theoretical cross-sections fell well short of the measured values,
and
2) they displayed a strong dependence on the unphysical renormalization
scale $\mu$.
Both these difficulties indicated that these predictions were missing
important physics.
One possible solution for these deficiencies was to consider
contributions from large logarithms associated with the new quark mass
scale, such as\footnote{Here, $m_Q$ is the heavy quark mass, $s$ is
the energy squared, and $p_T$ is the transverse momentum.}
$\ln(s/m_Q^2)$ and $\ln(p_T^2/m_Q^2)$. Pushing the calculation to one
more order, formidable as it is, would not improve the situation since
these large logarithms persist to every order of perturbation theory.
Therefore, a new approach was required to include these logs.
In 1994, Cacciari and Greco\cite{Greco} observed that since the heavy
quark mass played a limited dynamical role in the high $p_t$ region,
one could instead use the massless NLO jet calculation convolved with
a fragmentation into a massive heavy quark pair to more accurately
compute the production cross section in the region $p_t \gg m_Q$. In
particular, they find that the dependence on the renormalization scale
is significantly reduced (cf., Fig.~\ref{fig:figcacciari}).
A recent study\cite{cost} investigated using initial-state heavy quark
PDFs and final-state fragmentation functions to resum the large
logarithms of the quark mass. The principal ingredient was to include
the leading-order flavor-excitation (LO-FE) graph (Fig.~\ref{fig:figProd})
and the leading-order flavor-fragmentation (LO-FF) graph
(Fig.~\ref{fig:figDecay}) in the traditional NLO heavy quark
calculation.\cite{nde} These contributions cannot be added naively to
the ${\cal O}(\alpha_s^3)$ calculation as they would double-count
contributions already included in the NLO terms; therefore, a
subtraction term must be included to eliminate the region of phase
space where these two contributions overlap. This subtraction term
plays the dual role of eliminating the large unphysical collinear logs
in the high energy region, and minimizing the renormalization scale
dependence in the threshold region. The complete calculation
including the contribution of the heavy quark PDFs and fragmentation
functions 1) increases the theoretical prediction, thus moving it
closer to the experimental data, and 2) reduces the $\mu$-dependence
of the full calculation, thus improving the predictive power of the
theory. (Cf., Fig.~\ref{fig:figfesub}.)
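Schematically, the matched cross section combines the pieces of
Figs.~\ref{fig:figProd} and \ref{fig:figDecay} as (our shorthand for the
construction of Ref.~\cite{cost}):
\begin{equation}
\sigma_{\rm tot} = \sigma^{\rm NLO}_{\rm FC}
+ \left[\sigma^{\rm LO}_{\rm FE}-\sigma^{\rm SUB}_{\rm FE}\right]
+ \left[\sigma^{\rm LO}_{\rm FF}-\sigma^{\rm SUB}_{\rm FF}\right]
\quad ,
\end{equation}
where, {\it e.g.}, $\sigma^{\rm SUB}_{\rm FE} = {}^1f_{g\to Q}\otimes
\sigma(gQ\to gQ)$ removes the overlap with the ${\cal O}(\alpha_s^3)$
flavor-creation terms, so that each large logarithm is counted exactly
once.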
In summary, heavy quark hadroproduction is of interest experimentally
because the wealth of data allows precise tests of many different
aspects of the theory, namely radiative corrections, resummation of
logs, and multi-scale problems. Hence, this is a natural testing
ground for QCD, and will allow us to extend the region of validity for
the heavy quark calculation. This is an essential step toward bringing
theory into agreement with experiment.
\section{SUMMARY}
\subsection{Kinematic Reach of Future Machines} \label{sec:reach}
\def\figmapi{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figmap1.eps}
\end{center}
\caption{Kinematic reach of present and planned facilities.
Note the full $\{x,Q^2\}$ region is clipped by the plot.
}
\label{fig:figmapi}
\end{figure}
}
\def\figmapii{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figmap2.eps}
\end{center}
\caption{Kinematic reach of future facilities.
}
\label{fig:figmapii}
\end{figure}
}
\def\figmapiii{
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=3in
\epsfbox{figmap3.eps}
\end{center}
\caption{Kinematic reach of future facilities.
}
\label{fig:figmapiii}
\end{figure}
}
\def\tablemap{
\begin{table}[htbp]
\begin{center}
\small
\caption{
Future $ep$ collider machines chosen for study.
}
\vskip 10pt
\label{tablemap}
\begin{tabular}{|c|r|r|r|c|}
\hline
Index & E$_{\rm lepton}$ & E$_{\rm proton}$ & $\sqrt{s}$
& Machine(s)\\
& (GeV) & (GeV) & (GeV) & \\
\hline\hline
1 & 27 & 820 & 300 & HERA\\ \hline
2 & 35 & 7,000 & 990 & LEP $\times$ LHC\\ \hline
3 & 8 & 30,000 & 980 & Low-$E$ lepton $\times$ 60 TeV pp\\ \hline
4 & 30 & 30,000 & 1900 & LEP $\times$ 60 TeV pp\\ \hline
5 & 500 & 500 & 1000 & NLC $\times$ conv. p\\ \hline
6 & 2,000 & 500 & 2000 & $\mu$ collider $\times$ conv. p\\
\hline
\end{tabular}
\end{center}
\normalsize
\end{table}
}
\tablemap
\figmapii
\figmapiii
\figmapi
A central goal of this workshop was to study the physics
potential of future facilities. Here, we focus on
lepton-hadron colliders. We expand our study beyond the single $ep$ machine
proposed in the workshop outline, and consider a mix of lepton and
hadron beams from those proposed for the lepton-lepton and hadron-hadron options.
The complete list is given in Table~\ref{tablemap}.
To convert these parameters into the $\{x,Q^2\}$ range, we make use of:
\begin{equation}
y = 1 - \frac{E'_{e}}{2E_{e}}(1 - \cos\theta_{\ell})
\quad ,
\end{equation}
\begin{equation}
Q^2 = 2 E_{e} E'_{e} (1 + \cos\theta_{\ell})
\quad ,
\end{equation}
and
\begin{equation}
x = \frac{Q^2}{s\,y}
\quad .
\end{equation}
For collider kinematics, we use
\begin{equation}
s \sim 4 E_{e} E_{p}
\quad .
\end{equation}
Here,
$E_{e}$ is the incoming lepton energy,
$E'_{e}$ is the outgoing lepton energy,
$E_{p}$ is the incoming hadron energy,
and $\theta_{\ell}$ is the lepton scattering angle.
To set practical limits on measurement of the final state, we impose:
\begin{itemize}
\item $y > 0.01$ \quad (resolution),
\item $y < 1$ \quad (kinematic limit),
\item $\theta_{\ell} > 10^{\circ}$,
\item $\theta_{\ell} < 179^{\circ}$.
\end{itemize}
The constraint $\theta_{\ell} < 179^{\circ}$ may be somewhat optimistic;
if we instead require $\theta_{\ell} \lsim 176^{\circ}$, the result
is to lose some of the low $Q$ region.
The constraint $\theta_{\ell} > 10^{\circ}$ has a relatively small effect;
for the higher energy machines ({\it e.g.}, machines 2 \& 3), it clips the upper $Q$ region.
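A short sketch of how these cuts carve out the accessible $\{x,Q^2\}$
region for a given machine of Table~\ref{tablemap} (hypothetical code,
using only the kinematic relations above):
\begin{verbatim}
import numpy as np

def accessible(x, Q2, E_e, E_p):
    """True if (x, Q2) passes the y and theta cuts for these beams."""
    s = 4.0 * E_e * E_p
    y = Q2 / (s * x)
    if not (0.01 < y < 1.0):
        return False
    # Invert the kinematics:  a = E'(1 + cos th),  b = E'(1 - cos th)
    a = Q2 / (2.0 * E_e)
    b = 2.0 * E_e * (1.0 - y)
    cos_th = (a - b) / (a + b)
    theta = np.degrees(np.arccos(np.clip(cos_th, -1.0, 1.0)))
    return 10.0 < theta < 179.0

# Machine 4 of the table: 30 GeV leptons on 30 TeV protons
print(accessible(x=1e-4, Q2=10.0, E_e=30.0, E_p=30000.0))   # True
\end{verbatim}
Scanning such a test over a grid of $\{x,Q^2\}$ points produces reach
boundaries of the kind drawn in Figs.~\ref{fig:figmapii} and
\ref{fig:figmapiii}.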
We display the kinematic reach for these proposed machines in
Figs.~\ref{fig:figmapii} and \ref{fig:figmapiii}. We include HERA for reference.
In Fig.~\ref{fig:figmapii}, we show the three machine options with a CMS of
$\sqrt{s} \sim 1\, {\rm TeV}$.
In Fig.~\ref{fig:figmapiii}, we show HERA and the remaining two machine options.
In Fig.~\ref{fig:figmapi}, we show the present and planned (LHC) facilities.
Although there is currently no plan to extract the primary beam to
make a neutrino fixed-target experiment at either the LHC or a 2 TeV muon
collider, there is a case to be made for doing precisely that.
First of all, it would be very interesting to see if there were
an anomalous rise in $xF_3$ similar to that seen in $F_2$ at HERA.
Secondly, the low $x$ contribution to the Bjorken integral is anomalously
large, and an outstanding question is the very low $x$ behavior of
the Gross--Llewellyn Smith integral of $xF_3$. Either an experiment at
the LHC or one at a 2 TeV muon collider could extend the
range of the "Fixed Target" region indicated in Figure 23 by an order of
magnitude in the log ($1/x$) direction, assuming an order of magnitude
higher neutrino energies than what CCFR/NuTeV has. The neutrino cross
section would be an order of magnitude higher than the one applicable for
CCFR/NuTeV, so good statistics are in principle attainable. Although
these experiments would not have the kinematic reach to extremely low
$x$ that $ep$ machines have, they can measure to high precision the
non-singlet structure function, which at present has only been measured
down to $x=.01$. In principle an ep machine running with both positive and
negative leptons could do the same, but the luminosity requirements may
be prohibitively high. We have still not learned all that we can learn
from neutrino experiments, and even modest improvements in neutrino
energies can uncover much new ground.
While we would of course like to probe the full $\{x,Q^2\}$ space, there
are some particular reasons why the small $x$ region is of special
interest. For example, the rapid rise of the $F_2$ structure function
observed at HERA suggests that we may reach the parton density
saturation region more quickly than anticipated. Additionally, the
small $x$ region can serve as a useful testing ground for BFKL,
diffractive phenomena, and similar processes. We can clearly see in
Fig.~\ref{fig:figmapii} that with a fixed $\sqrt{s}$, we can best
probe the small $x$ region with a high energy hadron beam colliding
with a low energy lepton beam, and the loss in the high $Q$ region is
minimal. From these (preliminary) studies, it would seem the optimal
$ep$ facility would match the highest energy hadron beam
available with a modest energy lepton beam.
\section{Introduction}
\global\long\def\bs#1{\boldsymbol{#1}}
\global\long\def\av#1{\left\langle #1\right\rangle }
\global\long\def\lv#1{\overleftarrow{#1}}
\global\long\def\rv#1{\overrightarrow{#1}}
\global\long\def\prtl#1#2{\frac{\partial#1}{\partial#2}}
An important problem, for both humans and machines, is to extract
relevant information from complex data. To do so,
one must be able to define which aspects of data are relevant and which
should be discarded. The `information bottleneck' (IB) approach,
developed by Tishby and colleagues [1], provides a principled way to approach
this problem. The idea behind the IB approach is to use additional `variables
of interest' to determine which aspects of a signal are relevant.
For example, for speech signals, variables of interest could be the words being pronounced, or alternatively, the speaker
identity. One then seeks a coding scheme that retains
maximal information about these variables of interest, constrained on the information
encoded about the input.
The IB approach has been used to tackle a wide variety of problems,
including filtering, prediction and learning [2-5]. However, it quickly
becomes intractable with high-dimensional and/or non-gaussian data.
Consequently, previous research has primarily focussed on tractable cases, where the data comprises a countably
small number of discrete states [1-5], or is gaussian [6].
Here, we extend the IB algorithm of Tishby et al.\ [1] using a variational approximation.
The algorithm maximizes a lower bound on the IB objective
function, and is closely related to variational EM. Using this approach, we derive an IB algorithm that can be effectively applied to `sparse' data in which input and relevance variables are generated by sparsely occurring latent features. The
resulting solutions share many properties with previous sparse coding models, used to model early sensory processing [7]. However, unlike these sparse coding models, the learned representation depends on: (i) the relation between the input and the variable of interest; (ii) the trade-off between encoding quality and compression. Finally, we present a kernelized version of the algorithm that can be applied to a wide range of problems with a non-linear relation between the input data and the variables of interest.
\section{Variational IB}
Let us define an input variable $X$, as well as a `relevance variable',
$Y$, with joint distribution $p\left(y,x\right)$. The goal of the
IB approach is to compress the variable $X$ through another variable
$R$, while conserving information about $Y$. Mathematically, we seek
an encoding model, $p\left(r|x\right)$, that maximizes:
\begin{eqnarray}
L_{p\left(r|x\right)} & = & I\left(R; Y\right)-\gamma I\left(R; X\right)\nonumber \\
& \equiv & \av{\log p\left(y|r\right)-\log p\left(y\right)+\gamma\log p\left(r\right)-\gamma\log p\left(r|x\right)}_{p\left(r,x,y\right)},\label{eq:IB_loss_exact}
\end{eqnarray}
where $0<\gamma<1$ is a Lagrange multiplier that determines the strength
of the bottleneck.
Tishby and colleagues showed that the IB loss function can be optimized
by applying iterative updates: $p_{t+1}\left(r|x\right)\propto p_{t}\left(r\right)\exp\left[-\frac{1}{\gamma}\int_{y}p\left(y|x\right)\log\frac{p\left(y|x\right)}{p_{t}\left(y|r\right)}\right]$,
$p_{t+1}\left(r\right)=\int_{x}p\left(x\right)p_{t+1}\left(r|x\right)$
and $p_{t+1}\left(y|r\right)=\int_{x}p\left(y|x\right)p_{t+1}\left(x|r\right)$ [1].
Unfortunately however, when $p\left(x,y\right)$ is high-dimensional
and/or non-gaussian these updates become intractable, and approximations
are required.
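For reference, these self-consistent updates are easy to implement when
$X$, $Y$ and $R$ take a small number of discrete values (a minimal sketch
of the algorithm of [1], with our own variable names; it assumes
$p\left(x,y\right)$ is strictly positive to avoid $\log0$):
\begin{verbatim}
import numpy as np

def ib_discrete(p_xy, n_r, gamma, n_iter=200, seed=0):
    """Discrete IB: p_xy is an (n_x, n_y) joint distribution."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    p_r_given_x = rng.dirichlet(np.ones(n_r), size=n_x)  # random init
    for _ in range(n_iter):
        p_r = p_x @ p_r_given_x
        p_x_given_r = (p_r_given_x * p_x[:, None]) / p_r[None, :]
        p_y_given_r = p_x_given_r.T @ p_y_given_x
        # KL( p(y|x) || p(y|r) ) for every (x, r) pair:
        log_ratio = np.log(p_y_given_x[:, None, :] / p_y_given_r[None, :, :])
        kl = np.einsum('xy,xry->xr', p_y_given_x, log_ratio)
        logits = np.log(p_r)[None, :] - kl / gamma
        p_r_given_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_r_given_x /= p_r_given_x.sum(axis=1, keepdims=True)
    return p_r_given_x
\end{verbatim}
Smaller $\gamma$ penalizes $I\left(R;X\right)$ less, so the learned
encoder becomes correspondingly more deterministic.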
Due to the positivity of the KL divergence, we can write, $\av{\log q\left(\cdot\right)}_{p(\cdot)}\leq\av{\log p\left(\cdot\right)}_{p(\cdot)}$ for any approximative distribution $q(\cdot)$. This allows us to formulate a variational lower bound for the IB objective function:
\begin{eqnarray}
\tilde{L}_{p\left(r|x\right),q\left(y|r\right),q\left(r\right)}&=&\frac{1}{N}\sum_{n=1}^{N}\av{\log q\left(y_{n}|r\right)+\gamma\log q\left(r\right)-\gamma\log p\left(r|x_{n}\right)}_{p\left(r|x_{n}\right)}\\
&\leq& L_{p(r|x)}, \nonumber
\label{eq:IB_loss_apprx}
\end{eqnarray}
where $q\left(y_{n}|r\right)$ and $q\left(r\right)$ are variational distributions, and we have replaced the expectation over $p\left(x,y\right)$ with the empirical expectation over training data. (Note that, for notational simplicity we have also omitted the constant term, $H_{Y}=-\av{\log p\left(y\right)}_{p\left(y\right)}$.)
Setting $q\left(y_{n}|r\right)\leftarrow p\left(y_{n}|r\right)$ and
$q\left(r\right)\leftarrow p\left(r\right)$ fully tightens the bound
(so that $\tilde{L}=L$), and leads to the iterative algorithm of Tishby
et al. However, when these exact updates are not possible, one can instead
choose a restricted class of distributions $q\left(y|r\right)\in Q_{y|r}$
and $q\left(r\right)\in Q_{r}$ for which inference is tractable.
Thus, to maximize $\tilde{L}$ with respect to parameters $\Theta$
of the encoding distribution $p\left(r|x,\Theta\right)$, we repeat the following steps until
convergence:
\begin{itemize}
\item For fixed $\Theta$, find $\left\{ q^{new}\left(y|r\right),q^{new}\left(r\right)\right\} =\arg\max_{\left\{ q\left(y|r\right),q\left(r\right)\right\} \in\left\{ Q_{y|r},Q_{r}\right\} }\tilde{L}$
\item For fixed $q\left(y|r\right)$ and $q\left(r\right)$, find $\Theta=\arg\max_{\Theta}\tilde{L}.$
\end{itemize}
We note that using a simple approximation for the decoding distribution, $q(y|r)$, can carry additional benefits, besides rendering the IB algorithm tractable. Specifically, while an advantage of mutual information is its generality, in certain cases this can also be a drawback. That is, because Shannon information does not make any assumptions about the code, it is not always apparent how information should be best extracted from the responses: just because information is `there' does not mean we know how to get at it.
In contrast, using a simple approximation for the decoding distribution, $q(y|r)$ (e.g.\ linear gaussian), constrains the IB algorithm to find solutions where information about $Y$ can be easily extracted from the responses (e.g.\ via linear regression).
\section{Sparse IB}
In previous work on gaussian IB [6], responses were equal to a linear projection of the input, plus noise: $r=Wx+\eta$, where $W$ is an $N_r \times N_x$ matrix of encoding weights, and
$\eta\sim\mathcal{N}\left(\eta|0,\Sigma\right)$, where $\Sigma$ is an $N_r \times N_r$ covariance matrix. When the
joint distribution, $p\left(x,y\right)$, is gaussian, it follows that the marginal
and decoding distributions, $p\left(r\right)$ and $p\left(y|r\right)$,
are also gaussian, and the parameters of the encoding distribution, $W$ and
$\Sigma$, can be found analytically.
To illustrate the capabilities of the variational algorithm, while
permitting comparison to gaussian IB, we begin by adding a single degree of
complexity. In common with gaussian IB, we consider a linear gaussian encoder, $p\left(r|x\right)=\mathcal{N}\left(r|Wx,\Sigma\right)$, and decoder, $q\left(y|r\right)=\mathcal{N}\left(y|Ur,\Lambda\right)$.
However, unlike gaussian IB, we use a student-t distribution to approximate
the response marginal: $q\left(r\right)=\prod_{i}\mathrm{Student}\left(r_{i}|0,\omega_{i}^{2},\nu_{i}\right)$,
with scale and shape parameters, $\omega_{i}^{2}$ and $\nu_{i}$,
respectively. When the shape parameter, $\nu_{i}$, is small then the student-t distribution is heavy-tailed, or `sparse', compared to a gaussian distribution. Thus, we call the resulting algorithm `sparse IB'. Unlike gaussian IB, the introduction of a student-t marginal means the IB algorithm cannot be solved analytically, and one requires approximations.
\subsection{Iterative algorithm}
Recall that the IB objective function consists of two terms: $I\left(R;Y\right)$, and $I\left(R;X\right)$. We begin by describing how to optimize the lower and upper bound of each of these two terms with respect to the variational distributions $q(y|r)$ and $q(r)$, respectively.
The first term of the IB objective function is bounded from below by:
\begin{equation}
I\left(R;Y\right) \geq -\frac{1}{2}\log|\Lambda|-\frac{1}{2N}\sum_{n}\av{ \left(y_{n}-Ur\right)^{T}\Lambda^{-1}\left(y_{n}-Ur\right)}_{p(r|x_n)}+\mathrm{const.}
\end{equation}
Maximizing the lower bound on $I\left(R;Y\right)$ with respect to the decoding parameters, $U$ and $\Lambda$, gives:
\begin{equation}
\Lambda=C_{yy}-UWC_{xy},\hspace{1em}U=C_{xy}^{T}W^{T}\left(WC_{xx}W^{T}+\Sigma\right)^{-1}
\end{equation}
where $C_{yy}=\frac{1}{N}\sum_{n}y_{n}y_{n}^{T}$, $C_{xy}=\frac{1}{N}\sum_{n}x_{n}y_{n}^{T}$,
and $C_{xx}=\frac{1}{N}\sum_{n}x_{n}x_{n}^{T}$.
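In code, these decoder updates are two lines of linear algebra (a sketch;
$W$ and $\Sigma$ are the current encoder parameters):
\begin{verbatim}
import numpy as np

def decoder_update(W, Sigma, Cxx, Cxy, Cyy):
    """Closed-form update of the decoder q(y|r) = N(y | U r, Lambda)."""
    U = Cxy.T @ W.T @ np.linalg.inv(W @ Cxx @ W.T + Sigma)
    Lam = Cyy - U @ W @ Cxy
    return U, Lam
\end{verbatim}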
Unfortunately, it is not straightforward to express the bound on
$I\left(R;X\right)$ in closed form. Instead, we use an additional variational approximation, utilising the fact that
the student-t distribution can be expressed as an infinite mixture of
gaussians: $\mathrm{Student}\left(r|0,\omega^{2},\nu\right)=\int_{\eta}\mathcal{N}\left(r|0,\omega^{2}/\eta\right)\mathrm{Gamma}\left(\eta|\frac{\nu}{2},\frac{\nu}{2}\right){\rm d}\eta$ [8].
Following a standard EM procedure [9], one can thus write a tractable lower bound
on the log-likelihood, $l\equiv\log\left[\mathrm{Student}\left(r|0,\omega^{2},\nu\right)\right]$, which corresponds to an upper-bound on the bottleneck term:
\begin{eqnarray}
I\left(R;X\right) & \leq &\frac{1}{N}\sum_{i,n}\av{ - \log q\left(r_{i}\right) + \log p\left(r_{i}|x_{n}\right) }_{p\left(r_{i}|x_{n}\right)}
\\ & \leq & \sum_{i}\Bigg[\frac{1}{2}\log\omega_{i}^{2} +\frac{1}{2N\omega_{i}^{2}}\sum_{n=1}^{N}\xi_{ni}\av{r_{ni}^2} +f\left(\nu_{i}, \xi_{i},a_i\right)\Bigg] -\frac{1}{2}\log|\Sigma|+\mathrm{const}.\nonumber
\end{eqnarray}
where $\xi_{ni}$, and $a_{i}$ denote variational parameters
for the $i^{th}$ unit and $n^{th}$ data instance. We used the shorthand notation, $\av{r_{ni}^2} = w_ix_nx_n^Tw_i^T+\sigma_i^2$, where $\sigma_{i}^{2}$ is the $i^{th}$ diagonal element of $\Sigma$ and $w_i$ is the $i^{th}$ row of $W$. For notational simplicity, terms that do
not depend on the encoding parameters were pushed into the
function, $f\left(\nu_{i}, \xi_{i},a_i\right)$\footnote{
$f\left(\nu_{i},\xi_{i},a_{i}\right)=\log\Gamma\left(\frac{\nu_{i}}{2}\right)-\frac{\nu_{i}}{2}\log\frac{\nu_{i}}{2}-\frac{1}{N}\sum_{n}\left[\frac{\nu_{i}-1}{2}\left(\psi(a_i)-\ln \frac{a_i}{\xi_{ni}}\right)-\frac{\nu_i}{2}\xi_{ni} + H_{ni} \right]$, where
$H_{ni}$ is the entropy of a gamma distribution with shape and rate parameters: $a_i$, and $a_{i}/\xi_{ni}$, respectively [9].}.
Minimizing the upper bound on $I\left(R;X\right)$
with respect to $\omega_{i}^{2}$, $\xi_{ni}$ and $a_{i}$ (for fixed $\nu_i$) gives:
\begin{equation}
\omega_{i}^{2} = \frac{1}{N}\sum_{n=1}^{N}\xi_{ni}\av{r_{ni}^2}, \hspace{0.15cm} \xi_{ni} = \frac{\nu_{i}+1}{\nu_{i}+\av{r_{ni}^{2}}/\omega_{i}^{2}}, \hspace{0.15cm} a_{i} = \frac{1}{2}(\nu_i+1).
\end{equation}
The shape parameter, $\nu_{i}$, is then found numerically on each iteration (for fixed $\xi_{ni}$ and $a_i$), by solving:
\begin{equation}
\psi\left(\frac{\nu_{i}}{2}\right)-\log\left(\frac{\nu_{i}}{2}\right)=1+\frac{1}{N}\sum_{n=1}^{N}\left[\psi(a_i)-\log\frac{a_i}{\xi_{ni}}-\xi_{ni}\right],
\end{equation}
where $\psi(\cdot)$ is the digamma function [9].
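A minimal Python sketch of these marginal updates is given below. It assumes the second moments $\av{r_{ni}^2}$ are collected in an array \texttt{R2}, alternates the coupled $\omega_i^2$ and $\xi_{ni}$ updates a few times, and solves for each $\nu_i$ with a bracketed root-finder (the bracket is an assumption and may need widening in practice):
\begin{verbatim}
# Sketch of the marginal updates: omega^2, xi, a, and the numerical
# solve for nu.  R2[n, i] holds <r_ni^2>; nu is the current (Nr,) vector.
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def update_marginal(R2, nu, n_inner=5):
    omega2 = R2.mean(axis=0)                    # init with xi = 1
    for _ in range(n_inner):                    # coupled omega^2 <-> xi
        xi = (nu + 1.0) / (nu + R2 / omega2)
        omega2 = (xi * R2).mean(axis=0)
    a = 0.5 * (nu + 1.0)
    # Solve psi(nu/2) - log(nu/2) = rhs for each unit; the bracket
    # below is an assumption and may need widening in practice.
    rhs = 1.0 + (digamma(a) - np.log(a) + np.log(xi) - xi).mean(axis=0)
    f = lambda v, t: digamma(0.5 * v) - np.log(0.5 * v) - t
    nu_new = np.array([brentq(f, 1e-3, 1e3, args=(t,)) for t in rhs])
    return omega2, xi, a, nu_new
\end{verbatim}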
Next we maximize the full variational objective function $\tilde{L}$ with respect to the encoding distribution, $p\left(r|x\right)$ (for fixed $q(y|r)$ and $q(r)$). Maximizing $\tilde{L}$ with respect to the encoding noise covariance, $\Sigma$, gives:
\begin{equation}
\Sigma^{-1}=\frac{1}{\gamma}U^{T}\Lambda^{-1}U+\frac{1}{N}\Omega^{-1}\sum_{n=1}^N\Xi_n,
\end{equation}
where $\Omega$ and $\Xi_{n}$ are $N_{r}\times N_{r}$ diagonal covariance matrices with diagonal elements $\Omega_{ii}=\omega_{i}^{2}$, and $\left(\Xi_{n}\right)_{ii}=\xi_{ni}$, respectively.
Finally, taking the derivative of $\tilde{L}$ with respect to the encoding weights, $W$, gives:
\begin{equation}
\frac{\partial \tilde{L}}{\partial W}=U^{T}\Lambda^{-1}C_{xy}^{T}-U^{T}\Lambda^{-1}UWC_{xx}-\gamma\Omega^{-1}\frac{1}{N}\sum_{n}\Xi_{n}Wx_{n}x_{n}^{T}.
\end{equation}
Setting the derivative to zero, we can solve for $W$ directly. One may verify that, when variational parameters, $\xi_{ni}$, are unity, the above iterative updates are identical to the iterative gaussian IB algorithm described in [6].
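The direct solve can be made explicit by vectorizing the stationarity condition. The following sketch (our own transcription, using a row-major vectorization and illustrative shapes) builds the resulting linear system for $W$ and also applies the update for $\Sigma$ given above:
\begin{verbatim}
# Sketch: direct solve for the encoding weights W from the stationarity
# condition above, via the row-major identity vec(A W C) = (A kron C) vec(W)
# for symmetric C.  Also applies the update for the noise covariance Sigma.
# Shapes and variable names are our own conventions.
import numpy as np
from scipy.linalg import block_diag

def update_encoder(X, Y, U, Lam, omega2, Xi, gamma):
    """X: (N, Nx), Y: (N, Ny) data; Xi: (N, Nr) variational weights."""
    N, Nx = X.shape
    Nr = U.shape[1]
    Cxx = X.T @ X / N
    Cxy = X.T @ Y / N
    A = U.T @ np.linalg.solve(Lam, U)              # U^T Lam^{-1} U
    B = U.T @ np.linalg.solve(Lam, Cxy.T)          # U^T Lam^{-1} Cxy^T, (Nr, Nx)
    # Row i of the bottleneck term couples w_i to the xi-weighted covariance
    # C_i = (1/N) sum_n xi_ni x_n x_n^T, scaled by gamma / omega_i^2.
    blocks = [gamma / omega2[i] * (X.T * Xi[:, i]) @ X / N for i in range(Nr)]
    M = np.kron(A, Cxx) + block_diag(*blocks)      # (Nr*Nx, Nr*Nx) system
    W = np.linalg.solve(M, B.reshape(-1)).reshape(Nr, Nx)
    # Sigma^{-1} = (1/gamma) U^T Lam^{-1} U + Omega^{-1} (1/N) sum_n Xi_n
    Sigma = np.linalg.inv(A / gamma + np.diag(Xi.mean(axis=0) / omega2))
    return W, Sigma
\end{verbatim}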
\subsection{Simulations}
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{fig1}}
\caption{Behaviour of sparse IB and gaussian IB algorithms, on denoising task. (A) Artificial image patches were constructed from combinations of orientated edge-like features. Patches were corrupted with white noise to generate the input, $X$. The goal of the IB algorithm is to learn a linear code that maximized information about the original patches, $Y$, constrained on information encoded about the input, $X$. (B) A selection of linear encoding (left), and decoding (right) filters obtained with the gaussian IB algorithm. (C) Same as B, but for the sparse IB algorithm. (D) Response histograms for the 10 units with highest variance, for the gaussian (red) and sparse (blue) IB algorithms. (E) Information curves for the gaussian (red) and sparse (blue) algorithms, alongside a `null' model, where responses were equal to the original input, plus white noise. (F) Fraction of response variance attributed to signal fluctuations, for each unit. Solid and dashed curves correspond to strong and weak bottlenecks, respectively (corresponding to the vertical dashed lines in panel E).}
\end{figure}
In our framework, the approximation of the response marginal, $q\left(r\right)$, plays an analogous role to the prior distribution in a probabilistic generative model. Thus, we hypothesized that a sparse approximation for the response marginal, $q(r)$, would permit the IB algorithm to recover sparsely occurring input features, analogous to the effect of using a sparse prior.
To show this, we constructed artificial $9\times9$ image patches from combinations of orientated bar features.
Each bar had a gaussian cross-section of width 1.2 pixels, with maximum amplitude drawn from a standard normal distribution. Patches were constructed by linearly combining 3 bars, with uniformly random orientation and position.
Initially, we considered a simple de-noising task,
where the input, $X$, was a noisy version of the original image
patches (gaussian noise, with variance $\sigma^{2}=0.005$; figure 1A). Training data consisted of 10,000 patches. Figure 1B and 1C show a selection of encoding ($W$) and decoding ($U$) filters obtained with the gaussian
and sparse IB models, respectively. As predicted, only the sparse IB model was able to recover the original bar features. In addition, response histograms were considerably more heavy-tailed for the sparse IB model (fig.~1D).
The relevant information, $I(R; Y)$, encoded by the sparse model was greater than for the gaussian model, over a range of bottleneck strengths (fig.~1E). While the difference may appear small, it is consistent with work showing that sparse coding models achieve only a small improvement in log-likelihood for natural image patches [10]. We also plotted
the information curve for a `null model', with responses sampled from $p(r|x)=\mathcal{N}(r|x,\sigma^2 I)$.
Interestingly, the performance of this null model was almost identical
to the gaussian IB model.
Figure 1F plots the fraction of response variance due to the signal,
for each unit ($\frac{w_{i}C_{xx}w_{i}^{T}}{w_{i}C_{xx}w_{i}^{T}+\sigma_{i}^{2}}$).
Solid and dashed curves denote strong and weak bottlenecks,
respectively. In both cases, the gaussian model gave a smooth spectrum of response magnitudes, while the sparse model was more `all-or-nothing'.
One way the sparse IB algorithm differs qualitatively from
traditional sparse coding algorithms, is that the learned representation depends on the relation between $X$
and $Y$, rather than just the input statistics. To illustrate this, we conducted simulations with patches corrupted by spatially correlated noise, aligned along the vertical direction (fig.~2A). The spatial covariance of the noise
was described by a gaussian envelope, with standard deviation $3$ pixels in the vertical direction and $1$
pixel in the horizontal direction.
\begin{figure}
\centerline{\includegraphics[width=1\linewidth]{fig2}}
\caption{Variant of the task in figure 1, in which the input noise is spatially correlated. (A) Example input $X$ and patch, $Y$. Spatial noise correlations were aligned along the vertical direction. (B) Subset of decoding filters obtained with the sparse IB algorithm. (C) Distribution of encoded orientations. (D) Example stimulus (left) and reconstruction (right) of bars presented at variable orientations (presented with zero input noise, so that $X\equiv Y$ for this example).}
\end{figure}
Figure 2B shows a selection of decoding filters obtained from the
sparse IB model, with correlated input noise. The shape of individual filters was qualitatively
similar to those obtained with uncorrelated noise (fig.\ 1C). However,
with this stimulus, the IB model avoided `wasting' bits by representing features co-orientated with the noise (fig.~2C). Consequently, it was not possible to reconstruct vertical bars from the responses, when bars were presented alone, even with zero noise (fig.~2D).
\section{Kernel IB}
\begin{figure}
\centerline{\includegraphics[width=0.95\linewidth]{fig3}}
\caption{Behaviour of kernel IB algorithm on occlusion task. (A) Image patches were the same as for figure 1. However, the input, $X$, was restricted to 2 columns to either side of the patch. The target variable, $Y$, was the central region. (B) Subset of decoding filters, $U$, for the sparse kernel IB (`sparse kIB') algorithm. (C) As for B, for other versions of the IB algorithm. (D) Information curves for the gaussian kIB (blue), sparse kIB (green) and sparse IB algorithms (red). The bottleneck strength for the other panels in this figure is indicated by a vertical dashed line. (E) Response histogram for the 10 units with highest variance, for the gaussian and sparse kIB models. (F) (above) Three test stimuli, used to demonstrate the non-linear properties of the sparse kIB code. (below) Reconstruction obtained from responses to test stimulus. (G) Responses of two units which showed strong responses to stimulus 3. The decoding filters for these units are shown above the bar plots.}
\end{figure}
One way to improve the IB algorithm is to consider non-linear encoders. A general choice is: $p\left(r|x\right)=\mathcal{N}(r|W\phi(x),\Sigma)$, where $\phi(x)$ is an embedding to a high-dimensional non-linear feature space.
The variational objective functions for both gaussian and sparse IB algorithms are quadratic in the responses, and thus can be expressed in terms of dot products of the row vector, $\phi(x)$. Consequently, every solution for $w_i$ can be expressed as an expansion of mapped training data, $w_i = \sum_{n=1}^N a_{in} \phi(x_n)$ [11]. It follows that the variational IB algorithm can be expressed in `dual space', with responses to the $n^{th}$ input drawn from $r\sim \mathcal{N}(r|Ak_n, \Sigma)$, where $A$ is an $N_r \times N$ matrix of expansion coefficients, and $k_n$ is the $n^{th}$ column of the $N \times N$ kernel-gram matrix, $K$, with elements $K_{nm} = \phi(x_n)\phi(x_m)^T$. In this formulation, the problem of finding the linear encoding weights, $W$, is replaced by finding the expansion coefficients, $A$.
The advantage of expressing the algorithm in the dual space is that we never have to deal with $\phi(x)$ directly, so are free to consider high- (or even infinite) dimensional feature spaces. However, without additional constraints on the expansion coefficients, $A$, the IB algorithm becomes degenerate (i.e.~the solutions are independent of the input, $X$). A standard way to deal with this is to add an L2 regularisation term that favours solutions with small expansion coefficients. Here, this is achieved by replacing $\phi_n^T\phi_n$ with $\phi_{n}^{T}\phi_{n}+\lambda I$, where $\lambda$ is a fixed regularisation parameter. Doing so, the derivative of $\tilde{L}$ with respect to $A$ becomes:
\begin{equation}
\frac{\partial \tilde{L}}{\partial A}=U^{T}\Lambda^{-1}YK-\sum_{n}\left(U^{T}\Lambda^{-1}U+\gamma\Omega^{-1}\Xi_{n}\right)A\left(k_{n}k_{n}^{T}+\lambda K\right)
\label{eq:aeq}
\end{equation}
Setting the derivative to zero and solving for $A$ directly requires inverting an $N N_r\times N N_r$ matrix, which is expensive. Instead, one can use an iterative solver (we used the conjugate gradients squared method). In addition, the computational complexity can be reduced by restricting the solution to lie on a subspace of training instances, such that, $w_i = \sum_{n=1}^M a_{in} \phi(x_n)$, where $M<N$. The derivation does not change, only now $K$ has dimensions $M\times N$ [11].
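As an illustration, the sketch below expresses the left-hand side of eq.~(\ref{eq:aeq}), evaluated at the stationary point, as a matrix-free linear operator and calls a conjugate gradients squared solver; the array layouts and variable names are our own choices:
\begin{verbatim}
# Sketch of the iterative solve for the expansion coefficients A,
# using a matrix-free linear operator and the conjugate gradients
# squared solver mentioned in the text.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cgs

def solve_expansion(K, Y, U, Lam, omega2, Xi, gamma, lam):
    """K: (N, N) gram matrix; Y: (Ny, N) targets; Xi: (N, Nr) weights."""
    N = K.shape[0]
    Nr = U.shape[1]
    B = U.T @ np.linalg.solve(Lam, U)                 # U^T Lam^{-1} U
    D = gamma * Xi.T / omega2[:, None]                # gamma Omega^{-1} Xi_n
    Dsum = np.diag(D.sum(axis=1))                     # summed over n

    def matvec(a_vec):
        A = a_vec.reshape(Nr, N)
        V = A @ K                                     # v_n = A k_n as columns
        W1 = B @ V + D * V                            # M_n v_n per column
        # sum_n M_n A (k_n k_n^T + lam K), with M_n = B + gamma Om^{-1} Xi_n
        out = W1 @ K + lam * (N * B + Dsum) @ A @ K
        return out.reshape(-1)

    rhs = (U.T @ np.linalg.solve(Lam, Y) @ K).reshape(-1)  # U^T Lam^{-1} Y K
    a, info = cgs(LinearOperator((Nr * N, Nr * N), matvec=matvec), rhs)
    return a.reshape(Nr, N)
\end{verbatim}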
When $q(r)$ is gaussian (equivalent to setting $\Xi_n=I$), solving for $A$ gives:
\begin{equation}
A = \left(U^{T}\Lambda^{-1}U+\gamma\Omega^{-1}\right)^{-1}U^{T}\Lambda^{-1} A_{KRR}
\label{a_gauss}
\end{equation}
where $A_{KRR}= Y(K+\lambda I)^{-1}$ are the coefficients obtained from kernel ridge-regression (KRR). This suggests the following two stage algorithm: first, we learn the regularisation constant, $\lambda$, and parameters of the kernel matrix, $K$, to maximize KRR performance on hold-out data; next, we perform variational IB, with fixed $K$ and $\lambda$.
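A sketch of this two-stage procedure, for the gaussian case of eq.~(\ref{a_gauss}), is given below; a gaussian kernel is assumed, and the tuning of $\kappa$ and $\lambda$ on hold-out data is taken as already done:
\begin{verbatim}
# Sketch of the two-stage procedure: gaussian-kernel KRR fixes K and
# lambda, then the gaussian-case coefficients follow from the closed form.
# kappa and lam are assumed already tuned on hold-out data.
import numpy as np

def gram(X1, X2, kappa):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * kappa ** 2))

def gaussian_kernel_ib(X, Y, kappa, lam, U, Lam, Omega_inv, gamma):
    """X: (N, Nx), Y: (N, Ny) data; returns gram matrix and coefficients A."""
    K = gram(X, X, kappa)
    # A_KRR = Y (K + lam I)^{-1}, with Y arranged as (Ny, N)
    A_krr = np.linalg.solve(K + lam * np.eye(len(K)), Y).T
    UtLi = np.linalg.solve(Lam, U).T                  # U^T Lam^{-1}
    A = np.linalg.solve(UtLi @ U + gamma * Omega_inv, UtLi @ A_krr)
    return K, A
\end{verbatim}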
\begin{figure}
\centerline{\includegraphics[width=0.8\linewidth]{fig4}}
\caption{Behaviour of kernel IB algorithm on handwritten digit data. (A) As with figure 3, we considered an occlusion task. This time, units were provided with the left hand side of the image patch, and had to reconstruct the right hand side. (B) Response distribution for 10 neurons with highest variance, for the gaussian (blue) and sparse (green) kIB algorithms. (C) Decoding filters for a subset of units, obtained with the sparse kIB algorithm. Note that, for clearer visualization, we show here the decoding filter for the entire image patch, not just the occluded region. (D) A selection of decoding filters obtained with the alternative IB algorithms.}
\end{figure}
\subsection{Simulations}
To illustrate the capabilities of the kernel IB algorithm, we considered an `occlusion' task, with the outer columns of each patch presented as input, $X$ (2 columns to the far left and right), and the inner columns as the relevance variable $Y$, to be reconstructed. Image patches were as before. Note that performing the occlusion task optimally requires detecting combinations of features presented to either side of the occluded region, and is thus inherently nonlinear.
We used gaussian kernels, with scale parameter, $\kappa$, and regularisation constant, $\lambda$, chosen to maximize KRR performance on test data. Both test and training data consisted of 10,000 images. However, $A$ was restricted to lie on a subset of 1000 randomly chosen training patches (see earlier).
Figure 3B shows a selection of decoding filters ($U$) learned by the sparse kernel IB algorithm (`sparse kIB'). A large fraction of filters resembled near-horizontal bars, traversing the occluded region. This was not the case for the sparse linear IB algorithm, which recovered localized blobs either side of the occluded region, nor the gaussian linear or kernelized models, which recovered non-local features (fig.\ 3C). Figure 3D shows a small but significant improvement in performance for the sparse kIB versus the gaussian kIB model. Most noticeable, however, is the distribution of responses, which are much more heavy tailed for the sparse kIB algorithm (fig.\ 3E).
To demonstrate the non-linear behaviour of the sparse kIB model, we presented bar segments: first to either side of the occluded patch, then to both sides simultaneously. When bar segments were presented to both sides simultaneously, the sparse kIB model `filled in' the missing bar segment, in contrast to the reconstruction obtained with single bar segments (fig.\ 3F). This behaviour was reflected in the non-linear responses of certain encoding units, which were large when two segments were presented together, but near zero when one segment was presented alone (fig.\ 3G).
Finally, we repeated the occlusion task with handwritten digits, taken from the USPS dataset (\url{www.gaussianprocess.org/gpml/data}). We used 4649 training and 4649 test patches, of 16$\times$16 pixels. However, expansion coefficients were restricted to lie on a subset of 500 randomly chosen patches. We set $X$ and $Y$ to be the left and right sides of each patch, respectively (fig.~4A).
In common with the artificial data, the response distributions achieved with the sparse kIB algorithm were more heavy-tailed than for the gaussian kIB algorithm (fig.~4B). Likewise, recovered decoding filters closely resembled handwritten digits, and extended far into the occluded region (fig.~4C). This was not the case for the alternative IB algorithms (fig.~4D).
\section{Discussion}
Previous work has shown close parallels between the IB framework and maximum-likelihood estimation in a latent variable model [12, 13]. For the sparse IB algorithm presented here, maximizing the IB objective function is closely related to maximizing the likelihood of a `sparse coding' latent variable model, with student-t prior and linear gaussian likelihood function. However, unlike traditional sparse coding models, the encoding (or `recognition') model $p(r|x)$ is conditioned on a separate set of inputs, $X$, distinct from the image patches themselves. Thus, the solutions depend on the relation between $X$ and $Y$, not just the image statistics (e.g.\ see fig.~2). Second, an additional parameter, $\gamma$, not present in sparse coding models, controls the trade-off between encoding and compression. Finally, in contrast to traditional sparse coding algorithms, IB gives an unambiguous ordering of features, which can be arranged according to the response variance of each unit (fig.~1F).
Our work is also closely related to the IM algorithm, proposed by Barber et al.~to solve the information maximization (`infomax') problem [14]. However, a general issue with infomax problems is that they are usually ill-posed, necessitating additional \emph{ad hoc} constraints on the encoding weights or responses [15]. In contrast, in the IB approach, such constraints emerge automatically from the bottleneck term.
A related method to find low-dimensional projections of $X$/$Y$ pairs is canonical correlation analysis (`CCA'), and its kernel analogue [16]. In fact, the features obtained with gaussian IB are identical to those obtained with CCA [6]. However, unlike CCA, the number and `scale' of the features are not specified in advance, but determined by the bottleneck parameter, $\gamma$. Secondly, kernel CCA is symmetric in $X$ and $Y$, and thus performs nonlinear embedding of both $X$ \emph{and} $Y$. In contrast, the IB problem is asymmetric: we are interested in recovering $Y$ from an input $X$. Thus, only $X$ is kernelized, while the decoder remains linear. Finally, the features obtained from gaussian IB (and thus, CCA) differ qualitatively from the sparse IB algorithm, which recovers sparse features that account jointly for $X$ and $Y$.
Sparse IB can be extended to the nonlinear regime using a kernel expansion. For the gaussian model, the expansion coefficients, $A$, are a linear projection of the coefficients used for kernel-ridge-regression (`KRR'). A general disadvantage of KRR, is that it can be difficult to know which aspects of $X$ are relied on to perform the regression. In contrast, the kernel IB framework provides an intermediate representation, allowing one to visualize the features that jointly account for both $X$ and $Y$ (figs.~3B \& 4C). Furthermore, this learned representation permits generalisation across different tasks that rely on the same set of latent features; something not possible with KRR.
Finally, the IB approach has important implications for models of early sensory processing [17, 18]. Notably, `efficient coding' models typically consider the low-noise limit, where the goal is to reduce the neural response redundancy [7]. In contrast, the IB approach provides a natural way to explore the family of solutions that emerge as one varies internal coding constraints (by varying $\gamma$) and external constraints (by varying the input, $X$) [19, 20]. Further, our simulations suggest how the framework can be used to go beyond early sensory processing: for example to explain higher-level cognitive phenomena such as perceptual filling in (fig.~3G). In future, it would be interesting to explore how the IB framework can be used to extend the efficient coding theory, by accounting for modulations in sensory processing that occur due to changing task demands (i.e.~via changes to the relevance variable, $Y$), rather than just the input statistics ($X$).
\section*{References}
\medskip
\small
[1] Tishby, N.\, Pereira, F C.\ \& Bialek, W.\ (1999) The information bottleneck method. {\it The 37th annual Allerton Conference on Communication, Control and Computing.}\ pp.\ 368--377
[2] Bialek, W.\, Nemenman, I.\ \& Tishby, N.\ (2001) Predictability, complexity, and learning. {\it Neural computation}, 13(11), pp.\ 2409--2463
[3] Slonim, N.\ (2003) {\it Information bottleneck theory and applications}. PhD thesis, Hebrew University of Jerusalem
[4] Chechik, G.\ \& Tishby, N.\ (2002) Extracting relevant structures with side information. In {\it Advances in Neural Information Processing Systems 15}
[5] Hofmann, T.\ \& Gondek, D. (2003) Conditional information bottleneck clustering. In {\it 3rd IEEE International conference in data mining, workshop on clustering large data sets}
[6] Chechik, G.\, Globerson, A., Tishby, N.\ \& Weiss, Y.\ (2005) Information bottleneck for gaussian variables. {\it Journal of Machine Learning Research}, (6) pp.\ 165--188
[7] Simoncelli, E.\ P.\ \& Olshausen, B.\ A.\ (2001) Natural image statistics and neural representation. {\it Ann.\ Rev.\ Neurosci.} 24:1193--1216
[8] Andrews, D.\ F.\ \& Mallows C.\ L.\ (1974). Scale mixtures of normal distributions. {\it J.\ of the Royal Stat.\ Society. Series B} 36(1) pp.\ 99--102
[9] Scheffler, C.\ (2008). A derivation of the EM updates for finding the maximum likelihood parameter estimates of the student-t distribution. Technical note. URL \url{www.inference.phy.cam.ac.uk/cs482/publications/scheffler2008derivation.pdf}
[10] Eichhorn, J.\, Sinz, F.\, \& Bethge, M.\ (2009). Natural image coding in V1: how much use is orientation selectivity?. {\it PLoS Comput Biol}, 5(4), e1000336.
[11] Mika, S.\, Ratsch, G.\, Weston, J.\, Scholkopf, B.\, Smola, A.\ J.\, \& Muller, K.\ R.\ (1999). Invariant Feature Extraction and Classification in Kernel Spaces. In {\it Advances in neural information processing systems 12} pp. 526--532.
[12] Slonim, N., \& Weiss, Y.\ (2002). Maximum likelihood and the information bottleneck. In {\it Advances in neural information processing systems} pp.\ 335--342
[13] Elidan, G., \& Friedman, N.\ (2002). The information bottleneck EM algorithm. In {\it Proceedings of the Nineteenth conference on Uncertainty in Artificial Intelligence} pp.\ 200--208
[14] Barber, D.\ \& Agakov, F.\ (2004) The IM algorithm: a variational approach to information maximization. In {\it Advances in Neural Information Processing Systems 16} pp.\ 201--208
[15] Doi, E., Gauthier, J.\ L.\, Field, G.\ D.\, Shlens, J.\, Sher, A.\, Greschner, M. (2012). Efficient Coding of Spatial Information in the Primate Retina. {\it The Journal of neuroscience} 32(46), pp. 16256--16264
[16] Hardoon, D.\ R., Szedmak, S.\, \& Shawe-Taylor, J.\ (2004). Canonical correlation analysis: An overview with application to learning methods. {\it Neural computation}, 16(12), 2639--2664.
[17] Bialek, W., de Ruyter Van Steveninck, R. R., \& Tishby, N. (2008). Efficient representation as a design principle for neural coding and computation. In {\it Information Theory, 2006 IEEE International Symposium} pp.\ 659--663
[18] Palmer, S. E., Marre, O., Berry, M.\ J., \& Bialek, W.\ (2015). Predictive information in a sensory population. {\it Proceedings of the National Academy of Sciences} 112(22) pp.\ 6908--6913.
[19] Doi, Eizaburo.\ \& Lewicki, M.\ S.\ (2005). Sparse coding of natural images using an overcomplete set of limited capacity units. In {\it Advances in Neural Information Processing Systems 17} pp.\ 377--384
[20] Tkacik, G.\, Prentice, J.\ S.\, Balasubramanian, V.\, \& Schneidman, E.\ (2010). Optimal population coding by noisy spiking neurons. {\it Proceedings of the National Academy of Sciences} 107(32), pp.\ 14419--14424.
\end{document}
Nucleon knockout by a high-energy electron in the quasi-free (QF) regime has
extensively been used as a powerful tool to investigate single-particle properties
of nuclei. The reason is that the electromagnetic interaction with nucleons is well
known from quantum electrodynamics and, in the one-photon-exchange approximation
and neglecting final-state interactions, the coincidence cross section for a
detected electron of energy $E_{k'}$ and angle $\Omega_{k'}$ and detected nucleon
of angle $\Omega$ has a factorised form:
\begin{equation}
{{\rm d}^3\sigma\over{\rm d}\Omega_{k'}{\rm d}E_{k'}{\rm d}\Omega}
= K\sigma_{\rm ep}S({\vec p}_m,E_m),
\label{eq:factorised}
\end{equation}
where $K$ is a kinematical factor, $\sigma_{\rm ep}$ the elementary (off-shell)
electron-nucleon cross section and $S({\vec p}_m,E_m)$ the hole spectral function
depending on the missing momentum ${\vec p}_m$ and energy $E_m$. The momentum
${\vec p}_m$ is also the recoil momentum of the residual nucleus and $E_m$ its
excitation energy with respect to the target nucleus. Excitation spectra and
momentum distributions of the produced hole have been measured for a variety of
nuclei along the whole periodic table (for a recent review, see
ref.~\cite{book96}).
However, to extract from the data precise information, such as e.g. the values of
spectroscopic factors, an accurate treatment of final-state interactions (FSI) is
necessary with the result that the simple factorisation (\ref{eq:factorised}) is
destroyed~\cite{frullo79}. In the one-photon-exchange approximation the general
expression of the coincidence unpolarized cross section can be written in
terms of four structure functions $W_i$ as~\cite{book96}
\begin{eqnarray}
{{\rm d}^3\sigma\over {\rm d}\Omega_{k'}{\rm d}E_{k'}{\rm d}\Omega}
&=& { 2\pi^2\alpha \over \vert{\vec q}\vert}\, \Gamma_{\rm V}\,
K \{ W_{\rm T} + \epsilon_{\rm L}\, W_{\rm L} \nonumber \\
& & \nonumber \\
& & + \sqrt{\epsilon_{\rm L}(1+ \epsilon)} W_{\rm TL} \cos\phi +
\epsilon\, W_{\rm TT}\cos 2\phi\},
\label{eq:nonfact}
\end{eqnarray}
where $\Gamma_{\rm V}$ is the flux of virtual photons, $\phi$ the out-of-plane
angle of the proton with respect to the electron scattering plane,
\begin{equation}
\epsilon =
\left [ 1 + 2{\vert{\vec q}\vert^2\over Q^2} \tan^2{\textstyle {1\over 2}}\theta
\right]^{-1},\qquad
\epsilon_{\rm\scriptstyle L} = {Q^2\over\vert{\vec q}\vert^2} \epsilon ,
\end{equation}
and $Q^2 = \vert{\vec q}\vert^2 - \omega^2$ is the negative mass squared of the
virtual photon defined in terms of the momentum ${\vec q}$ and energy $\omega$
transferred by the incident electron through a scattering angle $\theta$. In
plane-wave impulse approximation, all structure functions become proportional to
the hole spectral function and eq. (\ref{eq:factorised}) is recovered.
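For reference, the polarization parameters $\epsilon$ and $\epsilon_{\rm\scriptstyle L}$ defined above follow directly from the electron kinematics. The following Python sketch (our own, neglecting the electron mass) evaluates them; the numerical example anticipates the kinematics of sect.~4:
\begin{verbatim}
# Sketch: virtual-photon polarization parameters from the definitions
# above, for ultrarelativistic electrons (electron mass neglected);
# energies in MeV, angles in radians.
import numpy as np

def photon_polarization(E_k, E_kp, theta):
    omega = E_k - E_kp                                         # energy transfer
    q2 = E_k**2 + E_kp**2 - 2.0 * E_k * E_kp * np.cos(theta)   # |q|^2
    Q2 = q2 - omega**2
    eps = 1.0 / (1.0 + 2.0 * (q2 / Q2) * np.tan(0.5 * theta) ** 2)
    return eps, (Q2 / q2) * eps

# Kinematics of sect. 4: E_k = 497 MeV, E_k' = 350 MeV, theta = 52.9 deg
print(photon_polarization(497.0, 350.0, np.radians(52.9)))
\end{verbatim}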
It turns out that for removal of valence protons a distorted-wave impulse
approximation (DWIA) is a suitable one~\cite{book96,rep93}. On the contrary, at
high missing energy and/or high missing momentum clear evidence for a better
approximation has been accumulated~\cite{ulmer87}-\cite{bob94}. In addition, other
processes beyond the simple one-body mechanism become important above the threshold
of two-nucleon emission and in the so-called dip region, i.e. in the region between
the QF peak and the $\Delta$-resonance excitation~\cite{lourie86,kester95}.
In the dip region the semi-inclusive $^{12}$C(e,e$'$p)\ data of ref.~\cite{kester95}
have been compared with two calculations, one focusing on two-body meson-exchange
and $\Delta$ currents~\cite{jan94}, and the other one on short-range
correlations~\cite{ciofi91}. However, in this region many-body effects leading to
multi-nucleon emission are important. Limitations of the two-body process as a
mechanism for understanding the (e,e$'$p)\ reaction have been indicated in
refs.~\cite{tak89,gp94}.
In the QF region the one-body mechanism is dominant, but multiple
scattering of the ejected proton with the residual nucleus is also important as
shown by the large effects introduced by DWIA. A detailed analysis of the effects
of multiple scattering has only been performed in a classical approach by means of
a Monte Carlo study~\cite{mc85} where an (e,e$'$p)\ reaction in a given nucleus is
simulated by taking into account multiple Coulomb and nuclear scattering by the
outgoing proton while crossing through the residual nucleus. A quantum-mechanical
treatment of FSI is proposed in the present paper taking advantage of the
multistep direct (MSD) scattering theory of Feshbach, Kerman and
Koonin~\cite{fkk80}.
The MSD theory has been extensively applied to describe the continuum spectrum in
nuclear reactions for energies up to the pion threshold (see~\cite{gad92} and
references therein; \cite{chad93}-\cite{kon93}) establishing the validity of the
theory over a wide range of energies and target nuclei. The reactions are described
as a series of two-body interactions between the projectile and the target nucleons
leading to the excitation of intermediate states of increasing complexity. At each
stage a nucleon may be emitted contributing to the pre-equilibrium energy spectrum.
The theory combines a quantum-mechanical treatment of multistep scattering with
statistical assumptions that lead to the convolution nature of the multistep
cross sections and enables the calculation of higher order contributions -- up to
six steps -- which would otherwise be impracticable.
In the present paper we apply the multistep scattering theory to describe the
continuum spectrum of the QF (e,e$'$p)\ knockout reaction. Following the
electromagnetic interaction between the scattered electron and the target nucleus,
a target nucleon is excited to the continuum with energy $E_{1}$ and angle
$\Omega_{1}$ and subsequently undergoes a series of two-body interactions with the
residual nucleons before being emitted with energy $E$ and angle $\Omega$. We aim
to give a quantitative estimate of the multi-scattering effects in the high missing
energy region as a first step in the study of FSI.
In sect. 2 the MSD theory is briefly recalled and adapted to describe the proton
emission in (e,e$'$p)\ reactions. Calculational details are given in sect. 3 and
the results obtained in a case example are discussed in sect. 4.
\section{Theory}
The MSD theory has been described in detail in refs.~\cite{gad92,bon94} so here we
give a brief account of the theoretical formalism adapted to (e,e$'$p)\ reactions
without details of the derivations. The average cross section for an
ejectile electron of energy $E_{k'}$ and angle $\Omega_{k'}$ and ejectile proton
of energy $E$ and angle $\Omega$ is written as an incoherent sum of a one-step and
multistep ($n$-step) terms
\begin{equation}
\frac{{\rm d}^{4}\sigma}{{\rm d}\Omega_{k'} {\rm d}E_{k'} {\rm d}\Omega {\rm d}E}
=
\frac{{\rm d}^{4}\sigma^{(1)}}{ {\rm d}\Omega_{k'} {\rm d}E_{k'} {\rm d}\Omega
{\rm d}E }
+ \sum_{n=2}^{\infty}\frac{{\rm d}^{4}\sigma^{(n)}}{ {\rm d}\Omega_{k'}
{\rm d}E_{k'} {\rm d}\Omega {\rm d}E},
\label{eq:msdeep}
\end{equation}
where the $n$-step term is given by a convolution of the direct (e,e$'$p)\
knockout cross sections and one-step MSD cross sections over all intermediate
energies $E_{1},\, E_{2}\dots$ and angles $\Omega_{1},\,\Omega_{2}\dots$ obeying
energy and momentum conservation rules:
\begin{eqnarray}
\frac{{\rm d}^{4}\sigma^{(n)}}{
{\rm d}\Omega_{k'}{\rm d}E_{k'} {\rm d}\Omega {\rm d}E}
& =& \left(\frac{m}{4\pi^{2}}\right)^{n-1}
\int {\rm d}\Omega_{n-1}\int {\rm d}E_{n-1}E_{n-1}\dots \nonumber \\
& & \times \int {\rm d}\Omega_{1}\int {\rm d}E_{1}E_{1}
\frac{{\rm d}^{2}\sigma^{(1)}}{{\rm d}\Omega {\rm d}E}(E,\Omega
\leftarrow E_{n-1},\Omega_{n-1})\dots \nonumber \\
& &\times \frac{{\rm d}^{2}\sigma^{(1)}}{{\rm d}\Omega_{2}
{\rm d}E_{2}}(E_{2},\Omega_{2} \leftarrow E_{1},\Omega_{1})
\frac{{\rm d}^{4}\sigma}{{\rm d}\Omega_{k'}{\rm d}E_{k'} {\rm d}\Omega_{1} {\rm d}E_{1}}.
\label{eq:msdneep}
\end{eqnarray}
\noindent
The cross section for the (e,e$'$p)\ direct knockout reaction is given by eq.
(\ref{eq:nonfact}) after having included the energy distribution (see eq.
(\ref{eq:distrib}) below). The one-step MSD cross sections for the subsequent NN
scatterings are calculated by extending the DWBA theory to the continuum and can be
written as
\begin{eqnarray}
\frac{{\rm d}^{2}\sigma^{(1)}}{{\rm d}\Omega {\rm d}E}
(E,\Omega \leftarrow E_{0},\Omega_{0}) & = &
\sum_{J}(2J+1)\rho_{1{\rm p}1{\rm h},J}(U)
\left\langle\frac{{\rm d}\sigma(E,\Omega \leftarrow E_{0},\Omega_{0})}
{{\rm d}\Omega}\right\rangle^{\rm DWBA}_{J},
\label{eq:onestep}
\end{eqnarray}
where $J$ is the orbital angular momentum transfer, $\langle {\rm d}\sigma/
{\rm d}\Omega\rangle^{\rm DWBA}_{J}$ is the average of DWBA cross sections exciting
1p1h states consistent with energy, angular momentum and parity conservation and
$\rho_{1{\rm p}1{\rm h},J}(U)$ is the density of such states with residual nucleus
energy $U=E_{0}-E$. The latter is factorised into an energy-dependent level density and a
spin distribution, $\rho_{1{\rm p}1{\rm h},J}(U) = \rho_{1{\rm p}1{\rm
h}}(U)\,R_{n}(J)$. The energy-dependent density $\rho_{1{\rm p}1{\rm h}}$ is
obtained from an equidistant Fermi-gas model with finite hole-depth restrictions
taken into account. $R_{n}$ is a Gaussian spin distribution,
\begin{equation}
R_{n}(J) =
\frac{(2J+1)}{2(2\pi)^{1/2}\sigma_{n}^{3}}\exp\left[-\frac{(J+{\textstyle {1\over 2}})^{2}}{2
\sigma_{n}^{2}}\right] ,
\end{equation}
with $\sigma_{n}$ the spin cut-off parameter.
The transitions are
induced by an effective
NN interaction which is given by a finite-range Yukawa potential with
strength $V_{0}$ adjusted to reproduce the experimental (p, p$'$)
cross sections.
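To make the averaging of eq.~(\ref{eq:onestep}) concrete, the following Python sketch assembles the one-step cross section from precomputed ingredients; the level density and the averaged DWBA cross sections enter as caller-supplied functions, and the angular-momentum cut-off is an illustrative assumption:
\begin{verbatim}
# Sketch of the one-step averaging: DWBA cross sections weighted by the
# 1p1h level density and the gaussian spin distribution R_n(J).
# rho_1p1h and dwba_avg are placeholders standing in for the actual
# equidistant Fermi-gas density and the DWBA output.
import numpy as np

def spin_distribution(J, sigma_n):
    return ((2 * J + 1) / (2.0 * np.sqrt(2.0 * np.pi) * sigma_n**3)
            * np.exp(-(J + 0.5) ** 2 / (2.0 * sigma_n**2)))

def one_step_msd(U_res, dwba_avg, rho_1p1h, sigma_n, J_max=10):
    """Sum over J of (2J+1) rho_1p1h(U) R_n(J) <dsigma/dOmega>_J."""
    return sum((2 * J + 1) * rho_1p1h(U_res) * spin_distribution(J, sigma_n)
               * dwba_avg(J) for J in range(J_max + 1))
\end{verbatim}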
\section{Calculational details}
The QF (e,e$'$p)\ cross sections were calculated in DWIA~\cite{rep93}, including the
effect of Coulomb distortion of the electron waves, through the effective momentum
approximation, which is a good approximation for light nuclei~\cite{gp88}. A full
out-of-plane kinematics was considered, with an outgoing-proton energy up to the
maximum value compatible with the energy distribution of the bound single-particle
states.
The distorted waves were thus obtained from the optical potential of
ref.~\cite{gian76} which extends up to energies of 150 MeV and the bound-state
wavefunctions from a Woods-Saxon potential with a radius parameter $r_{0} = 1.3$
fm, diffuseness $a = 0.6$ fm~\cite{mou76} and a depth fixed to reproduce the input
energy eigenvalues. The quantum numbers and energy eigenvalues of the states that
can be excited were obtained from a spherical Nilsson shell model
scheme~\cite{seg67}. Such a scheme has been adopted because it is easily extended
into the continuum as required in calculating the MSD cross section. The price of
consistency between bound and continuum states is, however, paid by removal
energies that are somehow different from experimental values.
The energy distribution of the bound states was taken as a
Lorentzian~\cite{mahaux88}
\begin{equation}
S(E_m) = \frac{2}{\pi}\frac{\Gamma(E_m)}
{4 (E_m - E_{\rm F} - E_{\rm b.e.})^2 + \Gamma^2(E_m)},
\label{eq:distrib}
\end{equation}
where $E_{\rm b.e.}$ is the g.s. nucleon binding energy and $E_{\rm F}$ the
Fermi energy of the Nilsson level scheme. The energy dependent width is given
by~\cite{br81}
\begin{equation}
\Gamma(E_m) = \frac{24(E_m - E_{\rm F})^2}{500 + (E_m-E_{\rm F})^2} ,
\label{eq:width}
\end{equation}
where the energies are in MeV.
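A direct transcription of eqs.~(\ref{eq:distrib}) and (\ref{eq:width}) reads as follows (Python sketch; energies in MeV):
\begin{verbatim}
# Sketch of the hole-state energy distribution; energies in MeV.
import numpy as np

def width(E_m, E_F):
    # Gamma(E_m) = 24 (E_m - E_F)^2 / (500 + (E_m - E_F)^2)
    return 24.0 * (E_m - E_F) ** 2 / (500.0 + (E_m - E_F) ** 2)

def S(E_m, E_F, E_be):
    # Lorentzian centred at E_F + E_be with energy-dependent width
    G = width(E_m, E_F)
    return (2.0 / np.pi) * G / (4.0 * (E_m - E_F - E_be) ** 2 + G ** 2)
\end{verbatim}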
The MSD cross sections were calculated using DWUCK4~\cite{kun93} to obtain the
microscopic DWBA cross sections with a Yukawa effective NN potential of range 1
fm. The strength of the potential $V_{0}$ was extracted from previous studies of
the systematics of the (p,p$'$) reaction on the atomic mass $A$ and the incident
energy~\cite{ric94,chad94}. For sake of consistency the same distorted waves and
bound-state wavefunctions were used for the calculation of the (e,e$'$p)\ and DWBA
(p,p$'$) cross-sections.
The microscopic transitions are averaged over transferred angular momentum and
residual nucleus energy by the MSD code~\cite{ola92} according to eq.
(\ref{eq:onestep}) where the 1p1h state density was calculated with an average
single-particle density $g=A/13$~\cite{gad92} and the spin cut-off parameter of
its spin distribution was given by $\sigma^{2}_{2} = 0.28\times 2\times
A^{2/3}$~\cite{fu86}. When calculating the multistep cross sections with eq.
(\ref{eq:msdneep}) the one-step MSD cross sections are obtained at several incident
energies lower than that of the excited proton of the direct (e,e$'$p)\ knockout
reaction and interpolated for other values. The convolution integral in eq.
(\ref{eq:msdneep}) is then evaluated using Monte Carlo integration (MSD
code~\cite{ola92}).
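As an illustration of this step, the sketch below estimates the two-step term of eq.~(\ref{eq:msdneep}) by Monte Carlo; the two integrand functions are placeholders for the interpolated one-step MSD and direct knockout cross sections, and the unit bookkeeping of the factor $m/4\pi^{2}$ is left to the caller:
\begin{verbatim}
# Sketch: Monte Carlo estimate of the two-step convolution.  The functions
# d4_eep and d2_msd stand in for the interpolated direct-knockout and
# one-step MSD cross sections; unit bookkeeping is left to the caller.
import numpy as np

M_PROTON = 938.272  # proton mass in MeV/c^2

def two_step_mc(d4_eep, d2_msd, E, direction, E1_max, n=100_000, seed=0):
    """Two-step cross section at emission energy E and angles `direction`."""
    rng = np.random.default_rng(seed)
    E1 = rng.uniform(E, E1_max, n)               # intermediate proton energy
    cos_t = rng.uniform(-1.0, 1.0, n)            # uniform over solid angle
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    omega1 = (np.arccos(cos_t), phi)
    vol = (E1_max - E) * 4.0 * np.pi             # integration volume
    vals = (M_PROTON / (4.0 * np.pi**2)) * E1 \
           * d2_msd(E, direction, E1, omega1) * d4_eep(E1, omega1)
    return vol * np.mean(vals)
\end{verbatim}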
\section{Results}
The approach to MSD scattering of the ejectile proton presented in the previous
sections was applied to the $^{40}$Ca(e,e$'$p)\ reaction as a case example. The
$^{40}$Ca nucleus is an appropriate target for the statistical treatment of the
MSD theory and data at high missing energy exist with the following
kinematics~\cite{mou76}: the incident electron energy is $E_k = 497$
MeV, the electron scattering angle $\theta = 52.9^{\circ}$ and the outgoing proton
energy $E = 87\pm 10$ MeV. The scattered electron energy $E_{k'}$ varied in the
experiment from 350 to 410 MeV. In our calculations we fixed $E_{k'} = 350$ MeV
and worked at constant $({\vec q}, \omega)$ by varying the proton energy
$E$ accordingly. This kinematics is unable to reach missing momenta $p_m \lesssim 100$
MeV/$c$ for the deep states, contrary to the experimental situation, where the
detector acceptances also allow to probe low values of missing momenta.
In fig. 1 we show the theoretical direct (e,e$'$p)\ knockout and multistep cross
sections as a function of the angle $\gamma$ between the emitted proton ${\vec p}'$
and the momentum transfer ${\vec q}$ at four different residual nucleus energies
$U_{\rm res}= E_m - E_{\rm b.e.}$. At the lowest excitation energies, the direct nucleon knockout
process dominates and exhibits a strong forward-peaking. The multistep
contributions are important at large scattering angles over the whole energy
range. With increasing excitation energy the two-step and three-step contributions
become gradually more important than the one-step direct process over most of the
angular range, apart from the very small scattering angles $\gamma \leq
10^{\circ}$. The domination of multistep processes at large scattering angles is
expected since as a result of multistep scattering the leading proton gradually
loses memory of its initial direction yielding thus increasingly symmetric angular
distributions.
In order to compare with data it is useful to define the reduced cross section as
\begin{equation}
\rho(p_m,E_m) = \frac{1}{\sigma_{\rm cc1}}
{{\rm d}^3\sigma\over{\rm d}\Omega_{k'}{\rm d}E_{k'}{\rm d}\Omega} ,
\label{eq:reduced}
\end{equation}
where $\sigma_{\rm cc1}$ is the electron-nucleon cross section taken according to
ref.~\cite{def83}.
In fig. 2 we compare $\rho(p_m,E_m)$ integrated over two different energy ranges
with the experimental data of ref.~\cite{mou76}. The theoretical curves are
multiplied by 0.5, a factor that can be interpreted as an average spectroscopic
factor. At low missing energies the direct process dominates, whereas at high
$E_m$ the major contribution comes from two-step and three-step processes. The
relative importance of multistep processes increases with the missing momentum and
is analogous to the behaviour of the multistep cross sections at large scattering
angles in fig. 1.
\section{Conclusions}
We have calculated the multi-nucleon-nucleon scattering contributions in (e,e$'$p)\
reactions in the quasi-free region using the MSD theory of ref.~\cite{fkk80}, and
have shown that such processes are important at high missing energy and momentum.
Therefore, one can foresee that in
other kinematics involving even higher missing energies and momenta, e.g.
in the dip region~\cite{tak89}, multistep scattering processes play an
important role in determining the final-state interaction.
\bigskip
The authors are grateful to P.~E.~Hodgson for useful discussions.
\clearpage